Encoding best practices with IAM policy conditions

One of the best things about AWS is the programmatic access to resources, and the confidence and consistency that can be achieved by controlling exactly what can be executed.

Most users of AWS will at one time or another run into limitations on IAM policies – whether that is developing Lambda functions that you want to have least privilege, or logging in with a limited user role.  In most cases, people simply enable the action they require, possibly restrict it to a resource, and move on.

That approach grants the right level of access, but it does little to ensure those services are then used according to best practice.

In this blog post, we will look at how you can use IAM and SCP policy conditions to enforce not only that the right services are used, but that they are used in line with your organisation's practices.

The anatomy of an IAM policy

This post does not aim to be an extensive replacement for the IAM documentation, but it is worth making sure we are on the same page about the elements contained within these policies.

There are various elements that make up a statement:

  • Effect: The effect can be Allow or Deny. By default, IAM users don’t have permission to use resources and API actions, so all requests are denied. An explicit allow overrides the default. An explicit deny overrides any allows.
  • Action: The action is the specific API action for which you are granting or denying permission. 
  • Resource: The resource that’s affected by the action. Some Amazon API actions allow you to include specific resources in your policy that can be created or modified by the action. You specify a resource using an Amazon Resource Name (ARN) or using the wildcard (*) to indicate that the statement applies to all resources. 
  • Condition: Conditions are optional. They control when your policy is in effect – and they are the subject of this blog post.
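To make these elements concrete, here is what a complete statement might look like in the CloudFormation-style YAML used later in this post. The Sid, bucket name and IP range are purely illustrative:

```yaml
- Sid: AllowReadFromReportsBucket              # optional statement identifier
  Effect: Allow                                # Allow or Deny
  Action: 's3:GetObject'                       # the API action being granted
  Resource: 'arn:aws:s3:::example-reports/*'   # which resources it applies to
  Condition:                                   # optional: when it applies
    IpAddress:
      'aws:SourceIp': '203.0.113.0/24'
```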

Since an IAM policy's Effect can be either Allow or Deny, in this blog post we will focus on Denying access to specific API calls when certain conditions are true.  These Deny policies need to work hand in hand with broader Allow policies to control exactly what you have access to.

What are conditions?

Take this simple policy, which allows publishing objects to a given bucket.

- Sid: PublishDeploymentObjects
  Effect: Allow
  Action:
    - s3:PutObject
  Resource:
    - !Sub '${DeployBucket.Arn}/*'

You might see this type of policy on a Lambda function or CodeBuild job to ensure that the AWS service can only put objects into a specified bucket.  This type of IAM policy is common and adheres to the “Least Privilege” principle.

But it doesn’t protect us against higher-level issues, such as the data being tampered with in transit – to do that, we would want to enforce that interactions with the bucket are done in a secure way.

We know we can access S3 using HTTPS, but how can we enforce that all clients do so?

This is where IAM policy Conditions come in: we can add another policy statement with a Condition that will Deny the PutObject call if it is not executed over a secure transport.  With this condition in place we can not only control WHO can write to our bucket, but also enforce the best practice of writing ONLY via a secure method.

- Sid: DenyHTTPAccess
  Effect: Deny
  Action:
    - s3:*
  Resource:
    - !Sub '${DeployBucket.Arn}'
    - !Sub '${DeployBucket.Arn}/*'
  Condition:
    Bool:
      'aws:SecureTransport': false

Once you see Conditions at work, you can start to identify more scenarios where you can enforce a level of control – that you KNOW the AWS API will enforce for you.

For each of the published AWS API calls, there is a list of conditions available for you to build controls around – with these controls at hand you can confirm not only that your services are accessed BY whom you want, but also HOW they are configured.

And the available detail is extensive – for just the ‘PutObject’ API call there are condition keys covering access control (s3:x-amz-acl and the s3:x-amz-grant-* keys), encryption (s3:x-amz-server-side-encryption), tagging (s3:RequestObjectTag/<key>), storage class (s3:x-amz-storage-class) and Object Lock retention (s3:x-amz-object-lock-retain-until-date), among others.

With this, you could create complex rules around what grants and permissions can be created, how objects need to be tagged, and even set up thresholds on data retention and storage rules.
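As a sketch of what those keys enable, a statement like the following (the bucket name is a placeholder) would reject any PutObject call that does not request KMS server-side encryption:

```yaml
- Sid: DenyUnencryptedUploads
  Effect: Deny
  Action: 's3:PutObject'
  Resource: 'arn:aws:s3:::example-bucket/*'
  Condition:
    StringNotEquals:
      's3:x-amz-server-side-encryption': 'aws:kms'
```

Note that negated operators like StringNotEquals also match when the key is absent from the request, so uploads that omit the encryption header entirely are denied as well.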

Once you see the flexibility of these Conditions, you can spend the effort once to build the IAM policy, and sleep comfortably knowing that all access to the AWS API must comply with your rules.

Like a lot of AWS documentation, the list of conditions is explained in great detail, broken down for each service – for example, here is the documentation for just the S3 service: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
This documentation not only shows the conditions that are available, but also shows which operations they are applicable to.

Let’s look at some more examples

Now that we have explored the basics, what else can we do?  Remembering that these policies can be applied in multiple locations, we can apply restrictions per Role (via IAM policies) or make broader enforcements using Service Control Policies (SCPs) at the organisation level.

Let’s say that, as an organisation, you want to ensure that everyone standardises on postgres as the RDS engine – you could apply the following IAM statement as an SCP on the Organisation to enforce that no-one can call the CreateDBInstance API unless they specify the postgres database engine.

- Sid: OnlyAllowPostgresRDS
  Effect: Deny
  Action: 'rds:CreateDBInstance'
  Resource: '*'
  Condition:
    StringNotEquals:
      'rds:DatabaseEngine':
        - 'postgres'

Or if you want to control spending in a sandbox account and limit the size of servers launched, you could enforce the following as an SCP on the sandbox account to ensure nothing larger than a medium can be launched.

- Sid: DenyLargeServers
  Effect: Deny
  Action: 'ec2:RunInstances'
  Resource: 'arn:aws:ec2:*:*:instance/*'
  Condition:
    StringNotLike:
      'ec2:InstanceType':
        - '*.nano'
        - '*.small'
        - '*.micro'
        - '*.medium'
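Conceptually, StringNotLike behaves like shell-style pattern matching. This toy Python sketch is not how AWS evaluates policies – it just illustrates why the statement above blocks a t3.large while letting a t3.micro through:

```python
from fnmatch import fnmatch

def string_not_like(value: str, patterns: list[str]) -> bool:
    """Toy model of IAM's StringNotLike operator: returns True
    (so a Deny statement fires) when the request value matches
    none of the listed patterns."""
    return not any(fnmatch(value, p) for p in patterns)

allowed_sizes = ['*.nano', '*.micro', '*.small', '*.medium']

print(string_not_like('t3.large', allowed_sizes))  # True  -> the Deny fires
print(string_not_like('t3.micro', allowed_sizes))  # False -> launch proceeds
```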

The options available here allow best practices to be encoded in policy to deny consumers of the cloud the opportunity to use it in insecure or expensive ways.

Some other example policies you can build are:

  1. Ensure that full disk encryption is enabled for every S3 bucket, EFS, EBS and RDS created.
  2. Enforce that any Lambda function deployed is VPC attached to meet outbound traffic filtering requirements.
  3. Limit the regions in which specific services can be launched.

I’m sure that, now you know about Conditions, you’ll find any number of guardrails to encode as policies.  Here is how the three examples above might look:

- Sid: DenyUnencryptedEFS
  Effect: Deny
  Action: 'elasticfilesystem:CreateFileSystem'
  Resource: '*'
  Condition:
    Bool:
      'elasticfilesystem:Encrypted': 'false'
- Sid: DenyUnencryptedRDS
  Effect: Deny
  Action: 'rds:CreateDBInstance'
  Resource: '*'
  Condition:
    Bool:
      'rds:StorageEncrypted': 'false'
- Sid: DenyNonVPCAttachedLambda
  Effect: Deny
  Action:
    - 'lambda:CreateFunction'
    - 'lambda:UpdateFunctionConfiguration'
  Resource: '*'
  Condition:
    StringNotLike:
      'lambda:VpcIds': 'vpc-*'
- Sid: DenyServicesOutsideSydney
  Effect: Deny
  Resource:
    - '*'
  Action:
    - 'acm:*'
    - 'apigateway:*'
    - 'cloudformation:*'
    - 'cloudwatch:*'
    - 'codebuild:*'
    - 'codepipeline:*'
    - 'dynamodb:*'
    - 'ebs:*'
    - 'ec2:*'
    - 'ec2messages:*'
    - 'ecr:*'
    - 'ecs:*'
    - 'eks:*'
    - 'elasticache:*'
    - 'elasticfilesystem:*'
    - 'elasticloadbalancing:*'
    - 'events:*'
    - 'glue:*'
    - 'kms:*'
    - 'lambda:*'
    - 'logs:*'
    - 'rds:*'
    - 'route53:*'
    - 'route53domains:*'
    - 'route53resolver:*'
    - 's3:*'
    - 'secretsmanager:*'
    - 'shield:*'
    - 'sns:*'
    - 'sqs:*'
    - 'ssm:*'
    - 'ssmmessages:*'
    - 'states:*'
    - 'support:*'
    - 'tag:*'
    - 'tiros:*'
    - 'xray:*'
    - 'wellarchitected:*'
  Condition:
    StringNotEquals:
      'aws:RequestedRegion': 'ap-southeast-2'
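The statements above cover EFS, RDS, Lambda and regions; the EBS part of the first guardrail could follow the same pattern using the ec2:Encrypted condition key. A sketch:

```yaml
- Sid: DenyUnencryptedEBS
  Effect: Deny
  Action: 'ec2:CreateVolume'
  Resource: '*'
  Condition:
    Bool:
      'ec2:Encrypted': 'false'
```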

I hope this post has shown you the power hidden in the lesser-used Condition statements of IAM and SCP policies.

It is well worth perusing the Service Authorisation Reference (https://docs.aws.amazon.com/service-authorization/latest/reference/reference.html) for services you are working with, especially looking at the Create or Update actions to consider how you could use IAM policies to enforce best practices and keep your cloud secure.

As always, if you’d like assistance in migrating, developing or operating secure cloud workloads, please reach out to anyone from Cevo for support.