The fourth and final blog post in our series of AWS Security Tips and Quotes offers tips on AWS security best practices, building on the topics covered in the first three installments.
Today’s post offers recommendations that include running a configuration audit, using automation to reduce errors, ensuring that you stay abreast of the latest best practices and recommendations provided by AWS and other resources — and more.
AWS Security Best Practices
1. Always stay abreast of the latest best practices and recommendations from AWS and other resources.
“A good place to begin is with Amazon Web Services (AWS). In its online AWS Security Center and security whitepaper, the company gives a good overview of AWS’ native security capabilities. Beyond AWS, publications from the European Union Agency for Network and Information Security (ENISA), such as its cloud security guide for SMEs, and information from the Cloud Security Alliance may be helpful.
“‘You can’t be overeducated when it comes to cloud security. The pace of change and development at Amazon is so swift it’s easy to lose track of the latest and greatest,’ noted Rich Morrow, an independent consultant and trainer who specializes in AWS. ‘Schedule one person to spend half a day a week just catching up on what is happening,’ Morrow said. ‘You can’t be secure if you’re not staying abreast of new developments.’”
2. Benchmark normal activity.
“Benchmark what is normal. It’s important to correctly identify traffic. For example, if your website gets media attention and suddenly receives a lot of traffic, you would not want to block that. Monitor and log your sites’ traffic trends and user behavior analysis patterns. This helps you determine if you’re seeing a false positive or an actual attack so you can quickly alert AWS.”
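One simple way to benchmark “normal” is to keep a window of historical request counts and flag samples that sit several standard deviations off the baseline. A minimal sketch in Python; the numbers and the three-sigma threshold are illustrative, not Incapsula’s actual method:

```python
from statistics import mean, stdev

def is_anomalous(history, current, sigma=3.0):
    """Flag a traffic sample that deviates more than `sigma`
    standard deviations from the historical baseline."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    baseline = mean(history)
    spread = stdev(history)
    if spread == 0:
        return current != baseline
    return abs(current - baseline) > sigma * spread

# Hourly request counts observed during normal operation (made up).
normal_hours = [1200, 1150, 1300, 1250, 1180, 1220]

print(is_anomalous(normal_hours, 1240))   # within the usual range
print(is_anomalous(normal_hours, 25000))  # a large spike worth investigating
```

A spike that trips the threshold still needs the human judgment the tip describes: it could be a DDoS attempt, or simply the media attention mentioned above.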
— Farzam Ebadypour, AWS DDoS Mitigation: Challenges, Best Practices and Tips, Imperva Incapsula
3. Use a tool like Threat Stack to monitor your AWS accounts in a single dashboard.
“The Threat Stack platform scans and lets you monitor multiple AWS accounts in a central dashboard. Compare your security policies against AWS best practices and industry benchmarks for EC2, IAM, RDS, and S3.
“You can schedule scans and integrate alerting with Slack and PagerDuty.
“By integrating with CloudTrail, you can have full visibility and be notified of unauthorized changes in AWS resources.”
— Chandan Kumar, How to Perform AWS Security Scanning and Configuration Monitoring?, Geek Flare; Twitter: @ConnectCK
4. Enable traceability for an audit trail, and enforce access privilege rules.
“Use tags to indicate which users created and accessed which data when, then use permissions to define which users have access to do what functions. Use the strictest access controls to limit the ability to change root settings, which serve as the master controls for the environment. Add authorization and multi-factor authentication to the root access controls and any other highly sensitive functions.”
5. Track application event logs and API call logs.
“There are two types of logs you can track in AWS — application event logs, and API call logs. CloudWatch Logs is the default logging service for all your AWS resources, like EC2, DynamoDB, or RDS. It captures application event and error logs, and lets you monitor and troubleshoot application performance. For example, if any application performance metric crosses a threshold, you can dig deeper in the logs to investigate what is causing the anomaly. During an incident, you can view your CloudWatch Logs to estimate the extent of the damage to your system, and get a good idea of how user experience is being affected.
“CloudTrail, on the other hand, works at a lower level, monitoring APIs for various AWS services. It tracks and logs API calls to any AWS API, complete with details like IP address, and the account from which the call originated. CloudTrail is essential when investigating an attack.
“Together, CloudWatch Logs and CloudTrail give you deep and wide visibility across your AWS services. However, to get the most out of the log data collected by these two tools, you need to integrate them with other AWS services, and even external log analysis platforms.”
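During an investigation, a few lines of Python can pull the relevant fields out of a CloudTrail record. The sample event below is trimmed and made up, but follows CloudTrail’s documented field names:

```python
import json

# A trimmed, fabricated CloudTrail record using the documented event fields.
sample_event = json.dumps({
    "eventName": "DeleteBucket",
    "eventSource": "s3.amazonaws.com",
    "sourceIPAddress": "203.0.113.42",
    "userIdentity": {"type": "IAMUser", "userName": "alice"},
    "eventTime": "2018-03-01T12:00:00Z",
})

def summarize(raw):
    """Extract the fields most useful when investigating an attack."""
    event = json.loads(raw)
    return {
        "who": event.get("userIdentity", {}).get("userName", "unknown"),
        "what": event["eventName"],
        "where_from": event["sourceIPAddress"],
        "when": event["eventTime"],
    }

print(summarize(sample_event))
```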
6. Periodically move logs from the source to a log-processing system.
“Capturing logs is critical for investigating everything from performance to security incidents. The current best practice is for the logs to be periodically moved from the source either directly into a log-processing system (for example, CloudWatch Logs, Splunk, Papertrail) or stored in an S3 bucket for later processing based on business needs. Common sources of logs are AWS APIs and user-related logs (for example, AWS CloudTrail), AWS service-specific logs (for example, Amazon S3, Amazon CloudFront), operating system-generated logs, and third-party application-specific logs. You can use CloudWatch Logs to monitor, store, and access your log files from EC2 instances, AWS CloudTrail, and other sources.”
7. Store additional information in your logs.
“Log lines normally have information like timestamp, pid, etc. You’ll also probably want to add instance-id, region, availability-zone and environment (staging, production, etc.), as these will help debugging considerably. You can get this information from the instance metadata service. The method I use is to grab this information as part of my bootstrap scripts, and store it in files on the filesystem (/env/az, /env/region, etc.). This way I’m not constantly querying the metadata service for the information. You should make sure this information gets updated properly when your instances reboot, as you don’t want to save an AMI and have the same data persist, as it will then be incorrect.”
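The file-cache approach described above might look like the following in Python. The `/env` paths mirror the author’s convention; the `"unknown"` fallback is an assumption so that log enrichment never blocks on a missing file:

```python
import os

ENV_DIR = "/env"  # where bootstrap scripts cache instance metadata

def load_cached_metadata(base=ENV_DIR, keys=("az", "region", "environment")):
    """Read metadata cached on disk at boot time, falling back to
    'unknown' rather than querying the metadata service on every log line."""
    meta = {}
    for key in keys:
        path = os.path.join(base, key)
        try:
            with open(path) as f:
                meta[key] = f.read().strip()
        except OSError:
            meta[key] = "unknown"
    return meta

def enrich(record, meta):
    """Attach cached instance metadata to a structured log record."""
    return {**record, **meta}
```

Because the values are read from disk, remember the caveat in the tip: the bootstrap script must rewrite these files on boot so an AMI snapshot does not carry stale values forward.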
8. Run a configuration audit.
“The best process for spotting misconfigurations is to scan for them as soon as you move to the cloud and again each time you make a change to your environment. Running a configuration audit will help you see what you may have missed and give you the opportunity to remediate before attackers can find and exploit it.
“Looking for some examples? Mishaps like leaving SSH wide open to the internet can allow an attacker to attempt remote server access from anywhere, rendering traditional network controls like VPN and firewalls moot. Failing to enforce multi-factor authentication (MFA) is another big misconfiguration concern. In our survey, 62 percent of companies did not actively require users to use MFA, making brute force attacks all too easy for adversaries to carry out. Auditing your configurations regularly will show you how you hold up against CIS Benchmarks and AWS best practices.
“The sooner you begin to regularly audit your configurations, the faster you’ll be able to spot misconfigurations before someone else does.”
— Josh Trota, What Makes a Misconfiguration Critical? AWS Security Tips, Threat Stack; Twitter: @threatstack
9. Know when your access keys were last used.
“Access keys consist of an access key ID and a secret access key. You need these keys when you want to access AWS services using an API, command line, or SDK. In the AWS IAM console, you can see when these keys were last used. AWS will show you timestamps, regions, and the AWS services that were accessed. The AWS IAM console also displays the date and time when an IAM user or root account last accessed the AWS Management Console, forums, Support Center, or Marketplace. Finally, you can download an access key’s “last used” report for your entire account.”
— Nitheesh Poojary, AWS Security Best Practices: User Management using IAM and Automate using Chef, Six Nines; Twitter: @SixNinesAWS
10. Don’t commit your access keys or credentials.
“AWS access keys are meant to be used by your infrastructure and/or your code. Do not commit them into your source code; otherwise they become available to many third parties, such as contractors or continuous integration tools, and very difficult to change. A good way to approach this is to use environment variables, which also lets you easily run your code in a non-production environment. These ideas are described in The Twelve-Factor App.”
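A minimal sketch of the environment-variable approach. The fail-fast error is a design choice; the `AWS_*` names in the usage comment are the standard variables the AWS SDKs read:

```python
import os

def credential_from_env(name):
    """Fetch a credential from the environment, failing fast with a
    clear error instead of silently running with a missing secret."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set; export it in the environment "
            "rather than committing it to source control."
        )
    return value

# Usage (these are the standard variable names the AWS SDKs read):
# access_key = credential_from_env("AWS_ACCESS_KEY_ID")
# secret_key = credential_from_env("AWS_SECRET_ACCESS_KEY")
```

Failing fast keeps a misconfigured deployment from running half-authenticated, and switching environments (staging vs. production) becomes a matter of exporting different values.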
11. Remove unnecessary credentials.
“It is always a good security practice to regularly audit user credentials and remove any that are no longer in use. AWS provides an out-of-the-box ‘credential report’ which helps you track the lifecycle of passwords and access keys. The report includes user details, date created, when the password was last used, and when the password was last changed. Also, if you have set the password rotation policy, this report mentions the date and time at which the user is supposed to change the password.
“For access keys, reports highlight whether a user has an access key and if it is active or not; date and time when the key was rotated or created, when the access key was used for the last time, AWS region where the key was used for the last time, and the AWS service (Amazon S3, EC2) where the key was used.
“These details are quite useful for internal and external audits. With AWS, you can grant a role to an auditor so he/she can directly download the credential report as needed.
“Credential reports can be generated every four hours. If you try to generate a new report within four hours, the last-generated report will be shared with the user. AWS IAM internally checks when the last report was generated and decides whether to generate a new one.”
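Because the credential report is plain CSV, auditing it can be scripted. A hedged sketch using a made-up, trimmed report (real reports contain many more columns, and the staleness threshold here is arbitrary):

```python
import csv
import io
from datetime import datetime, timedelta

# A trimmed, fabricated credential report for illustration only.
REPORT_CSV = """user,password_last_used,access_key_1_active,access_key_1_last_used_date
alice,2018-02-20T09:15:00+00:00,true,2018-02-25T11:00:00+00:00
bob,2017-06-01T08:00:00+00:00,true,N/A
"""

def stale_users(report_csv, now, max_age_days=90):
    """Return users whose password has not been used within max_age_days."""
    cutoff = now - timedelta(days=max_age_days)
    stale = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        last = row["password_last_used"]
        if last in ("N/A", "no_information"):
            stale.append(row["user"])
        elif datetime.strptime(last[:19], "%Y-%m-%dT%H:%M:%S") < cutoff:
            stale.append(row["user"])
    return stale

print(stale_users(REPORT_CSV, datetime(2018, 3, 1)))
```

A script like this can feed the internal and external audits mentioned above without anyone eyeballing the raw CSV.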
12. Leverage detective controls to identify potential security incidents.
“You can use detective controls to identify a potential security incident. They are an essential part of governance frameworks, and can be used to support a quality process, a legal or compliance obligation, and threat identification and response efforts. There are different types of detective controls. For example, conducting an inventory of assets and their detailed attributes promotes more effective decision making (and lifecycle controls) to help establish operational baselines. Or you can use internal auditing, an examination of controls related to information systems, to ensure that practices meet policies and requirements, and that you have set the correct automated alerting notifications based on defined conditions. These controls are important reactive factors that help organizations identify and understand the scope of anomalous activity.”
— Statement of Work for Well-Architected Framework Model, Eplexity; Twitter: @CloudXOS
13. Don’t forget to include your mobile apps in an audit.
“If you have created a mobile app that makes requests to AWS, take these steps:
- Make sure that the mobile app does not contain embedded access keys, even if they are in encrypted storage.
- Get temporary credentials for the app by using APIs that are designed for that purpose. We recommend that you use Amazon Cognito to manage user identity in your app. This service lets you authenticate users using Login with Amazon, Facebook, Google, or any OpenID Connect (OIDC)-compatible identity provider. You can then use the Amazon Cognito credentials provider to manage credentials that your app uses to make requests to AWS.
“If your mobile app doesn’t support authentication using Login with Amazon, Facebook, Google, or any other OIDC-compatible identity provider, you can create a proxy server that can dispense temporary credentials to your app.”
14. Never use root access keys to request access through APIs or other common methods.
“IT teams should never use root access keys to request access through APIs or other common methods. Essentially, root access keys are the master keys for an AWS account; there is no way to curtail privileges for compromised root keys. Admins must protect AWS root account keys at all costs, and implement alternative login credentials, such as a user name and password or multifactor authentication, for AWS management and service access. This additional layer of authentication prevents hacked login credentials from launching remote attacks and protects critical workloads.”
15. It’s your responsibility to apply the latest security patches to EC2 instances.
“Once you launch an EC2 instance, the responsibility for properly applying the latest security patches to the operating system is yours, as the AWS shared responsibility model makes clear. AWS will not notify you when a new patch is released for your EC2 instance OS; you must manage EC2 OS security.
“Whether you’re running Windows or some flavor of Linux (like CentOS, Ubuntu, or SUSE), you must manage the operating system’s security settings. Do not assume that the latest AMIs (Amazon Machine Images) have the very latest security patches. Always check for updates, by, for example, using ‘yum update’ (or ‘aptitude safe-upgrade’) for Linux, and the Windows update program for Windows.”
16. Scan your Git repositories and history for AWS keys.
“Install git-secrets and use it to scan your git repositories and history for AWS keys. If found, even in the history, the keys must be considered compromised and revoked.”
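git-secrets itself is the right tool for this job; purely to illustrate the kind of pattern it looks for, here is a toy scanner for AWS access key IDs. The regex covers AKIA/ASIA-prefixed key IDs only, and the sample key is AWS’s documented example key, not a real credential:

```python
import re

# AWS access key IDs are 20 characters: a 4-character prefix such as
# AKIA (long-term) or ASIA (temporary) followed by 16 uppercase
# alphanumerics. git-secrets scans for similar patterns.
ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_key_leaks(text):
    """Return any strings in `text` that look like AWS access key IDs."""
    return ACCESS_KEY_RE.findall(text)

leaky = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'  # AWS's documented example key
clean = "nothing secret here"
print(find_key_leaks(leaky))
print(find_key_leaks(clean))
```

As the tip says, a match anywhere in history means the key is compromised: rewriting the commit does not help, because anyone who cloned the repository already has it. Revoke the key.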
— Security Architecture, Hortonworks Data Cloud; Twitter: @hortonworks
17. Use private subnets with appropriate ACLs for anything that doesn’t need to be public.
“Deploy everything in a VPC and place critical components that don’t need to be publicly available such as databases in private subnets with appropriate ACLs.”
— AWS Security Best Practices: Multi-Factor Authentication & Beyond, Logicworks; Twitter: @Logicworks
18. Get rid of any IAM access keys that haven’t been used in 30 days or more.
“It’s one of the easiest best practices you can do. If you have not used certain IAM access keys in the last 30 days or since creation, it’s time to remove them. It will not only give you better security but also avoid key compromises.”
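The 30-day rule is easy to automate once you have each key’s creation and last-used timestamps (both available from the credential report or the IAM console). An illustrative sketch with made-up key IDs; a key never used is judged by its creation date:

```python
from datetime import datetime, timedelta

def keys_to_remove(keys, now, max_idle_days=30):
    """Given (key_id, created, last_used) tuples, where last_used may be
    None for never-used keys, return the IDs idle for max_idle_days or more."""
    cutoff = now - timedelta(days=max_idle_days)
    return [
        key_id
        for key_id, created, last_used in keys
        if (last_used or created) <= cutoff
    ]

now = datetime(2018, 3, 1)
keys = [
    ("AKIA...OLD", datetime(2017, 1, 1), datetime(2017, 12, 1)),   # long idle
    ("AKIA...NEW", datetime(2018, 2, 1), datetime(2018, 2, 28)),   # in use
    ("AKIA...NEVER", datetime(2018, 1, 1), None),                  # never used
]
print(keys_to_remove(keys, now))
```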
19. Eliminate blind spots.
“Blind spots are the enemy of any security posture. You need to be able to see the state of your environment at all times. This includes what’s going on with your infrastructure, applications, data, and users. By knowing exactly how everything is operating, you minimize the chance that an attack will go unnoticed.
“Having deep visibility into your cloud environment at all times is essential to maintaining operations, pinpointing issues, and adhering to compliance standards.”
— Michal Ferguson, 10 Best Practices for Securing Your Workloads on AWS, Threat Stack; Twitter: @threatstack
20. Use permissions and versioning to protect data in S3 buckets.
“Data stored in S3 can be protected from accidental information disclosure or data integrity compromise. To achieve this, limit the scope of access to sensitive data in S3 by enforcing rigid bucket and/or object level permissions. To protect data integrity, S3 supports a feature called Versioning, which preserves every version of a modified object, allowing quick restoration in case of accidental deletion or overwrite.”
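Versioning works by retaining every write as a distinct object version and recording a delete as a “delete marker” placed on top. This toy model (not the S3 API) shows why an accidental delete is recoverable:

```python
class VersionedBucket:
    """Toy model of S3 versioning: every put appends a new version,
    and a delete only adds a delete marker on top of the stack."""

    DELETE_MARKER = object()

    def __init__(self):
        self._versions = {}  # key -> list of values / delete markers

    def put(self, key, value):
        self._versions.setdefault(key, []).append(value)

    def delete(self, key):
        self._versions.setdefault(key, []).append(self.DELETE_MARKER)

    def get(self, key):
        versions = self._versions.get(key, [])
        if not versions or versions[-1] is self.DELETE_MARKER:
            raise KeyError(key)
        return versions[-1]

    def restore(self, key):
        """Undo an accidental delete by removing the delete marker,
        exposing the most recent real version again."""
        versions = self._versions.get(key, [])
        if versions and versions[-1] is self.DELETE_MARKER:
            versions.pop()
```

In real S3 the equivalent restore is deleting the delete marker’s version; the point is the same: the underlying data was never destroyed.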
— Kanishka Mohaia, 10 AWS security best practices every team MUST implement, DataEngineer.pro
21. Utilize the principle of least privilege.
“Clearly define and grant only the minimum privileges to users, groups, and roles that are needed to accomplish business requirements.”
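In IAM terms, least privilege means policies that name specific actions and resources rather than wildcards. A sketch of a read-only policy scoped to a single, hypothetical bucket (the bucket name is an example):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-reports-bucket",
        "arn:aws:s3:::example-reports-bucket/*"
      ]
    }
  ]
}
```

Compare this with `"Action": "s3:*"` on `"Resource": "*"`: the scoped policy lets a compromised credential read one bucket, not rewrite or delete everything in the account.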
22. Automate your system security.
“Your attackers are using automated tools to scan ports and identify misconfigured devices, so you should be automating your system security. Automating security tasks not only mitigates human errors, but frees up precious developer time to focus on more strategic initiatives.”
— Ben Sanders, Secure online products and services with security best practices, Digital Craftsman; Twitter: @DCHQ
23. Ensure that changes are properly verified and tracked.
“As you consider each of these areas of security, it’s important to not only ensure that any initial configuration is accurate, but also that any changes are appropriately verified and tracked and that the ongoing security posture continues to be validated. This ongoing regular validation will ideally be via automated mechanisms that provide both monitoring and reporting of some of the key security elements. AWS provides some best-practice security reporting via the AWS Console with ‘Trusted Advisor’. This gives visibility to elements such as open ports, use of MFA (multi-factor authentication), the use of strong passwords (or not), globally readable/writable storage bucket use and a range of other security and audit capabilities. It’s great practice to ensure your security processes include the review of the reports from Trusted Advisor on a regular basis.”
— Gary Marshall, Security and AWS: Strategies and services to protect your environment, Bulletproof
24. Bundle native and third-party tools to create a secure AWS environment.
“IT administrators can implement a variety of tools and security measures through AWS and other software vendors to fit their own specific needs and address their concerns.
“AWS tools and services include AWS IAM, AWS Key Management Service, AWS Certificate Manager, and AWS Organizations. Admins can bundle these tools to create a secure cloud environment and ensure that data residing in the cloud remains safe. Admins enforce policies across services to implement role-based security and limit access to user groups.
“In addition to native AWS tools, a variety of third-party security tools are available in the AWS Marketplace. Be certain of enterprise security and compliance requirements when matching your needs to a security tool.”
— Tim Culverhouse, Follow this expert advice to improve security in AWS, SearchAWS; Twitter: @TechTarget, @tculverhouse_TT