As part of its mission, Threat Stack has always brought its readers security-related content to help them make informed decisions that will strengthen their organizations’ security.
With more companies than ever leveraging cloud services like AWS, and with cloud environments becoming more and more complex, it’s critical that organizations develop proactive, comprehensive security strategies that build security in from the very beginning and evolve as their infrastructures scale to keep systems and data secure.
So last week we kicked off a 4-part mini-series on AWS Security Tips and Quotes starting with Part 1: Essential Security Practices.
This week we’re bringing you Part 2 — Securing Your AWS Environment — and in the coming weeks we’ll wrap up with:
- Part 3: Best Practices for Using Security Groups in AWS
- Part 4: AWS Security Best Practices
Securing Your AWS Environment
1. Focus on control.
“While the shared responsibility model lays out that providers should focus on security of the cloud, the reality is that you still need to have the right controls in place. As this TechTarget article puts it, “Controls around logging and identity and access management give customers more granular control and greater insight into workload security.” In other words, while you may trust your cloud provider, access controls are a very good idea because they let you enforce rules and policies that make sense for your unique business. Even if AWS is adhering to industry best practices, there may be areas where it makes sense to tweak the rules to suit your unique situation.
“Even better, the more insight you get via controls, the better off you’ll be when it comes to uncovering and addressing shadow IT (a big security risk) and overall monitoring and management of threats. So if you don’t have proper identity and access management controls in place currently, now is a good time to add this layer of security to your posture.”
— Pete Cheslock, The Real Implications of The Shared Security Model, Threat Stack; Twitter: @threatstack
2. Identify, define, and categorize information assets.
“The first step when opting to implement AWS security best practices is to identify all the information assets that you need to protect (application data, user data, code, applications) and then define an efficient and cost-effective approach for securing them from internal and external threats.
“After that, it is recommended to categorize all the information assets into:
- Essential information assets, such as business related information, internal specific processes and other data from strategic activities.
- Components/elements that support the essential information assets, such as hardware infrastructure, software packages, personnel roster and partnerships.”
3. Control access to AWS IoT resources using your own identity and access management solution.
“AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices by using the Message Queuing Telemetry Transport (MQTT) protocol, HTTP, and the MQTT over the WebSocket protocol. Every connected device must authenticate to AWS IoT, and AWS IoT must authorize all requests to determine if access to the requested operations or resources is allowed. Until now, AWS IoT has supported two kinds of authentication techniques: the Transport Layer Security (TLS) mutual authentication protocol and the AWS Signature Version 4 algorithm. Callers must possess either an X.509 certificate or AWS security credentials to be able to authenticate their calls. The requests are authorized based on the policies attached to the certificate or the AWS security credentials.
“However, many of our customers have their own systems that issue custom authorization tokens to their devices. These systems use different access control mechanisms such as OAuth over JWT or SAML tokens. AWS IoT now supports a custom authorizer to enable you to use custom authorization tokens for access control. You can now use custom tokens to authenticate and authorize HTTPS over the TLS server authentication protocol and MQTT over WebSocket connections to AWS IoT.”
— Ramkishore Bhattacharyya, How to Use Your Own Identity and Access Management Systems to Control Access to AWS IoT Resources, AWS Security Blog; Twitter: @AWSCloud
4. Be mindful about where you store your access keys.
“Never store your access keys and secret keys in EC2 instances or any other cloud storage. If you need to access AWS resources from an EC2 instance, you can always use IAM roles.”
— AWS Security Tips : 16 Things To Do For Securing Your AWS Account, DevOpsCube; Twitter: @devopscube
5. Use AWS security services when integrating additional services or migrating new workloads into your deployment.
“When a dev team deploys a workload in AWS, the cloud provider doesn’t protect that application from all external security threats, such as distributed denial-of-service (DDoS) attacks.
“Even when an AWS infrastructure works properly, external attacks can reduce workload performance or render it unavailable. These types of attacks can stop an IT team in its tracks — not to mention cost a fortune in wasted resources.
“This makes it critical to use AWS security services when you integrate additional services or migrate new workloads into your deployment.”
— Stephen Bigelow, Eight tips to roll a service or app into an AWS deployment, SearchAWS; Twitter: @TechTarget, @Stephen_Bigelow
6. Utilize the AWS Identity and Access Management Tool.
“AWS has an Identity and Access Management tool, also known as AWS IAM, to better manage which users can directly access resources in the cloud. The tool helps keep a check on unauthorized access and identity theft (for example, by ensuring that users’ passwords are changed frequently). Multi-Factor Authentication, or MFA, which is one of the features of the Identity and Access Management tool, is an important practice that enhances the security of data in the cloud. Additionally, access management control, yet another feature of AWS IAM, ensures that EC2 key pairs can access resources only through permitted protocols.”
— Meenakshi Vashisht, AWS Security Challenges – Tips for Effective Management, ISHIR; Twitter: @ISHIR
7. Use multifactor authentication on your root account.
“Your root account has access to all AWS resources, that’s how critical it is. Multifactor authentication helps add additional layers of protection to weed out possibilities of unauthorized access. A safe practice is to have a secured and dedicated device to receive one-time passwords, instead of linking it to a mobile phone. This dedicated device must also be placed in a restricted environment, with automated alerts to help you detect attempts of theft. When you use a mobile phone device for the one-time passwords, there’s a tangible risk of device theft contributing to compromising the security of your root account’s access.
“You can take your AWS security a notch higher by setting up multifactor authentication to delete CloudTrail buckets. This ensures that anybody who’s able to access your AWS account is not able to manipulate the CloudTrail logs to hide their activities.”
— Rahul Sharma, AWS security tips: How to lock down and protect your data, TechGenix.com; Twitter: @TechGenix
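To make the CloudTrail advice above concrete, here’s a minimal sketch (Python, standard library only) of an S3 bucket policy that denies object deletion unless the request was authenticated with MFA. The bucket name is a placeholder for illustration; attach the resulting JSON to your log bucket as its bucket policy.

```python
import json

def mfa_delete_policy(bucket_name):
    """Build an S3 bucket policy that denies s3:DeleteObject
    unless the caller authenticated with MFA."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyDeleteWithoutMFA",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:DeleteObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            # BoolIfExists also denies requests where the MFA key is absent
            "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
        }],
    }

# Hypothetical bucket name used for illustration
print(json.dumps(mfa_delete_policy("my-cloudtrail-logs"), indent=2))
```

Note that a Deny statement always wins over any Allow, so even an administrator with full S3 access cannot delete log objects without an MFA-authenticated session.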
8. Encrypt your Amazon relational database services.
“Tackle some of the most common security missteps made when companies make the shift to AWS. This includes encrypting your Amazon relational database services (RDS) if they’re not already encrypted at the storage level; AWS provides RDS encryption to ensure data at rest is not at risk. In many cases, this also fulfills corporate compliance requirements such as those mandated by HIPAA or PCI DSS.
“It’s also a good idea to rotate IAM keys for users every three months to ensure old keys aren’t being used to access high-level services. Finally, opt for written access policies over S3 bucket permissions; the List access function, for example, can cause cost spikes if users who don’t need the function are listing objects at high frequency.”
9. Name your EC2 instances logically.
“Naming (tagging) your EC2 instances logically and consistently has several advantages: it provides additional information about an instance’s location and usage, promotes consistency within the selected environment, quickly distinguishes similar resources from one another, improves clarity in cases of potential ambiguity, and classifies instances accurately as compute resources for easy management and billing.”
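A small helper can enforce a convention like this mechanically. The `environment-application-role-index` scheme below is one possible convention, not an AWS requirement; the point is that names are generated, never typed by hand.

```python
def instance_name(env, app, role, index):
    """Compose a consistent Name tag value, e.g. 'prod-billing-web-01'.

    Rejects parts that would break the hyphen-delimited scheme.
    """
    for part in (env, app, role):
        if not part.isalnum():
            raise ValueError(f"name parts must be alphanumeric: {part!r}")
    return f"{env}-{app}-{role}-{index:02d}".lower()

# Tags in the shape the EC2 CreateTags API expects
tags = [{"Key": "Name", "Value": instance_name("prod", "billing", "web", 1)}]
```

Generating names this way keeps billing reports and instance lists sortable and makes it obvious when a resource was created outside the convention.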
10. Tagging can help you manage resources at scale.
“Tagging is an effective tool to help manage AWS resources at increasing scale, providing the ability to identify, classify and locate resources for management and billing purposes.
“Amazon EC2 filtering provides a way to both locate tagged resources and validate that the tagging standards in your organization are being properly implemented. Naming best practices can be leveraged to achieve consistency across your environment and maximize the benefits that tagging has to offer.”
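The tag-based filtering described above uses the `Filters` parameter of the EC2 DescribeInstances API, where each tag filter takes the form `tag:<key>`. Here’s a small sketch that builds that structure; the tag keys and values are hypothetical.

```python
def tag_filters(**tags):
    """Build the Filters parameter for EC2 DescribeInstances,
    matching instances by tag key and value(s)."""
    return [
        {"Name": f"tag:{key}", "Values": values if isinstance(values, list) else [values]}
        for key, values in tags.items()
    ]

# e.g. ec2.describe_instances(Filters=tag_filters(Environment="prod"))
filters = tag_filters(Environment="prod", Team=["payments", "billing"])
```

Running this filter periodically and comparing the result against all instances is a cheap way to validate that your tagging standard is actually being followed.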
11. Use automated tools to help manage resource tags.
“Implement automated tools to help manage resource tags. The Resource Groups Tagging API enables programmatic control of tags, making it easier to automatically manage, search, and filter tags and resources. It also simplifies backups of tag data across all supported services with a single API call per AWS Region.”
12. Your naming, security, and deployment conventions should be easily understood across your dev and infrastructure teams.
“The challenge in organizing each resource, and how it relates to other AWS resources, is making sure that each resource can be reused by the next AWS engineer on your team. A good approach is to create naming, security, and deployment conventions that can easily be understood across the development and infrastructure teams.”
— Kenichi Shibata, AWS Primer on Best Practice in Resource Tagging, Hacker Noon;
13. Use MFA for bucket deletion, and restrict access to CloudTrail bucket logs.
“Unrestricted access, even to administrators, increases the risk of unauthorized access in case of stolen credentials due to a phishing attack. If the AWS account becomes compromised, multifactor authentication will make it more difficult for hackers to hide their trail.”
— How to Secure Your Information on AWS: 10 Best Practices, Tripwire; Twitter: @TripwireInc
14. Keep instances off when they’re not in use.
“Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but also provides security and protection. Leaving servers and databases on 24/7 is just asking for someone to try to break in and connect to servers within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things. By aggressively scheduling your resources to be off as much as possible, you minimize the opportunity for outside attacks on those servers.”
15. Grants are a more flexible way to control access to CMKs in KMS.
“Key Policies are the primary way to control access to customer master keys (CMKs) in KMS. On top of that, you can use IAM policies to authorize access. The second way to control access is Grants. With a Grant, you can allow another AWS principal (e.g., an AWS account) to use a CMK with some restrictions. You could also implement this with the key policy, but Grants offer more flexible control.”
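As a sketch of what a Grant looks like in practice, the helper below assembles the parameters for the KMS CreateGrant API call: which key, which grantee principal, which operations, and an optional encryption-context constraint. The validation list covers only a handful of common operations for illustration; KMS supports more, and the ARNs shown are placeholders.

```python
def grant_request(key_id, grantee_arn, operations, encryption_context=None):
    """Build the parameters for kms.create_grant(): allow grantee_arn
    to use the CMK for the listed operations only."""
    # A few common grant operations, for illustration; KMS defines others too
    allowed = {"Encrypt", "Decrypt", "GenerateDataKey",
               "DescribeKey", "ReEncryptFrom", "ReEncryptTo"}
    if not set(operations) <= allowed:
        raise ValueError(f"unsupported operations: {set(operations) - allowed}")
    params = {
        "KeyId": key_id,
        "GranteePrincipal": grantee_arn,
        "Operations": list(operations),
    }
    if encryption_context:
        # Restrict the grant to requests carrying this encryption context
        params["Constraints"] = {"EncryptionContextSubset": encryption_context}
    return params

req = grant_request(
    "1234abcd-12ab-34cd-56ef-1234567890ab",          # placeholder key ID
    "arn:aws:iam::111122223333:role/app-role",        # placeholder grantee
    ["Encrypt", "Decrypt"],
    encryption_context={"app": "billing"},
)
```

The encryption-context constraint is what makes Grants more flexible than key policies alone: the grantee can use the key only for requests tagged with that context.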
16. Limit access to S3 buckets to trusted administrators.
“Data stored in S3 buckets is secure by default. Through Identity and Access Management (IAM) policies, bucket policies, and Access Control Lists, users can control exactly who can access S3 buckets. Authenticating identity and restricting access may seem like common-sense best practices; however, these actions are often overlooked. You should limit S3 bucket access to trusted administrators and audit their permissions frequently. Know who your vendors are and thoroughly examine their permissions as well. Companies frequently allow vendors access to vulnerable areas of their network.”
— Ryan O’Donnell, 5 Ways to Avoid Cloud Creepers, Relus Technologies;
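One common way to express “trusted administrators only” is a Deny statement with `NotPrincipal`, which blocks every identity except the ones listed. Here’s a hedged sketch; the bucket name and ARN are placeholders, and you’d still pair this with least-privilege IAM policies.

```python
def admin_only_policy(bucket, admin_arns):
    """Bucket policy denying all S3 actions to everyone except the
    listed administrator principals (the NotPrincipal pattern)."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowOnlyTrustedAdmins",
            "Effect": "Deny",
            "NotPrincipal": {"AWS": sorted(admin_arns)},
            "Action": "s3:*",
            # Cover both the bucket itself and the objects inside it
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }],
    }

policy = admin_only_policy(
    "sensitive-data",                                      # placeholder bucket
    ["arn:aws:iam::111122223333:role/storage-admin"],      # placeholder admin
)
```

Be careful with this pattern: if the listed principals are deleted or mistyped, you can lock everyone out, so test it on a non-critical bucket first.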
17. Allow instances to communicate only for the TCP/UDP ports required.
“EC2 instances need to communicate with each other, but that communication should be allowed only on the TCP/UDP ports that are required. Therefore, it’s recommended to configure Security Groups as virtual firewalls to allow and deny traffic to or from instances. This is the best way to protect instances, or groups of instances, because instances in one group will not communicate with instances in another group unless we allow it explicitly. As you can see, a network perimeter firewall is no longer enough to allow and deny traffic between networks; we increasingly need firewalls that protect virtual machines from other virtual machines, even when they are in the same subnet.”
— David Romero Trejo, AWS Security Best Practices, David Romero Trejo
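The EC2 API expresses these rules as `IpPermissions` entries passed to AuthorizeSecurityGroupIngress. The sketch below builds single-port rules whose source is either a CIDR range or another security group (which is how you get the group-to-group isolation described above); the security group ID is a placeholder.

```python
def ingress_rule(protocol, port, source):
    """One IpPermissions entry for EC2 AuthorizeSecurityGroupIngress:
    open a single TCP/UDP port to a CIDR range or to another security group."""
    rule = {"IpProtocol": protocol, "FromPort": port, "ToPort": port}
    if source.startswith("sg-"):
        # Referencing a security group allows traffic only from its members
        rule["UserIdGroupPairs"] = [{"GroupId": source}]
    else:
        rule["IpRanges"] = [{"CidrIp": source}]
    return rule

permissions = [
    ingress_rule("tcp", 443, "0.0.0.0/0"),    # HTTPS from anywhere
    ingress_rule("tcp", 5432, "sg-0abc123"),  # Postgres only from the app tier SG
]
```

Referencing security groups instead of IP ranges is usually the better choice inside a VPC, because the rule keeps working as instances come and go.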
18. Set alarms on billing to aid in DDoS attack detection.
“Set alarms on billing using Amazon CloudWatch. This practice can be very useful for detecting DDoS attacks and high data transfer occurrences.”
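As a sketch, here are the parameters for a CloudWatch PutMetricAlarm call on the `EstimatedCharges` metric, which AWS publishes in us-east-1 once billing alerts are enabled. The SNS topic ARN is a placeholder; the threshold and alarm name are up to you.

```python
def billing_alarm(threshold_usd, topic_arn):
    """Parameters for cloudwatch.put_metric_alarm() that fire when
    the month-to-date estimated bill exceeds threshold_usd."""
    return {
        "AlarmName": f"billing-over-{threshold_usd}-usd",
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,          # 6 hours; billing data updates a few times a day
        "EvaluationPeriods": 1,
        "Threshold": float(threshold_usd),
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # placeholder SNS topic for notifications
    }

params = billing_alarm(500, "arn:aws:sns:us-east-1:111122223333:billing-alerts")
```

A sudden spike in `EstimatedCharges` is often the first visible symptom of a DDoS attack or a compromised key spinning up resources, which is why this alarm doubles as a security control.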
19. Use security groups.
“AWS Security Groups act as a virtual firewall, allowing you to control inbound and outbound traffic. Use AWS Security Groups to limit access to administrative services (SSH, RDP, etc.) as well as databases.
“In addition, try to restrict access and allow only certain network ranges when possible. It is also important to monitor and delete security groups that are not being used and to audit them periodically.”
— Nick Ismail, 10 tips for securing AWS public cloud environments, Information Age; Twitter: @InformationAge
20. Place virtual firewalls on every virtual network created.
“Instead of just having a firewall at the edge of the infrastructure, place virtual firewalls (available in the AWS Marketplace) on each virtual network that is created.”
— Brandon Butler, 5 Amazon Web Services security tips for businesses | How to secure Amazon Web Services: AWS security tips, Network World; Twitter: @computerworlduk
21. Assign IAM roles to EC2 instances.
“IAM roles can be used to define permission levels for different resources and applications that run on EC2 instances. When you launch an EC2 instance, you can assign an IAM role to it, eliminating the need for your applications to use AWS credentials to make API requests. This is one of the best tools when it comes to security in AWS. First of all, IAM roles can be very granular; you can control access at a resource level and for actions that can be performed. And when using IAM roles, if your EC2 instance gets compromised, you do not need to revoke credentials.”
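For a role to be assignable to EC2 instances, its trust policy must name the EC2 service as the principal allowed to assume it. This document is standard; only the role it gets attached to is yours to define.

```python
import json

# Trust policy letting the EC2 service assume the role on behalf of
# instances launched with it (via an instance profile)
EC2_TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

print(json.dumps(EC2_TRUST_POLICY, indent=2))
```

With the role in place, the AWS SDKs on the instance pick up temporary credentials automatically from the instance metadata service, so application code never needs embedded keys.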
22. Not everyone needs to be an admin.
“Access keys and user access control are integral to AWS security. It may be tempting to give developers administrator rights to handle certain tasks, but you shouldn’t. Not everyone needs to be an admin, and there’s no reason why policies can’t handle most situations. Saviynt’s research found that 35 percent of privileged users in AWS have full access to a wide variety of services, including the ability to bring down the whole customer AWS environment. Another common mistake is leaving high-privilege AWS accounts turned on for terminated users, Saviynt found.
“Administrators often fail to set up thorough policies for a variety of user scenarios, instead choosing to make them so broad that they lose their effectiveness. Applying policies and roles to restrict access reduces your attack surface, as it eliminates the possibility of the entire AWS environment being compromised because a key was exposed, account credentials were stolen, or someone on your team made a configuration error.”
23. Set MFA on your root account with a hard token, rather than a soft token.
“Activate MFA on your root account. This should be set with a hard token, rather than a soft token. Once set, the hard token should be stored in a secure place, like buried in the backyard or an office safe. I’ve seen teams set the MFA with a soft token, then store the token in a password manager next to the password. While this is convenient, storing both the password and the second factor together is not a great strategy; if your password safe is compromised, the second factor is not effective.
“It’s also good practice to have a few extra hard tokens in the office in case one breaks or you need to create a new account.”
— A best practice guide to getting your Enterprise AWS Account Setup, Stax.io; Twitter: @staxapp
24. Avoid using overlapping CIDRs.
“The first best practice is to organize your AWS environment. We recommend that you use tags. As you continue to add instances and create route tables and subnets, it’s nice to know what connects with what, and the simple use of tags will make life so much easier when it comes to troubleshooting. Make sure you plan your CIDR block very carefully. We would suggest that you go a little bit bigger than you think you need, not smaller.
“Remember that for every subnet you create, AWS reserves five of its IP addresses. So when you create a subnet, know that off the top there’s a five-IP overhead. Avoid using overlapping CIDR blocks, because at some point, perhaps not today but down the road, you may want to peer this VPC with another VPC, and if you have overlapping CIDR blocks, VPC peering will not function correctly and you’re going to find yourself in a configuration nightmare trying to get those VPCs to peer.
“Try to avoid using overlapping CIDRs, and always save a little bit of space for future expansion. There’s no cost associated here with using a bigger CIDR block, so don’t undersize what you think you may need from an IP’s perspective just to try to make it clean and easy.”
— Taran Soodan, Best Practices Learned from 1,000 AWS VPC Configurations, SoftNAS; Twitter: @SoftNAS
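The overlap check is easy to automate with Python’s standard `ipaddress` module, so you can validate a set of VPC CIDR blocks before any peering attempt:

```python
import ipaddress

def find_overlaps(cidrs):
    """Return every pair of CIDR blocks that overlap.

    VPC peering fails between VPCs with overlapping CIDRs,
    so this should return an empty list for a healthy plan.
    """
    nets = [ipaddress.ip_network(c) for c in cidrs]
    return [
        (str(a), str(b))
        for i, a in enumerate(nets)
        for b in nets[i + 1:]
        if a.overlaps(b)
    ]

# 10.0.128.0/17 sits entirely inside 10.0.0.0/16, so that pair is flagged
print(find_overlaps(["10.0.0.0/16", "10.0.128.0/17", "172.16.0.0/16"]))
```

Running this against every VPC in the organization (plus on-premises ranges) during planning is far cheaper than renumbering a VPC later.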
25. If your server infrastructure uses more than one server, you should be using a VPC.
“Amazon Virtual Private Cloud (VPC) is a networking feature of EC2 that allows you to define a private network for a group of servers. Using it greatly simplifies fencing off components of your infrastructure and minimizing the externally facing pieces.
“The basic idea is to separate your infrastructure into two halves, a public half and a private half. The external endpoints for whatever you are creating go in the public half. For a web application this would be your web server or load balancer.
“Services that are only consumed internally such as databases or caching servers belong in the private half. Components in the private half are not directly accessible from the public internet.
“This is a form of the principle of least privilege and it’s a good idea to implement it. If your server infrastructure involves more than one server, then you probably should be using a VPC.”
— Sehrope Sarkuni, AWS Tips, Tricks, and Techniques, Launch by Lunch
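A public/private split like the one described above can be sketched with the standard `ipaddress` module. The helper below carves a VPC block into equal-sized subnets, one public/private pair per availability zone; the /20 sizing and two-AZ layout are illustrative choices, not requirements.

```python
import ipaddress

def split_vpc(vpc_cidr, az_count=2):
    """Carve a VPC CIDR into equal public and private subnets,
    one pair per availability zone, leaving the rest unallocated
    for future growth."""
    vpc = ipaddress.ip_network(vpc_cidr)
    # Add 4 bits to the prefix: a /16 VPC yields sixteen /20 subnets
    subnets = list(vpc.subnets(new_prefix=vpc.prefixlen + 4))
    return {
        "public": [str(s) for s in subnets[:az_count]],
        "private": [str(s) for s in subnets[az_count:2 * az_count]],
    }

plan = split_vpc("10.0.0.0/16", az_count=2)
```

Public subnets would get a route to an internet gateway; private subnets would route outbound traffic through a NAT gateway, keeping databases and caches unreachable from the internet.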