AWS launches new security offering that mitigates S3 misconfigurations – if customers get it right
Amazon Web Services (AWS) has announced extra steps to ensure customers’ S3 buckets don’t become misconfigured – but don’t assume responsibility has been taken away from the customer.
The new service, Amazon S3 Block Public Access, works at the account level or on individual buckets, and can also apply to buckets created in the future. Users can block existing public access, or ensure public access is never granted to newly created items.
The move can be seen as an extension of the access controls users already have on S3 buckets, through either Access Control Lists (ACLs) or identity and access management (IAM) bucket policies. There is no extra charge for the feature beyond the usual prices for requests made to the S3 API.
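The service boils down to four boolean settings exposed through a new API operation. As a minimal sketch (assuming the boto3 SDK; the bucket name is a placeholder), blocking all public access on a single bucket looks like this:

```python
# The four Block Public Access settings; each maps to a checkbox in the
# S3 console and a field in the PutPublicAccessBlock API.
BLOCK_ALL = {
    "BlockPublicAcls": True,        # reject requests that attach a public ACL
    "IgnorePublicAcls": True,       # treat any existing public ACLs as private
    "BlockPublicPolicy": True,      # reject bucket policies granting public access
    "RestrictPublicBuckets": True,  # limit access to buckets with public policies
}


def block_public_access(bucket_name, config=BLOCK_ALL):
    """Apply a Block Public Access configuration to a single bucket."""
    # boto3 is imported lazily so the settings above can be inspected
    # without the SDK installed or AWS credentials configured.
    import boto3

    boto3.client("s3").put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration=config,
    )


# Requires valid AWS credentials; "my-bucket" is a placeholder name.
# block_public_access("my-bucket")
```

The account-wide version of the same setting goes through the separate S3 Control API (`put_public_access_block` on an `s3control` client, keyed by account ID) rather than the per-bucket call shown here.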
As Jeff Barr, chief evangelist for Amazon Web Services, put it in a blog post explaining the new system: “We want to make sure that you use public buckets and objects as needed, while giving you tools to make sure that you don’t make them publicly accessible due to a simple mistake or misunderstanding.”
This has been a long-term problem for both AWS and its customers. The model of shared responsibility states that the provider is liable for security ‘of’ the cloud, such as infrastructure, while the customer is responsible for security ‘in’ the cloud – in other words, ensuring data is properly configured.
A series of high-profile breaches – including at Verizon, Accenture and Booz Allen Hamilton – has kept the issue in the spotlight. Last month, research from cloud access security broker (CASB) Netskope argued the majority of Center for Internet Security (CIS) benchmark violations found in AWS environments fell under the IAM remit.
AWS has taken steps previously to make the issue more visible – literally. This time last year the company revamped its design to give bright orange warning indicators as to which buckets were public. Yet the message of personal and organisational responsibility still needs to be hammered home.
In April, CloudTech published two articles exploring S3 security as part of its monthly topic focusing on the subject. Doug Hazelman, vice president of technical marketing at backup service provider CloudBerry, argued there were no excuses for errors of this nature.
“By virtue of having a service readable and writeable from anywhere in the world, this sort of [attack] is bound to happen, one might say. But that is not true: even the lowest functionality devices, such as sensors, can be configured to authenticate via a put request to an S3 bucket,” Hazelman wrote.
“Put simply: this shouldn’t happen. There is no reason to have a world-readable and world-writeable S3 bucket,” he added. “Preventing this type of lift of private data requires making sure one simple setting is configured as is the default when setting up a new Amazon S3 instance.
“To be honest, it is beyond me why projects make it into production with this setting at anything but its secure default, but too many breaches – and it’s a stretch to call them breaches because accessing the data is essentially as simple as browsing to a public website – have shown that for whatever reason, companies are not being careful enough in their S3 configurations.”
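The "world-readable and world-writeable" condition Hazelman describes shows up in a bucket's ACL as grants to two well-known group URIs. As a hedged sketch (assuming the boto3 SDK; `audit_bucket` and `public_grants` are illustrative helper names, not AWS APIs), an audit for such grants might look like this:

```python
# The two well-known grantee URIs that make an S3 bucket public:
# AllUsers means anyone on the internet; AuthenticatedUsers means
# anyone with any AWS account.
PUBLIC_GRANTEE_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}


def public_grants(grants):
    """Return the subset of ACL grants that expose a bucket publicly."""
    return [
        g for g in grants
        if g.get("Grantee", {}).get("Type") == "Group"
        and g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEE_URIS
    ]


def audit_bucket(bucket_name):
    """Fetch a bucket's ACL and return any public grants it contains."""
    # Requires valid AWS credentials; boto3 is imported lazily so the
    # pure helper above can be tested without the SDK installed.
    import boto3

    acl = boto3.client("s3").get_bucket_acl(Bucket=bucket_name)
    return public_grants(acl["Grants"])
```

An empty result from `audit_bucket` means the ACL grants nothing to the public groups – which, as Hazelman notes, is the secure default a new bucket starts with.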
Micah Montgomery, cloud services architect at cybersecurity firm Mosaic451, cited a lack of understanding of the cloud's complexity as a concern.
“The ease of using AWS or other cloud environments can make it easy to forget just how complex the cloud is,” he wrote. “This complexity is why the cloud is so visible, but it also decreases visibility. In many cases, AWS breaches happen because organisations have non-IT personnel, or IT personnel who do not fully understand the cloud, configuring their AWS buckets.
“In a general IT environment, there is a management console for every area and tool,” Montgomery added. “Once you add a cloud environment, you add another management console. There are already hundreds of ways to screw things up in an on-premises data environment. The cloud adds yet another layer of complexity, and organisations must understand how it will impact their overall cybersecurity.”
With this latest update, AWS is giving customers even more ways to get it right – but bear in mind it cannot hold their hands every step of the way. Read the full blog post here.
Interested in hearing industry leaders discuss subjects like this and sharing their experiences and use-cases? Attend the Cyber Security & Cloud Expo World Series with upcoming events in Silicon Valley, London and Amsterdam to learn more.