Docker security: How to monitor and patch containers in the cloud
Nearly a quarter of enterprises are already using Docker and an additional 35% plan to use it. Even sceptical IT executives are calling it the future. Among the first questions enterprises ask about containers: what is the security model, and what does containerisation mean for your existing infrastructure security tools and processes?
The truth is that many of your current tools and processes will have to change. Often your existing tools and processes are not “aware” of containers, so you must apply creative alternatives to meet your internal security standards. The good news is that these challenges are by no means insurmountable for companies that are eager to containerise.
Monitoring & IDS
The most important impact of Docker containers on infrastructure security is that most of your existing security tools — monitoring, intrusion detection, etc. — are not natively aware of sub-virtual machine components, i.e. containers. Most monitoring tools on the market are only just beginning to offer visibility into transient instances in public clouds, and are far from offering functionality to monitor sub-VM entities.
In most cases, you can satisfy this requirement by installing your monitoring and IDS tools on the virtual instances that host your containers. This will mean that logs are organised by instance, not by container, task, or cluster. If IDS is required for compliance, this is currently the best way to satisfy that requirement.
Key takeaway: Consider installing monitoring and security tools on the host, not the container.
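One complement to host-based agents is to route container output into the host's own logging pipeline, where host-installed monitoring and IDS tools can already see it. As a minimal sketch, Docker's daemon configuration (`/etc/docker/daemon.json`) can set a default log driver that forwards every container's stdout/stderr to the host's syslog; the syslog address shown is a placeholder for your own collector:

```json
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://localhost:514",
    "tag": "docker/{{.Name}}"
  }
}
```

With this in place, logs still arrive organised by host instance, as described above, but each entry is tagged with the container name, which makes per-container triage somewhat easier.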
Incident forensics and response
Every security team has developed a runbook or incident response plan that outlines what actions to take in the case of an incident or attack. Integrating Docker into this response process requires a significant adjustment to existing procedures and involves educating and coordinating GRC teams, security teams, and development teams.
Traditionally, if your IDS picks up a scan matching the fingerprint of a known security attack, the first step is usually to look at how traffic is flowing through the environment. Containers complicate this: by design they abstract away the host, inter-container traffic is harder to trace, and because a container's process state and writable layer disappear when it is destroyed, you cannot simply leave a machine up to see what is in memory. This can make it more difficult to identify the source of the alert and the data potentially accessed.
The use of containers is not yet well understood by the broader infosec and auditor community, which poses potential audit and financial risk. Chances are that you will have to explain Docker to your QSA — and you will find few external parties that can help you build a well-tested, auditable Docker-based system.
That said, the least risk-averse companies are already experimenting with Docker and this knowledge will trickle down into risk-averse and compliance-focused enterprises within the next year. Logicworks has already helped PCI-compliant retailers implement Docker and enterprises are very keen to try Docker in non-production or non-compliance-driven environments.
Key takeaway: Before you implement Docker on a broad scale, talk to your GRC team about the implications of containerisation for incident response and work to develop new runbooks. Or try Docker in a non-compliance-driven or non-production workload first.
Patching
In a traditional virtualised or AWS environment, security patches are installed independently of application code. The patching process can be partially automated with configuration management tools, so if you are running VMs in AWS or elsewhere, you can update the Puppet manifest or Chef recipe and “force” that configuration to all your instances from a central hub.
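In the VM world, that centrally forced configuration might look like the following hypothetical Puppet manifest fragment, which pins a package to a patched version so every agent run converges the fleet (the package name and version here are illustrative, not a recommendation):

```puppet
# Hypothetical: pin OpenSSL to a patched release across all managed VMs.
# Each node's Puppet agent will upgrade the package on its next run.
package { 'openssl':
  ensure => '1.0.1g-1',
}
```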
A Docker image has two components: the base image and the application image. To patch a containerised system, you must update the base image and then rebuild the application image. So in the case of a vulnerability like Heartbleed, if you want to ensure that the patched version of OpenSSL is on every container, you would update the base image and recreate the container in line with your typical deployment procedures. A sophisticated deployment automation process (which is likely already in place if you are containerised) would make this fairly simple.
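To make the base image / application image split concrete, here is a minimal, hypothetical Dockerfile. The `FROM` line is the base image, where system packages such as OpenSSL live; everything after it forms the application layers. When the vendor publishes a patched base, rebuilding with `docker build --pull` fetches the refreshed base and re-creates the application image on top of it:

```dockerfile
# Base image: the OS layer, where system packages like OpenSSL live.
# Patching starts here — the vendor publishes an updated tag.
FROM ubuntu:14.04

# Application image: your code and app-level dependencies,
# layered on top of the base.
COPY . /app
RUN make -C /app
CMD ["/app/run"]
```

Rebuilding (`docker build --pull -t myapp:patched .`) and redeploying through your normal pipeline then replaces every running container with a patched one; the image name and build steps above are illustrative.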
One of the most promising features of Docker is the degree to which application dependencies are coupled with the application itself, offering the potential to patch the system whenever the application is updated, i.e., frequently and potentially less painfully. But somewhat counterintuitively, Docker also draws a bright line between systems and development teams: systems teams support the infrastructure and the compute clusters and patch the virtual instances; development teams support the containers. If you are trying to get to a place where your development and systems teams work closely together and responsibilities are clear, this is an attractive feature. If you are using a Managed Service Provider (like Logicworks), there is a clear delineation between internal and external teams’ responsibilities.
Key takeaway: To implement a patch, update the base image and then rebuild the application image. This requires systems and development teams to work closely together, with clear responsibilities.
Almost ready for prime time
If you are eager to implement Docker and are ready to take on a certain amount of risk, then the methods described here can help you monitor and patch containerised systems. At Logicworks, this is how we manage containerised systems for enterprise clients every day.
As AWS and Azure continue to evolve their container support and more independent software vendors enter the space, expect these “canonical” Docker security methods to change rapidly. Nine months from now, or even three, a tool could emerge that automates much of what is manual or complex in Docker security.
When enterprises are this excited about a new technology, chances are that a whole new industry will follow.
The post Docker Security: How to Monitor and Patch Containers in the Cloud appeared first on Logicworks Gathering Clouds.