WannaCry and the public cloud: The CISO perspective
By Matthew Sharp, CISO, Logicworks
I recently attended a CISO Executive Summit here in NYC. The room was packed with 175 CISOs and top-level security leaders from various industries. There was broad agreement that WannaCry was a scramble for many of their teams, and created a long weekend for some. We concurred that we were lucky the “kill switch” was triggered, and we soberly recognised that the exploit is being redeployed with newly weaponised malware.
The consensus among CISOs is that some key processes were tested, and those with critical structures in place fared much better than those with less mature programs. At the same time, the incident highlighted the benefits of public cloud computing – and the need to apply automation in order to respond quickly and proactively to threats.
Implementing a strategy to protect against and respond to attacks like these goes beyond patching: it extends to automating provisioning in support of continuous integration / continuous delivery (CI/CD) pipelines, and adopting the tenets of immutable infrastructure. When your infrastructure is designed to operate like a piece of software, you can reduce or eliminate the time it takes to respond to events such as WannaCry. We have found AWS indispensable in that regard.
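To make the immutable-infrastructure idea concrete, here is a minimal, illustrative sketch (the `Instance` type, `roll_fleet` function, and image names are hypothetical, not any particular cloud API): rather than patching running hosts in place, you build a freshly patched machine image and replace the fleet with instances launched from it.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Instance:
    image_id: str     # the machine image the instance was launched from
    instance_id: str

def roll_fleet(fleet: list[Instance], patched_image: str) -> list[Instance]:
    """Immutable-infrastructure response: never patch a running host.

    Instead, launch replacements from a freshly built, patched image
    and retire the old instances. Returns the replacement fleet.
    """
    return [
        Instance(image_id=patched_image, instance_id=f"{inst.instance_id}-v2")
        for inst in fleet
    ]

fleet = [Instance("ami-old", "web-1"), Instance("ami-old", "web-2")]
patched = roll_fleet(fleet, "ami-patched")
# every replacement now runs the patched image
assert all(i.image_id == "ami-patched" for i in patched)
```

In practice a pipeline tool performs the build-and-replace cycle, but the principle is the same: the patch lands in the image, and deployment is the remediation step.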
In the best case, clients have a defence in depth strategy with strong endpoint technologies employing artificial intelligence, machine learning, statistical analysis or other buzz-wordy endpoint mitigation technologies.
This is then combined with the abstraction layer afforded by public cloud providers, which enables disciplined automation, often driven via Infrastructure as Code (IaC) and purposeful orchestration. The powerful result is that clients can precisely define the intended state of every environment and verify that dev, stage, test, and prod remain congruent. By doing so, they accelerate their ability to deploy micro changes, in addition to patches and configuration updates, while understanding and mitigating many of the risks associated with change.
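The congruence check described above can be sketched simply: compare each environment's declared state against a baseline and report any drift. This toy example uses plain dictionaries; the `drift` function and the configuration keys (`smb_v1`, `patch_level`) are illustrative assumptions, standing in for what an IaC or configuration-management tool would export.

```python
def drift(envs: dict[str, dict[str, str]], baseline: str) -> dict:
    """Report keys where each environment diverges from the baseline.

    Returns {env_name: {key: (env_value, baseline_value), ...}, ...}
    for environments that differ; congruent environments are omitted.
    """
    base = envs[baseline]
    report = {}
    for name, cfg in envs.items():
        if name == baseline:
            continue
        diffs = {
            k: (cfg.get(k), base.get(k))
            for k in set(cfg) | set(base)
            if cfg.get(k) != base.get(k)
        }
        if diffs:
            report[name] = diffs
    return report

envs = {
    "prod":  {"smb_v1": "disabled", "patch_level": "2017-03"},
    "stage": {"smb_v1": "disabled", "patch_level": "2017-03"},
    "dev":   {"smb_v1": "enabled",  "patch_level": "2017-01"},
}
# stage is congruent with prod; dev diverges on both keys
print(drift(envs, "prod"))
```

A drift report like this is exactly what lets a team assert, with evidence, that what is running in prod is what was tested in stage.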
This year’s DevOps report again confirms that DevOps practices lead to better IT and organizational performance. High-performing IT departments achieve superior speed and reliability relative to lower-performing peers. The 2015 survey showed that high-performing teams deploy code 30 times more often and with 200 times shorter lead times than their peers. And they achieve this velocity and frequency without compromising reliability — in fact, they improve it. High-performing teams experience 60 times fewer failures.
In the case of WannaCry, the malware exploited a critical SMB remote code execution vulnerability for which Microsoft had already released a patch (MS17-010) in mid-March.
For clients already taking advantage of agile operations and leveraging public cloud technologies, their environments were unaffected because the patches had been applied months earlier. Had it been a zero-day exploit, teams would still have had to scramble to patch, but the ability to roll out configuration changes efficiently would have spared them the long weekend.
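Confirming that an estate is covered against a known bulletin is itself an automatable check. A minimal sketch, assuming a host-to-installed-patches inventory such as a CMDB or configuration-management tool might export (the `unpatched_hosts` function and host names are hypothetical):

```python
REQUIRED_PATCH = "MS17-010"  # the March 2017 SMB fix that WannaCry exploited

def unpatched_hosts(inventory: dict[str, set[str]],
                    patch: str = REQUIRED_PATCH) -> list[str]:
    """List hosts whose installed-patch set is missing the required patch."""
    return sorted(h for h, patches in inventory.items() if patch not in patches)

inventory = {
    "app-01": {"MS17-010", "MS17-006"},
    "app-02": {"MS17-006"},
    "db-01":  {"MS17-010"},
}
# only app-02 still needs remediation
assert unpatched_hosts(inventory) == ["app-02"]
```

Running a check like this on a schedule turns "are we patched?" from a weekend fire drill into a routine report.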