The spotlight falls on data centre resiliency as data keeps growing

Adrian Barker is General Manager EMEA at RF Code, a provider of end-to-end data center asset tracking, asset lifecycle management and environmental solutions. Adrian has worked for RF Code since 2011.


It’s hard to imagine the sheer scale of data created every day. In just one second, 747 Instagram photos are uploaded, 7,380 tweets are sent, and 38,864 GB of traffic is processed across the internet. That equates to 2.5 quintillion bytes of data every single day. With IoT connections booming, this figure is expected to keep growing exponentially. Almost all of this data passes through a data centre at some point.

Businesses are handing the hosting and management of their infrastructure and systems software to third-party data centre operators, a move that has enabled companies of all sizes to become more agile and cost-conscious.

This phenomenon is projected to grow, with 86% of workloads expected to be processed by cloud data centres by 2019, and only 14% by traditional data centres. Perhaps even more striking is that the same forecast indicates 83% of data centre traffic will be cloud traffic in the next three years.

This explosion in data, cloud applications, services and infrastructure has brought about a change in data centre usage, which in turn has demanded a change in physical facilities.

It is essential that four features are woven into the design and functionality of every data centre: scalability, availability, resiliency and security. Outsourced data centre owners must be able to handle a surge in demand; without adequate capacity and environmental monitoring, servers can quickly become overworked and cause outages.

In addition, data centres have to demonstrate resiliency in order to reassure their customers. Corporate enterprises, particularly those who have migrated to hybrid environments, live in fear of an outage and the resulting impact on costs and reputation. And with good reason.

Downtime damage

In autumn 2015 a data centre owned by Fujitsu suffered a power outage that took down a number of cloud services. This was not a short-lived problem: the effects persisted for some time, affecting customers on the Fujitsu public cloud and its private hosted cloud, as well as other infrastructure services.

As the Fujitsu incident shows, data centre and service availability can be disrupted in many ways. Power supply failure is one of the biggest causes, along with cyber-attacks, but data centres can also be affected by overheating if efficient cooling is not in place, or even by extreme weather. Examples vary from the mundane to the unbelievably absurd.

Despite the risks, few of these scenarios actually have to result in downtime if there is a good understanding of the data centre environment, a suitable level of real-time operational intelligence, and procedures in place to identify issues before they can lead to disaster or failure.

Sophisticated solutions are available to provide real-time insight, control and predictability that help data centre managers to deal with environmental and operational challenges. Environmental conditions can be monitored constantly for any potential issues, and assets tracked and managed to maintain their performance and guard against technical breakdown.

As data continues to grow and cloud traffic increases, utilising intuitive insight and fit-for-purpose tools such as those described above will help data centres and their operators to maintain resilience, ensure uptime and support their customers as they move away from internally managed IT estates.
