Four nines and failure rates: Will the cloud ever cut it for transactional banking?
Banks looking to take their cloud plans to the next level are likely to have returned to the drawing board following the latest Amazon Web Services outage, which disrupted the online activities of major organizations from Apple to the US Securities and Exchange Commission. One estimate suggests US financial services companies alone lost $160 million – in just four hours. It’s been a timely reminder that any downtime is too much in an always-on digital economy, certainly for financial services.
The sobering point is that AWS was still delivering within the terms of its service-level agreement (SLA). This promises 99.99% service and data availability (otherwise known as “four nines” availability). This may be good enough for a lot of things, but it won’t do for banking.
Over a year, that 0.01% allowance for unavailability equates to almost an hour of unplanned outage (roughly 53 minutes) – and that's on top of any planned downtime for maintenance or updates. Combine the two and you're looking at hours of lost service across a 12-month period. It's hardly a recommendation for banks to move critical, live data into the cloud – however compelling the business drivers.
Banks need five nines (99.999%) service and data availability – the level they aim for on their own premises. That's a downtime tolerance of barely five minutes per year, or less than a second a day on average. And public cloud services are not set up to match that. It would be uneconomical.
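The gap between four and five nines is easy to quantify. The snippet below (an illustrative calculation, not taken from any provider's SLA documentation) converts an availability percentage into its implied annual downtime budget, assuming a 365-day year:

```python
# Downtime budget implied by an availability SLA, assuming a 365-day year.
SECONDS_PER_YEAR = 365 * 24 * 60 * 60  # 31,536,000 seconds

def annual_downtime_seconds(availability_pct: float) -> float:
    """Seconds of permitted downtime per year at a given availability %."""
    return SECONDS_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("three nines", 99.9),
                   ("four nines", 99.99),
                   ("five nines", 99.999)]:
    minutes = annual_downtime_seconds(pct) / 60
    print(f"{label} ({pct}%): {minutes:.1f} minutes of downtime per year")
```

Running it shows three nines allows ~8.8 hours a year, four nines ~53 minutes, and five nines ~5.3 minutes – an order of magnitude at each step.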
The end doesn’t justify the means
Moving real-time transactions into the cloud is the final frontier for traditional regulated financial institutions. And there’s no question that they need, and want, to do this. It’s vastly more cost-efficient, and it’s the only way they can hope to compete with nimble financial upstarts, whose agility owes everything to being able to crunch huge numbers at high-speed using someone else’s top-of-the-range server farms.
Financial authorities such as the UK's Financial Conduct Authority have already accepted the cloud, which on the face of it gives banks the green light to be more ambitious. In practice it doesn't, because the issued guidance doesn't bridge the reality gap traditional banks need to cross: service levels remain inadequate for anything beyond data archiving or disaster recovery.
In data archiving and backup applications, the cloud’s appeal hinges on its cost-efficiency, scalability and durability. But durability should not be confused with availability. Even if data is tightly safeguarded, and can be brought back online efficiently after a system crash or other crisis, this adds no value in a live-data scenario. If there is any chance that at some point access may be interrupted, the other merits of cloud don’t matter in this context.
And that’s why banks haven’t made the final leap to using cloud in a production environment – because these otherwise very viable on-demand data centres can’t offer them the very high availability assurances they need.
Lost market opportunity
So banks are stuck. The inability to move core systems and live data into the cloud is costing them competitively in lost market opportunity.
If they could make the leap, it would pave the way for advanced customer analytics, intelligent service automation, complex stock correlations and predictive fraud detection: data-intensive applications that demand massive computing power – at a scale their proprietary data centres simply can't deliver.
But AWS and other mainstream cloud infrastructure providers have designed their services and service level agreements to meet the needs of the majority: where the risk of interrupting a morning’s business, social feeds or even hedge fund activity, though costly, is at least partly offset by huge infrastructure savings.
Remaining open to new options
Banks absolutely need to be more ambitious and creative in their use of the cloud. Their future differentiation depends on having access to the same computing power, speed and flexible resources as their more nimble, less risk-averse competitors. But they are not going to make the transition until the service levels they rely on for core systems can be delivered.
Inadequate service levels are a significant stumbling block, but lessons will be learnt each time a high-profile cloud service is compromised. In the meantime, the barriers can be overcome. Solving the data availability issue comes down to the way data is synchronized between sites (e.g. between primary servers and secondary data centres), so that live data is always available in more than one place at the same time. It sounds impossible, but it isn't.
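The idea of live data being available in more than one place at once can be sketched as quorum-based synchronous replication: a write is only acknowledged once a majority of sites has stored it, so losing any single site neither loses data nor interrupts service. This is a minimal illustration of the principle only – production systems (including consensus-based products in this space) also use protocols such as Paxos to agree on write ordering; the class and function names here are hypothetical:

```python
# Minimal sketch of majority-quorum replication across sites.
# A write succeeds only if more than half the sites acknowledge it,
# so the data remains available even if one site is unreachable.

class Site:
    def __init__(self, name: str):
        self.name = name
        self.data = {}
        self.up = True

    def store(self, key, value):
        if not self.up:
            raise ConnectionError(f"{self.name} unreachable")
        self.data[key] = value

def replicated_write(sites, key, value) -> bool:
    """Return True if a majority of sites acknowledged the write."""
    acks = 0
    for site in sites:
        try:
            site.store(key, value)
            acks += 1
        except ConnectionError:
            pass  # tolerate unreachable sites; the quorum decides
    return acks > len(sites) // 2

sites = [Site("primary"), Site("dc2"), Site("dc3")]
sites[0].up = False  # the primary site goes down...
ok = replicated_write(sites, "txn-42", {"amount": 100})
print(ok)  # True: the write still landed on a majority of sites
```

With three sites, the write survives any single-site outage; lose two and the quorum fails, so the write is correctly refused rather than silently lost.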
Achieve this (and at WANdisco we have) and the nines will take care of themselves.