Cloud computing and the 4th Dimension
It’s time to think about time.
According to Wikipedia, in physics, spacetime is any mathematical model that combines space and time into a single continuum. In cosmology, the concept of spacetime combines space and time into a single abstract universe. But you know we’re not here to talk about physics, cosmology, or the universe, as interesting as they may be.
For this moment in time, I want you to think of Cloud computing along the lines of space and time, because too many businesses shifting to the Cloud are focused on space, and space alone. You see, traditionally, for business or enterprise IT, time has not been a factor, so it was rarely given much consideration.
Unfortunately, when that 4th dimension is overlooked, many businesses conclude that “the Cloud doesn’t work”.
If you’re shifting your business to the Cloud, it is imperative to understand your processes and the underlying technology infrastructure. It’s true that infrastructure can be abstracted, and an application may simply require resources (processing, memory, storage) to be available, but we still need to remember that for the business logic to work, all the necessary components must be available while the business logic (or tool) is executing.
In a traditional enterprise setting, that was not an issue, because servers, once put into operation in your datacenter, were not usually taken down. If servers were taken down, it was planned and coordinated during an outage window, and if an outage was unplanned (an emergency), the focus was on bringing the server back into operation. So your servers were always “hot” or “live” 24 hours a day, 7 days a week, within or across departments. In many cases, inter-departmental coordination was non-existent or unnecessary.
With a transition to the Cloud, one of the cost savings is shutting VMs down when they’re not in use. This is where you have to think four dimensionally.
If you have an automated process that runs in the evenings traversing multiple departments, say departments A, B, and C, and you get unexpected results or your process did not complete, you’ll have to consider the availability of those resources during the time the process is running. Department B may have been achieving cost savings by shutting down their VM when no one in their department was using it, during evenings and weekends, unwittingly breaking an enterprise process as they did so.
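One practical defense is a pre-flight check: before the nightly job starts, verify that every resource it depends on is actually reachable, and fail fast with a clear message rather than producing partial results. Here's a minimal sketch in Python; the department names, hostnames, and ports are illustrative assumptions, not real infrastructure.

```python
import socket

# Hypothetical dependencies of the nightly job, one per department.
# These hostnames and ports are assumptions for illustration only.
DEPENDENCIES = {
    "dept-a-db": ("dept-a-db.internal", 5432),
    "dept-b-api": ("dept-b-api.internal", 443),
    "dept-c-share": ("dept-c-share.internal", 445),
}

def unreachable(deps, timeout=5.0, connect=socket.create_connection):
    """Return the names of dependencies we cannot reach right now.

    `connect` is injectable so the check can be tested without a network.
    """
    failed = []
    for name, addr in deps.items():
        try:
            conn = connect(addr, timeout=timeout)
            conn.close()
        except OSError:
            failed.append(name)
    return failed

def run_nightly_process():
    missing = unreachable(DEPENDENCIES)
    if missing:
        # Abort with an explicit reason instead of half-completing.
        raise RuntimeError(f"Aborting: dependencies unavailable: {missing}")
    # ... kick off the actual cross-department batch logic here ...
```

A check like this won't bring department B's VM back up, but it turns a mysterious "the process didn't complete" into an actionable message naming exactly which resource was offline and when.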
Historically, we’ve always assumed that resources would be available when we kicked off an automated process intended to run while we are not working. Whether it’s an automated enterprise billing tool, a security scan, a backup, or other business logic, we now have to consider not only where the application and process reside, but also the resources that the application or tool requires, and their availability state.
So as you move to the Cloud, remember to consider the time factor, and coordinate the availability of VMs accordingly. It’ll save you a lot of time. The Cloud does work; you just have to know how to use it.
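Coordinating that availability can be as simple as an agreed shutdown policy: departments may power VMs down outside working hours, but never during an enterprise batch window that depends on them. A toy sketch of such a policy check, with assumed (hypothetical) working hours and a nightly billing window:

```python
from datetime import time

# Illustrative policy values; real windows would come from an agreed,
# shared schedule, not hard-coded constants.
WORKING_HOURS = (time(8, 0), time(18, 0))
ENTERPRISE_BATCH_WINDOWS = [
    (time(22, 0), time(23, 59, 59)),  # e.g. the nightly billing run
]

def in_window(t, window):
    """True if clock time t falls inside [start, end].

    Note: this simple check does not handle windows that cross midnight.
    """
    start, end = window
    return start <= t <= end

def may_stop_vm(now):
    """A VM may be stopped only outside working hours and outside
    every enterprise batch window that depends on it."""
    if in_window(now, WORKING_HOURS):
        return False
    return not any(in_window(now, w) for w in ENTERPRISE_BATCH_WINDOWS)
```

The key design point is that the shutdown decision consults an enterprise-wide schedule, not just the department's own usage, so department B's cost-saving automation can no longer silently break department A's overnight process.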
How are you handling the Cloud-time factor?