If my Computing is Unified, why can’t I Determine my System's Health?
Unified Computing is an attempt to recreate what was desirable about mainframes using industry standard systems, that is, X86-based servers combined with off-the-shelf memory, storage and networking equipment. The goal was to recentralize all of the components of modern computing systems into one package.
Recently Cisco joined the ranks of those offering unified systems, which include Egenera, IBM, HP, Dell and others. Cisco, by the way, cleverly named its products Unified Computing Systems, or UCS.
Why is this good?
The industry has learned through extensive experience that industry standard systems used to support distributed applications are powerful, flexible, reliable and scalable. A modern application is designed to take advantage of powerful X86-based systems; off-the-shelf operating systems such as Windows, Linux and UNIX; and database engines offered by many suppliers, including Oracle, IBM, Sybase and quite a number of other players.
The result is that applications no longer have to live or die based upon the health of a single system. Applications now can be segmented into separate functions that can be distributed into several different data centers or even out into the cloud.
Are there challenges?
While this approach offers a great deal of flexibility and reliability, these configurations still are not as manageable as mainframe approaches. Special features were designed into the hardware, the operating systems, the database engines and even the application monitors to make mainframes extremely reliable and manageable. Industry standard systems and supporting hardware, even when packaged together in the same cabinet, have not been designed the same way.
Applications are now constructed as a number of system services, each of which resides on its own host. The host may be physical, virtual or cloud-based. The service may be replicated on a number of hosts and linked together by a workload management product (a form of processing virtualization, by the way).
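To make the idea concrete, here is a minimal sketch of checking the health of one application service that is replicated across several hosts. The host names, port and the TCP-reachability check are illustrative assumptions for the example, not part of any specific workload management product.

```python
import socket

# Hypothetical replicas of one service, spread across physical, virtual
# and cloud hosts (names invented for illustration).
SERVICE_REPLICAS = [
    ("app-host-1.example.com", 8080),
    ("app-host-2.example.com", 8080),
    ("app-cloud-1.example.com", 8080),
]

def replica_is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the replica succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def service_health(replicas):
    """Summarize how many replicas of the service are reachable."""
    up = [h for h, p in replicas if replica_is_reachable(h, p)]
    return {"total": len(replicas), "up": len(up), "healthy": len(up) > 0}
```

A real workload manager would also route traffic away from failed replicas; this sketch only shows why "the health of the application" is no longer the health of any single box.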
It is now possible to spread application functions over hundreds or, perhaps, thousands of systems. These systems may be deployed in data centers around the globe or hosted in a cloud computing supplier’s data center.
More, and more varied, expertise is needed to keep them running. Each application service or component needs to be managed as a separate entity. While some management standards exist, some of the most critical functions often can only be managed by the supplier's own tools.
This means that organizations are required to adopt a patchwork quilt of management tools from different suppliers in order to really see what is happening.
Why hasn’t this worked as well as expected?
As application systems have become more and more complex, they have become more expensive to operate. Expertise is needed for each operating system, virtual machine software product, database engine, application framework, storage server and network server. As functions are deployed in many different data centers, the number of staff members needed has also increased. This, unfortunately, has submerged many of the benefits offered by the lower-cost hardware.
Management software is still rather immature when it comes to managing unified computing systems. IT executives remember the benefits of mainframes and expect suppliers to keep trying to make industry standard systems work together and provide a computing environment more like the mainframe environment they remember.
The challenge suppliers face is that operational data is not made available by all system components in the same way. Since each supplier uses its own format for this data, gathering and analyzing it is challenging.
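A small sketch shows the shape of the problem: two hypothetical suppliers report the same CPU metric in different formats, and an adapter layer is needed to map each into one common record before any cross-component analysis is possible. Both raw formats and all field names here are invented for illustration.

```python
def from_vendor_a(raw):
    # Hypothetical vendor A format: {"host": "h1", "cpu_pct": 73}
    return {
        "host": raw["host"],
        "metric": "cpu_utilization",
        "value": float(raw["cpu_pct"]),
    }

def from_vendor_b(raw):
    # Hypothetical vendor B format: "h2|cpu|0.5" (a fraction, not a percent)
    host, _, fraction = raw.split("|")
    return {
        "host": host,
        "metric": "cpu_utilization",
        "value": float(fraction) * 100,
    }

def normalize(records):
    """Apply the right adapter to each (source, raw_record) pair."""
    adapters = {"a": from_vendor_a, "b": from_vendor_b}
    return [adapters[source](raw) for source, raw in records]
```

Every new supplier means another adapter, which is exactly the "patchwork quilt" burden the management tools are being asked to absorb.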
So, while the hardware looks similar to a mainframe configuration, it isn’t a mainframe from many perspectives.
What’s needed to make this dream a reality?
Organizations using unified computing systems need to adopt tools that discover the resources deployed in these “unified computing systems” without help, collect the runtime data from all of the components and provide a quick analysis of what’s happening right now. Furthermore, these tools can’t impose a large amount of overhead and reduce overall performance.
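The "discover without help" requirement can be sketched at its simplest: probe a short list of well-known ports on a host and record what answers, rather than requiring an administrator to register every component by hand. The port list and host are assumptions for the example; real discovery tools use far richer techniques (SNMP, vendor APIs, network sweeps).

```python
import socket

# Illustrative mapping of a few well-known ports to service names.
COMMON_PORTS = {22: "ssh", 80: "http", 443: "https", 3306: "mysql"}

def discover(host, ports=COMMON_PORTS, timeout=0.5):
    """Return {port: service_name} for ports that accept a TCP connection."""
    found = {}
    for port, name in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found[port] = name
        except OSError:
            pass  # closed, filtered or unreachable -- not discovered
    return found
```

The short timeout reflects the overhead constraint: a discovery pass that ties up the network or the probed hosts defeats its own purpose.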
The best tools would go on from there to suggest ways to optimize the environment and prevent problems in the future.
There are only a few suppliers of management tools that can do all of these things. Tools, such as those offered by Zenoss, that can gather data, analyze it and present a succinct yet comprehensive view of what is happening are certainly worthy of consideration as organizations install these unified computing solutions.