Capacity, redundancy, and transparency: The three big cloud question marks
From my perch in the cloud computing ecosystem, I see three concerns on the horizon for cloud computing: capacity, redundancy, and transparency.
In certain ways, these topics are interrelated with security. Certainly, from the customer’s viewpoint, they all touch on perceptions of value and trust. And possible solutions, at least to my mind, tend to reflect old-school logic around risk management principles.
I know from my conversations with other professionals in the cloud computing industry that my concerns are widely shared and that a variety of solutions are actively being sought. Because these are broad-based concerns that affect everyone in this domain, however, they’re worth sharing to generate further discussion of potential solutions.
Capacity: are we ‘there’ yet?
Our capacity to store data is being challenged by big data – the vast amounts of data that exist and that are being generated afresh each day in an increasingly digital world. Currently, it may appear as if there’s an infinite amount of data storage and processing capacity available in the cloud, but that’s not the case. Industry players tell me they’re concerned that, at some point, we may “fall off a cliff.” Although many believe that the proverbial cliff will be prompted by the advent of the Internet of Things (IoT), it may come sooner. (These issues are being addressed by the IEEE Big Data Initiative, a tremendous resource for CloudTech readers.)
Embedded systems generate enormous amounts of unstructured data, and companies with legacy embedded systems are starting to analyze it for insights into how their products have been used. This need may challenge us before IoT’s exponentially vast data generation becomes an issue.
In IoT’s case, however, a number of strategies suggest themselves, including processing at the network’s edge to reduce the amount of data flowing upstream to the cloud. Other strategies include variations on current practices such as representative sampling of data and logical partitioning of data sets. Intercloud services could divvy up the work, but these remain a work-in-progress.
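To make the sampling strategy concrete, here is a minimal sketch of reservoir sampling, one standard way an edge gateway could keep a uniform, representative sample of a sensor stream and forward only that sample upstream to the cloud. The stream, sample size, and gateway scenario are illustrative assumptions, not a prescription for any particular platform.

```python
import random

def reservoir_sample(stream, k):
    """Keep a uniform random sample of k items from a stream of unknown length.

    Each item seen so far has an equal probability (k/i) of being in the
    sample, so the gateway never needs to buffer the full stream.
    """
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = random.randint(0, i)     # replace with decreasing probability
            if j < k:
                sample[j] = item
    return sample

# Hypothetical example: an edge gateway sees a million sensor readings
# but forwards only a 100-reading representative sample upstream.
readings = range(1_000_000)
upstream = reservoir_sample(readings, 100)
```

The appeal for capacity planning is that the memory and upstream bandwidth cost is fixed at `k` items regardless of how large the raw stream grows.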
Redundancy: déjà vu all over again?
Redundancy, to my mind, means that cloud-based data repositories should be retrievable or self-regenerating if they are lost, or purposefully or inadvertently discarded, by the cloud service. This is a form of security. Our IEEE Cloud Computing Initiative’s Facebook page, for example, inexplicably vanished recently – and, with it, a great deal of valuable, time-consuming work.
The issue is that many companies are turning to the cloud to provide redundancy for critical business continuity data and processes. If you rely on the cloud for storage and redundancy, what are the redundancy capabilities and policies of your cloud service? After all, the cloud’s key selling point is that it eliminates the time and cost of maintaining one’s own databases.
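One old-school risk-management answer is not to let a single provider hold the only copy of anything critical. Here is a minimal sketch of that idea: mirror a critical file to an independent location and verify the copy with a checksum before trusting it. The paths and function name are hypothetical; in practice the destination would be a second provider or on-premises store rather than a local directory.

```python
import hashlib
import shutil
from pathlib import Path

def mirror_with_checksum(src: Path, dst_dir: Path) -> bool:
    """Copy a critical file to an independent location and confirm the copy
    matches byte-for-byte via SHA-256, so recovery never depends on a
    single provider's redundancy policy."""
    dst_dir.mkdir(parents=True, exist_ok=True)
    dst = dst_dir / src.name
    shutil.copy2(src, dst)  # preserves metadata along with contents

    def digest(p: Path) -> str:
        return hashlib.sha256(p.read_bytes()).hexdigest()

    return digest(src) == digest(dst)
```

The checksum step matters: a backup that silently diverges from the original is exactly the kind of loss the vanished-page anecdote illustrates.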
Transparency: it isn’t clear
Transparency is a quality that might be applied to both the capacity and the redundancy issues – what is a cloud provider’s policy regarding its own capacity to store and process vast amounts of data and to back it up, should it be discarded or lost? For competitive reasons, the answer is likely not forthcoming from your cloud service provider.
More important to current and future users of the cloud (and the Internet), however, are the vast amounts of data being gathered – and, increasingly, applied in various ways – on a user’s identity, location, browsing habits, topics of interest, social media activities and purchasing patterns, to name a few.
How many of us really realise the degree to which that data, in ever more granular form, is being collected in the first place? And wouldn’t we all like to know exactly what is being collected and how that data is being used? Certainly we should all have some understanding of what personally identifiable information is gathered and how it is used. It might also be valuable to know how and when our activities are tracked and analysed, even if and when that data is anonymised.
Trust and value
Becoming smarter about the cloud vendors we work with is a critical piece of the solution to all the issues I’ve just raised. In a recent issue of IEEE Cloud Computing magazine, “Managing Risk in a Cloud Ecosystem,” the authors offer the following, logical reassurance: “Due to economies of scale, cutting-edge technology advancements and higher concentration of expertise, cloud providers have the potential to offer state-of-the-art cloud ecosystems that are resilient, self-regenerating and secure – far more secure than the environments of consumers who manage their own systems.”
Yet the article’s ultimate conclusion is that cloud consumers must follow a rigorous, established process to determine their own approach to risk management and identify their security requirements. That’s the basis for understanding how cloud providers manage risk and the degree to which they can meet a customer’s security requirements.
I urge you to read the article, which is full of practical guidance on these issues. And I invite you to join the community that has coalesced around IEEE Cloud Computing to broaden your knowledge of current challenges and participate in discovering solutions.