Amazon Web Services starts to assert itself
At the end of November 2012, Amazon Web Services (AWS) held its first partner and customer conference in Las Vegas. Dubbed AWS re:Invent, the event was a success. It enabled AWS to assert itself as a large and influential player in the IT marketplace.
However, as well as showcasing AWS’s strengths, the event also highlighted some of its traditional weaknesses: poor communications and an inability to put forward messages adapted to the needs of enterprise executives.
A growing footprint but needs more transparency
AWS shares as little information about itself as it can get away with. As the infrastructure-as-a-service (IaaS) market matures, it becomes increasingly awkward for the organisation to be so closely guarded.
Nevertheless, the conference itself, in its size, energy, and quality of networking opportunities, was a good reminder of the growing influence that AWS wields in the IT marketplace.
A large conference to get to know AWS better
AWS re:Invent helped AWS educate the market, which is critical as there is still a gap between the market perception of AWS and the reality of its abilities at a variety of levels such as market influence, strategy, and product portfolio.
The company made a number of announcements:
- New services – Amazon Redshift, a managed, petabyte-scale relational data warehouse service; AWS Data Pipeline, an orchestration service for data-driven workflows; and two new Amazon EC2 instances for analytic workloads
- Price cuts – for Amazon Simple Storage Service, Amazon Reduced Redundancy Storage, and Amazon Elastic Block Storage snapshots
- Partners – it identified 15 partners who have achieved the Premier Consulting Partner designation.
However, the organisation did not communicate these announcements particularly well. It used a variety of channels (such as keynote presentations, conference sessions, press releases, and blogs) without any consistency, making it more difficult than it should be to find out what had actually been announced and identify the most important announcements.
This is part of a wider problem: AWS seems to struggle to keep up with its own volume of announcements. The same applies, to a much larger extent, to its customers.
A product portfolio that needs more context
The main objective of the conference was to familiarise enterprises with the variety of services that AWS provides. It fully succeeded in not just enlightening participants on the nature of the options available to them, but also emboldening them to take advantage of them via a variety of best-practice sessions.
For business executives, AWS did have an Enterprise track featuring customer speakers explaining how and why they had moved to a public cloud. However, none of the track sessions helped the audience put AWS’s sprawling portfolio in context.
Instead of just talking about itself in terms of service capabilities, AWS needs to put these capabilities in the context of a larger strategic picture. For example, even though it had a Big Data track at the conference, and its service announcements (such as Amazon Redshift) were Big Data-related, AWS did not put together a Big Data “big picture” to help enterprise executives understand how it approaches Big Data, where it is coming from, and where it is heading.
The more the company encroaches into enterprise territory, the greater the need to do so.
A mature debate on enterprise concerns
Customers were firmly at the heart of the event, allowing AWS to point out that its customer base features not just start-ups but also mainstream enterprises using its public cloud not just for development and test purposes but also production systems.
The first-day keynote featured Nasa, Netflix, and Nasdaq. SAP AG President Sanjay Poonen also appeared in one of the keynotes to extol the virtues of SAP systems on the AWS cloud.
Ovum was impressed by the quality of the debate around issues such as security, reliability, and value for money. While it was natural for AWS to tout its credentials at all levels, the audience showed a much more practical approach to these issues than the hyped-up concerns reported in the press.
AWS cost optimisation is very much a “black art”. AWS needs to do more in this space, and disclose its roadmap. However, various sessions (but no specific conference track) helped enterprises start to get to grips with the problem.
AWS was right to point out the need for developers and architects to be aware of the cost impact of their cloud activities: something that Ovum highlighted two years ago in a report on cloud computing costs and something that will take at least a decade to take root.
When it comes to reliability, AWS emphasised, and customers acknowledged, the need for good service-oriented architecture (SOA), as well as cloud-centric software design principles (such as loose coupling, abstraction, late binding, and design for failure) backed by automated processes for continuous service delivery and change management. Indeed, AWS has done at least as much for SOA in the past six years as IBM has in the past 20.
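The “design for failure” principle mentioned above can be made concrete with a small sketch: a caller that assumes any individual remote call may fail and retries it with exponential backoff. This is a generic illustration, not an AWS API; the `flaky` operation below is a hypothetical stand-in for any cloud service call.

```python
import random
import time

def with_retries(op, attempts=3, base_delay=0.01):
    """Run op(), retrying with exponential backoff on failure.

    A minimal illustration of "design for failure": the caller
    treats transient errors as expected and plans for them.
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise  # retries exhausted; surface the failure
            # Back off exponentially, with jitter, before retrying
            time.sleep(base_delay * (2 ** attempt) * random.random())

# Hypothetical flaky operation standing in for a remote service call:
# fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky)
print(result)  # succeeds on the third attempt
```

The same pattern, combined with loose coupling (so a failed component can be replaced without cascading errors), is what allows a system built on commodity cloud infrastructure to ride out individual instance failures.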