The Open Compute Project gets down to business

Karen Liu, Principal Analyst, Components

Open Compute Summit V, held last week, was a birthday party for open source hardware. The movement seeks to map lessons learned from open source software to the hardware world. Open Compute Project (OCP) has grown beyond serving the unique needs of its hyperscale founders and playing in the sandbox of open source architecture.

This year’s announcements got down to business, with designs adapted to fit more traditional enterprises. The goal is no longer just to demonstrate new hardware, but to offer a path to migrate from old data centers to new ones. Two partners, ITRI (Taiwan) and the University of Texas at San Antonio, have set up compliance certification labs. OCP appears – despite its anti-standards rhetoric – to be taking the best of the standardization world as well as the open source world.

Time to market is a primary motivation, but standards do have their uses

Attendance at the Summit doubled from last year and included Microsoft, Dell, and other boldface names. Competitors band together in a cooperative project for one of two reasons, both involving a common enemy. The first is when multiple market attackers ally against a dominant incumbent; open source has been used this way to combat proprietary, dominant products, as in the case of Linux versus Unix. OCP is driven by the second scenario, where the common enemy is finite resources in the face of myriad opportunities and a fast-changing market.

The stated goal of OCP is to get rid of unnecessary differentiation in order to free resources to create greater real differentiation. Leading incumbent vendors are participating in order to field more customized offerings even while maintaining their traditional product lines.

Open consortia in the second scenario are motivated by members’ frustration with the methodical pace of standards processes as well as conventional development processes. Standards, of course, do have great utility to assure interoperability. For networking, interoperability is everything. So far OCP has only dealt with new ways to build boxes. It has not yet addressed any issues that could relate to hardware interoperability across a network, but the possibility looms.

Speakers at the Summit mentioned the case of cost-reduced single-mode optics for 100G Ethernet, currently stuck in the IEEE process because no one proposal can garner the required votes. If OCP were to address this issue, it would have to tackle interoperability. (So far, the only interconnect contribution is Intel’s, covering intra-rack interconnect and multimode optics.)

OCP evolving to address enterprise and vendor needs

This year saw design forks from last year’s hyperscale-centric submissions to ones that bridge the gap to traditional enterprise needs. Microsoft presented a taxonomy of data center requirements divided into SMB/enterprise (<10K servers), hoster (100K servers), and cloud-scale (1M servers) data centers. Though enterprises and hosters lack hyperscale volume, the biggest additional challenge both groups cited at the Summit is the complexity of their environments, a product of history and diverse end-user demands.

Open source allows development to branch more than the conventional standards process, which makes open source a good fit to address the diversity of hosting and vertical markets. AMD has a server motherboard design aimed specifically at financial services applications.

Goldman Sachs argues that all of FinTech (financial services technology) together has the same scale as any one hyperscale operator. Flipping this around, hyperscale is not really one vertical: each operator might be considered its own vertical. Hyperscale operators are able to simplify their hardware requirements by optimizing for their own needs, but each of them winds up with slightly different optimizations. Again, open source is a good fit to address this diversity without reinventing from scratch.

With maturity comes pragmatism

A number of announcements at the Summit directed new attention at business needs. One speaker said that the argument about how to build hardware was over – what remains is to figure out the migration path from today’s environment to the new model. Fidelity Investments submitted Open Bridge Rack, a design that can be reconfigured in situ in 45 minutes, using the same piece parts, between OCP’s 21-inch Open Rack design and a conventional 19-inch rack. The company needs such modularity so it can deploy OCP hardware into both legacy and new environments.

The OCP community now has seven solution providers. Hyperscale operators at the summit spoke of relying on the ecosystem because of their slim IT departments, but in fact enterprises typically rely more on vendor support. The solution providers are critical to craft hardware system designs for enterprises that might otherwise be unable to develop OCP technology on their own.

OCP also announced that it plans to change its current license from Open Web Foundation to the two most popular types of open source license: Apache-like and GPL-like. At the Summit, the OCP president stated that the purpose was to provide better return to vendors. It is good to see OCP’s concern about meeting its members’ commercial interests, but it will also be interesting to monitor which model attracts more submissions. Neither was particularly designed with vendor returns as a primary goal. It will also be interesting to see where OCP’s licensing agreements must differ from those for open source software: hardware’s intellectual property landscape differs from that of software, with greater relevance to patent rather than copyright law.

Can open source solve the problem it exacerbates: obsolete hardware?

The Open Compute Project has demonstrated its ability to enable agility. Since last year, Rackspace has modified Open Compute hardware to fit its own needs, then modified it again as those needs changed. It said that open source could “enable hardware finally to evolve as quickly as software.”

Yet, if only due to the human tendency to find the grass greener on the other side, software folks see Moore’s law as irrefutable evidence that hardware actually evolves much faster than software. Hardware, unlike software, has physical instantiations to deal with as well. The real drag on hardware evolution is the cost of throwing stuff away and replacing it.

The open source development model is based on re-use of existing designs as building blocks or building material to be modified. What about re-use of the actual physical building blocks, not just the designs? Clearly the modular designs such as Open Bridge Rack and servers with variable amounts of storage already address reconfiguration of hardware as application needs change. But OCP sets out to increase the speed of design – is hardware obsolescence deferred or accelerated?

Data center technology has prominently championed renewable energy for its own interests. Could it also champion another environmental direction: re-use of actual hardware?
