Caching in on the Cloud with the Power of More Cores

Cache is a word you hear every day when you work for a silicon vendor. For CPUs, cache plays the very important role of reducing memory accesses by buffering frequently used data. But caching goes well beyond the CPU, and there are many different methods for caching data – all with the common goal of increasing application performance.

The world of Cloud Computing is filled with dynamic Web applications that generate many small transactions, such as page clicks and search query strings, that are better off cached in memory than passed through to storage. Enter Memcached, a high-performance, distributed memory object caching system used by web sites like WordPress, YouTube, and Twitter to alleviate database load and reduce response time between a Web user and an application.
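To make the idea concrete, here is a minimal sketch of the cache-aside pattern Memcached is typically used for, written with the python-memcached client. The server address, key name, and stand-in database function are illustrative assumptions, not part of the study described here.

```python
import memcache

# Connect to a local Memcached instance (default port 11211); the address is
# just an example.
mc = memcache.Client(['127.0.0.1:11211'])

def query_database_for_profile(user_id):
    # Placeholder for a real (and much slower) database query.
    return {'id': user_id, 'name': 'example'}

def get_user_profile(user_id):
    """Cache-aside lookup: check Memcached first, fall back to the database."""
    key = 'user_profile:%d' % user_id
    profile = mc.get(key)
    if profile is None:                     # cache miss
        profile = query_database_for_profile(user_id)
        mc.set(key, profile, time=300)      # keep it in memory for 5 minutes
    return profile

print(get_user_profile(42))  # first call hits the "database"
print(get_user_profile(42))  # second call is served from Memcached
```

On a busy site, the second path is the common one, which is how Memcached keeps those small, frequent transactions from ever reaching the database.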

Memcached was originally developed in 2003 by Danga Interactive to help speed up LiveJournal. This was an era when the majority of server processors were single core. Previous studies* have indicated that Memcached is thread-limited, with performance that does not scale beyond 4-6 threads on Linux. This limits its use of the additional cores in a system and requires adding more physical servers as the workload scales up.

We now live in a multi-core world with processors like the AMD Opteron™ 6100 Series and our upcoming “Bulldozer” technology offering a wealth of real cores. So how can thread-limited software make effective use of core-rich processors? This is where virtualization comes into play.

AMD recently ran a study demonstrating a 3x improvement in throughput when running Memcached in a virtualized environment compared to running it natively (unvirtualized). A server with AMD Opteron 6100 Series processors and VMware ESX was used to create 12 VMs, each running Red Hat Linux 6.1 and Memcached. Our study indicates that throughput scales linearly with the number of VMs used, making it attractive to use systems with a larger number of cores.
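From the application's point of view, running one Memcached instance per VM still looks like a single cache, because standard clients hash each key across a list of servers. A minimal sketch of that client-side setup follows; the IP addresses are hypothetical stand-ins for the 12 VM instances and the python-memcached client is only one possible choice.

```python
import memcache

# Hypothetical addresses for 12 Memcached VMs, one instance per VM.
# The client hashes each key to one server in the list, so load spreads
# across all of the instances automatically.
servers = ['10.0.0.%d:11211' % n for n in range(1, 13)]
mc = memcache.Client(servers)

mc.set('session:abc123', {'user': 'example'}, time=600)
print(mc.get('session:abc123'))
```

Because keys are distributed by the client, adding VMs grows aggregate cache capacity and throughput without any change to application logic, which is what lets the thread-limited Memcached process take advantage of a core-rich server.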

* Neil Gunther, Shanti Subramanyam, and Stefan Parvu, “Hidden Scalability Gotchas in Memcached and Friends”, http://assets.en.oreilly.com/1/event/44/Hidden%20Scalability%20Gotchas%20in%20Memcached%20and%20Friends%20Presentation.pdf and Paul Saab, “Scaling Memcached at Facebook”, http://www.facebook.com/note.php?note_id=39391378919

Margaret Lewis (@margaretjlewis) is a Product Marketing Director at AMD. Her postings are her own opinions and may not represent AMD’s positions, strategies or opinions. Links to third party sites are provided for convenience and unless explicitly stated, AMD is not responsible for the contents of such linked sites and no endorsement is implied.
