
March 16, 2010

Comments

Trey Layton

Nice post, Vaughn.

RDANIE01

Vaughn, a quick question regarding the actual caching algorithms you use. Within your customer base, do you frequently see read cache being maxed out? All of these technologies sound really cool, but as I understand it, the algorithm would have to be predictive enough to take advantage of 1) the larger cache size and 2) the deduplicated footprint in cache, which theoretically would allow you to store even more data in read cache. Is there a new caching algorithm in a new version of ONTAP? Or was the algorithm already good enough at predictive read caching that, when diagnosing performance problems in the past, you saw cache 75-100% utilized, so that all of these technologies would alleviate that? The point being that the software needs to be good enough at pulling the right blocks into cache before it can take advantage of larger deduplicated cache sizes.

Aaron Chaisson

Vaughn, could you break this down a bit more? I'm not saying that you don't have some secret sauce, but the way you described this is not clearly differentiated from how read cache works in general. Also, your pictures, though interesting at first glance, don't explain what the array is doing differently compared to the linked-clone use case. In the case of VMware linked clones, VMs 2-8 would point to VM1, which would point to a single cache instance and a single storage location, even on the "traditional" arrays you refer to. Whether the snapshot/linked clone/snap clone (whatever you want to call it) happens at the array level or the VMware level, the resulting impact on the back-end drives, the array cache, and the effective cache hit rate would be roughly the same given an equivalent amount of read cache.

Read cache performance is all about the fall-through rate (how long a block stays in cache before it is aged out) and the likelihood of any given I/O requesting a block that is still in cache. Obviously, the longer the fall-through time, the more likely you are to see a higher hit rate. The way to improve the fall-through rate is either to increase locality of reference by reusing common blocks (helped by using linked clones or array replicas) or to blindly increase read cache to simply hold more data before it falls out of cache ... not a bad strategy, but cache isn't cheap.

If you are doing something special, cool, but I'm still not clear as to what that is, and honestly ... I would like to know.
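For illustration only, here is a rough, hypothetical simulation of that fall-through/hit-rate argument (my own toy model with made-up numbers, not any vendor's actual caching algorithm): with a fixed amount of read cache, collapsing duplicate blocks shrinks the effective working set, so blocks survive longer in cache and the hit rate goes up.

```python
# Toy LRU read cache serving 8 VMs that read the same OS image.
# All names and sizes are illustrative assumptions, not product behavior.
from collections import OrderedDict
import random

class LRUCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)    # refresh LRU position
            self.hits += 1
        else:
            self.misses += 1                     # would have to go to disk
            self.cache[block_id] = True
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)   # evict oldest (fall-through)

def simulate(shared_blocks):
    """shared_blocks=True models dedup/linked clones: every VM's logical block
    maps to one physical block. False models 8 private copies of the image."""
    random.seed(0)
    cache = LRUCache(capacity_blocks=2000)
    vm_count, image_blocks = 8, 10000
    for _ in range(100000):
        vm = random.randrange(vm_count)
        logical = random.randrange(image_blocks)
        physical = logical if shared_blocks else (vm, logical)
        cache.read(physical)
    return cache.hits / (cache.hits + cache.misses)

print("hit rate, private copies:", simulate(shared_blocks=False))
print("hit rate, shared blocks :", simulate(shared_blocks=True))
```

Same cache, same I/O pattern; only the number of distinct physical blocks changes, and the hit rate moves accordingly.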

Mike Slisinger

Aaron, the key difference here is that TSCS provides this shared-block caching to any deduplicated dataset. So while VMware linked clones have a rather specific use case, TSCS will work with any VMs, including permanently provisioned virtual servers.

Of course, this doesn't even take into account different types of datasets, but I don't want to steal Vaughn's thunder...

Jonas Irwin

I don't see this as unique, unless I've missed something. This seems more like a post about why deduplication helps minimize cache overhead for read-centric workloads; more specifically, for reads of the same blocks from many hosts at the same time. There are lots of other great implementations of dedupe in the market that behave in an even more sophisticated manner: specifically, at a variable-length, sub-block level.
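For readers unfamiliar with the term, here is a minimal sketch of what variable-length (content-defined) sub-block chunking looks like; it is a generic illustration with made-up window and mask parameters, not a description of any particular product's implementation. Boundaries are derived from the data itself, so identical runs dedupe even when they sit at different offsets.

```python
# Hypothetical content-defined chunking sketch: a rolling sum over a small
# window picks chunk boundaries, then duplicate chunks collapse by hash.
import hashlib

def chunk_boundaries(data: bytes, window: int = 48, mask: int = 0x1FFF) -> list:
    """Return end offsets of chunks; declare a boundary wherever the rolling
    sum of the last `window` bytes matches the mask (or at end of data)."""
    boundaries = []
    rolling = 0
    for i, byte in enumerate(data):
        rolling += byte
        if i >= window:
            rolling -= data[i - window]          # slide the window forward
        if (rolling & mask) == mask or i == len(data) - 1:
            boundaries.append(i + 1)
    return boundaries

def dedupe(data: bytes) -> dict:
    """Map chunk hashes to chunk bytes; duplicate chunks store only once."""
    store, start = {}, 0
    for end in chunk_boundaries(data):
        chunk = data[start:end]
        store[hashlib.sha256(chunk).hexdigest()] = chunk
        start = end
    return store
```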

Creedom2020

Great post! It seems as if cache is where it's at for the near future. Having these large layers of cache (i.e. PAM) enables completely new use cases for storage. If cache like this is available in the V-Series (which I should know but can't recall), then it can basically drop into any storage environment as a transparent storage accelerator. With that said, do you feel the current cache strategy is a stopgap until complete SSD storage arrives at higher capacities? (Looking years out, not months.) Yes, I am asking for a look into the crystal ball. Thanks for the post!

John Martin

@Jonas - "There are lots of other great implementations of dedupe ..."

Lots? I can think of about two other dedupe implementations that deserve the epithet "great", and neither of those is suitable for primary workloads, certainly not mission-critical ones. As far as I'm aware, most variable-block dedupe caching/read-ahead algorithms are optimised for single-threaded "sequential" reads of logically contiguous (though physically discontiguous) datasets, not "reads of the same blocks from many hosts".

@RDANIE01 "the algorithm would have to be predictive enough ..."

This is where readsets come in really handy; Alex McDonald has blogged about them in the past.

Getting the most out of cache involves a collection of technologies working together, including readsets, deduplication, and write-optimised data layouts. It's the "little details" that end up making a big difference.

Regards
John Martin
Consulting Systems Engineer
ANZ

Vaughn Stewart

@John - thanks for taking on these questions. I owe you one.

@Aaron - check out part 2 in the series. After you do, shoot me your questions.

http://blogs.netapp.com/virtualstorageguy/2010/03/transparent-storage-cache-sharing-part-2-more-use-cases.html

@Jonas - yes, cache helps reads, and as one sees with larger arrays, the more cache, the better the performance... and this is what's great about TSCS: we can provide usable cache capacity beyond the physical capacity and, as such, provide greater performance. TSCS reduces disk requests for objects that have blocks in common with another object, whether they are stored in the same or separate LUNs and VMDKs.

Some called it 'magic cache' ;)
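As a rough conceptual sketch of that last point (an editorial illustration of the idea described above, not NetApp's code): if cached blocks are indexed by the deduplicated physical block rather than by LUN and offset, identical data read from different LUNs or VMDKs resolves to a single cache entry, which is why the usable cache can exceed its physical size.

```python
# Minimal, hypothetical model of a dedup-aware read cache.
class DedupAwareCache:
    def __init__(self):
        self.block_map = {}   # (lun, lba) -> physical block id (built by dedup)
        self.cache = {}       # physical block id -> data
        self.disk_reads = 0

    def map_block(self, lun, lba, physical_id):
        """Record that a logical block points at a shared physical block."""
        self.block_map[(lun, lba)] = physical_id

    def read(self, lun, lba, read_from_disk):
        physical_id = self.block_map[(lun, lba)]
        if physical_id not in self.cache:
            self.cache[physical_id] = read_from_disk(physical_id)
            self.disk_reads += 1  # only the first requester touches disk
        return self.cache[physical_id]

# Eight VMDKs whose guest-OS blocks dedupe to the same physical block.
cache = DedupAwareCache()
for vmdk in range(8):
    cache.map_block(f"vmdk{vmdk}", lba=0, physical_id="os-block-0")
for vmdk in range(8):
    cache.read(f"vmdk{vmdk}", 0, read_from_disk=lambda pid: b"...")
print(cache.disk_reads)   # 1 -- one disk read served all eight VMDKs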

@Creedom2020 - yes, PAM and TSCS are available for third-party arrays via the V-Series.

