It's been well documented that for shared datastores and most applications, NFS delivers the greatest flexibility, scalability, and storage virtualization. The true value of NFS isn't a technical argument but an operational one: NFS provides unbelievable simplicity when managing massive numbers of VMs. Think 'set it and forget it.'
For applications that must run on a block-based storage protocol, whether because of block-based data management toolsets or outdated technical support statements, FC/FCoE/iSCSI fills the gap and completes the value of a unified storage platform.
I understand that those who have yet to experience VMware on NetApp NFS may be skeptical of my statements. That's fair; one should always be skeptical of claims made by any vendor.
To help validate some of the points I've shared about NFS, I'd like to introduce you to a recent post from Martin Glassborow, aka the StorageBod, entitled "NFS, VMware, and Unintended Consequences!" Martin is a serious data center architect and storage expert who knows his stuff.
Check out his short post and the comments section on running VMware on NFS and the unexpected results it delivered for Martin. BTW - I love this quote from the post: "(NFS) moved VMWare firmly and squarely into NetApp's sweet spot."
Nice article. One question - I have an NFS data store with deduplication turned on and I have recently started receiving messages to the effect that my data store is over 500% deduplicated. I assume that this is meant to warn me that should I decide to deduplicate I might experience a shortage of storage - is this correct? Are there any downsides to this degree of deduplication?
Posted by: Jeff | August 08, 2010 at 04:47 PM
Jeff,
First let me say nice job on your deduplication rate!
This "over-deduplicated" message is a threshold recently added to Operations Manager. There are two reasons NetApp informs you of this. The first is that un-deduplicating your storage would now require more physical capacity than you have (I've yet to see a VMware/NetApp customer want to do this). The second is if you use SnapVault for D2D backups. SnapVault does not transfer data in its deduplicated state like SnapMirror (even though it does automatically deduplicate once at the secondary storage.) So you would just need to think about the ramifications to your D2D backups.
Posted by: Reid | August 09, 2010 at 09:12 AM
This is a question I get from customers periodically (I'm a VAR engineer). Basically it's more of a "heads up" message - consider it confirmation that dedup is doing what you want it to.
What I generally recommend to customers is that as your dedup ratios climb higher and higher, leave more free space in the datastore -- you're saving so much space that there's no need to skimp.
Generally speaking, on larger NFS datastores (500 GB and above) I recommend leaving at least 20% free, or even 25%, as you start to see better dedup and/or thin provisioning savings.
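For example (a made-up datastore size, and the 20-25% figure is just my rule of thumb, not a NetApp-published formula):

# Hypothetical sizing sketch - adjust the headroom fraction to your comfort level
datastore_size_gb = 1000     # a larger NFS datastore
headroom_fraction = 0.25     # keep ~20-25% free once dedup savings climb

free_gb = datastore_size_gb * headroom_fraction
usable_gb = datastore_size_gb - free_gb
print("Keep ~%d GB free; fill no more than ~%d GB with VMs" % (free_gb, usable_gb))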
Posted by: Andrew | August 09, 2010 at 09:19 AM
Thanks for the comments - I just noticed in my initial post I should have said "should I decide to de-deduplicate". Or maybe just "duplicate" - sounds silly when you put it that way...unless you have no choice I suppose.
Posted by: Jeff | August 09, 2010 at 11:02 AM
@Jeff - Good point. Should you decide to disable data deduplication, you would need to purchase a fairly sizable amount of additional disk storage. ;^)
Posted by: Vaughn Stewart | August 09, 2010 at 11:22 AM