@sakacc @chuckhollis – Guys EMC & NetApp just look at virtualization differently. Allow me to explain…
@sakacc @chuckhollis - VMware allows customers to share CPU, memory, and network ports among multiple VMs allowing a reduction in servers
@sakacc @chuckhollis - Cisco allows customers to share ports among multiple connection protocols thus reducing network and storage switches
@sakacc @chuckhollis - NetApp allows customers to share disk capacity at a sub VM level among multiple VMs which reduces total storage
@sakacc @chuckhollis - EMC provides shared disk access to multiple VMs. Shared access is not shared resource usage; this is not virtualized
@sakacc @chuckhollis – VMware, Cisco, & NetApp want customers to purchase less hardware. What is EMC’s plan to reduce their HW footprint?
As you can see, my premise is clear: for customers to be successful in their virtualization efforts, they must virtualize the entire datacenter – servers, networks, and storage.
Who decided that server and network hardware should be reduced while the shared storage footprint is permitted to grow uncontrollably?
So after last night's rumble I find that Chuck Hollis dropped out of the conversation in order to craft his thoughts in a blog post on the key component of this discussion – storage virtualization should provide hardware reduction in the same manner as server and network virtualization technologies, and that means beginning with deduplication of production workloads.
I’m unclear as to Chuck’s intention. Is he warning potential customers based on…
• Facts which he will share with us
• Misinformation provided to him by an EMC competitive analysis
• Fear mongering because EMC doesn’t have a comparable offering
As Chuck is an upstanding individual who has been in the storage industry since the mainframe days, I'd like to suggest that he is merely misinformed and devoid of malice.
Let’s Review Chuck’s Misunderstandings
Point One: I/O Density
“Now let's consider the primary data store use case. By definition, it's a "hot" storage workload. Maybe you've taken a database that used to run on 20 disks, and now found that you can fit it on 10. The I/O density of those 10 disks has now doubled.”
Point Two: Disks are Inherently Slow
“Typically, when storage admins run into I/O density problems, they have two fundamental approaches: more disks, or faster disks… Indeed, if one of those media types is enterprise flash, primary storage dedupe can create incredible I/O densities, and we're good with that. “
Point Three: Disks Fail
“And when they fail, the array has to rebuild them. This inevitably puts a big hurt on I/O response times during the rebuild -- different schemes have different impacts. Mirroring schemes tend to have less impact than parity schemes, but require (wait for it!) more storage.”
Chuck, You Are Correct on Every Point!
What would you expect me to say? Chuck's not stupid; in fact, he's very clever. He's so clever that he provides only partial information in order to make his point and influence your purchasing decisions.
The Truth with Full Disclosure
Let's take Chuck's concerns out of order… it makes more sense that way.
Disks are Inherently Slow
Regardless of rotational speed or drive type, disk drives are slow. Traditionally, performance is increased by adding a combination of storage array cache and additional disk drives to the array. Did I miss the storage best practice that calls for adding lots of disks rather than a modest amount of cache when performance gains are required?
Starve any workload of cache and performance suffers horribly.
This is why Data ONTAP can deduplicate the storage array cache of NetApp FAS, IBM N-Series, and, with our V-Series, 3rd-party arrays. With Intelligent Caching we eliminate redundant data within the cache, resulting in substantially more available cache to serve the workload. This technology was covered extensively in my post – VCE-101: Deduplication: Storage Capacity and Array Cache
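To make the mechanics concrete, here is a minimal sketch of a content-addressed block cache – my own toy Python, not Data ONTAP's implementation – in which identical blocks from many VMs consume a single cache slot, so the effective cache size grows with the level of duplication in the working set.

```python
import hashlib

class DedupeBlockCache:
    """Toy content-addressed cache: identical blocks are stored once,
    no matter how many VMs or logical addresses reference them."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = {}   # fingerprint -> block data (one physical copy)
        self.refs = {}     # (vm, lba) -> fingerprint (many logical references)

    def put(self, vm, lba, data):
        fp = hashlib.sha256(data).hexdigest()
        if fp not in self.blocks:
            if len(self.blocks) >= self.capacity:
                self._evict_one()
            self.blocks[fp] = data     # only unique content consumes cache space
        self.refs[(vm, lba)] = fp      # a duplicate block just adds a reference

    def get(self, vm, lba):
        fp = self.refs.get((vm, lba))
        return self.blocks.get(fp) if fp else None

    def _evict_one(self):
        # naive eviction to keep the sketch short; a real cache would use an LRU/ARC-style policy
        self.blocks.pop(next(iter(self.blocks)))

cache = DedupeBlockCache(capacity_blocks=1000)
os_block = b"common guest OS block" * 100
for vm in range(500):                  # 500 VMs built from the same guest image
    cache.put(f"vm{vm}", lba=0, data=os_block)
print(len(cache.blocks))               # 1 physical cache block serves 500 logical readers
```

In this toy example one physical cache block satisfies the reads of 500 virtual machines; that is the effect Intelligent Caching is after.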
I/O Density Impacts Performance
Per-disk I/O density does increase when you remove disks, and an excellent example of this is Virtual Desktop Infrastructure (VDI). With VDI, customers want to leverage a single desktop image in order to serve thousands of desktop users. For more on VDI see my post – A New Era for Virtual Desktops
The proposed I/O density issue is addressed by array cache (see the section above). I’d like to introduce some proof points…
Below are the results of the most I/O-intensive operation known to VDI: the 'boot storm.' In this test I am simultaneously booting 1,000 virtual desktops. This activity is known as the bane of VDI – don't take my word for it, ask VDI expert Brian Madden.
In the test run we have 1,000 desktops, each 10 GB in size. In addition, the dataset includes a 512 MB vswap file and 4 GB of user data per desktop. This test serves 14.5 TB of logical data on 5.2 TB of physical storage on a FAS3170 mid-tier array with a PAM I module installed.
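For reference, here is the back-of-the-envelope math behind those numbers, using the per-desktop figures above and the measured 5.2 TB physical footprint:

```python
# Back-of-the-envelope math for the boot-storm dataset described above
desktops = 1000
gb_per_desktop = 10 + 0.5 + 4              # OS image + 512 MB vswap + user data, in GB
logical_tb = desktops * gb_per_desktop / 1000
physical_tb = 5.2                          # measured physical footprint on the FAS3170

print(logical_tb)                          # 14.5 TB of logical data
print(round(logical_tb / physical_tb, 1))  # ~2.8:1 logical-to-physical ratio after dedupe
```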
Below are the test results. As you can see, we have very good results just running on the deduplicated dataset; however, I'd like to highlight that at the 15:39 mark we enable Intelligent Caching. Note that the total data being served remains constant at ~250 MB/s while the disk I/O is reduced by ~60%. As an extra bonus, I/O latency is reduced by ~90%.
[Charts: Data MB/s, Disk IOPS, and I/O Latency during the 1,000-desktop boot storm]
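To put Chuck's I/O density concern and these results side by side, here is a rough comparison. The 10,000 IOPS workload figure is purely hypothetical, the 20-to-10-disk example is Chuck's, and the ~60% disk I/O reduction is the measurement above.

```python
# Rough illustration of per-disk I/O density before and after dedupe plus cache dedupe.
# The 10,000 IOPS workload is hypothetical; the 60% reduction comes from the test above.
workload_iops = 10_000
disks_before, disks_after = 20, 10               # Chuck's example: dedupe halves the spindles

density_before = workload_iops / disks_before                    # 500 IOPS per disk
density_no_cache = workload_iops / disks_after                   # 1,000 IOPS per disk (Chuck's concern)
density_with_cache = workload_iops * (1 - 0.60) / disks_after    # cache absorbs ~60% of disk I/O

print(density_before, density_no_cache, density_with_cache)      # 500.0 1000.0 400.0
```

In this sketch the per-disk load after dedupe plus Intelligent Caching (400 IOPS) ends up lower than the pre-dedupe baseline (500 IOPS), even with half the spindles.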
A special thanks to Chris Gebhardt for whipping up this little test for me.
I believe this data clearly demonstrates that customers can run deduplicated datasets without any performance impact. If you need more data, try these posts – The Highlight of SNW and Deduplication Guarantee from NetApp – Fact or Fiction?
If you need more data points, maybe you could ping a few of the VMware vExperts (Chad knows who they are) and ask them what they are seeing when deduplicating their production datasets.
Disks Fail
This is true: spinning disks are prone to failure, and when they fail the storage array spends a tremendous amount of resources rebuilding their content. I cannot speak for EMC, but NetApp arrays monitor the health of disk drives, allowing the array to identify suspect drives and proactively replace them before they physically fail.
The success rate of this proactive failing is measured at greater than 99% of drive failures. Come on, Chuck, EMC must offer something similar to this technology. The arrays don't still rebuild drives from parity sets, do they?
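I won't pretend this is the actual logic inside Data ONTAP, but conceptually proactive replacement looks something like the sketch below: watch per-drive error counters and, once a drive looks suspect, copy its contents to a spare while it can still be read, so a parity rebuild is never required. The counter names and thresholds here are invented for illustration.

```python
# Conceptual sketch only: proactively copy a suspect drive to a spare instead of
# waiting for it to die and rebuilding from parity. Counters and thresholds are invented.
MEDIA_ERROR_LIMIT = 25
TIMEOUT_LIMIT = 10

def drive_is_suspect(stats):
    """stats: per-drive error counters gathered by health monitoring."""
    return (stats.get("media_errors", 0) > MEDIA_ERROR_LIMIT
            or stats.get("command_timeouts", 0) > TIMEOUT_LIMIT)

def scan_and_protect(drives, copy_to_spare):
    """Copy suspect drives to spares while they can still be read --
    far cheaper than reconstructing every block from the parity group later."""
    for drive_id, stats in drives.items():
        if drive_is_suspect(stats):
            copy_to_spare(drive_id)

drives = {
    "disk.0a.12": {"media_errors": 3, "command_timeouts": 0},
    "disk.0a.17": {"media_errors": 42, "command_timeouts": 1},   # trending toward failure
}
scan_and_protect(drives, copy_to_spare=lambda d: print(f"copying {d} to a hot spare"))
```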
Deduplicating the Production Data Completes the Cloud / Virtual Data Center
It is well known that shared storage is required for the high-availability features of VMware, Hyper-V, XenServer, etc. So while others are consolidating, storage vendors are enjoying a boom – every system virtualized must move from direct-attached storage to shared storage.
Might it be possible that vendors of traditional, legacy storage array architectures want to pooh-pooh storage savings technologies like dedupe in order to preserve their ability to grow their footprint and revenue on the back of your virtualization efforts? The more you virtualize, the more storage you must buy…
In addition to the production footprint, server virtualization makes backup to tape difficult and DR very easy (thank you, VMware, for SRM). This statement may be obvious to most, but both of these business continuity functions require additional storage arrays.
The Pervasive Effect of Dedupe
What EMC, HP, HDS, Dell, and other traditional storage array vendors don't want to tell you is that by deduplicating the production footprint you realize savings throughout your data center: backup disk pools are reduced, storage for DR is cut in half, and replication bandwidth requirements receive the same savings.
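As a rough illustration of that cascade – the 100 TB footprint, 50% savings, and three retained backup copies below are hypothetical numbers, with the 50% in line with the 'cut in half' claim above – every downstream copy of the production data inherits the savings:

```python
# Hypothetical numbers showing how production dedupe cascades downstream.
production_tb = 100            # pre-dedupe production footprint (hypothetical)
dedupe_savings = 0.50          # 50% savings, in line with the claim above

deduped_tb = production_tb * (1 - dedupe_savings)
dr_mirror_tb = deduped_tb              # the DR copy mirrors the deduplicated footprint
backup_pool_tb = deduped_tb * 3        # e.g. three retained backup copies on disk
replication_tb = deduped_tb            # baseline replication transfer is also halved

print(deduped_tb, dr_mirror_tb, backup_pool_tb, replication_tb)   # 50.0 50.0 150.0 50.0
```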
In order to wrap up this post, may I offer you these demos from VMworld 2009.
Technical Details on Running VMware on Deduplicated Storage
The Pervasive Effect of Storage Reduction Technologies
In Closing
Granted, not everyone is going to dedupe every dataset. But as more servers are virtualized, more data will reside on our shared storage platforms – just think of the capacity reductions from deduplicating even 80% of these datasets (and their numerous versions/copies for backup, DR, test & dev, etc.).
Customers win with dedupe; it enables storage utilization rates to rise so that they are on par with what server and network virtualization technologies already deliver.
As I often say, ‘Virtualization Changes Everything’ including one's understanding of storage architectures.