
March 11, 2010

Comments

Chuck Hollis

Vaughn

I thought this was a good post until you started making stuff up again.

I think we'd all agree that the proliferation of VMware has increased the popularity of the "VMware admin in charge" model for storage management.

And then you went off the deep end again.

It seems that anytime you comment on EMC's capabilities, you get it wrong. That chart towards the end is yet another excellent example of this.

Once again, you're making stuff up (MSU).

Not only that, as Chad and I have both pointed out repeatedly, you don't need to overstate NetApp's capabilities to be successful here.

Recent egregious examples include your claimed market penetration with VDI ("1 million desktops", "9 out of 10", etc.), which makes all of us wonder what you've been smoking. And your "5000 user" statements were candidates for an industry blooper reel.

The sad thing is that this behavior on your part isn't really needed.

All customers want is the truth, and if you're not prepared to give it to them, best for you to keep quiet.

-- Chuck

Dan Isaacs

Hi Chuck,

Can you identify the specific things you feel were made up? There are specific claims being made in the post, which of them are incorrect?

Thanks!

Itzik Reich

Vaughn,
please fix the following:

dedupe VMs - Celerra
reports dedupe savings - Celerra
auto-configure ESX NFS settings - Celerra

High Performance Multipathing I/O... this one made me laugh so hard, my wife looked at me weird.
Yeah dude, we have an amazing product that costs money which is by far superior to NMP RR!!

SRM failback, please add the CLARiiON and RecoverPoint. Oh, and we also have an interface showing all the VMs that are not in compliance for SRM!

Vaughn Stewart

@ Mr. Hollis

Thank you for the initial compliment on the discussion. Believe me when I say NetApp and EMC are clearly leading the way with VMware integration, and in the end customers will benefit from the innovations of our engineering staffs.

Regarding the comparison of the technologies, I clearly state in my post that my intention is to provide accurate data, specifically in the areas of technology produced outside of NetApp. Moreover, I have asked for a public review of the content and stated that I would make corrections in the event of an oversight. I'm attempting to be as transparent as I can be.

I believe one can label your statement dismissing the accuracy of the chart without providing supporting data as an "argument from authority." This is commonly referred to as a logical fallacy, and it is committed when one positions a claim as true without having to substantiate it, because the claim is derived from a privileged position of knowledge.

I'd like to ask you to change the dialog on this topic. Instead, I'd like to see you contribute to the discussion by providing me, and the community, with the data points that allow us to increase the level of accuracy in these (and other) technical comparisons. I'm sure the sales teams in both companies would love to have a mutually agreed-upon document listing the capabilities of each other's technologies. In fact, this is what I'm striving to provide, and if you could help, it would display a level of transparency on your part.

Vaughn Stewart

@Itzik

Thank you for providing feedback and contributing to the sharing of information.

May I ask for you to clarify a few points before I post your suggested edits?

- dedupe VMs - I believe VMs can only be compressed with EMC. I am taking this data directly from Chad's post on the capabilities of F-RDE.

- reports dedupe savings - Celerra

Again, sorry to be literal here. As the arrays do not offer the ability to reduce the storage consumed between two VMs via block-level data deduplication, I would rather use another term or phrase. Would "reports storage savings provided by array" work?

- auto-configure ESX NFS settings - Celerra

Got it, my oversight

- High Performance Multipathing I/O

I know you tout the use of the native multipathing, but if I quote directly from the EMC website: "PowerPath/VE enables you to automate optimal server, storage, and path utilization in a dynamic virtual environment. This eliminates the need to manually load-balance hundreds or thousands of virtual machines and I/O-intensive applications in hyper-consolidated environments." This is where I get a bit confused by EMC's pitch. You offer X number of ways to accomplish a task, and every way is the right way. I'm gonna deny this request as it directly contradicts the corporate positioning of EMC. Is this fair?

- SRM failback, please add the CLARiiON

Got it. Which protocols are supported? (BTW - not that I'm adding it to the chart, but can you tell me which replication models are supported?)

- On RecoverPoint, the compliance piece is cool. Are you suggesting I add a line to the chart introducing a new metric? If so, I'll need more details.

Itzik, I commend you for standing up to help refine this data. I look forward to publishing the updated chart upon receiving your feedback.

Thanks again.
V

Lee McColgan

Reposting from memory, as my original post never appeared.
----

One change would be to remove the "FC only" and "iSCSI only" notations in the "Physical to Virtual Storage Mgmt" row. EMC Storage Viewer has included support for both Symmetrix and CLARiiON, FC and iSCSI, since the beginning.

Also, I have to agree with Itzik that spinning native multi-path support from VMware into a win for Data ONTAP is a bit of a stretch. To be fair, maybe you should break that row into two: 1) Native Multipathing I/O and 2) 3rd Party MultiPath Plugin. Then both EMC Storage and Data ONTAP will get green checks for #1, and ONTAP will get red exclamations for #2.

Vaughn Stewart

@Lee

Thank you for the comments and ensuring accuracy in this chart. I am very concerned about presenting accurate data.

On multipathing... I can see your point around multipathing; however, I don't believe you see ours. NetApp arrays work optimally with VMware's Round Robin path selection policy, as our arrays do not serve LUNs with per-LUN queue depths. As you know, EMC arrays do serve LUNs with queues, and when a LUN enters a queue-full condition, all I/O on the path is subject to performance degradation. It is for this condition that EMC recommends that customers concerned with performance purchase PowerPath/VE.

I don't think it is accurate to say EMC arrays run optimally with the native multipathing software included in ESX/ESXi. Do you?

On Storage Viewer support for iSCSI and FC on Symmetrix and CLARiiON... Can you point to public documents so I can verify your statement? If it is correct, I will absolutely update the chart.

Thank you again for the feedback.

Chad Sakac

DISCLOSURE - EMC Employee here.

@Vaughn - here it is:

1) ABSOLUTELY correct to say that EMC arrays run optimally with NMP Round-Robin mode. No worse, no better than all other arrays (including NetApp's) that support this model. The array LUN queue model is a total red herring. Implying that the network is never congested or unbalanced is an implication that all network QoS mechanisms are a waste of time, always.

2) ABSOLUTELY correct to say that NMP RR is still relatively primitive compared to other open-systems multipathing (improving, no doubt!). It has no adaptive host-side queue management, doesn't do predictive path testing, and requires manual new-path discovery. PowerPath/VE does all of that, so it's completely accurate to say it's BETTER. It's also not free. Customers can choose, just like they choose between the VMware dVS and the Nexus 1000v. Good, and Better.

You don't have to try to make us seem more complex :-) It's basic:

- VI3.x multipathing = really not great.
- vSphere NMP with any array on the VMware HCL that supports either active/active or ALUA = good.
- vSphere with PowerPath/VE = best.

simple enough for ya? :-)
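To make the good/better distinction above concrete, here is a toy Python sketch (purely illustrative - this is not VMware's or EMC's implementation, and the path names are made up): plain round-robin simply rotates through the active paths regardless of load, while an adaptive policy of the kind Chad describes steers each I/O to the least-loaded path.

```python
from itertools import cycle

class RoundRobinSelector:
    """Toy NMP-RR analogue: rotate through paths, skipping dead ones."""
    def __init__(self, paths):
        self._paths = cycle(paths)
        self._count = len(paths)

    def next_path(self, dead=()):
        # Try each path at most once per call; ignore current load entirely.
        for _ in range(self._count):
            p = next(self._paths)
            if p not in dead:
                return p
        raise RuntimeError("all paths dead")

class AdaptiveSelector:
    """Toy adaptive analogue: pick the path with the fewest outstanding I/Os."""
    def __init__(self, paths):
        self.outstanding = {p: 0 for p in paths}  # caller updates these counts

    def next_path(self, dead=()):
        live = {p: n for p, n in self.outstanding.items() if p not in dead}
        if not live:
            raise RuntimeError("all paths dead")
        return min(live, key=live.get)
```

With two paths, the round-robin selector alternates A, B, A, B no matter how congested either path is, while the adaptive selector immediately shifts work away from a path with a deep outstanding-I/O queue - which is the gap host-side queue management is meant to close.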

OK - now for fact check, row by row.

Table as a whole:
- over-positioning vSeries, N-Series, FAS - they are all the same, right? I mean, if you're going to hammer us for having different array types, isn't it fair to say you have one? But, whatever, let's just let that slide.
- Symmetrix is one family, and the columns are the same for vCenter plugins (and will stay that way), so like the comment above, you're artificially making us look like we have more than 3, just like you're artificially making yourselves look like you have more than 1.

1) Auto-Provision Datastores - correct.
2) Dynamically mask LUNs - incorrect. There is a plugin for Celerra that does that.
3) Dynamically grow/shrink datastores - correct.
4) Dedupe VMs - putting an exclamation mark in there is fine if it makes you feel good :-) We both save capacity, in different ways. EMC's F-RDE is a combo of file-level dedupe and sub-file compression. In the VM datastore use case (which we're talking about here), file-level dedupe does nothing, while sub-file compression generally nets a 40-50% capacity savings (in general-purpose NAS use cases, file-level dedupe often saves more than block-level dedupe or compression). It has the advantages of having NO impact on filesystem size, features, or behavior, and of being unaffected by local and remote replication (ergo, there is no "pinning" of elements of a filesystem that are being referenced by a snapshot). Can the same be said of the NetApp approach? I'm not saying better/worse - just pointing out that a NetApp-constructed table, from a NetApp-constructed world view, won't note the pros/cons on both sides - just like an EMC-constructed one wouldn't.
5) Report dedupe savings - same as above. I personally would say "report space savings", and then it's incorrect.
6) Auto-configured iSCSI settings - correct.
7) Auto-configure NFS settings - incorrect, it's auto in the Celerra NFS plugin.
8) High Performance Multipathing - incorrect, see above.
9) Physical to virtual storage management - incorrect. iSCSI and FC are supported across the board, and the Celerra also has it for NFS.
10) Array-based VM cloning - this is incorrect, but on your side: this can only be done on NFS datastores, not VMFS datastores. We both can "cheat" (taking a snapshot/clone of the LUN, mounting it and copying out a VM), but that is not a VM-level snapshot. The reason we're able to do it on NFS is that both Celerra (as of DART 5.6.47) and ONTAP (I think as of 7.2.3?) can do file-level snapshots (which in the VMware use case manifest as "VM-level" snapshots). Is this statement of mine correct?
11) Array-based datastore cloning - incorrect. Celerra can do it for NFS and iSCSI.
12) IO/offload for VM clones - incorrect, on your side. See 10.
13) SRM support - correct
14) SRM failback - incorrect. It is supported on all four - in the latest SRDF SRA, RecoverPoint 3.3, MirrorView Insight, and the Celerra SRA.
15) Not sure if I understand the last one - can you explain to me what it means?
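On items 4 and 5 above, the difference between whole-file dedupe, block-level dedupe, and compression can be made concrete with a toy Python sketch (purely illustrative - this is not how F-RDE or ONTAP deduplication is implemented; the 4 KB block size and SHA-256 fingerprints are arbitrary choices):

```python
import hashlib
import zlib

BLOCK = 4096  # toy fixed block size; real arrays use their own layouts

def file_dedupe_savings(files):
    """Bytes saved by keeping one copy of each identical whole file."""
    total = sum(len(d) for d in files)
    unique = {hashlib.sha256(d).hexdigest(): len(d) for d in files}
    return total - sum(unique.values())

def block_dedupe_savings(files):
    """Bytes saved by keeping one copy of each identical fixed-size block."""
    total, kept, seen = 0, 0, set()
    for d in files:
        total += len(d)
        for i in range(0, len(d), BLOCK):
            blk = d[i:i + BLOCK]
            h = hashlib.sha256(blk).hexdigest()
            if h not in seen:
                seen.add(h)
                kept += len(blk)
    return total - kept

def compress_savings(files):
    """Bytes saved by compressing each file independently."""
    return sum(len(d) - len(zlib.compress(d)) for d in files)
```

On a pair of near-identical VM images, whole-file dedupe saves nothing (the files differ by a byte), while block-level dedupe collapses all the shared blocks - which is the crux of the disagreement over what "dedupe VMs" should mean in the chart.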

BTW - you know why I think this is an exercise in futility (and why I avoid making direct comparisons to others - something I have talked to you about verbally OVER and OVER again)?

a) If you look at the list above, you're wrong on more than half. If this were in front of a customer, you and NetApp would have lost credibility. This is why I try my darnedest to train EMCers to never go negative on a competitor, but to emphasize why customers choose EMC. When you're a tiny startup, you HAVE to go negative. NetApp is not a tiny startup. You guys are a great company, with great products and people. Coming from a short guy: lose the Napoleon/David-vs-Goliath complex :-)

b) The list will be COMPLETELY wrong within about a month (seriously), and yet you, and the NetApp folks, and partners who use this will continue to use a doc that is completely out of date.

Will you commit to constantly (literally constantly!) updating this thing? If so, what a waste of YOUR time. Personally, I try to make sure my team and I, along with EMC partners, simply stay on top of VMware, Cisco and EMC technologies. We have little time to try to track others. Inevitably, things we would say about others would be incorrect, just like I pointed out with the table above. Don't get me wrong - when good companies (like NetApp) do innovative things that customers like (like RCU), we work with engineering to see what we can do. No "not invented here" syndrome allowed.

c) EMC is more than a storage company. What about our VADP integration with Avamar for backup use cases (not just VDDK but also CBT)? What about SCM's integration to extend ESX host profiles and guest remediation? What about RSA envision's integration with vCenter for security auditing/remediation? Do you have any of those? Anything like that? Anything planned? BTW, that's a SHORT LIST of all the integration points beyond simply storage.

d) You selected the battlefield here to be "vCenter Plugins", and extrapolated out to "VMware integrated". Well, let's broaden the context a bit shall we?
- Does NetApp's array element management tool directly connect to the vCenter APIs, showing VMware contextual info directly in the storage context, like EMC's midrange does? Can I see which VMs are being replicated via SnapMirror and which aren't? If you can, is that a free feature?
- Do ESX host initiator records (or igroups I believe they are called in NetApp-land) automatically get registered by vSphere 4?

Moving on to perhaps a more productive discussion...

It **IS** fair to say that EMC has multiple platforms, and that increases our engineering burden as we develop plugins.

I've always said - whether it's a company or an individual, our strengths are our weaknesses. NetApp's strength is laser focus on, in essence, one product. EMC's strength is breadth of capability. Each has a flipside weakness.

Three quick comments on that one (why would we intentionally "burden ourselves" with more than one storage array type?):

1) Does NetApp have something like a V-Max? What percentage of the "enterprise array" market (definitions vary, but generally it's defined by broader host attach than open systems only, coupled with N+1 architectural designs sharing a global cache model) does the FAS6000 series have relative to IBM, HDS and EMC? How hard would it be to extend FAS into that space? Scale-out is not enough in that market.

2) If one product is always the way, all the time, why the big fight over Data Domain? Isn't that an implicit statement that growth into new markets isn't always best served by a given technological approach, and that perhaps not all problems can be solved "in the storage platform"? I wouldn't be embarrassed to make that statement - it seems patently obvious to me.

3) EMC DOES need to simplify our product families - **where the needs can be met via one architectural model, one approach - we need to simplify.** No argument from me there. You can bet your bottom dollar we're beavering away at it here :-)

All just my 2 cents (my opinion), of course. If you want to commit yourself to consistently stating stuff about others that is always going to be a little wrong, a little behind the times (in some cases not a little, but a lot), then so be it - I'll be glad that you're doing that rather than playing with betas of VMware and your own products :-)

The comments to this entry are closed.
