
September 02, 2009

Comments

Steve Marsden

Isn't this the same thing that NetApp did in Australia? Nothing new here:

http://www.itnews.com.au/News/141841,netapp-backs-storage-guarantee-with-1m-offer.aspx

I have never seen any results from this Australian challenge, so I can only assume it didn't happen. Can we expect to see the results from the Australian challenge any time soon? It's been 5 months now - surely it doesn't take that long to implement this stuff, does it?

Vaughn Stewart

Steve - thanks for the follow-up. I don't have any details around Australia's program, but I will follow up.

Paul P

Yeah Steve,

I know what you mean, all lies and no substance. I heard that this so called lucky (unlucky?) customer in Australia was so upset at NetApp’s claims that they told them not to bother bringing in any storage kit as they just 'knew' that none of the NetApp claims could possibly be true and didn’t want to get caught up in a cheap (expensive?) marketing stunt. "You can go and pull the wool over someone else's eyes" was supposedly the comment made.

Hey while on the topic, I’ve just spent two minutes googling some of this stuff to do my own research and I found some fairly scurrilous NetApp claims (and I would advise the following material is unsuitable for people under the age of 18):

• That they can support more serious applications like Exchange and Oracle with (only?) RAID 6. Here’s something you wouldn’t know, an engineer friend of a friend of mine told me that they just couldn’t create the RAID 10 algorithm, so now they go around telling everyone not to use RAID 10. I would be LOL if this wasn’t so serious.

• Now get this, they claim (yet to be proven BTW) they can actually connect into a Fibre Channel SAN and also present (FC?) LUNs. Which is obviously not possible because they do not support Real FC… They use the fake version, which I believe is not (yet?) certified by INCITS. Here’s a dirty little secret for ya – the real reason NetApp do this is so they can save money by not using genuine (real deal) FC components.

• Here is something else they try to hide, they actually use a file system. A downright and dirty file system within the storage controller itself. That’s… pretty weird.

• Oh yeah and another thing about file systems and files – you don’t have to draw much of a conclusion to realise if they (NetApp) use a file system in the storage controller (I can’t bring myself to call it a SAN array), that their systems do not use ‘blocks’ – the foundation of any respectable storage device. Now, there is no place for files in the current hot stuff like virtualisation, and what about the new boy in town, Mr Cloud himself – it’s all blocks baby. What were NetApp thinking, Files and Storage - it’ll never take off – they just keep pumping this poison out to the poor unsuspecting populace, Mums and Dads, even kids – that’s downright low. And what’s worse, how does a customer know what’s real and what’s not?

• Oh and while on the subject of files, I have not come across one site that uses NetApp for Exchange – because how could they, if they only do NAS, right?

• I was worried about thin provisioning there for a while, glad that never took off, thank goodness. I mean, one more thing to allow the (already lazy) storage administrators to take their eye off the ball. Hundreds of thousands of people would have lost their jobs over this very irresponsible technology. Anyways, a respectable community organisation came along and fixed it – they virtualised it – it works much better now, all the (irresponsible) bugs have been removed and no-one lost their job… NetApp, they rely on others to fix their problems.

• And now they are onto this de-duping production data. Again, ‘what the…’ I mean everyone knows it’s foolish to de-dupe any type of ‘in use' production data. Look, it won’t work. Thank goodness no-one has tried yet, because if even one site turned it on (which clearly they can't, yet), the whole internet would grind to a halt – and you know what that would mean - no way to operate these new fangdangled Cloud devices.

NetApp is such an irresponsible and dangerous company. Steve, I understand your fears and concerns, as I too go to bed worrying that, while most of this is just marketing fluff that you cannot actually use, one day a customer accidentally comes across something dangerous and turns it on.

What damage would that do to us and the rest of the industry? They must not care for their customers.

steve

Ermmm... thanks Paul. Interesting stuff. I was only asking about the Australian $1M challenge though as I haven't seen the results. Hopefully Vaughn can look into the details for us.

steve

Just noticed the update in the blog post - thanks Vaughn. Will await the results with interest.

Vaughn Stewart

Paul, thanks for the reply, but I can’t tell if you’re serious with your points…

- On lies and no substance…

Which is more probable? That EMC gave away hardware, software, and support in order to prevent the introduction of NetApp technology into the winning account or that our data deduplication is actually just smoke and mirrors?

It is widely understood that account control is vital to any vendor in any industry.

So did NetApp make a mistake? We sure did: we publicly announced the winner of the challenge. Note: another all-EMC shop in Australia has accepted the challenge, and this time the account has not been announced publicly.

- On Exchange and Oracle…

Exchange on RAID-DP is absolutely supported by Microsoft, unless their support website is not to be trusted.

Do you know that Oracle on Demand runs on NetApp? That’s more than 8 PB of data, most of it NFS, all running on RAID-DP and being managed by a ‘skeleton crew’ (over 400 TBs managed per storage admin).

I wonder if you should warn Oracle and Microsoft?

- On the topic of ‘real’ versus ‘un-real’ Fibre Channel LUNs…

Please let me know what a real LUN is and how QLogic and Emulex HBAs can communicate via un-real FC to the un-real NetApp LUNs. Maybe QLogic and Emulex are conspiring with NetApp on this un-real form of the FC protocol.

On this subject of un-real devices, can you ask VMware if a virtual machine has a real CPU? Is there a worldwide conspiracy underway which is replacing direct hardware access with some form of abstracted access? If so, what should we do about it?

Can this ‘virtualization’ be stopped?!?!

While there’s so much to digest in your comments, for the sake of time I need to jump to your last point…

- On NetApp being an irresponsible and dangerous company…

I have sworn a NetApp vow of secrecy which, if broken, places my family in great danger; otherwise I would share with you how we are able to have $4 billion in annual sales while delivering dangerous technology.

Paul – again, I'm not sure if you're being serious. If you'd like to engage in an intelligent conversation versus this current form of technical terrorism, stop by the NetApp booth. I'll make time to help you.

Cheers,

Vaughn

Paul P

Vaughn, Vaughn, Vaughn,

That's both real funny and real sad that you had to ask - yes, I was being sarcastic...

And yes I come across this all the time myself...

Chad Sakac

Vaughn - just as an FYI (for factual correctness), the original customer had a previous issue - during a DART 5.5 to 5.6 upgrade, the iSCSI LUN went from a SCSI-2 device to a SCSI-3 device.

In VMware (as you know), this causes a block device to run into the resignaturing behavior, requiring datastore renaming and sometimes re-registering the VMs.
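If it helps to see why, here is a rough sketch in Python (purely illustrative, not VMware's actual code or on-disk format) of the comparison that trips the snapshot/resignature logic when a LUN's reported identity changes:

```python
# Illustrative sketch only - not VMware's implementation. The idea: ESX
# records identity attributes of the LUN when the VMFS volume is created,
# and on rescan compares them to what the array currently reports.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScsiIdentity:
    device_id: str   # e.g. the NAA identifier reported by the array
    lun_number: int
    scsi_level: int  # the attribute that changed in the DART 5.5 -> 5.6 upgrade

def needs_resignature(stored: ScsiIdentity, reported: ScsiIdentity) -> bool:
    """A mismatch makes ESX treat the volume as a snapshot/copy; depending
    on host settings it then writes a new signature, which renames the
    datastore and can invalidate VM registrations by path."""
    return stored != reported

before = ScsiIdentity("naa.6006...", lun_number=4, scsi_level=2)
after  = ScsiIdentity("naa.6006...", lun_number=4, scsi_level=3)  # post-upgrade
print(needs_resignature(before, after))  # True -> resignaturing behavior
```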

This of course happens with many arrays - and is an important procedural thing to know. We (EMC) dropped the ball with that customer during the process (as this was a known procedure with a known workaround).

We didn't give them equipment, money, services, etc. What we did was explain the issue through a thorough root cause analysis (of technology and process), and helped make them whole.

Vendors make mistakes, and customers recognize that. What's more important is how vendors react after having made them.

Just wanting to keep the crazy libel in this back-and-forth to a minimum and the truth to a maximum.

They are a happy EMC customer.

eric barlier

I am keen on getting to know who the customer is down under.

Is there an ETA on the info?

Eric

Vaughn Stewart

Eric,

I spoke with the NetApp team running the program. We are currently wrapping up the legal agreements and plan to have an announcement in November, with updates posted 60 days after.

Evan Unrue

Ok, I have a question. The WAFL file system performs its function very well (for a time): finding enough contiguous blocks to perform full-stripe writes negates the requirement for parity reads on disk. But as random IO from multiple hosts is serviced over a period of time, fewer contiguous blocks are available, meaning we have to start looking to the disks for parity again. A single parity drive in RAID 5 has a write penalty of 4 (approx), so with double parity your write penalty is going to be greater. As I wouldn't have thought you are sizing spindles in the truest sense of the word here (because spindle counts could end up being excessive), there is a requirement to defrag the NetApp FAS a lot more often than typical SANs, and with a 24/7 operation you don't want to be doing online defrags of highly transactional critical systems, because response times suffer. On that alone, how is RAID-DP suitable for IO intensive applications?
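To put rough numbers on the penalty side (back-of-envelope only; the 24 spindles and 180 IOPS per disk below are assumed figures, and this deliberately ignores any full-stripe write optimization, which is exactly the mechanism in question):

```python
# Classic small-random-write penalties, assuming read-modify-write
# behavior and no full-stripe write optimization.
WRITE_PENALTY = {
    "RAID 10": 2,  # write data + write mirror
    "RAID 5":  4,  # read data, read parity, write data, write parity
    "RAID 6":  6,  # read data, read both parities, write data, write both parities
}

def host_write_iops(spindles: int, iops_per_disk: int, scheme: str) -> float:
    """Host-visible write IOPS after per-write IO amplification is paid."""
    return spindles * iops_per_disk / WRITE_PENALTY[scheme]

for scheme in WRITE_PENALTY:
    print(f"{scheme}: {host_write_iops(24, 180, scheme):.0f} write IOPS")
```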

Vaughn Stewart

Evan,

Thanks for the comments. I'm not sure that all of your points are correct, but I think what you are concerned about is contiguous free space for write operations and the performance impact of RAID.

WAFL has many components. Write requests receive immediate ACKs as these requests are logged to NVRAM. This ensures high client performance while also allowing ONTAP to prepare for the write operation (which occurs either every 10 seconds or when half of the NVRAM capacity has been reached). WAFL also writes in passes, never returning to write to a block a second time until all have been written to once. This allows contiguous block ranges to become available as a part of the natural aging process of the data.
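If it helps, here is a toy sketch of that flow in Python (illustrative only, not ONTAP internals; the buffer size and trigger thresholds are simplified stand-ins):

```python
import time

class ToyWriteLog:
    """Toy model of the write path described above - not ONTAP internals."""
    def __init__(self, nvram_bytes: int, cp_interval_s: float = 10.0):
        self.capacity = nvram_bytes
        self.cp_interval = cp_interval_s
        self.logged = 0
        self.last_cp = time.monotonic()

    def write(self, payload: bytes) -> str:
        self.logged += len(payload)
        # flush when half the log fills or the timer expires, whichever first
        if (self.logged >= self.capacity // 2
                or time.monotonic() - self.last_cp >= self.cp_interval):
            self.consistency_point()
        return "ACK"  # the client is acknowledged once the request is logged

    def consistency_point(self) -> None:
        # where the real system lays down full stripes across the RAID group,
        # amortizing the parity computation across many blocks at once
        self.logged = 0
        self.last_cp = time.monotonic()

log = ToyWriteLog(nvram_bytes=1024)
for _ in range(20):
    log.write(b"x" * 64)  # triggers a flush roughly every 8 writes
```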

Now, while one can technically defrag WAFL (it's not exactly defrag, but the term will suffice here), it is rarely ever done, as WAFL is optimized for random read access. I'd love to point out 3rd-party public docs on this last point, but I don't have any references available at this time. You're gonna have to trust me on this one.

For more info on WAFL see the original NetApp technical report TR-3001 http://media.netapp.com/documents/tr-3001.pdf

As for RAID-DP, it is high performing, highly available, and provides high utilization. Don't take my word for it: check out the configurations used by all the storage vendors as they trick out their arrays for optimal performance, and then compare those configs to NetApp's. NetApp is the only vendor that runs with parity RAID protection in place, because we have very little performance overhead.
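The utilization point is simple arithmetic (the group sizes below are illustrative, not mandated configurations):

```python
# Usable-capacity fractions for a RAID group; 14 data + 2 parity is a
# common RAID-DP layout, used here purely for illustration.
def usable_fraction(data_disks: int, protection_disks: int) -> float:
    return data_disks / (data_disks + protection_disks)

print(f"RAID-DP, 14 data + 2 parity: {usable_fraction(14, 2):.1%}")  # 87.5%
print(f"RAID 10, 8 data + 8 mirrors: {usable_fraction(8, 8):.1%}")   # 50.0%
```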

For more info on RAID-DP see TR-3298 http://media.netapp.com/documents/wp_3298.pdf

