
April 09, 2010

Comments

Andrew Mitchell

I wouldn't mind running a datastore that size. Restoring it across a WAN in the event of a failure though............

Ewan

The VMware datastore sizing issue has always struck me as a bit of an oddity, given the scale of VMware deployments: dozens of servers, TBs of RAM, hundreds of CPUs.

I wouldn't give it a second thought if I were provisioning a 12TB volume group on a Unix server, or if I saw a 12TB NFS share mounted across dozens of servers.

Fundamentally, 12TB is now only 2-3 shelves of disks, even after RAID6 protection, and having to break those 3 shelves down into 10 "logical" units because of a software limitation is a step backwards.
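Ewan's "2-3 shelves" claim can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is illustrative only: the shelf size, drive capacity, and RAID6 group layout are my assumptions (circa-2010 hardware), not figures from his environment, and spares and formatting overhead are ignored.

```python
import math

def shelves_for(usable_tb, drives_per_shelf=24, drive_tb=0.45,
                raid_group=12, parity=2):
    # Illustrative assumptions: 24-drive shelves of 450GB disks,
    # RAID6 in 10+2 groups; spares and formatting overhead ignored.
    data_fraction = (raid_group - parity) / raid_group
    usable_per_shelf = drives_per_shelf * drive_tb * data_fraction
    return math.ceil(usable_tb / usable_per_shelf)

print(shelves_for(12))  # roughly 2 shelves under these assumptions
```

With larger drives the count only drops further, which is Ewan's point: the hardware footprint of 12TB no longer justifies carving it into ten pieces.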

I'm sure VMware are working hard on raising the VMFS limits; otherwise people will have to start looking at NFS to reduce management costs.

Vaughn Stewart



@Andrew Hey man, how have you been?!? Are you attending the Tech Summit in May?

Yeah, 12TB would be tough. I think we could pull it off with our Long-Distance VMotion, as it allows VM access while the data is in flight between sites. I'll post on this next week.

Cheers!

Vaughn Stewart



@Ewan Thanks for chiming in. I thought your comment, "...otherwise people will have to start looking at NFS to reduce management costs," was interesting, as this is exactly what we have been seeing since 2006 (with the release of VI3). I'd suggest that NetApp customers who run 500 VMs or more are commonly on NFS, primarily for ease-of-management reasons. I'd clarify that most of the larger installations started on Fibre Channel and as such may have some amount of a legacy footprint running alongside their NFS datastores.

Duncan

One thing to keep in mind is that for large environments, the max number of NFS shares vs. the max number of VMFS volumes might slightly affect the outcome of this poll.

I think the general best practice for VMFS has always been 300-500GB datastores to avoid SCSI reservation conflicts. However, the locking mechanism has been vastly improved, and 1TB is not uncommon in vSphere environments. 12TB is, however, a completely new ball game. But that is of course very specific to a single customer and a single use case!

@vaughn Yes Andrew will be at Tech Summit.

Duncan
Yellow-Bricks.com

Vaughn Stewart


@Duncan - Thanks for chiming in. The max number of datastores per host is effectively a cluster limit. I haven't seen a customer reach this limit on VMFS or NFS; however, I'm confident that in the future the max will be the same for both.


The results of this poll align with what we see in our customer engagements: NFS deployments have larger (and fewer) datastores and larger DRS/HA clusters. With the release of the Atomic Test & Set (ATS) locking mechanism in ESX/ESXi 4.1, I would expect customers to move to larger VMFS datastores and clusters.


invisible

Honestly, I'd be glad if I could create a 32TB volume and consolidate the fifteen 2TB-ish volumes I've got right now attached to a 20-host ESX cluster. I hope that will be possible after upgrading to ONTAP 8.

Vaughn, do you have any reports of how an aggregate with 36 1TB SATA disks would behave? My plan is to create one, max two, aggregates with as many disks as possible in each, create one huge NFS volume, and get rid of VMFS/FC altogether.

I'm asking because a couple of your competitors had problems in the past (I have no idea what the situation is right now) putting 1TB SATA disks in a huge disk pool. I know because I was working as a Storage Solutions Architect for that vendor at the time. Even though it was possible to create a disk pool with 96 1TB disks, that kind of implementation was "not recommended".

Have your folks at NetApp tested 64-bit aggregates with a bunch of 1TB SATA disks? Has anyone tested such an aggregate/volume specifically for a VMware deployment?

I've got >60TB of usable storage on a 6080 running >400 VMs, and twice that number of VMs is planned in the near future. Unfortunately, I do not have a spare 6080 in my garage to test 64-bit aggregates.
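For a rough feel of what a 36-disk SATA aggregate yields, the sketch below estimates usable capacity. Every constant here is an illustrative assumption of mine (RAID-DP-style dual-parity groups, a right-sizing factor, a filesystem reserve), not a NetApp specification; actual ONTAP sizing depends on drive right-sizing, spares, and the ONTAP release.

```python
import math

def usable_tb(disk_count, disk_tb=1.0, raid_group=14, parity=2,
              right_sizing=0.90, reserve=0.10):
    # Hypothetical sizing helper. Group size, right-sizing factor,
    # and reserve fraction are illustrative assumptions, not specs.
    groups = math.ceil(disk_count / raid_group)
    data_disks = disk_count - groups * parity
    raw = data_disks * disk_tb * right_sizing
    return raw * (1 - reserve)

print(round(usable_tb(36), 1))  # ~24.3 TB from 36 x 1TB SATA disks
```

The point of the exercise: dual parity plus overheads means 36 raw TB lands somewhere in the low-to-mid 20s usable, which frames the "one huge NFS volume" plan.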

Sam

Following on from the previous post: we are currently moving from FC (Pillar) to NFS (NetApp) datastores and are trying to decide whether we want one large NFS datastore or should break it up. We currently have 7 hosts and about 110 VMs. I am having a real hard time tracking down whether we will see performance issues with one large NFS datastore for all the VMs. Does anyone have any input/links? Thanks

Vaughn Stewart

@Sam - Great question. You could easily run 110 VMs on an NFS datastore. The strength of NFS is how easily it handles large numbers of VMs. NFS delivers direct access to hardware-accelerated VM cloning, transparent dedupe, etc. The storage network design with 1GbE Ethernet can be a little complex (as compared to, say, iSCSI).

Will you deploy on 10GbE or 1GbE?

I would suggest that if the total number of VMs you plan to deploy is fewer than 150-200 and you are running 1GbE, you may want to consider iSCSI. The storage network setup is incredibly simple to configure for link resiliency and throughput aggregation (with vSphere).

Long story short, you can't go wrong here.
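Vaughn's rule of thumb could be condensed into a tiny helper. The function name and the exact 150-VM cutoff are mine (he says "150 or 200"); this is a sketch of his advice, not official guidance.

```python
def suggest_protocol(vm_count, link_gbe):
    # Sketch of the rule of thumb above: on 1GbE with fewer than
    # ~150 VMs, iSCSI's simple multipath setup is attractive;
    # otherwise NFS scales more easily.
    if link_gbe <= 1 and vm_count < 150:
        return "iSCSI"
    return "NFS"

print(suggest_protocol(110, 1))   # Sam's case on 1GbE
print(suggest_protocol(110, 10))  # same VM count on 10GbE
```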


Chris Waltham

@Sam We have 170 VMs across 3 NFS datastores. Feel free to email me if you want to chat.

Rene

What is the volume size limit on a FAS2040HA? I would like to know so we can look into a proposal using 2TB drives.

Thanks!
Rene

The comments to this entry are closed.
