
September 21, 2010

VMware vCloud Director Storage Design – Why is Unified Storage Architecture compelling?

Posted by Abhinav Joshi - Virtualization and Cloud Solutions Architect 

For the last few months, I have been talking to a lot of customers (both enterprises and service providers) about their storage architecture for VMware vCloud Director deployments. These conversations led to the joint VMware and NetApp reference architecture on VMware vCloud Director and the various NetApp and vCloud Director integration demos we showcased at VMworld 2010 a couple of weeks ago.

For the sake of this discussion, I will refer to vCloud Director as vCD. As we drill down into the storage requirements for enabling the different tenants, and the storage required for enabling the vCD infrastructure itself, it becomes evident that the NetApp Unified Storage architecture is very compelling for real-world vCD deployments. It enables customers to deploy an agile, scalable shared storage infrastructure that can meet all of the vCD storage requirements from a single unified storage array, without any negative tradeoffs.

The overall data requirements for any vCD deployment, along with the supported/required storage protocols, can be categorized as follows:

1. Shared Storage required for vCD Tenant Data: This is the shared storage required to meet the data requirements of tenant VMs and the business applications they support. This storage requirement can be further categorized as follows:

  • VMFS/NFS datastores for hosting the vCD tenant data: These are the datastores attached to the underlying vSphere infrastructure for hosting the tenant vApps, VMs, templates, and ISO images. These datastores can be hosted over different storage protocols: FC, FCoE, iSCSI, or NFS (see the sketch after this list).

    What about performance?
    A lot of customers question the performance of Ethernet-based deployments (FCoE, iSCSI, NFS) compared to Fibre Channel. It is important to note that for VMware vSphere deployments on NetApp storage, protocol performance comparisons are no longer a real point of debate, as all the protocols perform within 5% of each other. This topic was discussed in great detail here.

  • Storage as a service offering for tenants: This is typically shared storage directly accessible by one or more VMs within or across different vApps, required to meet the architecture requirements of different business applications and/or other business needs. Typical examples include shared NFS exports or CIFS file shares mounted across different Linux and Windows VMs in vApps, or iSCSI LUNs connected directly inside the VMs.

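To make the first category concrete, here is a minimal sketch of how a tenant NFS datastore might be attached to an ESX host from its service console. It relies on the esxcfg-nas utility; the filer hostname, export path, and datastore label are hypothetical placeholders, so adapt them to your environment.

```python
import subprocess

# Hypothetical values -- substitute your NetApp filer, export, and label.
FILER = "filer01.example.com"     # NetApp storage controller
EXPORT = "/vol/vcd_tenant_ds01"   # NFS export backing the datastore
LABEL = "tenant_ds01"             # Datastore name shown in vSphere

def attach_nfs_datastore():
    """Attach an NFS export as a datastore on this ESX host.

    Runs esxcfg-nas on the ESX service console; the same result can be
    achieved from the vSphere Client or the vSphere API.
    """
    subprocess.check_call(
        ["esxcfg-nas", "-a", "-o", FILER, "-s", EXPORT, LABEL])

def list_nfs_datastores():
    """Print the NFS datastores currently configured on this host."""
    subprocess.check_call(["esxcfg-nas", "-l"])

if __name__ == "__main__":
    attach_nfs_datastore()
    list_nfs_datastores()
```

For the storage-as-a-service case, the equivalent step happens inside the guest rather than on the host: an ordinary in-guest mount such as mount -t nfs filer01:/vol/app_share /mnt/app_share on a Linux VM, or an in-guest iSCSI initiator login for directly connected LUNs.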

2. Shared Storage required for vCD Infrastructure VMs: This is the shared storage required to host the vCD infrastructure VMs and to enable the upload/download of vApps, templates, and media in the cloud. This storage requirement can be further categorized as follows:

  • VMFS/NFS datastores for hosting the vCD Infrastructure VMs: These are the datastores attached to the underlying vSphere infrastructure for hosting the vCD infrastructure VMs, i.e., the vCenter Server, vCD server hosts, vCD database server, vCenter Chargeback server and database, vShield Manager, and the VMs hosting other infrastructure management tools (e.g., NetApp SANscreen, NetApp Operations Manager, AD, DNS). These datastores can also be hosted over different storage protocols: FC, FCoE, iSCSI, or NFS.

  • Transfer Server Storage: To provide temporary storage for uploads and downloads of vApps, templates, and media files to/from tenant local computers, shared storage must be accessible across all the vCD server hosts (a.k.a. cells) in a vCD cluster. This is typically NFS storage mounted across all the Red Hat Enterprise Linux servers that run the vCD cells (see the sketch below).

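Because every cell must see the same transfer storage, a quick sanity check on each RHEL cell is worthwhile. The sketch below reads /proc/mounts and verifies that the transfer directory is backed by an NFS mount; the path shown is an assumption based on a default vCD install location, so adjust it to match your deployment.

```python
import sys

# Assumed default vCD transfer directory -- verify against your install.
TRANSFER_DIR = "/opt/vmware/vcloud-director/data/transfer"

def find_mount(path):
    """Return (mountpoint, device, fstype) for the longest mount prefix of path."""
    best = ("", "", "")
    with open("/proc/mounts") as f:
        for line in f:
            device, mountpoint, fstype = line.split()[:3]
            if path.startswith(mountpoint) and len(mountpoint) > len(best[0]):
                best = (mountpoint, device, fstype)
    return best

if __name__ == "__main__":
    mountpoint, device, fstype = find_mount(TRANSFER_DIR)
    if fstype.startswith("nfs"):
        print("OK: %s is served from %s (%s)" % (TRANSFER_DIR, device, fstype))
    else:
        print("WARNING: %s is not on NFS (found %s mounted at %s)"
              % (TRANSFER_DIR, fstype or "no filesystem", mountpoint or "/"))
        sys.exit(1)
```

Run the same check on every cell in the cluster; if any cell reports a local filesystem instead of the shared NFS mount, vApp uploads will succeed on some cells and fail on others.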

From the discussion above, it is clear that for real-world, scalable vCD deployments, the underlying shared storage infrastructure should be able to efficiently meet all of the vCD data requirements, spanning multiple protocols (FC, FCoE, iSCSI, NFS, and CIFS). At the same time, the storage infrastructure should be scalable, simple to manage, and cost effective, without any negative tradeoffs.

This is where the value of NetApp Unified Storage is very compelling for vCD deployments. The same scalable NetApp storage array can be used to meet all of the vCD storage requirements highlighted above. All NetApp storage systems run the same Data ONTAP operating system, providing SAN (FC, FCoE, and iSCSI) and NAS (CIFS and NFS) capabilities from the same storage array. This delivers significant cost savings for building a scalable, reliable, highly available, and easy-to-manage vCD environment without any negative tradeoffs. Here are a couple of great slides that showcase why the value of NetApp Unified Storage is so compelling for real-world cloud deployments; a short provisioning sketch follows them.

[Slide 1: NetApp Unified Storage]

[Slide 2: NetApp Unified Storage]
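To make the single-array, multi-protocol point concrete, here is a minimal sketch that provisions NFS, SAN, and CIFS resources from one Data ONTAP 7-mode controller over SSH. It assumes the paramiko Python library and administrative SSH access to the filer; the hostname, credentials, volume names, aggregate name, and sizes are all hypothetical placeholders.

```python
import paramiko

# Hypothetical controller and credentials -- substitute your own.
FILER = "filer01.example.com"

def run(ssh, cmd):
    """Run one Data ONTAP 7-mode CLI command and print its output."""
    stdin, stdout, stderr = ssh.exec_command(cmd)
    print(stdout.read().decode())

if __name__ == "__main__":
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(FILER, username="root", password="secret")

    # 1. NFS: a volume exported for NFS datastores or in-guest mounts.
    run(ssh, "vol create vcd_nfs_ds aggr1 500g")
    run(ssh, "exportfs -p rw=esx01.example.com /vol/vcd_nfs_ds")

    # 2. SAN: a LUN for VMFS datastores (FC/FCoE/iSCSI).
    run(ssh, "vol create vcd_luns aggr1 300g")
    run(ssh, "lun create -s 250g -t vmware /vol/vcd_luns/vmfs_lun0")

    # 3. NAS: a CIFS share for Windows VMs in tenant vApps.
    run(ssh, "vol create tenant_cifs aggr1 100g")
    run(ssh, "cifs shares -add tenant_share /vol/tenant_cifs")

    ssh.close()
```

The same pattern extends to igroup create and lun map for presenting the LUN to the ESX hosts. The point is that all three protocol families are driven from one array with one operating system and one command set.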

I hope you found this blog post informative and helpful as you design the next generation cloud solution built on VMware vCloud Director.

In the next blog post, I will cover the storage networking architecture for this solution in detail.

As always, feedback is highly appreciated.

Follow me on Twitter: @abhinav_josh

Comments

Storarch

Great blog Abhinav.

One of the other aspects I want to highlight from the unified storage point of view is the ability to provide the same features, functionality, resiliency, data protection, and availability regardless of which protocol one is using. Unless a storage system has these, the choice of protocol becomes feature dependent. For example, not all storage arrays offer synchronous replication on NAS protocols, and on some, if you use NFS for part of the environment, you can't guarantee five-nines availability.

I think NetApp is the only storage vendor that does a great job of offering feature parity across all protocols with its True Unified Storage, giving customers the freedom and flexibility to choose.

Abhinav Joshi

@Storarch Thanks for highlighting those key points.

