June 30, 2011

Who to Follow for FlexPod at Cisco Live 2011

Friea Berg, Cisco Alliance Team

With Cisco Live rapidly approaching, NetApp preparations are in full swing. NetApp reference architect David Klem met last week with Walz Group Chief Information Security Officer Bart Falzarano to review the deck for Wednesday's joint presentation, "Real-World Clouds Built on Cisco and NetApp." And when he's not showing off pictures of his brand-new baby, Cisco Alliance lead Nick DeRose is prepping a number of live demos that will run off the FlexPod in NetApp booth #939.

This is just the tip of the iceberg of the goodness you can expect from NetApp at Cisco Live. If you're interested in NetApp, FlexPod, cloud, or just a more efficient and flexible infrastructure, check out the regularly updated NetApp Cisco Live 2011 communities page and Cisco's Data Center at Cisco Live activity summary.

A number of folks from Cisco, NetApp, and other partners are active online via Twitter and blogs, and many of them will be at the show. SoMeGo's Sunday night Tweet-Up and Cisco's Blogger Meet-Up on Monday night offer perfect opportunities to start your Cisco Live experience with a bang.

Below is the latest snapshot of NetApp bloggers and tweeps attending the show, plus a variety of technology and channel partner contacts working with or building solutions on FlexPod.

(This list is a work in progress; leave me a comment and we'll add your info.)

FlexPod

  • FlexPod | Facebook Fan page | @askflexpod
  • Vaughn Stewart | Virtualization & Cloud Evangelist | Virtual Storage Guy | @vstewed
  • Jason Blosil | Product Marketing Manager | SANbytes | @jasonblosil
  • David Klem | FlexPod Reference Architect | @davidklem
  • Nick DeRose | Cisco Alliance Team: all (technical) things FlexPod | @nickderose
  • Friea Berg | Cisco Alliance Team: all (non-technical) things FlexPod | Virtualization Effect | @friea
  • Robert McDonald | Cisco-NetApp-VMware Solutions Marketing | @robmcntap

Cisco

  • Didier Rombaut | Data Center Media Strategist | Cisco Data Center Blog | @drombaut
  • Tony M Paikeday | Senior Marketing Manager, Cisco Desktop Virtualization | Cisco Data Center Blog | @TonyPaikeday
  • Abhinav Joshi | Solutions Architect | Cisco Data Center Blog | @abhinav_josh
  • Brian Gracely | Cloud Evangelist | Clouds of Change | @bgracely
  • Rick Speyer | Senior Marketing Manager, Application Relevancy | Cisco Data Center Blog | @RickSpeyer
  • David Antkowiak | Cisco Solutions Architect | Cisco Data Center Blog | @dmn0211
  • J Metz | Cisco FCoE Product Manager | http://jmichelmetz.wordpress.com | @jmichelmetz
  • Andy Sholomon | Network Consulting Engineer, Central Eng. Performance and Validation Testing team | @asholomon
  • Rodrigo Flores | Cisco Intelligent Automation Architect and founder of newScale | http://www.servicecatalogs.com | @RFFlores
  • Jason Schroedl | Marketing lead, Cisco Cloud Portal (newScale) | http://www.servicecatalogs.com | @JSchroedl

Looking for more Cisco contacts? Check out Dane DeValcourt's awesome post, Cisco Live 2011 US – Twitter Users.

Citrix

  • Natalie Lambert | Director of Product Marketing, XenDesktop | The Citrix Blog | @nflambert

Cloupia

  • Bhaskar Krishnamsetty | VP of Products and Marketing | blog.cloupia.com | @cloupian

Intel

  • Brian Yoshinaka | Marketing Programs Manager | Brian Yoshinaka on Intel Communities | @IntelEthernet
  • Brian Johnson | LAN Access Division | @thehevy

VMware

  • Mitchell Ratner | National Partner Manager, NetApp and Cisco | @mjratner
  • Sean Gilbert | Sr. Alliance Technology Manager, VMware, Inc. | vTonic | @sean_gilbert
  • Wade Holmes | Technical Solution Architect, Partner Cloud | http://www.vwade.com | @wholmes

Channel Partners

  • Joe Onisick | Data center architectures and cloud computing from a systems integrator perspective | Define the Cloud | @jonisick
  • Jed Ayres | Senior Vice President, MTM Technologies | Virtual Desktop Alliance | @mtmvda

As noted above, if you'd like to be listed, leave a comment or ping me on Twitter with your info.

Cheers! Looking forward to seeing you in Vegas …

June 20, 2011

Virtualize your NetApp

Posted by Keith Aasen - Consulting Systems Engineer

 

My role as a field engineer takes me around the country, where I get to meet lots of interesting customers. These customers are always innovating, and I am constantly impressed by what I see. Recently, one customer's solution reminded me of my early days as a VMware consultant.

 

Some of my first consulting jobs were P2V contracts, where I was brought in to convert a number of physical servers into VMs in a customer's new VMware environment. Those were fun times: each night I would migrate the data from a physical server into a VM, perform the necessary conversion steps, and then make the cutover. Consistently, the next morning the end users were unaware that anything had changed on their servers. Things worked just the way they had before.

 

Recently I was at a customer that had purchased a number of older NetApp 200 series controllers to serve files to their organization. At the time, they thought it would be good to physically separate the different file servers: one for the development group, one for the test/dev organization, etc. They have since refreshed their NetApp storage, and instead of separate storage systems they have opted for a single, larger shared system. This larger system saves them space, power, and cooling, and reduces the number of devices requiring management. Changing a file server is usually a challenging project, as it typically involves lots of changes on the clients to get them pointed to the new storage. That can be time consuming and requires lots of coordination.

 

This client, however, leveraged some unique NetApp technology. They performed a SnapMirror from the old storage arrays to the new, larger array, which was easy to configure and took care of the data migration. Then, when they were ready to cut over, they simply built a vFiler that matched each old physical storage controller. This not only preserved the isolation between the storage groups but also meant they did not have to touch the end clients. They effectively P2Ved their NetApp controllers.
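For readers who want to picture the mechanics, here is a minimal sketch of that migration flow using Data ONTAP 7-Mode commands. The volume names, vFiler name, and IP address are hypothetical, and exact syntax may vary by ONTAP release:

    # On the new controller: create and restrict a destination volume,
    # then seed it from the old system (run from the destination).
    vol create vf_dev_data aggr1 500g
    vol restrict vf_dev_data
    snapmirror initialize -S oldfiler:dev_data newfiler:vf_dev_data

    # At cutover: quiesce and break the mirror, then stand up a vFiler
    # that presents the old file server's identity to clients.
    snapmirror quiesce vf_dev_data
    snapmirror break vf_dev_data
    vfiler create vf_dev -i 10.0.0.51 /vol/vf_dev_data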

 

The best part is that they are now positioned to move these vFilers onto the next generation of hardware when the time comes, again with no disruption to the end users (DataMotion for vFilers).

vFilers are provided as part of MultiStore, which is available for every NetApp controller and is now included with many new systems at no charge.

 

Very soon, though, this will be the norm. Watch the NetApp blogs for further announcements regarding Data ONTAP Cluster-Mode in virtual server environments.

 

Keith

 

May 06, 2011

Desktop Virtualization From “Wow” to “Now”

Guest Post: Rich Brumpton, National Virtualization Director, MTM Technologies

Desktop virtualization is among the top 2011 IT priorities [1] for many organizations. This deceptively simple term spans a massive array of options, ranging from incumbent providers to upstarts to folks solving problems, like printing, that many of us considered pretty well solved a few years ago in the "Server Based Computing" age.

It's no wonder IT teams tasked with implementing (and later managing) virtualized desktop environments are deeply concerned about complexity and often end up embroiled in seemingly endless review and evaluation processes. Individually selecting best-of-breed components and fitting them together just so can only take you so far. Every time you get ready to make a decision, someone is waving a new widget or thingamabob in your face that is better, faster, bigger, and/or more efficient than what you planned on using.

The most reliable outcome: Paralysis by Analysis.

As an award-winning partner in the deployment of Citrix-based virtualization infrastructures over the past two decades, MTM Technologies has helped hundreds of customers facing this very challenge build efficient solutions. Invariably, however, pulling all the pieces of the puzzle together has required an excessive amount of time or money (and sometimes both).

Just like with building a home, the best way to avoid this trap is to start with a basic, standard design that has been thoroughly tested and architecturally validated yet still gives you options and choice in the details.

The blueprint for our desktop virtualization solution is based on a rigorously tested Cisco Validated Design for Citrix XenDesktop on Cisco UCS and NetApp storage. This CVD outlines a cost-effective, scalable, high-performance architecture for hosting, securing, and optimizing the delivery of virtual desktops. Using a validated design as your foundation can dramatically reduce upfront planning time, reduce risk, and yield almost immediate results.

A full end-to-end solution, however, requires more than a solid foundation. In addition to a comprehensive desktop virtualization reference architecture, Cisco's Virtualization Experience Infrastructure (VXI) spans collaboration, borderless networking, and data center technologies and includes a full ecosystem of partners. MTM Technologies integrates products from Wyse, Trend Micro, AppSense, and others to build a complete, easily managed virtual desktop infrastructure that provides end users with a pristine Windows 7 desktop at every login. We also provide frontline, first-tier support across the entire solution.

This enables you to streamline your desktop operation, reduce the number of preventable support incidents and keep your business users focused on the business … not resolving technology problems.

Dozens of customers around the country have embraced this approach. Thorlabs, for example, embarked on a desktop virtualization project to provide secure, around-the-clock remote data and application access to their entire global workforce.

If you have been looking at desktop virtualization, saying "wow" while wondering "how" you can get there, join Citrix, Cisco, NetApp, and MTM Technologies at Synergy.


See how easy it can be to deploy virtual desktops when you don't have to bolt together whatever parts you are given. You'll also have the opportunity to talk directly with Thorlabs Global IT Director Dave Manhas about his deployment, its impact, and lessons learned.

In addition to fantastic Citrix, Cisco, and NetApp Synergy sessions, MTM Technologies will be hosting live demos every half hour at the W Hotel. Register now to reserve your spot, and check out the ultimate in Synergy schwag ...

-------------------------------------------------------------------------------------------------------------
[1] For more details, check out Gartner's "Desktop Virtualization Is Top PC Investment Priority for 2011" and ESG's "Desktop Virtualization Extends Server Virtualization Experience and Skills."

April 25, 2011

FlexPod Rocks WWT Geek Day 2011

Guest Post by Scott Miller, Director of Business Development, World Wide Technology

Last month World Wide Technology, Inc. (WWT), a St. Louis-based systems integrator with offices around the globe, proudly hosted our seventh annual Geek Day. What's a Geek Day, you ask? For the uninitiated, it's a full-day event focused on bringing together people who care about technology, and specifically about best-of-breed data center technologies, to find common platforms for conversation, discovery, and a path of progress.

With 900+ guests attending 26 breakouts and 48 labs on topics ranging from desktop virtualization to private cloud computing, Geek Day 2011 was a rocking success!

One of the most talked about solutions this year was FlexPod for VMware. (For more details, check out the recently published Cisco Validated Design for FlexPod for VMware.)

WWT is working closely with Cisco, NetApp, and VMware to deliver FlexPod-based solutions to our customers, and many couldn't wait to check out a live, working FlexPod. As one of my customers commented, "It's great to cut through the marketing hype and actually see a working FlexPod. The NetApp guys had fun, but they also knew their stuff and were there for me anytime I had a question."

Another thing that impressed me personally was NetApp's "presence" and collaboration with other WWT technology partners. Geek Day very much demonstrated NetApp's thriving ecosystem of partners.

For example:

  • Citrix hosted two Cisco UCS "C-Series" chassis and used NetApp storage for live demos
  • Syncsort demoed a co-developed NetApp solution using a FAS2020
  • Wyse helped NetApp with their latest smart terminals and technical support
  • F5 showcased a FAS3240 in their booth
  • BMC highlighted NetApp's latest disk shelf technology (DS4243 and DS2246) and demoed the tight integration between NetApp and BMC management applications
  • Elliptical had a "bullet proof" demo of a NetApp FAS3050 and DS14 in their awesome hardened rack

Of course, NetApp also hosted its own labs showcasing FlexPod and OnCommand, with live demos of FlexPod, SANscreen, Akorri, and Operations, Protection, and Provisioning Manager. Good stuff.

Hope to see you at Geek Day 2012. In the meantime, if you're looking for a perspective on FlexPod, curious about the types of solutions WWT is building on it, or just dying to know the secret identities of the NetApp folks pictured above, drop a note to the WWT team on Twitter.

April 20, 2011

XenDesktop 5 on NetApp deployment guide

Posted by Rachel Zhu, Reference Architect (Server and Desktop Virtualization)

I am happy to share that a new NetApp Technical Report (TR-3915) is now available. The document focuses on hosted VDI desktops and provides a step-by-step guide and best practices for leveraging Citrix Machine Creation Services (MCS) and NetApp VSC 2.0.1P1. It covers Citrix XenDesktop 5 on VMware vSphere 4.1.0 and Citrix XenServer 5.6.0 using NetApp storage, details the deployment of a typical Windows 7 virtual desktop infrastructure, and demonstrates a mixed-deployment environment with pooled and assigned desktops in XenDesktop.


I hope you will find this TR useful when you deploy Citrix VDI on NetApp, and I look forward to your feedback and questions.

You can also follow me on Twitter: @rachelzhu.

March 24, 2011

The 4 Most Common Misconfigurations with NetApp Deduplication

Posted by Keith Aasen - CSE Virtualization

Being a field engineer, I work with customers from all industries. When I tell customers that the usual deduplication ratio I see on production VMware workloads is 60-70%, I am often met with skepticism. "But my VM workload is different" is usually the response I get, followed by "I'll believe it when I see it." I also get the occasional "That's not what your competitor tells me I will see." I love those ones.

 

Consistently, though, when the customer does a proof of concept or simply buys our gear and begins the implementation, this is exactly the savings they tend to see in their VMware environment. Quite recently, one of my clients moved 600+ VMs, which were using 11.9TB of disk on their incumbent array, to a new NetApp array. Those 600 VMs of varied application, OS type, and configuration deduped down to 3.2TB, a 73% savings!

 

Once in a while, though, I get a call from a customer saying, "Hey, I only got 5% dedupe! What gives?" These low dedupe numbers are almost always caused by one of the following configuration mistakes.

 

Misconfiguration #1 - Not turning on dedupe right away (or forgetting the -s or scan option)

As Dr. Dedupe pointed out in a recent blog, NetApp recommends deduplication on all VMware workloads. You may have noticed that if you use our Virtual Storage Console (VSC) plug-in for vCenter, creating a VMware datastore with the plug-in results in dedupe being turned on automatically. We recommend enabling dedupe right away for a number of reasons, but here is the primary one:

 

Enabling dedupe on a NetApp volume (ASIS) starts the controller tracking the new blocks that are written to that volume. Then, during the scheduled deduplication pass, the controller looks at those new blocks and eliminates any duplicates. What if, however, you already had some VMs in the volume before you enabled deduplication? Unless you tell the controller specifically to scan the existing data, those VMs are never examined or deduped! This is what produces the low dedupe results. The good news: it is a very easy fix. Simply start a deduplication pass from the VSC with the "scan" option enabled, or from the command line with the "-s" switch.

(Screenshots: where to enable a deduplication volume scan in the VSC, and how to do the same in System Manager.)
 

For you command-line guys, it's "sis start -s /vol/myvol". Note the -s; amazing what two characters can do!

This is by far the most common mistake I come across, but thanks to more customers provisioning their VMware storage with the free VSC plug-in, it is becoming less common.
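For reference, the whole sequence from the 7-Mode command line looks roughly like this (the volume name is hypothetical; "df -s" is the quickest way to confirm what dedupe is actually saving you):

    sis on /vol/vmware_ds1         # enable dedupe; new writes are tracked from here on
    sis start -s /vol/vmware_ds1   # -s also scans the blocks already in the volume
    sis status /vol/vmware_ds1     # watch the progress of the running pass
    df -s /vol/vmware_ds1          # report space used and space saved by dedupe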

 

Misconfiguration #2 - LUN reservations

Thin provisioning has gotten a bad reputation in the last few years, and storage admins who have been burned by it in the past tend to get a bit reservation happy. A NetApp controller offers multiple levels of reservations depending on your needs, but with regard to VMware two stand out. First there is the volume reservation, which sets space aside from the large storage pool (the aggregate) and ensures that whatever object you place into that volume has space. Inside the volume we then create the LUN for VMware, and again you can choose to reserve its space, which removes it from the available space in the volume. There are two problems with this. First, there is no need: you have already reserved the space with the volume reservation, so there is no reason to reserve it AGAIN with a LUN reservation. Second, a LUN reservation means that the unused space in the LUN will always consume its full reservation. That is, a 600GB LUN with space reservation turned on will consume 600GB of space with no data in it. Deduping a space-reserved LUN will yield some space back from the used data, but any unused space remains reserved.

 

For example, say I had a 90GB LUN in a 100GB volume, and the LUN was space reserved. With no data in the LUN, the volume shows 90GB used: the unused but reserved LUN. Now I place 37GB of data in the LUN; the volume still shows 90GB used, no change. Next I dedupe that 37GB, and say it dedupes down to 10GB. The volume now reports 63GB used, since I reclaimed 27GB by deduping. However, when I remove the LUN reservation, I can see the data is actually taking up only 10GB, with the volume now reporting 90GB free. [I updated this section from my original post; thanks to Svetlana for pointing out my error here.]

 

In these cases, a simple deselection of the LUN reservation reveals the actual savings from dedupe (yes, this can be done live with the VMs running). Once the actual dedupe savings are displayed (likely back in that 60-70% range), we can adjust the size of the volume to suit the size of the actual data in the LUN (yes, this too can be done live).
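For command-line folks, a minimal equivalent looks something like this; the volume and LUN names are hypothetical, and as noted above the reservation change can be made with the VMs running:

    lun show -v /vol/vm_vol/vm_lun                  # look for "Space Reservation: enabled"
    lun set reservation /vol/vm_vol/vm_lun disable  # release the reservation, live
    df -g /vol/vm_vol                               # volume usage now reflects actual data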


 

Misconfiguration #3 - Misaligned VMs

The problem of some guest operating systems being misaligned with the underlying storage architecture has been well documented. In some cases this misalignment can also cause lower-than-expected deduplication numbers. Clients are often surprised (I know I was) at how many blocks we can dedupe between unlike operating systems, say between Windows 2003 and 2008, or between Windows XP and 2003. However, if the starting offset of one OS type is different from the starting offset of the other, then almost none of the blocks will align.

 

In addition to lowering your dedupe savings and using more disk space than required, misalignment can also place more load on your storage controller (any storage controller; this is not a NetApp-specific problem). Thus it is a great idea to fix this situation. There are a number of tools on the market that can correct it, including the MBRalign tool, which is free for NetApp customers and included as part of the VSC. As you align the misaligned VMs, you will see your dedupe savings rise and your controller load decrease. Goodness!
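As a rough sketch, the scan-and-fix workflow with the tools that ship alongside the VSC looks like this; the datastore path is hypothetical and exact options vary by tool version, so check the tool documentation first:

    # Check whether a guest's partitions are aligned, then fix them.
    # Power the VM off before running mbralign.
    mbrscan /vmfs/volumes/ds1/vm1/vm1-flat.vmdk    # reports the partition starting offset
    mbralign /vmfs/volumes/ds1/vm1/vm1-flat.vmdk   # realigns the VMDK in place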

 

Misconfiguration #4 - Large amounts of data in the VMs

Now this one isn't really a misconfiguration; it's more of a design option. You see, most of my customers do not separate their data from their boot VMDK files. The simplicity of having your entire VM in a single folder is just too good to mess with. Customers are normally still able to achieve very high deduplication ratios even with the application data mixed in with the OS data blocks. Sometimes, though, customers have very large data files, such as large database files, large image repositories, or large message datastores, mixed in with the VM. These large data files tend not to deduplicate well and as such drive down the percentage you see. No harm is done, though, since the NetApp will deduplicate all the OS and other data around these large sections. However, the customer can also move these VMDKs off to other datastores, which then exposes the higher dedupe ratios on the remaining application and OS data. Either option is fine.

 

 

So there they are: the four most common misconfigurations I see with deduplication on NetApp in the field. Please feel free to post and share your savings; we always love to hear from our customers directly.

 


March 15, 2011

Deep dive on XenDesktop 5 MCS architecture

Posted by Rachel Zhu, Reference Architect

My previous blog introduced the new features in XenDesktop 5. The most exciting is the new desktop provisioning method, Machine Creation Services. In this blog, I want to explain the MCS architecture, storage best practices, and the MCS deployment steps.

XenDesktop 5's Machine Creation Services (MCS) simplifies the task of creating, managing, and delivering virtual desktops to users. MCS is a collection of services including the AD Identity Service, Provisioning Service, and Machine Identity Service. The AD Identity Service automatically creates Active Directory computer accounts in the organizational unit you specify on the Number of VMs page; the account names are the same as the names of the machines.

With the full integration of Citrix XenApp, you can deliver on-demand applications as a seamless part of your overall desktop management strategy, extending the benefits of virtualization throughout the enterprise.

When you create a catalog to provision desktops via MCS in XenDesktop 5, a master image is copied to each storage volume. This master image copy uses a hypervisor snapshot. After the master image copy, which takes a few minutes, MCS creates a differential disk and an identity disk for each VM; this step takes only a few seconds per VM. We tested the creation of 60 VMs, which took around 11 minutes total.


The differential disk is created at the same size as the master image and hosts the session data. The identity disk is normally 16MB and is hidden by default; it holds machine identity information such as the host name and password.

In XenCenter, the Storage tab of an MCS-created VM shows the differential disk on the first line and the identity disk on the second.


In Disk Management inside the Windows 7 VM, disk 0 is the differential disk and disk 1 is the identity disk.


Storage considerations for MCS:

Best practice: Citrix recommends NFS as the preferred protocol for XenDesktop 5.

For example, in our lab the master image disk size was 24GB, but because NFS datastores are thin provisioned by default, only 7GB of space was consumed on the NetApp storage. The benefit is even bigger when you have N 24GB differential disks: each differential disk is 0 bytes when created and grows only with use.


 

The core components of XenDesktop are:

Controller. Installed on servers in the data center, the controller consists of services that authenticate users, manage the assembly of users' virtual desktop environments, and broker connections between users and their virtual desktops. It controls the state of the desktops, starting and stopping them based on demand and administrative configuration. In some editions, the controller allows you to install Profile management to manage user personalization settings in virtualized or physical Windows environments.

Virtual Desktop Agent. Installed on virtual desktops, the agent enables direct ICA (Independent Computing Architecture) connections between the virtual desktop and user devices.

Citrix online plug-in. Installed on user devices, the Citrix online plug-in enables direct ICA connections from user devices to virtual desktops.

Machine Creation Services. A collection of services that work together to create virtual desktops from a master desktop image on demand, optimizing storage utilization and providing a pristine virtual desktop to each user every time they log on.

Desktop Studio. Enables you to configure and manage your XenDesktop deployment. Desktop Studio provides various wizards to guide you through the process of setting up your environment, creating your desktops, and assigning desktops to users.

Desktop Director. Enables level-1 and level-2 IT Support staff to monitor a XenDesktop deployment and perform day-to-day maintenance tasks. You can also view and interact with a user's session, using Microsoft Remote Assistance, to troubleshoot problems.

 

Here are the steps to create VMs using MCS.

1. In Citrix Desktop Studio, click Machines, then in the Actions panel click "Create Catalog". With MCS, the machine type can be "Pooled" or "Dedicated"; streamed desktops require Provisioning Server.

2. Select the location of the master image.

3. Enter the number of virtual machines to create, the vCPUs and memory allocated to each VM, and whether new or existing machine accounts should be created in Active Directory. In this example, 4 machines are created with 1 vCPU and 1GB of RAM each, and new machine accounts are created in Active Directory.

4. Select where in Active Directory the machines will be created and the naming scheme for the VMs. In this example, the VMs are stored in Computers and named Xen-Test1 through Xen-Test4. Note that you must enter ## after the VM name; ## is replaced by the VM number automatically.

5. Type in a catalog description for administrators.

6. To finish, give the catalog a name and click Finish.

7. The creation of the new machines can be monitored from vSphere or XenCenter.

The final step is user assignment. Once the desktops are assigned, you can log in to the VM.

February 07, 2011

The Evolution of XenDesktop Strengthens NetApp Story

Posted by Rachel Zhu, Reference Architect

I wrote that XenDesktop 5 was around the corner back in October. I have now been working in the lab with XenDesktop 5 for two months and would like to share my experience with this exciting product. I will write a series of blogs focusing on the architecture, a deep dive on Machine Creation Services, storage best practices for XenDesktop 5, and disaster recovery considerations.

XenDesktop 5 enhances storage integration. NFS is the preferred storage protocol for XenDesktop 5 on XenServer and ESX, and CSV (Cluster Shared Volumes) for Hyper-V. By using NFS, VM disks are dynamic and use less storage, and manageability and scalability are improved as well.

NetApp provides a scalable, unified storage and data management solution for XenDesktop. The unique benefits of the NetApp solution are:

Storage efficiency: Significant cost savings with multiple levels of storage efficiency for all the VM data components.

Performance: Enhanced user experience with transparent read and write I/O optimization that strongly complements NetApp’s storage efficiency capabilities.

Operational agility: Enhanced XenDesktop solution management with tight partner integration.

Data protection: Enhanced protection of both the virtual desktop OS data and the user data, with very low overhead for both cost and operations.

XenDesktop 5 has a new service-oriented architecture built around the Broker Service, Configuration Service, Host Service, and Machine Creation Services. XenDesktop 5 no longer uses the IMA data store as the central database; a Microsoft SQL Server database is used instead to store configuration and session information. Each service reads and writes to the SQL database, and the DDCs communicate with the SQL Server as well; there is no DDC-to-DDC communication. The Host Service talks to the hypervisor through the Hypervisor Communication Library (HCL), which consists of plug-ins for each type of supported hypervisor. This design provides flexibility and scalability.


If you have used XenDesktop 4, you are familiar with terms like farm and desktop group. Terminology and concepts have changed in XenDesktop 5 to align with industry standards. Key conceptual and terminology changes include:


· A site is a deployment of XenDesktop in a single geographical location.

· A host is the infrastructure on which desktops are hosted, which comprises hypervisors (ESX, XenServer, and Hyper-V), storage, and so on.

· A catalog is a collection of user desktops managed as a single entity. Catalogs specify the virtual machines (VMs) or physical computers that host user desktops, the Active Directory computer accounts assigned to those VMs or computers, and, in some cases, the master VM that is copied to create the user desktops.

· A single desktop group can contain desktops from a number of catalogs rather than being limited, as in earlier versions, to a single hypervisor pool. A single desktop group can also be published to users so that one user may access multiple desktops in the group, and a single desktop may be assigned for use by multiple users. Desktops can also be assigned to client machines, rather than users, if required.

XenDesktop 5 introduced a new technology: Machine Creation Services. MCS is a provisioning service for VDI, a collection of services (the AD Identity Service, Provisioning Service, Machine Personality Service, and Hosting Unit Service) that work together to replicate machines based on the master VM, set their identity, and manage them.

In my next blog I will deep-dive into the MCS architecture.

November 19, 2010

Storage Must-Haves for Desktop Virtualization

Ben DuBois, Virtualization Solutions Marketing, NetApp

This is the final post in a 3-part series on desktop virtualization. The series started with signs that Desktop Virtualization is Reaching a Tipping Point, then Citrix's Natalie Lambert shared her perspective on Why Storage Matters for Desktop Virtualization. Additionally, earlier this week Cisco announced the Cisco Virtualization Experience Infrastructure, which Vaughn Stewart discussed in Cisco VXI Ups the Ante.

All of this established the need to carefully consider the underlying storage infrastructure as part of a desktop virtualization plan. 

"OK, I get it.  Storage is important."  

A lot of vendors would like you to believe that their storage solution is the greatest thing since COLD beer!  How do you know if those solutions include the functionality that you “must have” to ensure success? 

As with many things, you have to look beyond the blitzkrieg of marketing material and messaging that bombards you on a daily basis. To uncover the additional information that will help guide you in making the right storage decisions, you have to take a very forensic approach... ask a ton of questions! And not just storage questions, but storage questions as they relate to your business goals: managing cost, providing agility, and delivering a high-quality experience to end users.

Storage "must haves" are a minimum set of feature criteria that are critical to the success of a virtual desktop architecture and must be designed in at the outset of the project. NetApp has been espousing this philosophy for years and, in fact, presented a session specifically on this topic at Citrix Synergy 2008. The point was driven home again in a Gartner brief based on independent research titled "Storage Best Practices for Hosted Virtual Desktops." According to Gartner analyst Bob Passmore:

"Users who have deployed the technologies described in this research have been able to achieve usable storage area network (SAN)/network attached storage (NAS) storage costs approaching the costs of consumer disks used in actual desktops." 

Based on what we hear from customers, the Gartner report is right on the money. That said, I won't be surprised if other storage vendors claim to deliver on the same set of features highlighted in the report.  And on the surface it might appear to be almost a check-box item for many vendors... “thin provisioning - check, dedupe - check, RAID-6, cache, snapshots - check, check, and check".  

The issue isn't just delivering on these features; it is delivering on them to the level that allows you to actually drive out cost and increase performance without tradeoffs. Any claims with respect to feature/function must be well qualified up front. This is where you need to "go granular" with your storage vendor. Below are key questions (for starters) to ensure a solution can deliver the storage "must haves":

  • Snapshots. Do the snapshots impede performance?  What is the overhead when turning them on and at what interval?

  • Thin Provisioning. How efficient is your thin provisioning?  Does the array have intelligence to allow you to reclaim blocks that have been deleted within your virtual machines?

  • Deduplication. Is your deduplication for NAS only, or can you dedupe NAS and SAN across all protocols? Can you dedupe desktop and user data? When you deduplicate your data, do you accelerate performance to that data? Is your deduplication block-based or file-based? Can you deduplicate virtual machines, or only completely identical files?

  • Cache. What is your cache solution?  Is your cache space efficient (is it dedupe aware) or does it act like standard cache?

  • Efficient Cloning. Does the array have support for fast, space efficient cloning or does it require full copy clones?  Can it clone hundreds of VMs in minutes or hours?

  • Write IO Optimization. As Cisco’s Tony Paikeday recently blogged, VDI environments tend to be very write IO intensive with read/write ratios of up to 20/80.  Do you have technology that addresses write IO optimization?

  • Data Protection. Can you withstand double-disk failure? If so, at what cost and with what performance overhead?

  • Backup and recovery. Can you provide automatic backups that consume only the block-level changes to each VM and provide multiple recovery points throughout the day? Are the backups an integrated component of the storage array, and can they provide recovery times faster than those provided by any other means?

In my opinion, the storage "must haves" are so pivotal to the success of a virtual desktop project that they should be considered the "cost of entry" for any vendor to earn your business. IT teams are asking for them. Gartner just validated them. NetApp has been delivering them. But the point is that you still have to do your own homework and qualify them.

Talk to your storage vendor.  Heck, talk to a bunch of storage vendors!  Ask the right questions and get answers now, or deploy on sub-optimal storage and get answers later.

November 12, 2010

Guest Post:
Why Storage Matters for Desktop Virtualization

Natalie Lambert, Director of Product Marketing for XenDesktop, Citrix

With over three million seats sold in the first six months of 2010 alone, Citrix is the market share leader in desktop virtualization software. More than half of the Fortune 100 have production deployments using Citrix XenDesktop, and I’ve had the opportunity to talk with a number of them in my previous role as a Forrester analyst and my current role here at Citrix.

Done right, desktop virtualization will have a profound impact on organizations by enabling more flexible workstyles, improving business agility and data security, and supporting a host of other strategic business priorities.

However, one of the key inhibitors for full, mainstream adoption of desktop virtualization is the storage infrastructure requirements and associated cost.  As Cisco’s Tony Paikeday recently commented, “Doesn’t it make sense to ensure that the compute and storage infrastructure are designed and configured around the unique requirements of desktop workloads?” This is echoed by a recent Citrix Community post which identified storage as the “#1 thing that people mess up with desktop virtualization.”

Many IT teams underestimate the impact desktop virtualization will have on storage, and as a result costs can quickly spiral out of control while performance plummets and end users revolt.

Here are three examples of why storage is so important:

  1. Storage can make the difference between acceptable and outrageous cost-per-seat. Storage is one of the largest expense categories for desktop virtualization projects and can easily represent 35% to 50% of the overall project. It’s not surprising when you think about how inexpensive a desktop PC’s consumer-grade hard drive is relative to the cost per raw gigabyte of networked storage – at Forrester, I used to warn that storage was $0.10 in a PC and $10 in the datacenter (per gigabyte).

    Citrix has developed great efficiency technologies that reduce the storage footprint required for pooled VDI desktops, but these don't address dedicated VDI desktops and end-user data. Choosing a storage platform that enables you to substantially reduce storage consumption, while implementing a data management strategy as part of an overall desktop virtualization plan, can help justify the economics of moving from fixed PCs to virtual desktops.

  2. Managing thousands of desktops, and the associated infrastructure, can be painful in a virtual world too. Simply virtualizing thousands of physical desktops does not eliminate the need to manage those desktops – nor does it give desktop admins the ability to pass the buck on other infrastructure management. Ideally you want to give your desktop administrators the ability to manage their desktop environment without requiring deep knowledge of the storage environment. Storage has the potential to play a central role in delivering the agility, availability and manageability that is requisite for virtual desktop administration.

  3. Performance compromises are unacceptable to end users. Certain activities tend to produce high I/O activity at certain times — for example, all users boot their desktops first thing in the morning and many log back in immediately after lunch. Peak periods with 2 to 3 times the steady state load can bring a typical storage system to a crawl and degrade end-user experience to unacceptable levels. Storage must be sized for both performance AND capacity based on extensively tested best practices and should optimize read and write IO traffic.

To help our customers succeed, Citrix works closely with leading storage providers like NetApp to deliver uniquely integrated solutions that address these challenges and enable desktop administrators to perform key storage based data management operations. Since we understand that not all workers can use pooled VDI desktops, we wanted to make sure that we had an answer for those of you that are struggling with storage costs for dedicated VDI desktops. With NetApp, thousands of desktop images can be created in minutes using NetApp’s rapid provisioning capabilities and then imported and managed within XenDesktop. And, regardless of which type of virtual desktops you use, backups, DR replication and failover, deduplication and storage and VM provisioning for NetApp storage can all be performed through XenCenter with no knowledge of the underlying storage.

Our customers need more than great technology, however. They need solutions that are easy to deploy, easy to support, and easy to scale. That's why we've partnered with Cisco and NetApp to deliver a comprehensive desktop virtualization solution that combines market-leading software and hardware, a complete set of deployment templates, starter kits, expansion packs, and validated reference architectures ... all with a single number to call for support. This dramatically simplifies the process of deploying, configuring, supporting, and scaling desktop virtualization enterprise-wide without any tradeoffs in terms of cost, performance, or end-user experience.

If you're considering desktop virtualization, the underlying storage infrastructure can be a key determinant of the project's success. Stay tuned for the final post in this series, highlighting storage requirements for a successful desktop virtualization project.
