
May 28, 2010

Best Practices for KVM on RHEL and NetApp Storage

We wanted to share the good news that today we published a new solution guide focused on server virtualization with the Kernel-based Virtual Machine (KVM) on Red Hat Enterprise Linux (RHEL) and NetApp storage. The guide discusses in detail the best practices for setting up a virtual server environment built around the KVM hypervisor on RHEL and NetApp storage.


We hope this solution guide will provide a lot of value as you design and deploy your KVM environment on NetApp storage. The solution guide can be downloaded here.

As always, feedback is highly appreciated.


John Theobald

This is a very good first step. I look forward to seeing some offerings and validated designs around SMT coming out from both RH and NetApp. It would be nice to see a platform like UCS behind this as well.

Thanks for the doc.

Blake Golliher

Why no mention of ASIS? My suspicion is that it would deliver tremendous space savings.


This guide misses a few noteworthy items.

Firstly, it's useful to set DELAY="0" in the bridge interface's configuration. Otherwise, traffic for a new MAC address isn't passed until the delay time elapses (the default is 10 seconds, which makes DHCP very slow).
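In RHEL's network scripts, that looks like the following (a sketch; the bridge name br0 and the address details are assumptions, not from the guide):

```shell
# /etc/sysconfig/network-scripts/ifcfg-br0  (illustrative bridge config)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
DELAY=0   # forward traffic for new MAC addresses immediately (no forwarding delay)
```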

Secondly, there is no mention of why there is a gap between the first and second partitions. It appears to exist only because of the disk start location: with the first partition starting at sector 64 and sized at 100 MB, the next partition starts on the next sector divisible by 8. There is no reason not to place the partitions right next to each other. Also, if you have a mixed NetApp/EMC environment (*gasp!*), you should align on a sector divisible by 64, giving you a single layout that satisfies both storage systems. I've also observed that 64-sector alignment tends to land on cylinder boundaries more often, which avoids a harmless warning from fdisk.
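The alignment arithmetic above can be sketched quickly (Python; the 512-byte sector size and the 64-sector boundary come from the comment, the helper names are mine):

```python
SECTOR_SIZE = 512   # bytes per sector (traditional)
BOUNDARY = 64       # align partition starts on sectors divisible by 64

def is_aligned(start_sector, boundary=BOUNDARY):
    """True if a partition starting at start_sector sits on the boundary."""
    return start_sector % boundary == 0

def next_aligned(start_sector, boundary=BOUNDARY):
    """Smallest boundary-aligned sector >= start_sector."""
    return ((start_sector + boundary - 1) // boundary) * boundary

# A 100 MB partition starting at sector 64 occupies 204800 sectors, so it
# ends just before sector 204864 -- which is itself divisible by 64. The
# second partition can therefore start immediately after the first, with
# no gap, and still be aligned for both storage systems.
first_start = 64
first_sectors = 100 * 1024 * 1024 // SECTOR_SIZE   # 204800 sectors
second_start = next_aligned(first_start + first_sectors)
```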

Third, the sunrpc.tcp_slot_table_entries = 128 and sunrpc.udp_slot_table_entries = 128 values are not applied properly by the RHEL init scripts. See https://bugzilla.redhat.com/show_bug.cgi?id=189311, which includes a workaround. These values influence NFS performance enormously.

Fourth, libvirtd doesn't allow you to customize your NFS mount options. It's very annoying, and we've filed RFEs with Red Hat for it. One workaround is to configure libvirtd to manage "/", but this blinds libvirt to the realities of the mounted NFS volumes. Conveniently, you can have libvirtd manage the NFS volume and still get your mount options, because libvirtd checks first whether the NFS share is already mounted before it attempts to mount it. So if you need to set NFS mount options, you can do so in fstab, letting init mount the share; later in the boot process libvirtd will manage the volume without attempting to remount it (with default options).

We have also found that NFSv3 over UDP seems to provide the best all-around performance. I've also seen discussion of the virtues of avoiding fragmentation by keeping the wsize and rsize within the Ethernet frame size. The noatime option probably doesn't hurt either.
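A sketch of such an fstab entry (the export path, mount point, and rsize/wsize values are illustrative assumptions, not recommendations from the guide):

```shell
# /etc/fstab -- init mounts the NFS volume at boot; libvirtd then sees it
# already mounted and manages the pool without remounting it, so these
# options stick.
filer01:/vol/kvm_images  /var/lib/libvirt/images  nfs  vers=3,proto=udp,rsize=1024,wsize=1024,noatime  0 0
```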

Fifth, elevator=noop has no impact on nfs mounts, only on block devices. Our benchmarks have shown no difference.

Sixth, there are considerable performance improvements in the KVM kernel modules provided in the Red Hat "Virtualization" channel.

We're having lots of success with KVM, running over 70 VMs happily on each of our HP BL495s (yes, 200+ VMs on three blades... and counting). It's very solid, but there isn't much in the way of best practices, and there's a lot of dogma from VMware.


Whoops, my last reply was in reference to document RA-0004-0810.

Anyway, I would love to speak with the author. :)

Jon Benedict

Hi Dean,

Your feedback is great, and after testing the items, I will likely add them to the next update of the document.

