Seven Myths Surrounding NFS in VMware Storage Environments
VMFS vs NFS: a distorted playing field
As might be expected of a technology built from the ground up to manage a VMware-based storage environment, VMFS is robust, mature and well understood. It gives the VMware administrator a fair amount of independence from the storage administrator for shared infrastructure. However, VMFS also has some well-known shortcomings.
By contrast, the advantages of NFS datastores are well documented: an NFS-based NAS storage solution is a much simpler, more automated and altogether superior storage ecosystem for VMware datastores. It should also be acknowledged that there are some substantial obstacles with most NFS storage solutions that need to be overcome before the promised benefits can be realised. These are not insurmountable, yet all too often it is the persistent, false mythology surrounding virtualization environments and NFS datastores that discourages VMware administrators from pursuing a solution. In some cases the myth is based on past facts that no longer apply to VMware today. Other myths contain an element of truth, but they cannot be universally applied to all NFS storage solutions, and it is important to clear this up so that VMware administrators can form a balanced view of the choices available.
Myth 1. VMware does not support all advanced functions on NFS
This is simply not true. All VMware vSphere and ESX features are supported with NFS datastores; VMware has in fact fully supported NFS as a standard storage option since ESX 3.0. The myth probably stems from VMware's development cycle: VMware first develops and supports new advanced features on VMFS datastores with SAN storage, and NFS support usually follows within a quarter or two. In exceptional circumstances the delay can be longer – for example, Storage VMotion™ and Site Recovery Manager™ (SRM) support for NFS was delayed by up to a year because of the impact of a major new release (vSphere 4).
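To make the point concrete, below is a minimal sketch, not taken from the article, of how an NFS export is attached to an ESX/vSphere host as an ordinary datastore through the vSphere API using the pyVmomi Python bindings. The vCenter address, host name, NFS server and export path (vcenter.example.com, esx01.example.com, nas01.example.com, /vol/vmware_ds1) are hypothetical placeholders.

# Minimal sketch: mount an NFS export as a vSphere datastore via pyVmomi.
# All names and credentials below are hypothetical placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Older environments may need an SSL context or SmartConnectNoSSL to skip
# certificate verification.
si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='password', port=443)
try:
    content = si.RetrieveContent()
    # Locate the target ESX host by name (hypothetical name).
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == 'esx01.example.com')
    view.Destroy()

    # Describe the NFS export; once mounted it behaves like any other datastore.
    spec = vim.host.NasVolume.Specification(
        remoteHost='nas01.example.com',   # NFS server
        remotePath='/vol/vmware_ds1',     # exported path on the server
        localPath='nfs_datastore1',       # datastore name shown in vSphere
        accessMode='readWrite')
    ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
    print('Mounted NFS datastore:', ds.name)
finally:
    Disconnect(si)

Once mounted in this way, the datastore is used by vSphere features exactly as a VMFS datastore would be.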
Myth 2. VMware performance is much slower with NFS datastores
This myth is perhaps the most enduring, pervasive and damaging, and it is also quite false. The assertion is often repeated by some SAN vendors and experts alike to suit their own ends, yet there is plenty of evidence to the contrary. VMware ran tests showing that NFS over 1Gbps Ethernet delivered performance within 9 to 10 per cent of 4Gbps Fibre Channel, and that test did not even evaluate performance when 1Gbps Ethernet NICs are trunked on both the VMware server and the NFS storage system. Tests by Dell® as well as BlueArc have demonstrated that the performance of VMware using NFS over 10Gbps Ethernet is equivalent to, or even higher than, VMFS over 8Gbps Fibre Channel.
Myth 3. VMware CPU load is significantly higher with NFS
This myth originated from tests that compared the VMware CPU load when using NFS datastores over TCP/IP with the CPU load of the Fibre Channel and iSCSI (software and hardware based) protocols. The tests showed that NFS over TCP/IP incurred 15 to 40 per cent more CPU overhead than the Fibre Channel protocol and drivers. A key problem with this test, and one that completely invalidates it, is that it measured only the protocol/driver overhead: it did not measure the VMware CPU load of the VMFS datastore itself when using Fibre Channel or iSCSI. Since NFS datastores eliminate VMFS datastore overhead, comparing the overall CPU loads would have been far more insightful. Even so, the differences in the test results were not significant, considering that the Fibre Channel protocol is implemented entirely in silicon whereas the NFS and TCP/IP protocols were entirely software based. In any case, solutions are now available that put both the NFS and TCP/IP protocols in silicon, eliminating the CPU load issue altogether.
Myth 4. VMware is limited to only 8 NFS datastores
This myth is based on the default NFS datastore settings in ESX and vSphere, which allow only eight. In fact ESX supports up to 32 NFS datastores and vSphere supports up to 64 once the limit is raised.
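As a hedged illustration of how that default is raised, the sketch below changes the NFS.MaxVolumes advanced setting on a host through the vSphere API with pyVmomi. The connection details and host name are hypothetical, and the value type the option accepts can vary between releases (some builds expect a long rather than an int).

# Minimal sketch: raise the per-host NFS datastore limit (NFS.MaxVolumes).
# Connection details and host name are hypothetical placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='password', port=443)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == 'esx01.example.com')
    view.Destroy()

    # NFS.MaxVolumes defaults to 8; ESX accepts values up to 32 and vSphere up to 64.
    opt_mgr = host.configManager.advancedOption
    opt_mgr.UpdateOptions(changedValue=[vim.option.OptionValue(key='NFS.MaxVolumes', value=32)])
    print('NFS.MaxVolumes raised to 32 on', host.name)
finally:
    Disconnect(si)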
Myth 5. VMware NFS datastores only scale to 16TB
The notion of a 16TB NFS datastore limit has nothing to do with VMware; it is the typical file system size limit of most NFS storage systems. When the NFS storage system has this limit, so too do its NFS datastores. NFS file systems of up to 256TB are now available, with global namespaces extending to 4PB. This means a datastore can grow to as much as 4PB, or 256 times larger than a 16TB datastore.
Myth 6. NFS thin provisioned VMDKs automatically rehydrate when moved or cloned
This is true for ESX but not for vSphere. Thin provisioning is the default setting for VMDKs on NFS datastores with vSphere 4, and a thin VMDK no longer needs to be rehydrated in order to be moved by Storage VMotion™ to a different NFS datastore or to be cloned.
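As a small illustration of checking this behaviour, the sketch below uses pyVmomi to report whether each virtual disk of a VM is thin provisioned; the VM name and connection details are hypothetical placeholders.

# Minimal sketch: report whether a VM's virtual disks are thin provisioned.
# VM name and connection details are hypothetical placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host='vcenter.example.com', user='administrator', pwd='password', port=443)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == 'nfs-vm01')
    view.Destroy()

    # The thinProvisioned flag lives on the flat-file backing of each virtual disk.
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            thin = getattr(dev.backing, 'thinProvisioned', None)
            print(dev.deviceInfo.label, dev.backing.fileName,
                  'thin' if thin else 'thick/unknown')
finally:
    Disconnect(si)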
Myth 7. Microsoft Windows® VMs can’t boot or utilize NFS datastores
Microsoft Windows Server does not natively support NFS and cannot boot from it. Many people wrongly assume this means that when Windows runs as a VM, VMware cannot place it on NFS datastores. In fact VMware built NFS support into the ESX and vSphere virtualization layer: Windows VMs never see the NFS protocol and are unaware that they reside on NFS datastores.
As good as NFS datastores are for VMware, there are some non-trivial issues with the way most (but not all) vendors implement NFS in their storage systems. NFS-based networked storage dramatically simplifies VM storage provisioning, data protection, management, task management and troubleshooting, but issues with NFS datastore size, per-port I/O performance and file object limitations, not to mention manually intensive, application-disruptive storage tiering and data migration, have tended to discourage VMware simplification. Companies should ignore the myths and look for solutions that combine NFS-based NAS storage with intelligent data management software. In so doing they will be able to derive the full value of VMware and NFS while eliminating these common operational limitations.
Multi-tier virtualised storage
Cinesite, the London-based visual effects house and a Kodak company, recently carried out artistic rendering of animation and scene development for the current Warner Bros blockbuster “Clash of the Titans” using a storage system from BlueArc. The project was the first at Cinesite to be completed using a new BlueArc Mercury™ system as storage for rendered animation and composited shots. During the creation of “Clash of the Titans”, Cinesite managed almost 200 terabytes of data, concurrently feeding over 2500 render cores to produce the visual effects.
Cinesite is currently working on the following forthcoming releases, “Marmaduke” (Fox), “The Chronicles of Narnia: The Voyage of the Dawn Treader” (Fox/Walden) and “John Carter of Mars” (Disney/Pixar), all of which are being produced on the same BlueArc system.
“The movie industry continues to have a voracious appetite for digitally created visual effects of ever increasing complexity and realism,” said Antony Hunt, managing director of Cinesite.
“We identified last year that we would need to increase our storage capacity threefold but did not necessarily want to add to our old cluster and be constrained by the performance of the existing storage nodes on it,” he continued. “We were also conscious that certain aspects of our old system were struggling to keep pace with the ever increasing amount of randomly accessed data and needed to be revisited. After initial selection we conducted a series of acceptance tests on BlueArc and other vendors’ storage before making our final decision. We spoke with colleagues at other visual effects companies here in London who have BlueArc systems and only heard good things.”
“The next challenge for us is going to be stereoscopic content. This will see storage requirements double; maybe triple again, along with a corresponding increase in rendering. We feel confident that the BlueArc storage is going to elegantly grow to those levels and not constrain us.”
Cinesite typically handles up to eight projects at any one time, with more than 350 VFX artists concurrently accessing thousands of files ranging in size from a few kilobytes to hundreds of gigabytes, frequently in unpredictable sequential and random patterns, placing enormous demands on throughput and I/O performance. The Mercury virtualized storage pool allows Cinesite to seamlessly manage 130 terabytes of tiered SAS and SATA disk capacity.
Tier one uses high performance SAS disk for handling output image data as well as 3D movie files comprising thousands of image files per frame and thousands of frames per shot. SATA disk in Tier two provides high bandwidth access to randomly accessed files such as raw scans, which are fed sequentially into the pipeline. An additional 36 terabyte tier of near-line archiving is provided by Cinesite's old storage system. Movement of data between tiers is presently managed by legacy hierarchical storage management (HSM) tools; going forward, Cinesite plans to develop migration policies for the different tiers.