[plug] Cheap iSCSI vs. NFS for VMs

Tim White weirdit at gmail.com
Wed Jan 23 20:58:48 UTC 2013

I've been doing lots of research recently into iSCSI vs. NFS for our 
virtual machines at work. I now have a preference based on that 
research, but am wondering if I've missed anything that others can 
point out, based on their industry experience.

At work we are using a QNAP 6-bay NAS, currently in RAID6 but aiming to 
migrate to RAID10 eventually. We have two virtual hosts connected via a 
dedicated storage network. We'll also be adding another NAS in the near 
future, possibly another QNAP.

Our current setup is ESXi with iSCSI. However, I'm migrating to 
Proxmox (KVM) and so have the option of moving to NFS at that point.

Currently, the iSCSI LUN is thick provisioned, and the VM disks are thin 
provisioned inside it. This means we can never over-subscribe our 
storage. However, it also means the QNAP reports all of its disk space 
as used, even though we've actually consumed only two-thirds of it (or 
less). iSCSI LUNs can also be thin provisioned, so this is a moot point.
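To make the thick/thin distinction concrete, here is a minimal Python
sketch of what thin provisioning amounts to at the file level: a sparse
file whose apparent size (what the guest sees) is far larger than the
blocks actually allocated on the NAS. The path is a throwaway temp
directory, not anything from our setup:

```python
import os
import tempfile

# "Thin provision" a backing file: give it a large apparent size,
# but allocate no blocks until data is actually written (a sparse file).
path = os.path.join(tempfile.mkdtemp(), "disk.img")
with open(path, "wb") as f:
    f.truncate(10 * 1024 ** 3)  # 10 GiB apparent size, no data written

st = os.stat(path)
apparent = st.st_size            # what the guest is promised: 10 GiB
allocated = st.st_blocks * 512   # what the storage actually uses: ~0
print(apparent, allocated)
```

A thick-provisioned LUN is the opposite: all 10 GiB would be allocated
up front, which is exactly why our QNAP shows as full.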

iSCSI with ESXi (on the QNAP; I assume higher-end systems do it 
differently) is essentially a filesystem within a filesystem within a 
filesystem. You have the QNAP filesystem, in which you create a sparse 
file (the iSCSI backing), which you export via iSCSI and format with 
another filesystem (VMFS), which is then filled with disk images 
(VMDK), which are presented to the guest, which creates yet another 
filesystem inside them. With Proxmox you can reduce this by using LVM 
on an iSCSI backing, but then you don't get the qcow2 features 
(although LVM is pretty good). To me, qcow2 on NFS seems like the 
least "filesystem within a filesystem" you can get.
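For what it's worth, the qcow2-on-NFS stack in Proxmox needs only a
single storage definition. A sketch of the /etc/pve/storage.cfg entry,
where the storage name, server address, and export path are all
hypothetical placeholders:

```
nfs: qnap-nfs
        server 192.168.10.5
        export /share/vmstore
        path /mnt/pve/qnap-nfs
        content images
        options vers=3
```

With that in place, Proxmox creates qcow2 images directly on the NFS
mount: QNAP filesystem, qcow2, guest filesystem, and nothing in between.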

NFS vs. iSCSI speed? Apparently, without offload hardware, they are 
basically the same these days.

Handling of network issues: apparently iSCSI copes better than NFS, 
but if we have an issue on the storage network, I believe it's going 
to cause problems regardless of protocol.

From everything I've been reading, I struggle to see why iSCSI is used 
so much. If the iSCSI target is exporting a raw RAID array, for example 
(no filesystem), then the filesystem-within-a-filesystem issue largely 
disappears. But on low-end NASes it seems to me that NFS is just as 
good. And I don't have to worry about iSCSI provisioning: I just create 
the qcow2 images as needed and manage the free space so that a sudden 
spike in usage doesn't crash a VM. (That can happen regardless of 
protocol, as we discovered when one of the iSCSI LUNs turned out to 
have been provisioned slightly smaller than the disk inside it by the 
previous network manager.)
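That free-space management could be as simple as summing what the
images promise against what they actually consume. A Python sketch,
assuming raw sparse images where the file's apparent size is the
virtual size (for qcow2 you would read the virtual size from
`qemu-img info` instead); the path and capacity in the usage comment
are hypothetical:

```python
import os

def image_usage(storage_dir):
    """Sum promised (virtual) vs. actually allocated bytes for the
    raw/sparse images in storage_dir. st_size is the size the guest was
    promised; st_blocks * 512 is what is really consumed on the NAS."""
    virtual = allocated = 0
    for name in os.listdir(storage_dir):
        st = os.stat(os.path.join(storage_dir, name))
        virtual += st.st_size
        allocated += st.st_blocks * 512
    return virtual, allocated

# Usage sketch (path and capacity are made up):
# virt, alloc = image_usage("/mnt/pve/qnap-nfs/images")
# if virt > DATASTORE_CAPACITY:
#     print("over-subscribed: a sudden usage spike could fill the store")
```

Run from cron, a check like this gives warning before a thin image
grows into the datastore's ceiling and stalls a VM.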

So, for all those out in the industry: what do you use, and why? How 
does iSCSI make your life better, or is it a leftover from when its 
performance was better than NFS's?


