[plug] Cheap iSCSI vs. NFS for VMs

Leon Wright techman83 at gmail.com
Thu Jan 24 00:34:48 UTC 2013


Tim,

We're using NFS off our NetApp boxes, hooked up to ESXi. Everything is thin
provisioned, but we still use cooked file systems (VMDKs), as we've yet to
see better performance from NFS-mounting the filer directly inside the VM.
We haven't finished testing that, though, as most of the VMs were migrated
off old iSCSI and Fibre Channel SANs. The NetApps also deduplicate,
so inefficient space usage isn't much of an issue for us.
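
If anyone wants to repeat the in-guest comparison, it looks roughly like
the below. The filer name and export path are made-up examples, and dd is
only a crude first pass, not a proper benchmark:

  # As a VMDK on the NFS datastore there's nothing to do in the guest;
  # the disk just appears as a normal block device (e.g. /dev/sdb).

  # Mounting the filer directly inside a Linux guest instead
  # (hypothetical export name):
  mount -t nfs filer01:/vol/vmdata /mnt/filer

  # Crude sequential write test against each path:
  dd if=/dev/zero of=/mnt/filer/testfile bs=1M count=1024 oflag=direct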

Regards,

Leon

--
DRM 'manages access' in the same way that jail 'manages freedom.'

# cat /dev/mem | strings | grep -i cats
Damn, my RAM is full of cats... MEOW!!


On Thu, Jan 24, 2013 at 4:58 AM, Tim White <weirdit at gmail.com> wrote:

> I've been doing lots of research recently into iSCSI vs. NFS for our
> virtual machines at work. I now have a preference based on that research,
> but I'm wondering if others can point out anything I've missed, based on
> their industry experience.
>
> At work we are using a 6-bay QNAP NAS, currently in RAID6 but aiming to
> migrate to RAID10 eventually. We have two virtual hosts connected via a
> dedicated storage network. We'll also be adding another NAS in the near
> future, possibly a QNAP as well.
>
> Our current setup is ESXi with iSCSI. However, I'm migrating to Proxmox
> (KVM), so I have the option to move to NFS at that point.
>
> Currently, the iSCSI LUN is thick provisioned, and the VM HDD is then thin
> provisioned inside it. This means we can never "over subscribe" our
> storage. However, it also means we are reserving all the disk space on the
> QNAP even though we've actually used only two-thirds of it (or less). I
> know that iSCSI can also be thin provisioned, so this is a moot point.
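>
> (Since we're moving to KVM anyway, thin provisioning at the image layer is
> trivial there. A minimal sketch; the Proxmox-style path is just an example:
>
>   # Create a 100G qcow2 image; it only consumes space as the guest
>   # writes to it:
>   qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 100G
>
>   # Compare the virtual size with what's actually allocated on disk:
>   qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
> )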
>
> iSCSI plus ESXi (on a QNAP; I assume higher-end systems do it differently)
> is essentially a filesystem within a filesystem within a filesystem. You
> have the QNAP filesystem, in which you create a sparse file (the iSCSI
> backing store), which you export via iSCSI and format with another
> filesystem (VMFS), which is then filled with disk images (VMDKs), each of
> which is presented to a guest that creates yet another filesystem inside
> it. With Proxmox you can reduce this by using the iSCSI LUN as LVM storage
> (a sketch follows below), but then you don't get all the qcow features
> (although LVM is pretty good).
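>
> A rough sketch of the LVM-on-iSCSI route, done by hand with open-iscsi;
> the portal address and target name are made up, and the device name will
> vary:
>
>   # Discover and log in to the target:
>   iscsiadm -m discovery -t sendtargets -p 192.168.10.5
>   iscsiadm -m node -T iqn.2013-01.com.example:vmstore -p 192.168.10.5 --login
>
>   # Turn the exported LUN into an LVM volume group that logical
>   # volumes can be carved out of (check the device name first!):
>   pvcreate /dev/sdc
>   vgcreate vmstore /dev/sdc
>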
> To me, a qcow on NFS seems like the least "filesystem within a filesystem"
> that you can get.
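>
> For comparison, the NFS route is just the below (again, hostnames and
> paths are examples only):
>
>   # Mount the NAS export on the host:
>   mount -t nfs qnap01:/share/vmstore /mnt/vmstore
>
>   # Create the guest disk directly on it:
>   qemu-img create -f qcow2 /mnt/vmstore/vm-101-disk-1.qcow2 50G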
>
> NFS vs. iSCSI speed? Apparently, without offload hardware they are
> basically the same nowadays.
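>
> If anyone wants to verify that on their own hardware, something like fio
> run against both a qcow on NFS and a disk on an iSCSI LUN should settle
> it. These job parameters are just a starting point, not a recommendation:
>
>   fio --name=randwrite --filename=/mnt/vmstore/fio.test --size=1G \
>       --rw=randwrite --bs=4k --ioengine=libaio --direct=1 --runtime=60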
>
> Handling of network issues. Apparently iSCSI does this better than NFS,
> but if we have an issue on the storage network, I believe it's going to
> cause problems regardless of protocol.
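>
> (On the NFS side, most of the resilience you do get comes down to mount
> options. A commonly suggested combination, though the exact values are
> debatable:
>
>   mount -t nfs -o hard,intr,timeo=600,retrans=2 qnap01:/share/vmstore /mnt/vmstore
>
> With "hard", I/O blocks and retries rather than returning errors to the
> guest while the network is down.)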
>
> From everything I've been reading, I struggle to see why iSCSI is used so
> much. I can see that if the iSCSI target is exporting a raw RAID array,
> for example (no filesystem), then the filesystem-within-a-filesystem issue
> isn't really there. But on low-end NASes it seems to me that NFS is just
> as good, and I don't have to worry about iSCSI provisioning; I just create
> the qcows as needed and manage the free space so that a sudden spike in
> usage doesn't crash a VM. (That will happen regardless of protocol, as we
> discovered when one of the iSCSI LUNs turned out to have been provisioned
> slightly smaller than the disk inside it by the previous network manager.)
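>
> The free-space management I have in mind is nothing fancier than watching
> allocated vs. virtual sizes (paths are examples):
>
>   # How big does the guest think the disk is, vs. what it actually uses?
>   qemu-img info /mnt/vmstore/vm-101-disk-1.qcow2
>
>   # How much headroom is left on the share itself?
>   df -h /mnt/vmstore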
>
> So, for all those out in the industry: what do you use, and why? How does
> iSCSI make your life better, or is it a leftover from when its performance
> was better than NFS's?
>
> Thanks
>
> Tim
> _______________________________________________
> PLUG discussion list: plug at plug.org.au
> http://lists.plug.org.au/mailman/listinfo/plug
> Committee e-mail: committee at plug.org.au
> PLUG Membership: http://www.plug.org.au/membership
>

