Are those speeds typical? I feel like I should be getting like 100MB/s
average.
You didn't specify how many drives you have, other than that they're 1TB (SATA3). You could theoretically squeeze out some more juice with RAID10 (4 drives: 2 mirrored pairs, striped together). That alone, with no real configuration tweaks, might yield better reads for all the workload types you mentioned.
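As a rough sketch, on a plain Ubuntu box that 4-drive RAID10 layout can be built with mdadm; the `/dev/sdb`..`/dev/sde` device names and the mount point are assumptions you'd adjust for your system:

```shell
# Assumes 4 spare drives at /dev/sdb../dev/sde -- adjust for your box.
# Creates a RAID10 array: two mirrored pairs, striped together.
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Put a filesystem on it and mount it.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt/storage

# Check array status / resync progress.
cat /proc/mdstat
```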
Am I crazy for trying to run FreeNAS/NAS4Free on a VM? Should I just
use Windows or a distro like Ubuntu? Feels like a lot of overhead so
far.
No, you're not crazy. In fact, I asked the same question on ServerFault. There are, however, a crapload of caveats in doing so, especially depending on the hypervisor. Using Windows or Ubuntu would probably be simpler; or rather, FreeNAS/NAS4Free unvirtualized would be a lot easier, since dealing with the quirks of each hypervisor, how it treats its guests, and what direct resources the guests can have can be a nightmare. So far I've tried VMware ESXi 4.1U3, ESXi 5.1 (patched) and Hyper-V 2012 (v3). I managed to get it working on 5.1 (patched), but it took a lot of time and effort to get it all working correctly. I'm going to blog about this, as I feel like I just went to Mordor to drop off the One Ring.
Is there anything I can do to tweak ESX, FreeNAS/NAS4Free, NFS, ZFS to
get better throughput
I think my first answer covers this, but I'll address specifically ESX, ZFS and FreeNAS as that's my exact setup.
- ESXi: Your best bet for performance with ZFS in FreeNAS is to give it direct access to the disks. In hypervisor lingo, this is referred to as passthrough (PCI passthrough in VMware). It means FreeNAS can access each disk it's going to use without going through a vmdk file in ESXi, which ultimately means better/faster performance. After reading about ZFS from numerous sources, it's clear that for ZFS, less is more. Don't get a fancy RAID card; it won't help with redundancy or performance. ZFS wants unencumbered, direct disk access, so SAS HBAs (Host Bus Adapters) in JBOD mode are pretty much the way to go. VMware's HCL for SAS HBAs is a little limited, but the FreeNAS community is pretty good at finding compatible cards, and http://ultimatewhitebox.com may also be of some help.
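As a sketch, from the ESXi host shell you can list PCI devices to locate the HBA before toggling passthrough in the vSphere Client; the "LSI" grep pattern is an assumption for an LSI-based card:

```shell
# On the ESXi host shell (SSH enabled), list PCI devices and look
# for the SAS HBA. The "LSI" pattern is an assumption; use whatever
# vendor string matches your card.
esxcli hardware pci list | grep -i -A 4 "LSI"

# Passthrough itself is then enabled per-device in the vSphere Client
# (host > Configuration > Advanced Settings > Edit), and the host must
# be rebooted before the device can be handed to the FreeNAS guest.
```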
- ZFS: I'll refer you to this guy: http://constantin.glez.de/blog/2010/01/home-server-raid-greed-and-why-mirroring-still-best. His article explains why mirroring is better than RAID-Z. I used to be in the RAIDZ/5/6 camp, but after reading it and thinking things over a bit, I've switched to mirroring and striping (RAID10/01). It's easier and faster.
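For what it's worth, a striped-mirror pool (ZFS's equivalent of RAID10) is a one-liner; the pool name "tank" and the da0..da3 device names are placeholders for a 4-disk FreeBSD/FreeNAS setup:

```shell
# Create a pool of two mirrored pairs; ZFS stripes writes across
# the vdevs automatically, giving a RAID10-style layout.
# "tank" and da0..da3 are placeholders -- use your own devices.
zpool create tank mirror da0 da1 mirror da2 da3

# Verify the layout and health.
zpool status tank

# Growing later is easy: add another mirrored pair as a new vdev.
zpool add tank mirror da4 da5
```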
- FreeNAS: I love FreeNAS 8.3. It's a solid distro. I know little to nothing about FreeBSD, but I'm working on that. Running FreeNAS in a VM is doable, but once you introduce HBAs into the equation, things get a little dicey. In my case I ran into a few problems; namely, getting FreeNAS to see the disks on the LSI SAS HBA was painful. It wasn't until I found a forum post stating that for VMware ESXi 5.1 you need to apply 3 patches from VMware to fix passthrough that I was able to move forward. After applying the patches, the disks showed up in FreeNAS, but then I had to deal with the VMXNET3 issue. Luckily, I found a great guide for installing the VMware Tools in FreeNAS. After that, I hit the lovely IRQ interrupt storm (great error name, right?), for which there's no 100% clear solution; after several hours of troubleshooting I figured out I had to disable ACPI in the host's BIOS. Welcome to VM-guest troubleshooting hell. (Keep in mind this was my last, successful attempt; I'd already bailed on VMware ESXi 4.1U3 and Hyper-V 2012 core.) I'd estimate I spent 24+ hours total across all 3 hypervisors trying to get this to work.
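For reference, once the VMware Tools modules are built, wiring up VMXNET3 on the FreeBSD side usually comes down to a couple of config lines. Treat this as a sketch: the interface name is an assumption (vmx3f0 under the legacy tools driver; the later native driver uses vmx0):

```shell
# In /boot/loader.conf -- load the VMXNET3 driver at boot
# (assumes the VMware Tools build installed vmxnet3.ko):
vmxnet3_load="YES"

# In /etc/rc.conf -- bring the interface up; the name is an
# assumption and depends on which driver you ended up with:
ifconfig_vmx3f0="DHCP"
```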
Is there a better configuration given what I'm trying to do and the
hardware I have?
Well, from reading your parts list, I'd argue you need more disks and probably a compatible SAS HBA to give your FreeNAS VM direct access to them. That will improve performance, though by how much, your mileage may vary. Getting VMXNET3 working is a big boost to network performance; avoid the E1000, as it will lower your network throughput.
TL;DR
- FreeNAS with ZFS running natively is far, far, far, far easier than running it as a VM guest.
- If you must use FreeNAS in a VM, you're obligated to investigate all the hardware aspects that come into play:
- CPUs (AMD-V, Intel VTx)
- Chipset virtualization support (IOMMU, AMD-Vi, Intel VTd)
- I/O virtualization (IOV/SR-IOV)
- Network virtualization (VT-c)
- SAS HBAs and compatibility.
- Don't forget about the hypervisor!
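The CPU and chipset items above can be sanity-checked from any live Linux environment before you commit hardware; a sketch (these need a live Linux host, and dmesg may require root):

```shell
# Check for hardware virtualization in the CPU flags:
# Intel VT-x shows up as "vmx", AMD-V as "svm".
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# Check whether the kernel found an IOMMU, which passthrough needs:
# Intel VT-d appears as DMAR, AMD-Vi as AMD-Vi/IVRS.
dmesg | grep -i -e DMAR -e IOMMU
```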
- FreeNAS may/will freak out about resources (the IRQ interrupt storm, for which there's no magic silver bullet), so the odds of having to do some troubleshooting are better than even.
- Skip RAIDZ's fancy ninja space savings in favor of faster, easier mirroring
and/or mirroring+striping.
- There's no real 100% guarantee that this will all work, even when every part is nominally compatible. I'd also recommend therapy before, during and after your ordeal.
Feeling up to it after reading this? Maybe you are crazy. Hell, maybe I'm crazy.