
Long info: I'm trying to run a Windows VirtualBox VM (VMDK disk image) from a USB 3.0 external SSD. The host system is Arch Linux, which is also installed on this external SSD. The VM runs fine if I load it from the internal SSD (while running the host from the external SSD). The same is true for a Linux VM loaded from the external SSD (also while running the host from the external SSD).

Short info: The external SSD is a Samsung 850 EVO 512 GB (M.2 version) with an M.2-to-USB 3.0 adapter. As mentioned above, other VMs work fine, and so does the Windows VM when run from the internal SSD or even from an external USB 3.0 HDD (which is slow, but in this case still much faster than the external SSD).
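To rule out the raw throughput of the adapter itself, a quick sequential-write benchmark can be run on the external drive. This is only a sketch; the TARGET mount point is an assumption and should point at a directory on the external SSD:

```shell
# Quick sequential-write benchmark for the external SSD.
# TARGET is an assumed mount point -- adjust it to the external drive.
# conv=fdatasync flushes to disk before dd reports its rate, so the
# figure reflects the USB adapter + SSD rather than host RAM.
# A larger count (e.g. 1024 for 1 GiB) gives a more stable number.
TARGET=${TARGET:-/tmp}
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET/ddtest"
```

If the sequential rate looks healthy, the bottleneck is more likely the pattern of many small writes than the link speed.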

VM Settings:

16 GB RAM
Chipset: PIIX3
I/O APIC: enabled
Hardware clock in UTC time: enabled
4 cores at 100% (4.5 GHz)
VT-x: enabled

Problem: The VM boots correctly but causes the host system to freeze from time to time, and it only reaches the login screen after ~20 minutes. Since it looked like a disk problem, I loaded the VM from the internal SSD and disabled Windows's disk paging to reduce disk writes. I then copied the modified VMDK to the external SSD and adjusted VirtualBox on my external system accordingly. This didn't change anything, however (I even increased the RAM to 16 GB).
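Whether the freezes actually coincide with write saturation on the external SSD can be checked host-side with nothing but /proc/diskstats. A sketch; the device name sda is an assumption (check lsblk for the real one):

```shell
# Rough host-side write monitor using /proc/diskstats (no extra tools).
# "sda" is an assumed device name for the external SSD -- check lsblk.
dev=sda
# Field 10 of /proc/diskstats is the cumulative count of sectors written.
before=$(awk -v d="$dev" '$3 == d {print $10}' /proc/diskstats)
sleep 2
after=$(awk -v d="$dev" '$3 == d {print $10}' /proc/diskstats)
echo "sectors written to $dev in 2s: $(( ${after:-0} - ${before:-0} ))"
```

Running this in a loop while the Windows VM boots should show the write counter climbing far faster than it does for the Linux VM.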

Assumption: For some reason the Windows VM does an awful amount of disk writes compared to the Linux VM. The funny thing is that I did the same thing some time ago on an external USB 3.0 hard drive, which is actually much slower than the SSD (however, I didn't run the host system on the same drive back then).

The problem is certainly not the SSD itself, as I'm using the same model as an internal SSD. The adapter works just fine for the Linux system, the Linux VM, and every other program, so I don't think it's that either.

I will search the logs for more info, but if anyone has another idea it would be much appreciated.

Question: How can I improve the performance of the Windows VM on my external SSD, and why does Windows generate so much I/O traffic?

Solution comment:

Using writeback caching as suggested by @Eugen Rieck did indeed make the VM usable. I suppose the extra I/O from the host system running on the same external SSD was too much for the USB 3.0 controller (without caching). In VirtualBox you find this option under:

Your_VM_Settings->Storage->select_your_Controller->Attributes->Use Host I/O Cache
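The same setting can also be flipped from the command line with VBoxManage. The VM name "Windows10" and controller name "SATA" below are assumptions; showvminfo reveals the real controller name:

```shell
# List the VM's storage controllers to find the right --name value.
# "Windows10" is an assumed VM name -- see "VBoxManage list vms".
VBoxManage showvminfo "Windows10" | grep -i "storage controller"

# Enable the host I/O cache (host-side writeback caching) for that
# controller; "SATA" is the assumed controller name from the step above.
VBoxManage storagectl "Windows10" --name "SATA" --hostiocache on
```

The VM must be powered off for the storagectl change to apply.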

Besides the drawback mentioned by @Eugen Rieck, there seems to be one more, according to @aeichner from the VirtualBox forum:

"The host I/O cache is not used by default because it can cause I/O timeouts in the guest if the host faces a high I/O load and the host cache can't cope with it." (aeichner, 2011)

1 Answer


The bad news: Windows (the OS alone, not counting applications) does roughly two orders of magnitude more disk writes than Linux, and there is nothing you can do about that. In addition, it does about one order of magnitude more disk reads.

The good news: Using writeback caching at the hypervisor level (i.e. in VirtualBox) can improve the situation significantly. This comes with a risk of data corruption if the host goes down hard, but with a good UPS this should be manageable.

One more: If you use snapshots, reconsider - snapshots have a significant write amplification factor, which hurts in a scenario like this.
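An existing snapshot chain can be inspected and collapsed with VBoxManage; deleting a snapshot merges its differencing image back into its parent. The VM and snapshot names below are assumptions:

```shell
# Show the snapshot chain for the VM ("Windows10" is an assumed name).
VBoxManage snapshot "Windows10" list

# Deleting a snapshot merges its differencing image into the parent,
# removing that link's write amplification. "Snapshot 1" is assumed --
# use a name from the list output above.
VBoxManage snapshot "Windows10" delete "Snapshot 1"
```

The merge itself is write-heavy, so it is best done while the VM image still lives on the faster internal SSD.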

  • Why does Windows write so much more?
    – Tim
    Commented Feb 12, 2017 at 20:15
  • @Tim It's hard to tell; you can't go in and look at what's doing the writes (what right do we have to look at Microsoft's personal code?).
    – wizzwizz4
    Commented Feb 12, 2017 at 20:32
  • My only thought on this would be the indexer. If you are changing files regularly, then the indexer is running constantly. There are many things you can turn off when using SSDs with Windows. My Surface Pro went from 4 hours per charge (good usage) to 5+ hours per charge (good usage) when I turned off the indexer alone. Commented Dec 4, 2017 at 9:38
