
I have an HP ProLiant DL385 G5p server which I use to host a personal Debian 9 fileserver, running as a VM on a hypervisor. The VM has its own 1 Gb/s Ethernet connection to a switch of the same speed, which my regular PC is also connected to. All three devices are capable of running at 1 Gb/s.

I previously ran Debian installed directly on the server's disk, with the rest of the network the same, and could achieve transfer speeds close to the advertised 1 Gb/s. However, since I started running the fileserver in a VM, transfer speeds are somewhere in the 5 MB/s (40 Mb/s) range on a good day.

The software I am using to transfer files from my PC (running Windows 10) is called "SFTP Net Drive", which lets me view the contents of the fileserver from within Explorer (I didn't want to have to use a separate tool to connect to the server every time just because Windows doesn't support SFTP natively). When I was running the server directly on the disk, without a hypervisor, I used a program called "WinSCP", which allows multiple (up to 9) simultaneous transfers over the same connection. This would saturate the 1000 Mb/s link, and I only saw poor speeds when transferring really small files (less than 1 KB).

I have used iperf to test the connection to the server from my PC (and vice versa, to be sure) and the throughput is close to what it should be, ~1000 Mb/s. I have also tested disk write speeds on the server and they also seem fine (I think around 6000 MB/s, but I can't quite remember, and I don't remember what tool I used to test with either). There are four 72 GB physical disks in RAID 5, which the hypervisor sees as one logical drive. The hypervisor then assigns the VM a logical partition of this drive, which can presumably be split up again (by Debian, in my case) using LVM. (I don't think you need all of that info, but it might be useful.)
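
For reference, here is a minimal sketch of how those two measurements can be reproduced (assuming iperf3 is available on both machines; the IP address and test file path are placeholders to adjust for your own setup):

    # On the fileserver VM: start an iperf3 listener
    iperf3 -s

    # On the PC: measure throughput to the server (192.168.1.50 is a placeholder IP)
    iperf3 -c 192.168.1.50 -t 30

    # On the server: sequential write test that bypasses the page cache
    dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct conv=fsync
    rm /tmp/ddtest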

Given that the server performed fine before, I think it's safe to assume this is a software issue or misconfiguration, probably on the Windows side. One possible explanation for the slowness could be that Windows seems to only transfer one thing at a time through SFTP Net Drive. Any help in figuring this out and rectifying it would be much appreciated.

Edit: Okay, so I've found another strange thing that happens when transferring files to the server with the software I used to use, WinSCP. When transferring some music files to the server (~50 MB each, roughly 300 of them), after all 9 simultaneous connections had been established, the transfer rate peaked at 110 MB/s, where it stayed for about 20 seconds. It then promptly dropped back to 20-30 MB/s and stayed there until the transfer was complete. This leads me to believe there is some kind of buffer that, once saturated, slows the transfer rate down to match how fast the data can actually be written to disk. Not really sure if that makes sense, but it seems logical to me.
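
One way to check that theory (a sketch, assuming the buffer in question is the Linux page cache inside the VM rather than something in the hypervisor or the network) is to watch the kernel's dirty-page counters while a transfer runs:

    # Run on the fileserver VM during a large transfer
    watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'

    # The thresholds that trigger forced writeback (defaults vary by distro)
    sysctl vm.dirty_ratio vm.dirty_background_ratio

If "Dirty" climbs for roughly 20 seconds and writeback kicks in at the same moment the rate drops, that would match the behaviour described above.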

Edit 2: The transfer speeds are just as bad when moving files from the server to my PC, around 3-8MB/s according to Windows.

  • Google "bufferbloat". It's difficult to find those buffers; they could be, for example, somewhere in your router.
    – dirkt
    Commented Nov 23, 2018 at 12:17

1 Answer


Did you benchmark the disk speed of your VM? It looks like a disk issue to me, especially if you are using QCOW2 images, which can give bad speeds: https://serverfault.com/questions/407842/incredibly-slow-kvm-disk-performance-qcow2-disk-files-virtio or https://serverfault.com/questions/675704/extremely-slow-qemu-storage-performance-with-qcow2-images - just google "slow qcow2" and see.
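
A quick way to get comparable numbers from inside the VM (a sketch, assuming fio is installed and that /tmp sits on the virtual disk being tested; the file name and sizes are only examples):

    # Small random writes with the page cache bypassed - the access pattern
    # that usually exposes QCOW2 / virtual disk overhead
    fio --name=randwrite --rw=randwrite --bs=4k --size=512M --direct=1 \
        --ioengine=libaio --filename=/tmp/fio-test
    rm /tmp/fio-test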

  • Sorry for the slow response, I've been quite busy lately. Anyway, I have tested the read and write speeds of the VM and got some decent results. I tested read speeds using hdparm -Tt /dev/xvda5 and got 2000+ MB/s cached reads and around 100 MB/s buffered disk reads. I tested write speeds with dd using a few different data sizes and got over 100 MB/s when using small files > 10 KB, but pretty rubbish results when using 1 MB: under 50 MB/s. Commented Nov 23, 2018 at 11:26
  • Looks like we have found one of the issues with your VM, but that does not explain everything else. The next steps are to check two things: try creating a ramdisk inside your VM to use as the share (see the sketch after these comments), and then check CPU usage when you hit the network limit while transferring to that ramdisk. I fear you also have poorly virtualized network device routing.
    – Abdurrahim
    Commented Nov 23, 2018 at 15:58
  • I am using a program on my PC called XCP-ng Centre with performance data readouts. I will include screenshots of this in an edit. Commented Nov 23, 2018 at 17:46
  • Again, sorry for the slow response. I've not been able to do this yet; however, a thought did cross my mind: I am currently using RAID 5 on the server, and according to [this site](www.raid-calculator.com/default.aspx), with 4 × 0.072 TB disks in RAID 1+0 I could potentially get higher disk performance. Would this be worth testing? I would probably have to wipe the server and start afresh, though. Commented Nov 29, 2018 at 17:49
  • No; although it would improve speed, it won't resolve your actual problem. What you are struggling with is not host performance, it's a virtualization issue. Most likely I would find something wrong if I checked this XCP-ng (a kind of preconfigured Xen hypervisor). If you have time, try something else, such as using libvirt directly, or, for the easiest option, a VirtualBox or VMware trial; you would get good speeds once it's configured correctly. Also be sure to check what XCP-ng itself offers: if you can, test different network virtualization options, raw (preallocated) disks, etc., and see.
    – Abdurrahim
    Commented Nov 29, 2018 at 19:18
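
For completeness, here is a minimal sketch of the ramdisk test suggested in the comments above (assuming the VM has enough free RAM for a 1 GiB tmpfs and that the SFTP share can be pointed at the mount point; paths and size are examples):

    # Inside the VM: create and mount a 1 GiB tmpfs ramdisk
    sudo mkdir -p /mnt/ramdisk
    sudo mount -t tmpfs -o size=1G tmpfs /mnt/ramdisk

    # Copy a large file into /mnt/ramdisk over SFTP from the PC and watch CPU usage
    # while the transfer runs. If it is still slow here, the bottleneck is the
    # virtual network path rather than the virtual disk.
    top

    # Clean up afterwards
    sudo umount /mnt/ramdisk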

