
I am trying to create a virtual home lab in Hyper-V for the purposes of self-education.

I have run into a problem with networking and I don't know how to diagnose or fix it.

Basically, I have Hyper-V installed on my Win10 Pro machine, and I have set up two Windows Server 2016 virtual machines on it. Both of them are Domain Controllers, but the first server also handles DHCP, DNS, and NAT routing.

Since the first server does the routing, it has two virtual network cards. One is connected to the default switch in Hyper-V, so it can share the Internet connection of my Wi-Fi card. The second virtual network card is connected to a Hyper-V internal virtual switch I made, so it can offer DHCP and routing services over that network to my other virtual machines.
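For reference, the two-switch setup described above can be built from PowerShell instead of Hyper-V Manager. This is just a sketch; the switch and VM names ("LabNet", "DC1") are placeholders for whatever your lab actually uses, and the commands need an elevated PowerShell session on the host:

```shell
# Create the internal switch the lab VMs will share
# (placeholder name "LabNet" -- substitute your own).
New-VMSwitch -Name "LabNet" -SwitchType Internal

# Attach a second virtual NIC on the routing server
# (placeholder VM name "DC1") to that switch.
Add-VMNetworkAdapter -VMName "DC1" -SwitchName "LabNet"

# Verify which switch each of the VM's adapters is connected to.
Get-VMNetworkAdapter -VMName "DC1" | Select-Object Name, SwitchName
```
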

I have it 'working' in the sense that, when I boot up my two server VMs everything works fine for a while. The first server feeds the Internet connection to the internal virtual switch and the second server uses the first server as a gateway to get to the Internet.

I have also got two other VMs with Windows 10 Pro on my tiny Windows domain that can connect fine. They get DHCP from the primary server, they can ping google.com as well as the two domain controllers fine, and can generally access the Internet, all routed from the primary server.

Let me reiterate that NONE of my VMs have access to the default virtual switch in Hyper-V, except for the primary server, which needs it to route the Internet connection to the other VMs.

Anyway, after, say, an hour or so of this working flawlessly, networking suddenly freezes entirely on my host machine (and consequently on my VMs too), to the point that I can't even get DNS name resolution.

This is where my experience falls off sharply and my n00bishness shines forth:

What is causing my otherwise working setup to freeze up after like, 30 to 60 minutes?

It doesn't seem to matter how many VMs are active, but if I shut them all down (especially the primary server), networking functionality returns and I don't have to reboot the host machine.

So I think the problem resides with routing on the primary server, or with how Hyper-V handles that.

I understand that diagnosing this problem might require some very specific information, but I don't know what to provide, so just ask and I'll do my best to get you the info you need.

I tried switching to VirtualBox as an alternative solution and it's laughably unstable for anything I try in any configuration, so I'll just keep bonking my head into this Hyper-V problem...

Host machine specs: i5-9600K (6 logical processors; I know it's not the latest and greatest, but I don't see much slowdown), 64 GB RAM. All VMs are hosted on separate physical disks, but I've tried this several ways and the same networking issue crops up every time I set up a new test Windows domain, and I'm sick of trying that :P

  • Thanks John! Your comment gave me the idea to try Private instead of Internal as my virtual switch type. I had not realized that Private was what I really needed. Anyway, I'm going to let it run for a day or so, and if the issue disappears, I'll try posting this stuff as an answer to my own question. Thanks again!
    – Dan
    Commented Nov 1, 2022 at 17:33

1 Answer


So I'll answer my own question: I should have used "Private" instead of "Internal" as my virtual switch type in Hyper-V. With an Internal switch, VMs can still contact the host through the switch, which is not good if you're trying to accomplish exactly what a "Private" switch does: isolate the VMs from the host on that network.
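The fix can be applied from PowerShell without recreating the switch. Again, "LabNet" is a placeholder for your actual switch name, and this needs an elevated PowerShell session on the host:

```shell
# Convert the existing Internal switch to Private so the VMs
# can no longer reach the host through it.
Set-VMSwitch -Name "LabNet" -SwitchType Private

# Confirm the change took effect.
Get-VMSwitch -Name "LabNet" | Select-Object Name, SwitchType
```
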

I was seeing TCP packets repeating over and over (I had a look in Wireshark), so after John's helpful comment it seemed obvious that I had chosen the wrong virtual switch type in Hyper-V Manager.

Anyway, it's all fixed now and I'm very pleased. What a great community this is!
