Windows Server 2019 Hyper-V Guest on Windows Server 2019 Hyper-V Host Networking Performance Error, Bug, Problem

  • Question

  • Hi,

    I work as an IT professional at an engineering college/university, where we are having issues with Windows Server 2019 and Hyper-V networking in a test setup (non-prod).  We have exhaustively tried about everything we can think of to solve our issue, but it persists, so I'm posting on the forum to find out if anyone else has seen this problem.

    For bug re-production purposes we have reduced our setup to the minimum required.

    We have installed a vanilla copy of Windows Server 2019 Datacenter on a generic Dell PowerEdge R740xd rack server with 1 Gb networking.  We then added the Hyper-V role and created a gen2 VM guest running Windows Server 2019 Datacenter.

    All VM guest settings were very basic: 8 vCPUs, 8 GB memory (non-dynamic), 100 GB disk.

    The guest's VHDX file lives on a secondary drive D: (not the primary host OS C: drive).

    The guest VM is connected to the public network through the standard Hyper-V virtual switch that was created when the Hyper-V role was installed.  No special settings here.

    We copy a 100 MB ZIP file to the guest VM's C: drive and then share the folder the ZIP is in.

    The Problem:

    If we make a standard network connection to the guest VM's shared folder where the ZIP file is stored, and then unzip the file over the network to a local Win10 1709 client disk, the unzip process is painfully slow and even fails with errors in some instances.

    If we move the ZIP file to the Hyper-V host's own C: drive and perform the same unzip process, it is fast and completes normally without errors.

    If, instead of unzipping to a Win10 1709 client, we unzip from the guest VM's share to a Windows server (any version), the speed is normal (as in great).

    We even tried unzipping to a Win7 x64 client (using SMB 2.1) and found that the speed goes back to being painfully slow, with errors.

    We found that we don't even need to unzip the file contents to the local client disk.  Simply "testing" the ZIP file's integrity over the network is just as slow.

    Our summary is:

    1.  Unzipping files from a Windows Server 2019 guest VM SMB 3.1.1 network share hosted on a Win Server 2019 Hyper-V system results in terrible performance (and even errors) IF and only IF the client you are trying to unzip to is a Windows Client OS (Win7,8,10).

    2.  Unzipping from the network share to a local Windows Server OS (any kind) does not seem to have this issue.

    3.  A straight file copy of the ZIP from the Windows Server 2019 guest VM SMB 3.1.1 network share to a client PC does not show problems and is fast.

    4.  The problem seems to be related to the random SMB I/O operations the ZIP tool performs when retrieving the ZIP file data over the SMB share.

    5.  Neither the ZIP's size nor its contents seem to matter.

    6.  If the Windows Server 2019 guest VM is instead hosted on a Windows Server 2016 Hyper-V host, the problem does not exist.

    An even shorter, concise summary:

    A Windows Server 2019 Hyper-V host running a Windows Server 2019 VM guest will have terrible SMB share UNZIP performance if the client you are unzipping to (over the network, using an SMB share connection) is a Windows client OS (7, 8, 10).  The problem does not exist if you try the unzip process on a server OS (over the network).  The problem does not exist if the ZIP is shared off of the Windows Server 2019 Hyper-V host itself.

    We find this problem to be very annoying and a great head scratcher!

    Can anyone help or re-produce the bug?

    Thanks!


    Saturday, January 19, 2019 7:20 PM

All replies

  • This issue seems to be 100% repeatable.

    Create a vanilla Windows Server 2019 VM on a 2019 Hyper-V host (I used Datacenter to match the example reported above; not sure if that matters).  Place a ZIP file on any drive/share in the 2019 VM and try to extract the contents from the network location directly on a Windows client OS (tested Windows 7 and Windows 10 1709 and 1809): the unzip will be extremely slow and may fail.  Doing the same test using Windows Server 2016 as the client has no issues.

    Thursday, January 24, 2019 4:23 PM
  • Hi,

    Exactly the same problem here.

    We have Windows Server 2019 with the Hyper-V role installed on a Fujitsu TX 1330 M4 server. We have two VMs on this host, both Windows Server 2019, gen2 (one is a DC and the other is a terminal server). One folder share is created on the first VM (the DC). Running an app from that share takes forever (at least a minute or more) to open on Win 7, 8.1, and 10 (1809 and 1803) clients.

    Since this is a new production environment it is a pain to investigate. We were able to take the following steps:

    - Created a share on the terminal server (second VM) and put the app there. Opening the app from a client takes forever, exactly as slow as opening it from the first VM.

    - Created a shared folder on the Hyper-V host and put the app there. This works at normal speed from a client (approximately 5 seconds to open).

    - For testing purposes we also created a third VM, Windows 10 (1809), on the same Hyper-V host, shared a folder, and ran the app on a client from that folder - works normally (approximately 5 seconds to open).

    We also simply copied a file from the first, second, and newly created Windows 10 VMs to a Windows 10 client machine. The result was also surprising: copying from either of the Windows Server 2019 VMs was significantly slower than copying from the Windows 10 VM to the Windows 10 client.

    Let me also add that we have basically the same environment built at quite a lot of our customers, with one exception - it is done with Windows Server 2016. We have no issues there. It is clear to us that a Win 2019 VM on a Win 2019 Hyper-V host has some kind of networking problem.

    We haven't found any solution to this issue. Any help would be greatly appreciated.


    Friday, March 1, 2019 9:39 AM
  • Hi,

    We have found a workaround that solved the issue for us. Please check: https://social.technet.microsoft.com/Forums/en-US/8aa6a88c-ffc8-4ede-abfc-42e746ff5996/windows-server-2019-hyperv-guest-on-windows-server-2019-hyperv-host?forum=winserverhyperv&prof=required

    Best Regards

    Marko
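
    The workaround described at that link is disabling Receive Segment Coalescing (RSC) on the Hyper-V virtual switch. A minimal PowerShell sketch, assuming the standard Hyper-V cmdlets on Server 2019 (the switch name is an example; substitute your own):

    ```powershell
    # Show the software RSC state of all vSwitches (Windows Server 2019 only)
    Get-VMSwitch | Select-Object Name, SoftwareRscEnabled

    # Disable software RSC on the affected external switch
    Set-VMSwitch -Name "External Switch" -EnableSoftwareRsc $false
    ```

    The change should take effect without a reboot.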

    • Proposed as answer by 3Point IT Wednesday, March 13, 2019 1:45 PM
    Monday, March 11, 2019 8:49 AM
  • Hi,

    I have the same speed issue (25% of real speed, plus timeouts/failures) on a Hyper-V 2019 host with a Windows Server 2019 guest OS. I've created a Win10 VM, hooked my VDs to it, shared them, and listed them in AD to get decent access speed and no timeouts from my workstations.

    I'll try this solution today.

    Thank you!!

    Monday, March 11, 2019 6:10 PM
  • You are the man! Thanks for this, it fixed the issue we were having straight away (after messing around with a lot of other potential fixes)

    Wednesday, March 13, 2019 1:46 PM
  • Follow-up: setting *RSC* to $false on the vSwitch did the job.  Thank you!!!
    Thursday, March 14, 2019 3:43 AM
    Same issues here. All kinds of extremely weird networking problems with RSC enabled. Great new feature... It took me a lot of time to nail this down to RSC. Wish I'd found this post earlier...  Anyway, disabling RSC makes everything snappy again.

    You say you run Dell servers, I saw Fujitsu for someone else, and we have HPe Gen9 and Gen10 machines. Ours have HPe 331i NICs, which are rebranded Broadcom NX1. Do your machines happen to run the same NICs? Might this be a Broadcom driver issue, just like the VMQ hell we've had with Broadcom? Having said that, I don't think enabling any of these offloading technologies on 1 Gbps NICs helps a lot. From 10 Gbps and up it's another story.


    Saturday, March 30, 2019 10:12 PM
    We also encountered this problem during a weekend migration of old servers to new ones with Windows Server 2019 and Hyper-V. After hours of troubleshooting we figured out that only the VMs are affected, not the host itself, and that all network-related tasks like file copies or dcpromo are affected.
    We also already had an "emergency" plan B (because we only had a weekend and it was already Friday) to reinstall everything with Server 2016 the next morning, until I hit this article.
    Without even reading what RSC means, I disabled this software vSwitch feature, and a few minutes later everything worked as expected. Full network speed back again.

    Thank you very much for this post and resolution!

    Friday, May 31, 2019 5:04 AM
    It's now October 2019, and we've just used the latest HPe SPP pack to update drivers/firmware on our Hyper-V hosts. Just out of curiosity I enabled RSC on our machines again, to verify whether or not it has been fixed by a driver update or so. But alas, enabling it immediately slows the network down to crawling speeds again.

    Our machines (both HPe Gen9 and Gen10) are running the built-in HPe 331i quad-port Gigabit adapters, aka rebranded Broadcom NX1. We are running driver version 214.0.0.0, the latest driver from HPe, which is / should be supported for Server 2019.

    So unfortunately, yet another 'best thing since sliced bread' feature that just doesn't work out of the box, whether it's a driver issue or a Windows issue.
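
    For anyone repeating this test, re-enabling RSC and checking the result presumably looks like this (the switch name is an example):

    ```powershell
    # Re-enable software RSC on the vSwitch to re-test after a driver/firmware update
    Set-VMSwitch -Name "External Switch" -EnableSoftwareRsc $true

    # Verify the setting took
    Get-VMSwitch -Name "External Switch" | Select-Object Name, SoftwareRscEnabled
    ```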


    Wednesday, October 16, 2019 9:26 AM
    ... same problem for Intel X711 quad-port cards.

    Disabling "RSC" on the virtual switch and on the NIC it is bound to (on the Hyper-V host) did the trick.

    ... but no problems with "RSS" (I think you misspelled this, Robert?)
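
    For the NIC side, assuming the standard NetAdapter cmdlets, disabling RSC on the bound physical adapter looks roughly like this (the adapter name is an example):

    ```powershell
    # Show per-adapter RSC state (IPv4 and IPv6)
    Get-NetAdapterRsc

    # Disable RSC on the physical NIC the vSwitch is bound to
    Disable-NetAdapterRsc -Name "Ethernet 2"
    ```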


    • Edited by chias Tuesday, October 29, 2019 10:40 AM
    Tuesday, October 29, 2019 10:39 AM
    You are completely right. Of course I meant RSC. I updated my post. Typical that it doesn't work on your Intel-based card either. So basically it seems to be another MS feature that just doesn't work well. Yet it is (or was, at least) enabled by default. Ouch.
    Tuesday, October 29, 2019 11:51 AM