October 8th, 2021 14:00

Virtual Network Crawling on a 1 Gbps Physical Connection?!

Hi All,

I'm facing an issue where I cannot achieve 1 Gbps transfer speeds on my network.

This is my network setup; it's a partially nested setup (I'm aware it's not supported by VMware, but this is a home lab).

[Attached network diagram: Dell.png]

The physical disks are 1 TB 7200 RPM HDDs, the NIC ports on the Dell server and on the physical switch are all 1 Gbps, and the LAN cables are Cat5e; only the laptop interface is 100 Mbps. I also checked the switch interfaces: the switch auto-negotiates and shows all ports at 1000 Mbps except the laptop port, which is at 100 Mbps.
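
For reference, the negotiated speed and duplex of the physical uplinks can also be confirmed from the ESXi side; a minimal sketch, assuming SSH access to the physical ESXi host:

# Run in an SSH session on the physical ESXi host.
# Lists each physical NIC (vmnic) with its link status, negotiated speed and duplex.
esxcli network nic list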

vCenter is running in VMware Workstation and has the nested ESXi host added to its SDDC.

I'm backing up a VM running on the nested ESXi host (stored on the StarWind NAS & SAN) to the Veeam server's storage, and the issue is that the backup speed never goes above ~6 MB/s; a 1 Gbps link should give at the very least ~75 to ~90 MB/s.

I ran iPerf between the Veeam server and the VM running on the nested ESXi host; the throughput still turned out to be ~100 Mbps and not 1 Gbps:

PS C:\Users\Administrator.VLAB\Desktop\iperf-3.1.3-win64> ./iperf3.exe -c 10.10.60.68
Connecting to host 10.10.60.68, port 5201
[ 4] local 10.10.20.6 port 57241 connected to 10.10.60.68 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.01 sec 11.4 MBytes 94.5 Mbits/sec
[ 4] 1.01-2.01 sec 14.6 MBytes 123 Mbits/sec
[ 4] 2.01-3.00 sec 13.1 MBytes 111 Mbits/sec
[ 4] 3.00-4.01 sec 11.8 MBytes 98.2 Mbits/sec
[ 4] 4.01-5.00 sec 11.0 MBytes 92.8 Mbits/sec
[ 4] 5.00-6.00 sec 11.1 MBytes 92.9 Mbits/sec
[ 4] 6.00-7.01 sec 11.0 MBytes 92.2 Mbits/sec
[ 4] 7.01-8.01 sec 10.9 MBytes 90.6 Mbits/sec
[ 4] 8.01-9.01 sec 10.0 MBytes 83.7 Mbits/sec
[ 4] 9.01-10.01 sec 10.4 MBytes 87.8 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.01 sec 115 MBytes 96.6 Mbits/sec sender
[ 4] 0.00-10.01 sec 115 MBytes 96.5 Mbits/sec receiver
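
A single TCP stream can also be limited by window size, so a retest with parallel streams and a longer run would help rule that out; a minimal sketch, assuming the iperf3 server at 10.10.60.68 is still listening:

# Run from the Veeam server against the existing iperf3 server.
# -P 4 opens 4 parallel TCP streams, -t 30 runs for 30 seconds,
# -w 512K requests a larger TCP window per stream.
.\iperf3.exe -c 10.10.60.68 -P 4 -t 30 -w 512K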

I ran tcpdump on the laptop interface (100 Mbps) just to see whether any traffic traverses it during the iPerf test, but there was no traffic.
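
For reference, a capture filtered to the iPerf port makes it easy to confirm whether the test traffic crosses a given interface; a minimal sketch, assuming a Linux laptop whose 100 Mbps NIC is eth0 (the interface name is only an example):

# Capture on the laptop's 100 Mbps interface, filtered to iperf3 traffic.
# -i selects the interface, -nn disables hostname/port name resolution.
tcpdump -i eth0 -nn tcp port 5201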

I found that the Veeam server had an E1000 NIC, which I changed to VMXNET3, and I also tested two VMs (on the physical ESXi host, not the nested one) with iPerf; the result is the same, ~100 Mbps.
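
For anyone repeating this, the adapter type can be checked and switched with VMware PowerCLI; a minimal sketch, assuming PowerCLI is already connected to vCenter and the VM is named "VeeamServer" (the name is only a placeholder):

# Show the current virtual NIC type of the VM.
Get-VM -Name "VeeamServer" | Get-NetworkAdapter | Select-Object Name, Type

# Change the adapter to VMXNET3 (power the VM off before changing the type).
Get-VM -Name "VeeamServer" | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false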

At this point I'm not sure what is limiting the traffic to 100 Mbps. Any thoughts?

Thank You

October 9th, 2021 03:00

So I pulled all the cables out of the server and the switch and replugged them, and now I'm getting:

[ 4] local 10.10.20.5 port 59278 connected to 10.10.20.6 port 5201
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-1.01 sec 14.8 MBytes 123 Mbits/sec
[ 4] 1.01-2.00 sec 24.4 MBytes 205 Mbits/sec
[ 4] 2.00-3.00 sec 24.2 MBytes 204 Mbits/sec
[ 4] 3.00-4.00 sec 24.2 MBytes 203 Mbits/sec
[ 4] 4.00-5.00 sec 24.2 MBytes 204 Mbits/sec
[ 4] 5.00-6.00 sec 24.2 MBytes 203 Mbits/sec
[ 4] 6.00-7.01 sec 24.4 MBytes 204 Mbits/sec
[ 4] 7.01-8.00 sec 24.2 MBytes 205 Mbits/sec
[ 4] 8.00-9.00 sec 24.4 MBytes 204 Mbits/sec
[ 4] 9.00-10.00 sec 24.1 MBytes 203 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth
[ 4] 0.00-10.00 sec 233 MBytes 196 Mbits/sec sender
[ 4] 0.00-10.00 sec 233 MBytes 196 Mbits/sec receiver
So I'm not sure whether the cables were the issue or whether the server and switch renegotiated the link. The Veeam backup speed also went up to 8 MB/s.
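
To see whether a link has renegotiated at a lower speed, the negotiated rate can also be checked on the Windows side; a minimal sketch (PowerShell on the Veeam server):

# Shows each adapter with its negotiated link speed and duplex setting.
Get-NetAdapter | Select-Object Name, Status, LinkSpeed, FullDuplex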

November 27th, 2021 05:00

I did some research on various aspects of the issue I'm facing while trying to troubleshoot and narrow down the problem.

I have a Dell R620 running ESXi with 4 physical SSDs. I enabled passthrough for the H710 Mini Mono in ESXi, but the disks were no longer detectable after a reboot. I learned that the H710 Mini does not support passthrough, and also that flashing it to IT mode allows passthrough to work with the H710 Mini in ESXi.

There are 2 existing issues that I'm facing:

1) Slow SSD write speeds

2) Slow network throughput

My questions relate to both, as I'm in a nested virtualization environment:

1) What advantage does passthrough offer if I were to enable it?

2) Does it improve SSD write speeds, since VMs gain direct access to the disks? (I have yet to see SSD write speeds over 10-20 MB/s, while the SSDs are rated at 550 MB/s; see the diskspd sketch after these questions.)

3) Does it improve network throughput in any way? (The existing network is 1 Gbps, but transfer speeds rarely go over 15-17 MB/s.)
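
To separate the disk question from the network question, raw write speed can be measured inside a VM with Microsoft's diskspd; a minimal sketch, assuming diskspd.exe has been downloaded into the current folder (the test file path and sizes are only examples):

# 30-second sequential write test: 2 GB test file (-c2G), 64 KB blocks (-b64K),
# 100% writes (-w100), 4 threads (-t4), 8 outstanding I/Os (-o8),
# with software and hardware write caching disabled (-Sh).
.\diskspd.exe -c2G -b64K -w100 -t4 -o8 -d30 -Sh C:\temp\testfile.dat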

Currently the 4 SSDs are each configured as a RAID 0 virtual disk in iDRAC, so they appear as 4 individual SSD disks in ESXi.

To my knowledge the network should deliver at least ~600 Mbps (about 75 MB/s), but I have yet to see this. Any advice on how to go about this?

Thank You
