
How to Set Up NIC Teaming on PowerEdge Servers

Summary: How to create NIC teams on a Dell PowerEdge server in VMware, Windows, or Linux.

This article is not tied to a specific product. Not all product versions are identified in this article.

Instructions

The following article provides information about NIC Teaming in Windows, VMware, and Linux.

 

 

What is Network Adapter Teaming (Bonding)?

Network adapter teaming is a term that describes various methods of combining multiple network connections to increase throughput or provide redundancy. Network interface card (NIC) teaming and LAN on Motherboard (LOM) teaming can give organizations a cost-effective way to quickly and easily enhance network reliability and throughput.
NIC teaming is one method for providing high availability and fault tolerance in servers.

Below is an example of a web server with two NICs, each with one uplink and one downlink connection. One of the two network cards fails or is disconnected, but the client's connection remains up.
Fig 1: Two-NIC team; one network card fails, but the Internet connection remains up.

 

The four main types of network teams are as follows:

 

Smart Load Balancing (SLB) and Failover: This type of team balances network traffic across all primary adapters. If a primary adapter fails, the remaining primary adapters continue to balance the load. If all primary adapters fail, traffic continues to flow using the standby adapter with no interruption. Once a primary adapter is brought back online, traffic resumes flowing through it.

SLB with Auto Fallback Disable: This type of team functions as above, but traffic does not automatically revert to the primary adapter once it comes back online.

IEEE 802.3ad Dynamic Link Aggregation: Also known as Link Aggregation Control Protocol (LACP) or IEEE 802.1ax. This type of team provides increased throughput by bundling multiple physical links into one logical link whose effective bandwidth is the sum of that of the physical links. This type of team requires that the port on the other end of the connection support LACP. The switch must be properly configured for the team to function properly.

Generic Trunking: Also known as static link aggregation, this type of team provides the same type of bundling functionality as IEEE 802.3ad/802.1ax but does not use LACP. The switch does not have to support LACP but must be properly configured for this type of team to function.

NOTE: These types of teams are supported by Broadcom network adapters. Intel network adapters provide similar functionality but use different terminology to describe the team types. Some operating systems, such as Windows Server 2012, also provide NIC teaming functionality with their own terminology.

 

Scenarios where NIC teaming cannot be set up:
  • If the network card is being used as a shared LOM for the iDRAC.
  • If the network card is used for network booting.
  • If the network card is used as a kernel debug network adapter (KDNIC).
  • NICs that use technologies other than Ethernet, such as WWAN, WLAN/Wi-Fi, Bluetooth, and InfiniBand, including Internet Protocol over InfiniBand (IPoIB) NICs.
  • We also recommend that all network cards in a team be the same speed (see the quick check below).
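
On Windows Server, a quick way to confirm that the candidate adapters report the same link speed is the PowerShell check below; this is a minimal sketch, and the adapter names in its output depend on your system:

Get-NetAdapter | Select-Object Name, InterfaceDescription, LinkSpeed, Status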

 

Windows NIC Teaming: Setting Up NIC Teaming for Windows Server 2008/2012/2012 R2/2016/2019

To create a NIC Team:

  1. In Server Manager, click Local Server.

  2. In the Properties pane, locate NIC Teaming, and then click the Disabled link to the right. The NIC Teaming dialog box opens.

  3. The NIC Teaming dialog box is displayed.
    Fig 2: Windows NIC Teaming dialog box

  4. In Adapters and Interfaces, select the network adapters that you want to add to a NIC Team.

  5. Click TASKS, and then click Add to New Team.
    Fig 3: Windows adapters and interfaces add to a new team.

  6. The New team dialog box opens and displays network adapters and team members. In Team name, type a name for the new NIC Team.
    Fig 4: Windows - Create NIC by selecting adapters and create a team name.

  7. If needed, expand Additional properties and select values for Teaming mode, Load-balancing mode, and Standby adapter. Usually, the highest-performing load-balancing mode is Dynamic.
    Fig 5: Windows NIC team addition properties

  8. If you want to configure or assign a VLAN number to the NIC Team, click the link to the right of the Primary team interface. The New team interface dialog box opens.
    Fig 6: Windows Default VLAN membership

  9. To configure VLAN membership, click Specific VLAN. Type the VLAN information in the first section of the dialog box.
    Fig 7: Windows-Specific VLAN membership

  10. Click OK.

 

NIC Teaming on a Hyper-V Host

If you must set up NIC Teaming on a Hyper-V host, see the Microsoft article Create a new NIC Team on a host computer.

 

PowerShell Instructions

We recommend using the native Microsoft NIC teaming on Windows Server 2012 and later.

 

Creating the Network Team using PowerShell

  1. Open an elevated PowerShell prompt. Press the Windows and S keys to open Search, then type PowerShell in the Windows® 10 taskbar search.

  2. You should now see the result of Windows PowerShell at the top. Right-click Windows PowerShell, and select Run as Administrator.
    Fig 8: Windows Start Menu PowerShell Run as administrator

  3. If you are presented with the User Account Control prompt, click Yes.

  4. Enter the command New-NetLbfoTeam [TEAMNAME] "[NIC1]", "[NIC2]" and press Enter.

    • [TEAMNAME] - the name you want to give to the team of network adapters
    • [NIC1] - the name of the first network adapter found from above
    • [NIC2] - the name of the second network adapter found from above

    Fig 9: PowerShell command

    Example

    New-NetLbfoTeam NIC-Team "NIC1", "NIC2"

    Open Network Connections by going to Control Panel > Network and Internet > Network Connections to verify that the new team appears.
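
New-NetLbfoTeam also accepts explicit teaming and load-balancing settings, and Get-NetLbfoTeam reports the resulting team. The following is a minimal sketch, assuming two adapters named NIC1 and NIC2; the SwitchIndependent and Dynamic values are one common choice, not a requirement:

New-NetLbfoTeam -Name "NIC-Team" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
Get-NetLbfoTeam -Name "NIC-Team"
Get-NetLbfoTeamMember -Team "NIC-Team"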

 

VMware NIC Teaming

VMware vSphere
A NIC team can share the traffic load between physical and virtual networks among some or all of its members, and it can provide passive failover in the event of a hardware failure or network outage.

Refer to the VMware documentation for detailed steps on how to configure NIC Teaming on VMware, selecting your ESXi version in the upper right:
Configure NIC Teaming, Failover, and Load Balancing on a vSphere Standard Switch or Standard Port Group.

Reference: NIC teaming in ESXi and ESX (1004088)
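
If you prefer the ESXi command line to the vSphere Client, the same standard-switch teaming and failover policy can be set with esxcli. A minimal sketch, assuming uplinks vmnic0 and vmnic1 on vSwitch0 (names are illustrative):

esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0
esxcli network vswitch standard policy failover set --vswitch-name=vSwitch0 --active-uplinks=vmnic0,vmnic1 --load-balancing=portid
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch0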

 

Linux Channel Bonding Interfaces

Linux allows administrators to bind multiple network interfaces together into a single channel using the bonding kernel module and a special network interface called a channel bonding interface. Channel bonding enables two or more network interfaces to act as one, simultaneously increasing the bandwidth and providing redundancy.
Warning: The use of direct cable connections without network switches is not supported for bonding. The failover mechanisms described here do not work as expected without the presence of network switches.

 

Bonding is not supported with a direct connection using crossover cables.

 

The active-backup, balance-tlb, and balance-alb modes do not require any specific configuration of the switch. Other bonding modes require configuring the switch to aggregate the links. For example, a Cisco switch requires EtherChannel for modes 0, 2, and 3, but for mode 4 both LACP and EtherChannel are required. See the documentation supplied with your switch and the bonding.txt file in the kernel-doc package.
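
For example, a mode 4 (802.3ad) bond, which needs an LACP-capable port channel on the switch, could be described with bonding parameters such as the following; the values are illustrative and are passed through the BONDING_OPTS directive covered below:

BONDING_OPTS="mode=802.3ad miimon=100 lacp_rate=fast"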

 

Check if the Bonding Kernel Module is Installed.
In Red Hat Enterprise Linux 6, the bonding module is not loaded by default. You can load the module by issuing the following command as root:

~]# modprobe --first-time bonding

 

If there is no visual output, the module was not already running and has now been loaded. This activation does not persist across system restarts. See the Red Hat documentation Section 31.7, "Persistent Module Loading" for an explanation of persistent module loading. Given a correct configuration file using the BONDING_OPTS directive, the bonding module is loaded as required and therefore does not need to be loaded separately. To display information about the module, issue the following command:

~]$ modinfo bonding

 

See Working with Kernel Modules in the Red Hat documentation for information about loading and unloading modules.

Create a Channel Bonding Interface
To create a channel bonding interface, create a file in the /etc/sysconfig/network-scripts/ directory called ifcfg-bondN, replacing N with the number for the interface, such as 0.
The contents of the file can be identical to whatever type of interface is getting bonded, such as an Ethernet interface. The only difference is that the DEVICE directive is bondN, replacing N with the number for the interface. The NM_CONTROLLED directive can be added to prevent NetworkManager from configuring this device.
Example ifcfg-bond0 interface configuration file

The following is an example of a channel bonding interface configuration file:

DEVICE=bond0
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="bonding parameters separated by spaces"

 

The MAC address of the bond is taken from the first interface added to the bond. It can also be specified using the HWADDR directive if required. If you want NetworkManager to control this interface, remove the NM_CONTROLLED=no directive, or set it to yes, and add TYPE=Bond and BONDING_MASTER=yes.
After the channel bonding interface is created, the network interfaces to be bound together must be configured by adding the MASTER and SLAVE directives to their configuration files. The configuration files for each of the channel-bonded interfaces can be nearly identical.
Example ifcfg-ethX bonded interface configuration file

If two Ethernet interfaces are being channel bonded, both eth0 and eth1 can be configured as follows:

DEVICE=ethX
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no

 

In this example, replace X with the numerical value for the interface.
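
For instance, the file /etc/sysconfig/network-scripts/ifcfg-eth0 for the first bonded interface would read as follows (the same template with X replaced by 0):

DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
MASTER=bond0
SLAVE=yes
USERCTL=no
NM_CONTROLLED=no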

 

Once the interfaces have been configured, restart the network service to bring the bond up. As root, issue the following command:

~]# service network restart

 

To view the status of a bond, view the /proc/ file by issuing a command in the following format:

cat /proc/net/bonding/bondN

 

For example:

~]$ cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.6.0 (September 26, 2009)

Bonding Mode: load balancing (round-robin)
MII Status: down
MII Polling Interval (ms): 0
Up Delay (ms): 0
Down Delay (ms): 0

 

Important: In Red Hat Enterprise Linux 6, interface-specific parameters for the bonding kernel module must be specified as a space-separated list in the BONDING_OPTS="bonding parameters" directive in the ifcfg-bondN interface file. Do not specify options specific to a bond in /etc/modprobe.d/bonding.conf, or in the deprecated /etc/modprobe.conf file. The max_bonds parameter is not interface specific and therefore, if required, should be specified in /etc/modprobe.d/bonding.conf as follows:

options bonding max_bonds=1

 

However, the max_bonds parameter should not be set when using ifcfg-bondN files with the BONDING_OPTS directive as this directive causes the network-scripts to create the bond interfaces as required.
Any changes to /etc/modprobe.d/bonding.conf do not take effect until the module is next loaded. A running module must first be unloaded.
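
As root, the module can be reloaded as shown below so that the new options take effect; note that unloading the module briefly takes any existing bonds down, so plan for an interruption:

~]# modprobe -r bonding
~]# modprobe bonding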

 

Creating Multiple Bonds
In Red Hat Enterprise Linux 6, for each bond, a channel bonding interface is created including the BONDING_OPTS directive. This configuration method is used so that multiple bonding devices can have different configurations. To create multiple channel bonding interfaces, proceed as follows:
  • Create multiple ifcfg-bondN files with the BONDING_OPTS directive; this directive causes the network scripts to create the bond interfaces as required.
  • Create, or edit existing, interface configuration files for the interfaces to be bonded and include the SLAVE directive.
  • Assign the interfaces to be bonded, the slave interfaces, to the channel bonding interfaces by means of the MASTER directive.
Example multiple ifcfg-bondN interface configuration files
The following is an example of a channel bonding interface configuration file:

DEVICE=bondN
IPADDR=192.168.1.1
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
NM_CONTROLLED=no
BONDING_OPTS="bonding parameters separated by spaces"

 

In this example, replace N with the number for the bond interface. For example, to create two bonds create two configuration files, ifcfg-bond0 and ifcfg-bond1.
Create the interfaces to be bonded as per the Example ifcfg-ethX bonded interface configuration file and assign them to the bond interfaces as required using the MASTER=bondN directive. For example, continuing on from the example above, if two interfaces per bond are required, then for two bonds create four interface configuration files and assign the first two using MASTER=bond0 and the next two using MASTER=bond1.
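
Continuing this example, a possible layout for two bonds with two members each (interface names are illustrative) is:

ifcfg-eth0 and ifcfg-eth1: MASTER=bond0
ifcfg-eth2 and ifcfg-eth3: MASTER=bond1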

 

Reference: Linux Channel Bonding Interfaces

 

Affected Products

Microsoft Windows Server 2016, Microsoft Windows Server 2019, Red Hat Enterprise Linux Version 5, Red Hat Enterprise Linux Version 6
Article Properties
Article Number: 000124262
Article Type: How To
Last Modified: 12 Aug 2024
Version: 7