Presentation
There are several ways to configure network teaming in RHEL 7:
- using the nmtui command and a Text User Interface,
- using the nmcli command at the Command Line Interface,
- using the graphical interface,
- through direct changes in the network configuration files.
For the rest of this tutorial, the nmcli option has been chosen because it’s the quickest method and arguably the least error-prone.
Prerequisites
To put this tutorial into practice, you need two VMs and access to their respective consoles.
Each VM has been installed with a base distribution (a minimal distribution should work but was not tested). Each VM has two network interfaces called eth0 and eth1.
Install the teamd package:
# yum install -y teamd
If a previous network configuration was set up, remove it on both VMs:
# nmcli con show
NAME                UUID                                  TYPE            DEVICE
Wired connection 1  f32cfcb7-3567-4313-9cf3-bdd87010c7a2  802-3-ethernet  eth1
System eth0         257e9416-b420-4218-b1eb-f14302f20941  802-3-ethernet  eth0
# nmcli con del f32cfcb7-3567-4313-9cf3-bdd87010c7a2
# nmcli con del 257e9416-b420-4218-b1eb-f14302f20941
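As a side note, nmcli con del also accepts connection names, so the cleanup can be done without copying UUIDs. A sketch, assuming the connection names shown above (adjust them to your own nmcli con show output):

```shell
# Delete old connection profiles by name instead of UUID.
# The names below are examples from this tutorial's output.
nmcli con del "Wired connection 1"
nmcli con del "System eth0"
```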
Teaming Configuration
Execute the following steps at the console of both VMs.
Create the teaming interface:
# nmcli con add type team con-name myteam0 ifname team0 config '{ "runner": {"name": "loadbalance"}}'
[10655.288431] IPv6: ADDRCONF(NETDEV_UP): team0: link is not ready
[10655.306955] team0: Mode changed to "loadbalance"
Connection 'myteam0' (ab0a5f7b-2547-4d4f-8fc8-834030839fc1) successfully added.
Note1: If you don’t specify con-name myteam0, the teaming interface will be named team-team0.
Note2: Examples of configuration are available in the /usr/share/doc/teamd-*/example_configs. You can also get some examples through man teamd.conf.
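Instead of typing the JSON in-line, the config argument can also point at a file, which makes those shipped examples directly reusable. A sketch, assuming the activebackup_ethtool_1.conf sample exists in your teamd version (file names vary between releases):

```shell
# Copy a sample config shipped with teamd and hand the file to nmcli.
# The exact sample file name depends on the installed teamd version.
cp /usr/share/doc/teamd-*/example_configs/activebackup_ethtool_1.conf /tmp/team.conf
nmcli con add type team con-name myteam0 ifname team0 config /tmp/team.conf
```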
Now, the file /etc/sysconfig/network-scripts/ifcfg-myteam0 contains the following main lines:
DEVICE=team0
TEAM_CONFIG="{ \"runner\": {\"name\": \"loadbalance\"}}"
DEVICETYPE=Team
NAME=myteam0
ONBOOT=yes
Add an IPv4 configuration:
In RHEL 7.0:
# nmcli con mod myteam0 ipv4.addresses "192.168.1.10/24 192.168.1.1"
# nmcli con mod myteam0 ipv4.method manual
From RHEL 7.1 on:
# nmcli con mod myteam0 ipv4.addresses 192.168.1.10/24
# nmcli con mod myteam0 ipv4.gateway 192.168.1.1
# nmcli con mod myteam0 ipv4.method manual
Note: If you don’t specify any IP configuration, both VMs will get their IP address and gateway through DHCP by default.
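To confirm the modifications were stored before activating anything, the connection profile can be queried (the exact property list shown depends on your NetworkManager version):

```shell
# Display the ipv4.* properties of the team connection profile;
# ipv4.addresses, ipv4.gateway and ipv4.method should match what was set.
nmcli con show myteam0 | grep ^ipv4
```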
Add the eth0 interface to the teaming interface:
# nmcli con add type team-slave con-name team0-slave0 ifname eth0 master team0
[10707.777803] team0: Port device eth0 added
[10707.779146] IPv6: ADDRCONF(NETDEV_CHANGE): team0: link becomes ready
Connection 'team0-slave0' (a9a5b612-aad6-48b0-a097-88db35c898d3) successfully added.
Note1: If you don’t specify con-name team0-slave0, the teaming slave interface will be named team-slave-eth0.
Note2: The file /etc/sysconfig/network-scripts/ifcfg-team0-slave0 has been created with the following main lines:
NAME=team0-slave0
DEVICE=eth0
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
Add the eth1 interface to the teaming interface:
# nmcli con add type team-slave con-name team0-slave1 ifname eth1 master team0
[10750.419419] team0: Port device eth1 added
Connection 'team0-slave1' (e468dce3-a032-4088-8173-e7bee1bd4ad5) successfully added.
Note1: If you don’t specify con-name team0-slave1, the teaming slave interface will be named team-slave-eth1.
Note2: The file /etc/sysconfig/network-scripts/ifcfg-team0-slave1 has been created with the following main lines:
NAME=team0-slave1
DEVICE=eth1
ONBOOT=yes
TEAM_MASTER=team0
DEVICETYPE=TeamPort
Activate the teaming interface:
# nmcli con up myteam0
[10818.800169] team0: Port device eth1 removed
[10818.803399] team0: Port device eth0 removed
[10818.939884] team0: Port device eth1 added
[10818.941069] IPv6: ADDRCONF(NETDEV_CHANGE): team0: link becomes ready
[10818.971887] team0: Port device eth0 added
[10819.932168] IPv6: team0: IPv6 duplicate address fe80::5054:ff:fe3f:860a detected!
Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/32)
Check the configuration:
# nmcli con show
NAME          UUID                                  TYPE            DEVICE
team0-slave0  a9a5b612-aad6-48b0-a097-88db35c898d3  802-3-ethernet  eth0
myteam0       ab0a5f7b-2547-4d4f-8fc8-834030839fc1  team            team0
team0-slave1  e468dce3-a032-4088-8173-e7bee1bd4ad5  802-3-ethernet  eth1
You can also use the teamdctl command to check the configuration state:
# teamdctl team0 state
setup:
  runner: loadbalance
ports:
  eth0
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
  eth1
    link watches:
      link summary: up
      instance[link_watch_0]:
        name: ethtool
        link: up
Or to dump the configuration:
# teamdctl team0 config dump
{
    "device": "team0",
    "ports": {
        "eth0": {
            "link_watch": {
                "name": "ethtool"
            }
        },
        "eth1": {
            "link_watch": {
                "name": "ethtool"
            }
        }
    },
    "runner": {
        "name": "loadbalance",
        "tx_hash": [
            "eth",
            "ipv4",
            "ipv6"
        ]
    }
}
You can also get the ports status with the teamnl command:
# teamnl team0 ports
 2: eth0: up 0Mbit HD
 3: eth1: up 0Mbit HD
In addition, you can directly edit the files in the /etc/sysconfig/network-scripts directory, but you then need to run the following command:
# nmcli con reload
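The typical workflow when going through the files, sketched below (the editor and the re-activation step are choices, not requirements; nmcli con reload only re-reads the profiles):

```shell
# Edit the connection file, re-read all profiles, then re-activate
# the connection so the changes take effect.
vi /etc/sysconfig/network-scripts/ifcfg-myteam0
nmcli con reload
nmcli con up myteam0
```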
Source: RHEL 7 Networking Guide and nmcli-examples man page.
Exam Tip
If you don’t remember all the details on exam day, get the information from the nmcli-examples and teamd.conf man pages or from the /usr/share/doc/teamd-*/example_configs and /usr/share/doc/teamd-*/example_ifcfgs directories.
Additional Resources
The RootUsers website’s got an interesting tutorial about Configuring Network Teaming in Linux.
Venkat Nagappan provides a nice video about Setting up Network Teaming and Bridging (20min/2015).
If you are working on a server with, for example, ONLY 2 Ethernet interfaces, do you still delete one of them with nmcli and create the teaming? Or will the whole connection go down because you are deleting the connection you are using?
At this moment, I don’t know. Try and let me know!
Is there any command like “teamdctl state” for bridging as well?
brctl show?
Shikaz,
I know this post is four years old, but I experienced related issues.
On VirtualBox, VMware Workstation Pro 12, and KVM, I created teamd interfaces using two NICs over a host-only network. Once up and configured, I could ping between two systems, both using teamd. However, I am not reliably able to break one of the connections and continue to ping. E.g., I used both “nmcli con down team0-slave0” and “ip link set down eth0”, expecting the connection to still ping… but it doesn’t. Sometimes it does; then I bring that connection up, take down the other, and ping fails. I should be able to drop either connection and have the ping continue. It behaves the same with bond, so it’s not specific to teamd. I know this works on physical systems because I have often tested it to ensure alternate switch paths work. I tried it using the KVM tutorial on this site with bridged connections and everything worked (KVM), so I am thinking it’s something about host-only networks.
I am baffled.
P.S. Using “teamdctl team0 state” and “teamnl team0 port” look normal.
If you use VirtualBox (or VMware) to configure bonding or teaming, ensure that network adapters have promiscuous mode set to “Allow All”, and then enable promisc mode on the network interfaces on both virtual machines.
man i have accomplished this by adding 3 Ethernet for tests and kept one up and two for teaming, one thing i don’t understand, when you say that this configuration to be done on 2 VMs what is the relation between them ? i mean each one will have a loadbalance between two ethernet on it’s own, is there any relation between the two machines ?
thanks
Both virtual machines are in the same physical server. This will be exactly the same situation during the exam.
For me it seems absolutely impossible to configure teaming in a CentOS 7 guest, both in VirtualBox and VMware. I follow the steps EXACTLY as shown here or in the nmcli-examples manual, BUT when I try to bring up the team I continuously get: Error: Connection activation failed: Active connection removed before it was initialized.
SO ANNOYING……..
Any advice or indication will be greatly appreciated 🙂
Interesting. Sorry, I don’t see how I could help you, I only use KVM at the moment.
Alamahat,
This reply may be untimely, but I was able to do the above exercise using VMplayer and CentOS 7. To accomplish it I needed to add additional network interfaces to the VM, which can be done as follows: CNTRL-D->Add->Network Adapter->Bridged (twice, to gain two NICs). Once this was accomplished, the above worked fairly well:
[root@server1 ~]# nmcli con show
NAME UUID TYPE DEVICE
eno16777736 14eb0203-db51-4f40-bf1e-276a4d4059c6 802-3-ethernet eno16777736
team0 279b773f-af14-4c8a-a05f-f6f0c434b220 team team0
team0-slave0 083af62a-9a1f-468d-9508-96a2d41d585c 802-3-ethernet ens37
team0-slave1 cc786c8b-f588-4ff2-8611-c064c43e766d 802-3-ethernet ens38
virbr0 50840a08-6013-4de8-add0-abd7375f145b bridge virbr0
Wired connection 1 2fe48e33-2f0b-34d7-b742-bbfeae06f869 802-3-ethernet —
Wired connection 2 dcd4a3d7-5910-3b4e-a059-97309cfa0b12 802-3-ethernet —
I have been using Vbox on Windows host for all my studying for Rhel.
But this begs the question: on the exams I don’t think there will be a Windows host. Most probably a RHEL 7 Desktop running 2 servers as KVM guests. I mean, there must be a Desktop somewhere… and there must be 2 servers also. It is very unclear.
I think RH should have been more clear about the exam environment..
🙂
There is no Windows host at the exams. You are in a KVM environment.
Through Internet, you’ve got access to a console that allows you to reboot KVM guests.
For the RHCSA exam, there is only one KVM guest without any graphical interface on it.
But is a graphical interface allowed on those two VMs for the RHCE? I find teaming easiest to configure via the GUI, one task done in 2-3 minutes or so.
Also, one question, is it possible that if I configure bonding/teaming with graphical interface, I don’t have the output with teamdctl team0 state?
I only have
setup:
runner: roundrobin
Thanks!
You shouldn’t rely too much on GUI for the exam. Learn to do it both ways, with and without GUI.
Concerning your question, I don’t know at the moment. I will need to test it.
When you do it by GUI, the output of commands is not shown until you do a restart, found that one out 🙂
I am asking because for a lot of stuff, Ghori mentions GUI in his books, and for some, use of GUI is much much quicker, that’s why I ask.
Of course, in production world, there won’t be a GUI, but on exam, in my opinion it’s just an issue of speed.
My question was just, is GUI available both on host and VMs or not at all?
You have to understand that there are two kinds of GUI: text GUI (authconfig-tui, nmtui, etc) and graphical GUI (authconfig-gtk, etc). You can think about using text GUI tools. However, don’t learn graphical GUI tools, they shouldn’t be available at the exam.
@ Rookie
GUI is available on the VMs during the exam. You just have to start it or enable it.
Just FYI, it is convenient to know that there are example team configs in /usr/share/doc/teamd-1.5/example_configs.
That way, it is easier to copy-paste a sample config instead of having to build it from memory or scan the manual if, e.g., an LACP team or an ARP ping watch is requested.
Regarding the connection activation failure, be careful with the runner syntax. If the syntax is bad (say you type runer instead of runner, or loadbaalance), that is the error you will get.
I’ve done this under vmware (both Workstation and ESXi) w/o trouble.
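Building on that tip, the runner JSON can be validated before it is ever handed to nmcli; a typo such as "runer" is syntactically valid JSON but a malformed string is caught immediately. A sketch ("python" on RHEL 7; use "python3 -m json.tool" on newer systems):

```shell
# Validate the team config JSON before passing it to nmcli.
CONFIG='{"runner": {"name": "loadbalance"}}'
echo "$CONFIG" | python -m json.tool > /dev/null && echo "JSON OK"
```

This only catches JSON syntax errors, not misspelled runner names, so double-checking against man teamd.conf is still worthwhile.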
This is a very interesting comment. I will update the tutorial. Thank you very much.
One note from my RHCSA test was the /usr/share/doc directory didn’t exist.
This is very strange. When you install the teamd package, you should get this directory.
Understood. I certainly have /usr/share/doc on my home systems. Perhaps the image used for testing excludes the docs directory by design. I can only report that I’d planned on using the docs directory to help with one of the commands and it wasn’t on the system.
On my test server (CentOS 7.2) it says “Error: Connection activation failed: NetworkManager plugin for ‘team’ unavailable”. I had to install NetworkManager-team and restart NetworkManager to enable the plugin.
Just FYI
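The fix described in that comment, sketched as commands (package and service names as reported above):

```shell
# Install the NetworkManager team plugin and restart NetworkManager
# so the plugin is loaded; fixes "NetworkManager plugin for 'team'
# unavailable" on minimal CentOS 7 installs.
yum install -y NetworkManager-team
systemctl restart NetworkManager
```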
Thanks for this information.
Another tip: it happened to me a couple of times that I made a mistake in the configuration and the team did not come up.
After fixing the team config, it did not come up either… and it ended up being that somehow the team slaves became disconnected.
If you reboot it will be up again, but a faster way to deal with this is nmcli device connect xxx, or something like that 🙂
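The recovery described in that comment, sketched as commands (interface and connection names taken from this tutorial's example; adjust to your setup):

```shell
# If the slave devices ended up disconnected after a failed activation,
# reconnect them and bring the team up again instead of rebooting.
nmcli device connect eth0
nmcli device connect eth1
nmcli con up myteam0
```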
Thanks.
CertDepot,
I am very grateful to you for making this website and sharing your knowledge! Thank you and may God continue to bless you.
Reading all the books, Michael Jang, Asghar Ghori, Sander van Vugt and others, I see that they are all missing a couple of things, and you just happen to have the steps which are missing from their books. For instance, reading “Network Teaming” in Sander van Vugt’s book, I was not even told that I had to remove existing network configurations on the slave interfaces. This is just one of them; there are a couple of other errors in this book which you happen to solve.
Thank you.
Some notes on setting up this environment in VirtualBox..
1) Go to VirtualBox Preferences>>Network>>Host-only Networks.
Make sure you have at least one host-only network (i.e. vboxnet0)
2) Go to your virtual machine settings. Click on network.
For Adapter 1, select Host-only adapter. Select your host-only network (i.e. vboxnet0).
Click Advanced.
Promiscuous Mode should be set to Allow All.
3) Do the exact same thing for Adapter2.
4) Once you are inside your instance, you will need to set your eth0 and eth1 interfaces to promiscuous mode.
ip link set eth0 promisc on
ip link set eth1 promisc on
My guess is that promiscuous mode may be necessary to test teaming with VirtualBox and VMware Fusion/Workstation due to MAC addresses being duplicated inside the virtual machine (it occurs when one attaches a team-slave to a team-master..the team-master will get the MAC of the first slave to connect).
Duplicate MACs confuse these hypervisors’ underlying host-only network switch, which can hard-code the relationship between MAC addresses and interfaces, unlike a traditional learning switch or software bridge with expiring fdb entries.
In Vmware Fusion, you’ll actually get a warning that it detects duplicate MAC addresses as soon as you bring the team-master up.
Turning promisc on eth0 and eth1 will allow you to prove the setup works when you disconnect a slave interface from the team by pinging the team’s interface ip address from your host machine or another vm on the same host-only network.
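The failover test described in that comment can be sketched as follows (the connection name and IP address come from this tutorial's example; run the ping from another machine on the same network):

```shell
# Drop one slave and verify the team survives.
nmcli con down team0-slave0
teamdctl team0 state        # eth0 should now show link down or be absent
ping -c 3 192.168.1.10      # from the other VM: should still answer
# Restore the slave afterwards.
nmcli con up team0-slave0
```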
Thanks for this information.
Hi Everyone
For those having issues with network teaming, nmtui command is easy to use. Get the JSON config examples from:
# ls /usr/share/doc/teamd-1.9/example_configs/
Network teaming can be done in four ways.
1. nmcli command
2. nmtui command
3. nm-connection-editor
4. NetworkManager
Out of the four listed above, please don’t use methods 3 and 4. I ran into errors; the team never came up.
FYI
GUI is available in the exam (Both RHCSA and RHCE) and is 100% allowed. But try by all means to stay away from GUI.
Regards
Bruce
Thanks for all these details.
Did you try configuring an IPv6 address on a team interface? I got an error. The IPv6 address displays:
inet6 x:x:x:x::1/64 scope global tentative dadfailed.
No, I didn’t try.
I have two servers: Server 1 has 2x10Gbit ethernet connections; Server 2 has 2x10Gbit ethernet connections. Both servers are running CentOS Linux release 7.2.1511 3.10.0-327.18.2.el7.x86_64 with Intel X710 10 GbR latest version 17.0.12 on Dell PowerEdge R730 server.
Teaming LACP is UP & RUNNING but when I perform network throughput tests, it’s showing only 9Gb speed.
Switch is Dell 10Gb compatible, The configuration on the switch seems ok.
What is the bottleneck? Someone can help me?
teamd-1.17-6.el7_2.x86_64
# nmcli con show
NAME UUID TYPE DEVICE
em1 e9757f4f-c0b4-4bbc-bd4e-b66103553000 802-3-ethernet em1
p5p2 5993e656-31cc-197d-8359-a7d520292c34 802-3-ethernet p5p2
p5p1 ae980826-1d4c-660f-07b7-c4ec1025b41b 802-3-ethernet p5p1
team0 702de3eb-2e80-897c-fd52-cd0494dd8123 team team0
# teamdctl team0 state
setup:
runner: lacp
ports:
p5p1
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
runner:
aggregator ID: 6, Selected
selected: yes
state: current
p5p2
link watches:
link summary: up
instance[link_watch_0]:
name: ethtool
link: up
down count: 0
runner:
aggregator ID: 6, Selected
selected: yes
state: current
runner:
active: yes
fast rate: yes
# teamdctl team0 config dump
{
    "device": "team0",
    "link_watch": {
        "name": "ethtool"
    },
    "ports": {
        "p5p1": {
            "prio": 9
        },
        "p5p2": {
            "prio": 10
        }
    },
    "runner": {
        "active": true,
        "fast_rate": true,
        "name": "lacp",
        "tx_hash": [
            "eth",
            "ipv4",
            "ipv6"
        ]
    }
}
# teamnl team0 ports
6: p5p1: up 10000Mbit FD
7: p5p2: up 10000Mbit FD
# ethtool p5p2
Settings for p5p2:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: d
Wake-on: d
Current message level: 0x0000000f (15)
drv probe link timer
Link detected: yes
# ethtool p5p1
Settings for p5p1:
Supported ports: [ FIBRE ]
Supported link modes: 10000baseT/Full
Supported pause frame use: Symmetric
Supports auto-negotiation: No
Advertised link modes: Not reported
Advertised pause frame use: No
Advertised auto-negotiation: No
Speed: 10000Mb/s
Duplex: Full
Port: Direct Attach Copper
PHYAD: 0
Transceiver: external
Auto-negotiation: off
Supports Wake-on: g
Wake-on: d
Current message level: 0x0000000f (15)
drv probe link timer
Link detected: yes
# iperf3
[SUM] 0.00-6.00 sec 6.58 GBytes 9.43 Gbits/sec 0 sender
[SUM] 0.00-6.00 sec 6.57 GBytes 9.41 Gbits/sec receiver
Hi Guys,
What does it mean when, in the RHCE Link Aggregation part, it says:
“Note: run “lab teambridge setup” to add two more interfaces. In the exam you have to configure both the serverX and desktopX machines.”
Does each machine have one interface, and should I add another one for link aggregation?
Why do we need to set up two virtual machines? Is this the given scenario during the exam? I only ask because two of my study materials regarding this topic only explains a scenario where you’re given one virtual machine with multiple network interfaces and you need to team two of the interfaces from this one virtual machine.
Is there an exam scenario where I’ll be given two virtual machines and need to team the network interfaces from each virtual machine?
Also for the one virtual machine scenario, can I team a primary network interface with a secondary interface or can I only team two or more non-primary interfaces?
Appreciate any clarification, thanks.
You need to use two virtual machines in order to test your configuration. With only one virtual machine, I don’t see how you can check that your configuration is correct.
Concerning the exam details, I don’t know it.
CertDepot is right, how can you test teaming between two RHEL systems if you only use one RHEL server?
After reading Sander’s book on network teaming, I was under the impression that we are supposed to do teaming between 2 or more NICs of the same server (not two VMs). Please correct me if I am wrong.
Teaming or bonding is effectively done between 2 or more NICs on the same server.
Afterwards, you can connect 2 servers/VMs together, or one server and one switch.
In terms of config – would multiple configs do a similar task?
Let’s say I want link redundancy. I’ve got 2 NICs, and in case a NIC goes down I still want the team to be up.
Would I use an active-backup config for that, or would load balancing achieve the same result?
I’m assuming either would work, but technically active-backup is probably the better solution? I’d also imagine the load-balance config would just keep routing traffic to the NIC that’s up, thus effectively doing the same thing.
Any differences?
Thanks!
It depends on the setup and requirements of the network. From my understanding, you can only use one runner method per configuration; the setup is compiled at runtime.
Ask yourself which is the most important factor. Do a risk analysis of a network failure.
The Red Hat Manual was not clear to me. I found the following
”
roundrobin: This is the default that we are using, it simply sends packets to all interfaces in the team in a round robin manner, that is one at a time followed by the next interface.
broadcast: All traffic is sent over all ports.
activebackup: One interface is in use while the other is set aside as a backup, the link is monitored for changes and will use the failover link if needed.
loadbalance: Traffic is balanced over all interfaces based on Tx traffic; equal load should be shared over available interfaces.
”
References https://www.rootusers.com/how-to-configure-network-teaming-in-linux/
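For the pure link-redundancy case discussed above, an active-backup team can be created with the same nmcli pattern used earlier in this tutorial, just with a different runner name (connection and interface names reused from the tutorial's example):

```shell
# Active-backup team: one port carries traffic, the other is a hot
# spare that takes over when the ethtool link watch reports a failure.
nmcli con add type team con-name myteam0 ifname team0 \
    config '{"runner": {"name": "activebackup"}, "link_watch": {"name": "ethtool"}}'
```

Loadbalance also survives a single link failure, but active-backup makes the failover behaviour explicit and is the simpler choice when throughput aggregation is not needed.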
Hi CertDepot and Lisenet, could you guys please guide me in the right direction? I am a bit confused with network teaming, and it’s causing connectivity problems with iSCSI/target and port forwarding.
so when we start the exam we get two sets of IPs:
– one for the Server (example IP: 192.168.4.10/24)
– another one for the Client (Desktop). (example IP: 192.168.4.20/24)
To start our exam, We set the interface enp0s on both server and client with the given IP address and gateway information.
Then we are asked to configure an iSCSI target which should be available to the client only. We configure the iSCSI target and log in to the portal from the client with the target’s (server’s) IP 192.168.4.10.
Then, later on in the exam, we were asked to configure the port forwarding for the client IP address 192.168.4.20 which should be forwarded to the specific port of server (192.168.4.10).
Then we are asked to configure network teaming on both server and client with interfaces enp0s3 and enp0s7. The IPs are the following:
Server: 192.168.4.30
Client : 192.168.4.40
So we created the network teaming with active backup and activated the teamed connection with auto-connect method.
Both server and client are pingable from each other; everything works fine. But as soon as the teaming connection starts, enp0s3’s old IP address disappears, and the IP addresses 192.168.4.10 and 192.168.4.20 become inaccessible. The iSCSI target and port forwarding don’t work anymore.
Shall we configure ISCSI target for Teamed IP address?
Please advise.
IMO, unless the exam question asks you to configure iSCSI on the teamed interface explicitly, it should be configured on 192.168.4.10/24 (as per your example).
I hope this question won’t break the NDA: during the RHCE exam, how many network interfaces does each VM have? Is it two or three?
I cannot tell you what’s on the exam I’m afraid, but if you practice for different scenarios, you won’t have issues during the exam.
That comes close to the NDA. In short there is no way of knowing. The exam changes from time to time.
Thank you sam and Lisenet, I did practice with 2 NICS and 3 NICS.
Hi all,
I am facing some kind of strange issue on VMware 12.
I have four Ethernet interfaces: ens37, ens38, ens39 and ens40. The team connection uses ens38 and ens39. I can access the team IP. But the problem is that when I disconnect both ens39 and ens40, the team IP is still accessible. Any idea about this issue?
Team is on ens38 and ens39, you disconnect ens39 and ens40, but team remains running on ens38.
Sorry my mistake 🙁
Team is using ens39 and ens40. After disconnecting ens39 and ens40, the team IP is still accessible.
This is all clear; the only confusing part to me is: “configure aggregated links between two RHEL systems”?
Does that mean that teaming should be configured on two VMs and that’s all?
Hi,
I’m definitely going to use nmtui instead of the command line in the exam.
It’s hard to remember, and using the command line wastes time.
It took me 10 minutes to set up teaming on the command line and only around 2 minutes using nmtui.
Yes, I perfectly understand. However, Red Hat wrote in its documentation that nmtui would be deprecated.
I have no idea if this will happen but I thought I had to disclose this information.
Note there are bugs in nmtui. As CertDepot pointed out, it is no longer being updated. I would suggest you know how to use all the methods, including the GUI version and nmcli.
It took you 10 minutes to set it up because you didn’t know the tool. If you invest some time in practising and learning nmcli, you will configure teaming in 90 seconds, or less. It’s the same with everything.
Why are we all dealing with the spaghetti syntax of nmcli con this or that? There are only 5 lines to remember to write into the ifcfg file for teamed interfaces, and 7 or 8 for the team interface itself. Create those ifcfg-teamN files by hand and be done with it. I only use nmcli con or nmcli dev to bring connections/devices up or down, or eventually to reload a config file. I’d rather go that route than untangle nmcli’s messy configuration syntax. And if you want a separate JSON file for the configuration instead of in-line, how difficult is it to remember con mod teamN team.config /path/to/team.conf?
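The hand-written approach described in that comment can be sketched as follows; the file contents mirror the ifcfg examples shown earlier in this tutorial, with the IP settings added as static keys:

```shell
# Write the team interface file by hand, then have NetworkManager
# re-read the connection profiles.
cat > /etc/sysconfig/network-scripts/ifcfg-team0 << 'EOF'
DEVICE=team0
DEVICETYPE=Team
NAME=team0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
TEAM_CONFIG="{ \"runner\": {\"name\": \"loadbalance\"}}"
EOF
nmcli con reload
```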
Regarding copy/pasting from example configurations into the actual conf files: nano makes that easy; vim doesn’t seem to like it when you want to copy lines from one file to another.