iperf bandwidth less than interface speed
I used the Linux bonding driver to bond 2 NICs in mode 6 on the client PC.
ethtool ethX shows speed =1000
ethtool bond0 shows speed =2000
But when I use iperf:
bandwidth of eth0 = 934Mbps
bandwidth of eth1 = 637Mbps
bandwidth of bond0 = 934Mbps
Shouldn't the bandwidth of bond0 be around 2000 Mbps?
In short: no, bonding does not work in this manner.
Long story: the Linux bonding driver, with its various bonding schemes, is very configurable. It has no fewer than seven different bonding modes, each with its strong and weak points. I strongly suggest you read the documentation, which you can find here. The takeaway, however, is that (except for the round-robin mode, which I detail below) no bonding scheme is capable of increasing the throughput of a single session; rather, they speed up multiple concurrent sessions. So your iperf output is perfectly normal, as it opens a single session, which cannot be accelerated by the bonding driver.
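To see the difference yourself, you can compare a single iperf stream with several parallel ones. A sketch (the server address 192.168.1.10 is an assumption; adjust to your setup), keeping in mind that whether the streams actually spread across slaves depends on the bonding mode — mode 6 (balance-alb) balances traffic per peer, so multiple streams to a single server may still ride on one NIC, while hash-based modes can spread distinct flows:

```shell
# On the server side:
iperf -s

# On the client: one stream, which uses a single slave NIC at most
iperf -c 192.168.1.10

# Four parallel streams (-P 4); with a mode that distributes flows
# across slaves, the aggregate can approach the sum of the link speeds
iperf -c 192.168.1.10 -P 4
```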
The only exception to this rule is the round-robin mode, which transmits packets in a, well, round-robin fashion: the first one goes out of the first interface, the second one out of the second interface, and so on. This bonding mode can accelerate a single session by virtue of concurrently sent packets. However, it has many pitfalls, ranging from incompatible switches, to out-of-order packet delivery (with the resulting retransmissions), to poor scaling beyond 2 interfaces.
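For reference, setting up a round-robin bond with the iproute2 tooling could look roughly like this (a sketch; the bond0/eth0/eth1 names are assumptions, and your distribution may prefer to manage bonds through its own network configuration instead):

```shell
# Create a bond in balance-rr (mode 0) and enslave two NICs
ip link add bond0 type bond mode balance-rr
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```

Note that the bonding mode cannot be changed while the bond is up, so an existing bond has to be brought down before switching it to balance-rr.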
This is only a very concise summary. If you are interested in how bonding works, you should really take a serious look at the documentation linked above.
I am currently experimenting with round-robin – I noticed that when bonding 2 NICs it does provide 1.6-1.7 Gb/s when using iperf (keep in mind that the two machines I use to test the speed have 2 NICs each, in mode 0). I did a test today with 3 NICs and got ~900 Mb/s – the reason for this is that round-robin works best with an even number of NICs. I would only use it as a backbone for server backup (on the cheap) with an NFS share…