https://wiki.linuxfoundation.org/networking/netem
How can I use netem on incoming traffic?
You need to use the Intermediate Functional Block pseudo-device IFB. This network device allows attaching queuing disciplines to incoming packets.
# modprobe ifb
# ip link set dev ifb0 up
# tc qdisc add dev eth0 ingress
# tc filter add dev eth0 parent ffff: \
protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
# tc qdisc add dev ifb0 root netem delay 750ms
Another way is to use another machine as an Ethernet bridge, and apply netem to both Ethernet devices.
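As a rough illustration of that second approach, here is a minimal sketch, assuming a separate bridge machine with two NICs called eth0 and eth1 (the interface names and the 50ms delay are example values, not from the wiki):
# create a bridge and enslave both NICs
ip link add name br0 type bridge
ip link set dev eth0 master br0
ip link set dev eth1 master br0
ip link set dev br0 up
# delay traffic leaving either NIC, so packets crossing the bridge
# are delayed in both directions
tc qdisc add dev eth0 root netem delay 50ms
tc qdisc add dev eth1 root netem delay 50ms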
So the wiki briefly mentions these two approaches. Following the hints above, let's run the commands one by one and try it out.
modprobe ifb
Run the command:
modprobe ifb
To verify, run:
ip link list
Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:00:29:30 brd ff:ff:ff:ff:ff:ff
3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:b1:cd:f3:26 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:b5:6d:27:f0 brd ff:ff:ff:ff:ff:ff
6: veth250924d@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 7e:f2:81:b2:06:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
12: vethd86893f@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
link/ether 5a:60:84:59:3e:90 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether be:69:e0:c3:61:e6 brd ff:ff:ff:ff:ff:ff
14: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
405: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
link/ether 2e:21:ac:75:30:3f brd ff:ff:ff:ff:ff:ff
406: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
link/ether a6:a9:f7:cd:58:06 brd ff:ff:ff:ff:ff:ff
Two new interfaces have appeared, ifb0 and ifb1, both in state DOWN.
ip link set dev ifb0 up
Run the command:
ip link set dev ifb0 up
To verify, run:
ip link list
Output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:16:3e:00:29:30 brd ff:ff:ff:ff:ff:ff
3: docker_gwbridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:b1:cd:f3:26 brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
link/ether 02:42:b5:6d:27:f0 brd ff:ff:ff:ff:ff:ff
6: veth250924d@if5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP mode DEFAULT group default
link/ether 7e:f2:81:b2:06:58 brd ff:ff:ff:ff:ff:ff link-netnsid 0
12: vethd86893f@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker_gwbridge state UP mode DEFAULT group default
link/ether 5a:60:84:59:3e:90 brd ff:ff:ff:ff:ff:ff link-netnsid 2
13: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether be:69:e0:c3:61:e6 brd ff:ff:ff:ff:ff:ff
14: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
link/ether 0a:58:0a:f4:00:01 brd ff:ff:ff:ff:ff:ff
405: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN mode DEFAULT group default qlen 32
link/ether 2e:21:ac:75:30:3f brd ff:ff:ff:ff:ff:ff
406: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
link/ether a6:a9:f7:cd:58:06 brd ff:ff:ff:ff:ff:ff
The ifb0 interface is now up; its state shows as UNKNOWN, which is normal for a virtual device such as IFB.
tc qdisc add dev eth0 ingress
Run the command:
tc qdisc add dev eth0 ingress
To verify, run:
tc qdisc show dev eth0
Output:
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: parent ffff:fff1 ----------------
A new entry has appeared: qdisc ingress ffff: parent ffff:fff1
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
Run the command:
tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 flowid 1:1 action mirred egress redirect dev ifb0
To verify, run:
tc filter list dev eth0 parent ffff:fff1
Output:
filter parent ffff: protocol ip pref 49152 u32
filter parent ffff: protocol ip pref 49152 u32 fh 800: ht divisor 1
filter parent ffff: protocol ip pref 49152 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1
match 00000000/00000000 at 0
action order 1: mirred (Egress Redirect to device ifb0) stolen
index 4 ref 1 bind 1
We can see Egress Redirect to device ifb0: the filter attached to eth0's ingress qdisc (parent ffff:) redirects incoming packets to the device ifb0.
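To double-check that packets really flow through ifb0, you can also watch its qdisc statistics while some traffic arrives on eth0 (a small sketch, not part of the wiki example):
tc -s qdisc show dev ifb0
# the Sent bytes/packet counters should keep increasing as incoming
# traffic on eth0 is redirected to ifb0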
tc qdisc add dev ifb0 root netem delay 750ms
Run the command:
tc qdisc add dev ifb0 root netem delay 750ms
To verify, run:
tc qdisc show dev ifb0
Output:
qdisc netem 800d: root refcnt 2 limit 1000 delay 750.0ms
Then ping the test Linux machine from the local host and take a look: the round-trip time goes up, so the command has taken effect!
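A quick way to check, as a sketch (192.0.2.10 is only a placeholder for the test machine's address):
ping -c 4 192.0.2.10
# with netem on ifb0 only, the round-trip time should be roughly
# 750 ms higher than before the qdisc was added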
Next, add the same delay on the outbound direction as well. Run the command:
tc qdisc add dev eth0 root netem delay 750ms
To verify, run:
tc qdisc show dev eth0
Output:
qdisc netem 800e: root refcnt 2 limit 1000 delay 750.0ms
qdisc ingress ffff: parent ffff:fff1 ----------------
Ping the test Linux machine from the local host again: both the inbound and the outbound direction are now delayed, so the round-trip time grows by roughly 750 ms + 750 ms = 1500 ms.
tc qdisc del dev ifb0 root netem
Run the command:
tc qdisc del dev ifb0 root netem
To verify, run:
tc qdisc show dev ifb0
Output:
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
tc qdisc del dev eth0 root netem
Run the command:
tc qdisc del dev eth0 root netem
To verify, run:
tc qdisc show dev eth0
Output:
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
qdisc ingress ffff: parent ffff:fff1 ----------------
tc filter del dev eth0 parent ffff: protocol ip pref 49152 u32
If you do not remember the filter's parameters (such as its pref value), list the filters first:
tc filter list dev eth0 parent ffff:fff1
Output:
filter parent ffff: protocol ip pref 49152 u32
filter parent ffff: protocol ip pref 49152 u32 fh 800: ht divisor 1
filter parent ffff: protocol ip pref 49152 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:1
match 00000000/00000000 at 0
action order 1: mirred (Egress Redirect to device ifb0) stolen
index 4 ref 1 bind 1
Then run:
tc filter del dev eth0 parent ffff: protocol ip pref 49152 u32
This removes the filter.
tc qdisc del dev eth0 ingress
Finally, run:
tc qdisc del dev eth0 ingress
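Putting the whole chapter together, the setup and teardown can be scripted roughly like this (a sketch of the commands used above; adjust the interface name and delay to your environment):
#!/bin/sh
# set up: delay incoming traffic on eth0 by 750 ms via ifb0
modprobe ifb
ip link set dev ifb0 up
tc qdisc add dev eth0 ingress
tc filter add dev eth0 parent ffff: \
    protocol ip u32 match u32 0 0 flowid 1:1 \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root netem delay 750ms

# ... run your tests here ...

# tear down: remove everything again
tc qdisc del dev ifb0 root netem
tc filter del dev eth0 parent ffff: protocol ip pref 49152 u32
tc qdisc del dev eth0 ingress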
That's it. This chapter followed the incoming-traffic example from https://wiki.linuxfoundation.org/networking/netem and walked through it hands-on.
In the next chapter, we will continue with https://wiki.linuxfoundation.org/networking/netem and try out the six aspects of network emulation that netem provides, ending with some suggested settings for five of them for simulating a weak network. Rate control can be left out!