What You Will Learn

  • How a virtual machine's ping packet makes its way out to the external network

DevStack Environment Preparation

DevStack host: a VMware virtual machine (CPU virtualization enabled) with 4 GB RAM, 2 vCPUs, a 50 GB disk, and 2 NICs in NAT mode, running CentOS 7
# cat ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.30.11
PREFIX=24
GATEWAY=192.168.30.2
DNS1=114.114.114.114
# cat ifcfg-eth1
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=no
NAME=eth1
DEVICE=eth1
ONBOOT=yes

Deploying DevStack

# git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata
# devstack/tools/create-stack-user.sh
### Set a password for the stack user
# passwd stack
# su stack ### Deploy OpenStack as the stack user
# cd
# git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata
# cd devstack
# cp samples/local.conf .
# vim local.conf
ADMIN_PASSWORD=123456
# ./stack.sh
### Query the status of the OpenStack services
# systemctl status devstack@* ### 1. After the deployment completes, add eth1 to br-ex
# ovs-vsctl add-port br-ex eth1
### 2. Log in to http://192.168.30.11/ and delete all networks and routers under the alt_demo and demo projects, then delete the public network and recreate it to match your own external segment. Here the VMware NAT network is 192.168.30.0/24 with gateway 192.168.30.2
### 3. Under the admin project, create the private network 192.168.0.0/24 and add a router connecting the public and private networks
### 4. Boot virtual machine A on the private network (a CLI sketch of steps 2-4 follows below)
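
For reference, steps 2-4 can also be done from the command line. The sketch below is an untested outline, assuming admin credentials are loaded; the names "public", "private", "router" and "vm-a", the flat provider label, and the image/flavor are placeholder assumptions to adjust for your environment:

### Recreate the public network on the VMware NAT segment
# openstack network create --external --provider-network-type flat --provider-physical-network public public
# openstack subnet create --network public --subnet-range 192.168.30.0/24 --gateway 192.168.30.2 --no-dhcp --allocation-pool start=192.168.30.100,end=192.168.30.200 public-subnet
### Create the private network and connect the two through a router
# openstack network create private
# openstack subnet create --network private --subnet-range 192.168.0.0/24 private-subnet
# openstack router create router
# openstack router set --external-gateway public router
# openstack router add subnet router private-subnet
### Boot VM A on the private network
# openstack server create --image cirros --flavor m1.tiny --network private vm-a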

The final DevStack network topology is shown below

[Figure: DevStack network topology]

Ping Packet Flow

Alias     Device            IP                  MAC
tap-XXX   tapc8139c13-4e    192.168.0.4/24      fe:16:3e:3b:1c:16
qvb-XXX   qvbc8139c13-4e    -                   2a:6b:39:d5:a3:83
qvo-XXX   qvoc8139c13-4e    -                   22:76:f3:85:76:df
qr-XXX    qr-51cb52fe-34    192.168.0.1/24      fa:16:3e:f2:e6:c2
qg-XXX    qg-997f3a2c-2b    192.168.30.101/24   fa:16:3e:e8:89:77
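
These devices can be enumerated on the host (a quick verification aid, not part of the original capture):

### List the OVS wiring and the per-VM plumbing for this port
# ovs-vsctl show
# ip link show | grep -E 'tap|qvb|qvo'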

tap-XXX to qvb-XXX

### Security group flows will be covered in a separate post
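
In brief, though: with the iptables_hybrid firewall driver (an assumption about this setup), tap-XXX and qvb-XXX both attach to a per-port Linux bridge qbr-XXX, where the security group rules are applied by iptables. A quick hedged check:

### Both tapc8139c13-4e and qvbc8139c13-4e should appear as bridge members
# brctl show qbrc8139c13-4e
### The matching security group rules live in the neutron-openvswi-* chains
# iptables -S | grep c8139c13-4e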

qvo-XXX to qr-XXX

When the packet arrives at qvo-XXX, it is forwarded according to br-int's OVS flow tables

### First, let's look at br-int's flow tables; only the key entries are listed here
# ovs-ofctl show br-int
3(qvoc8139c13-4e): addr:22:76:f3:85:76:df
config: 0
state: 0
current: 10GB-FD COPPER
speed: 10000 Mbps now, 0 Mbps max ### Start with the table=0 rows, skip the rows matching on icmp6 and arp fields, and find the row with in_port=3
# ovs-ofctl dump-flows br-int
cookie=0xb0a9009a2f367add, duration=20319.589s, table=0, n_packets=121, n_bytes=12082, idle_age=19038, priority=9,in_port=3 actions=resubmit(,25)
### The next table to look at is 25. Among the table=25 rows, find the one whose source MAC equals that of tapc8139c13-4e (matching on in_port=3 works just as well; qvb-XXX and qvo-XXX are a veth pair, so every packet tap-XXX sends to qvb-XXX shows up on qvo-XXX as if tap-XXX had sent it to qvo-XXX directly, which is why dl_src here is not qvb-XXX's MAC)
cookie=0xb0a9009a2f367add, duration=20319.604s, table=25, n_packets=122, n_bytes=12068, idle_age=19036, priority=2,in_port=3,dl_src=fa:16:3e:3b:1c:16 actions=resubmit(,60)
### Finally, we can see that the action in table=60 is NORMAL: the packet goes through the device's regular L2/L3 processing (packets bound for the external network are sent straight to the private network's gateway, i.e. qr-XXX). Dumping the DATAPATH shows the final forwarding path
cookie=0xb0a9009a2f367add, duration=20344.086s, table=60, n_packets=4715, n_bytes=447847, idle_age=0, priority=3 actions=NORMAL ### First, check the ports
# ovs-dpctl show
system@ovs-system:
port 4: qr-51cb52fe-34 (internal)
port 11: qvoc8139c13-4e
### Here we can see that a DATAPATH flow from qvo-XXX to qr-XXX has been created
# ovs-dpctl dump-flows system@ovs-system
recirc_id(0),in_port(11),eth(src=fa:16:3e:3b:1c:16,dst=fa:16:3e:f2:e6:c2),eth_type(0x0800),ipv4(frag=no), packets:1, bytes:98, used:2.121s, actions:4
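
As a cross-check (not part of the original capture), ovs-appctl ofproto/trace can replay a synthetic packet through br-int and should print the same resubmit chain, table 0 -> 25 -> 60 -> NORMAL:

### Trace an ICMP packet from the VM's MAC toward the qr-XXX gateway MAC
# ovs-appctl ofproto/trace br-int in_port=3,icmp,dl_src=fa:16:3e:3b:1c:16,dl_dst=fa:16:3e:f2:e6:c2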

qr-XXX to qg-XXX

Neutron's L3 agent implements routing and NAT with iptables, and each router it manages runs in its own network namespace

# ip netns
qdhcp-5fc6d4d3-4580-42a3-8dd4-bfcef4d5961f
qdhcp-72f9fa34-5f0d-477a-a634-305a6af7968c
qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab
# ip netns exec qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab bash

First, let's look at the routing table

# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 192.168.30.2 0.0.0.0 UG 0 0 0 qg-997f3a2c-2b
192.168.0.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-51cb52fe-34
192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 qr-cf0bb1cb-86
192.168.30.0 0.0.0.0 255.255.255.0 U 0 0 0 qg-997f3a2c-2b

We are now inside the router's network namespace. According to the routing table, the ping packet will leave through qg-XXX toward 192.168.30.2. Next, let's look at how iptables performs SNAT as the packet exits qg-XXX.
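
Before moving on to SNAT, the route lookup itself can be confirmed with ip route get (a verification aid, not in the original trace):

### Ask the kernel which route an external destination takes from inside the namespace
# ip netns exec qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab ip route get 114.114.114.114
### expected output: 114.114.114.114 via 192.168.30.2 dev qg-997f3a2c-2b src 192.168.30.101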

### The L3 agent implements SNAT with custom chains; note that when a floating IP exists, step (7) is never reached
# ip netns exec qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab iptables -t nat -S
-P PREROUTING ACCEPT
-P INPUT ACCEPT
-P OUTPUT ACCEPT
-P POSTROUTING ACCEPT
-N neutron-l3-agent-OUTPUT
-N neutron-l3-agent-POSTROUTING ### SNAT chain added by Neutron
-N neutron-l3-agent-PREROUTING
-N neutron-l3-agent-float-snat ### SNAT chain added by Neutron
-N neutron-l3-agent-snat ### SNAT chain added by Neutron
-N neutron-postrouting-bottom ### SNAT chain added by Neutron
-A PREROUTING -j neutron-l3-agent-PREROUTING
-A OUTPUT -j neutron-l3-agent-OUTPUT
-A POSTROUTING -j neutron-l3-agent-POSTROUTING ### (1) Jump to the custom chain neutron-l3-agent-POSTROUTING
-A POSTROUTING -j neutron-postrouting-bottom ### (3) Jump to the custom chain neutron-postrouting-bottom
-A neutron-l3-agent-OUTPUT -d 192.168.30.107/32 -j DNAT --to-destination 192.168.0.4
-A neutron-l3-agent-POSTROUTING ! -i qg-997f3a2c-2b ! -o qg-997f3a2c-2b -m conntrack ! --ctstate DNAT -j ACCEPT ### (2) Accept packets whose inbound and outbound interfaces are both not qg-997f3a2c-2b and whose conntrack state is not DNAT (our packet leaves through qg-997f3a2c-2b, so this rule does not match; continue to the next one)
-A neutron-l3-agent-PREROUTING -d 192.168.30.107/32 -j DNAT --to-destination 192.168.0.4
-A neutron-l3-agent-PREROUTING -d 169.254.169.254/32 -i qr-+ -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 9697
-A neutron-l3-agent-float-snat -s 192.168.0.4/32 -j SNAT --to-source 192.168.30.107 ### (6) Because a floating IP exists, the source is SNAT'ed to the floating IP 192.168.30.107 and matching ends here
-A neutron-l3-agent-snat -j neutron-l3-agent-float-snat ### (5) Jump to neutron-l3-agent-float-snat
-A neutron-l3-agent-snat -o qg-997f3a2c-2b -j SNAT --to-source 192.168.30.101 ### (7) Without a floating IP the packet would reach this rule, and its source would be rewritten to 192.168.30.101
-A neutron-l3-agent-snat -m mark ! --mark 0x2/0xffff -m conntrack --ctstate DNAT -j SNAT --to-source 192.168.30.101 ### (8) SNAT for unmarked connections in DNAT state
-A neutron-postrouting-bottom -m comment --comment "Perform source NAT on outgoing traffic." -j neutron-l3-agent-snat ### (4) Jump to neutron-l3-agent-snat
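
To see rules (5)-(6) actually fire, you can watch the packet counters on the float-snat chain while the VM pings an external address (again a verification aid, not part of the original capture):

### The counters on the SNAT rule should increase with every echo request
# ip netns exec qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab iptables -t nat -L neutron-l3-agent-float-snat -v -n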

qg-XXX to int-br-ex

By the time the ping packet re-enters br-int through qg-XXX, its source address has already been rewritten to 192.168.30.107
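
Before following it through br-int, the rewrite can be observed directly (not part of the original capture) by capturing on qg-XXX inside the router namespace while the VM pings out; the ICMP source should show 192.168.30.107:

### Watch the SNAT'ed ping leave the router's external leg
# ip netns exec qrouter-a8956ad4-bbd1-4cef-a307-528b0b39a7ab tcpdump -ni qg-997f3a2c-2b icmp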

# ovs-ofctl show br-int
1(int-br-ex): addr:32:f4:d5:fc:39:76
config: 0
state: 0
speed: 0 Mbps now, 0 Mbps max
12(qg-997f3a2c-2b): addr:00:00:00:00:d0:7d
config: PORT_DOWN
state: LINK_DOWN
speed: 0 Mbps now, 0 Mbps max ### There is no in_port=12 row in table=0, so the only row that matches is the one below
# ovs-ofctl dump-flows br-int
cookie=0xb0a9009a2f367add, duration=24922.247s, table=0, n_packets=252, n_bytes=24551, idle_age=26, priority=0 actions=resubmit(,60)
### The final action, once again, is NORMAL
cookie=0xb0a9009a2f367add, duration=24922.244s, table=60, n_packets=5884, n_bytes=561698, idle_age=15, priority=3 actions=NORMAL ### Now let's look at the DATAPATH again
# ovs-dpctl show
port 2: eth1
port 7: qg-997f3a2c-2b (internal)
# arp -a
? (192.168.30.1) at 00:50:56:c0:00:08 [ether] on eth0
? (192.168.30.2) at 00:50:56:e3:37:85 [ether] on eth0
### We are now on the external segment, and this time the packet is sent to the external gateway 192.168.30.2; here is the final DATAPATH that carries it out
# ovs-dpctl dump-flows system@ovs-system
recirc_id(0),in_port(7),eth(src=fa:16:3e:e8:89:77,dst=00:50:56:e3:37:85),eth_type(0x0800),ipv4(frag=no), packets:3, bytes:294, used:2.682s, actions:2
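
As a last check (not part of the original capture), the same packet can be confirmed on the physical NIC: with -e, tcpdump also prints the link-level headers, so the destination MAC should read 00:50:56:e3:37:85, the VMware NAT gateway found in the ARP table above:

### The ping should leave eth1 with source 192.168.30.107
# tcpdump -eni eth1 icmp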