Problem description
I performed a cluster node installation using this guide, the OpenStack Charms Deployment Guide (https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/latest/install-maas.html). The network type is a flat network, and the components used are:
- MAAS
- Juju
- OpenStack
My lab is composed of the following devices:
1 IBM System 3540 M4, MAAS (500 GB HDD, 8 GB RAM, 1 NIC)
1 IBM System 3540 M4, Juju (500 GB HDD, 8 GB RAM, 1 NIC)
4 IBM System 3540 M4, OpenStack (2x 500 GB HDD, 16 GB RAM, 2 NICs)
1 Palo Alto Network Firewall
Public Network 10.20.81.0/24 - Private Network 10.0.0.0/24
MAAS: 10.20.81.1
Juju: 10.20.81.2
OpenStack: 10.20.81.21-24
Gateway: 10.20.81.254
Instance: 10.0.0.9 - 10.20.81.215 (floating)
Network plan

                        10.20.81.0/24
                       +--------------+
                       |   Firewall   |
                       | 10.20.81.254 |
                       +-------+------+
                               |
   +---------------------------+---------------------------------+
   |                         Switch                               |
   |       vlan81              vlan81              vlan81         |
   +----------+------------------+-------------------+-----------+
              |                  |                   |
   +----------+-----+   +--------+-------+   +-------+----------+
   |   Maas+Juju    |   |    Juju Gui    |   |    Openstack     |
   |   10.20.81.1   |   |   10.20.81.2   |   |  10.20.81.21-24  |
   +----------------+   +----------------+   +--------+---------+
                                                      |
           +------------------------------------------+-----+
           |  Private Subnet-1             Public Subnet-2   |
           |  10.0.0.0/24                  10.20.81.0/24     |
           +-----+----------+              +--------+--------+
                 |          |                       |
                 |          +------+ VR +-----------+
                 |
              +--+--+
              | VM  |
              | .9  |
              +-----+
In my lab, each OpenStack node has two Ethernet interfaces: the first (eno2) carries the single external network used for floating IPs, and the other (eno3) is dedicated to the private network.
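To confirm which interface holds which role on a node, something like this can be run (a sketch; the device names eno2/eno3 are from this deployment, the role comments reflect the intent described above):

$ ip -br addr show dev eno2   # intended external / floating-IP network
$ ip -br addr show dev eno3   # intended private network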
In the Juju GUI I have the following:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno2
neutron-api:
flat-network-providers: physnet1
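For reference, the same options can be read back or set from the Juju CLI instead of the GUI (a sketch using the option names shown above; passing only a key prints the current value, key=value sets it):

$ juju config neutron-gateway bridge-mappings
$ juju config neutron-gateway data-port=br-ex:eno2
$ juju config neutron-api flat-network-providers=physnet1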
I opened this post, https://ask.openstack.org/en/question/119783/no-route-to-instance-ssh-and-ping-no-route-to-host/, to resolve the problem with ping and SSH connectivity to my instance, but during the same checks I noticed this issue on the neutron-gateway:
error: "could not add network device eno2 to ofproto (Device or resource busy)"
Maybe that is the cause of my first issue, but I don't understand how I can fix it.
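A first step is to find out what is holding the device; "Device or resource busy" from ofproto usually means the NIC is already enslaved to another bridge (and the MOTD below does show a Linux bridge br-eno2 carrying the host IP). A diagnostic sketch, assuming the device names above:

$ ip -o link show eno2             # a "master br-eno2" entry means eno2 is enslaved to that bridge
$ bridge link show                 # lists all ports enslaved to Linux bridges
$ sudo ovs-vsctl list-ports br-ex  # what OVS thinks br-ex owns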
$:juju ssh neutron-gateway/0
Welcome to Ubuntu 18.04.2 LTS (GNU/Linux 4.15.0-46-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
System information as of Tue Mar 19 16:07:19 UTC 2019
System load: 0.64 Processes: 409
Usage of /: 5.7% of 273.00GB Users logged in: 0
Memory usage: 13% IP address for lxdbr0: 10.122.135.1
Swap usage: 0% IP address for br-eno2: 10.20.81.21
Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud
3 packages can be updated.
0 updates are security updates.
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
ovs-vsctl show output
ubuntu@os-compute01:~$ sudo ovs-vsctl show
6f8542aa-45d7-409d-8787-8983f3c643eb
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-ex
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "eno2"
            Interface "eno2"
                error: "could not add network device eno2 to ofproto (Device or resource busy)"
        Port br-ex
            Interface br-ex
                type: internal
        Port phy-br-ex
            Interface phy-br-ex
                type: patch
                options: {peer=int-br-ex}
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port br-tun
            Interface br-tun
                type: internal
        Port "gre-0a145118"
            Interface "gre-0a145118"
                type: gre
                options: {df_default="true", in_key=flow, local_ip="10.20.81.21", out_key=flow, remote_ip="10.20.81.24"}
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port "tapb0b04b07-8f"
            tag: 2
            Interface "tapb0b04b07-8f"
        Port int-br-ex
            Interface int-br-ex
                type: patch
                options: {peer=phy-br-ex}
        Port "tap2354468c-88"
            tag: 4
            Interface "tap2354468c-88"
        Port "tap6d2b2fe0-47"
            tag: 4
            Interface "tap6d2b2fe0-47"
    ovs_version: "2.10.0"
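Clearing the errored port by hand would look something like this (a sketch, not from the original post; note that the neutron-gateway charm manages br-ex itself, so manual changes may be reverted, and the data-port change shown in the answer below is the cleaner route):

$ sudo ip link set eno2 nomaster       # detach eno2 from any Linux bridge (e.g. br-eno2)
$ sudo ovs-vsctl del-port br-ex eno2   # remove the errored port record
$ sudo ovs-vsctl add-port br-ex eno2   # try adding the NIC again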
juju status output
$:juju status
Model Controller Cloud/Region Version SLA Timestamp
openstack maas-cloud-controller maas-cloud 2.5.1 unsupported 22:10:17Z
App Version Status Scale Charm Store Rev OS Notes
ceph-mon 13.2.4+dfsg1 active 3 ceph-mon jujucharms 31 ubuntu
ceph-osd 13.2.4+dfsg1 active 3 ceph-osd jujucharms 273 ubuntu
ceph-radosgw 13.2.4+dfsg1 active 1 ceph-radosgw jujucharms 262 ubuntu
cinder 13.0.2 active 1 cinder jujucharms 276 ubuntu
cinder-ceph 13.0.2 active 1 cinder-ceph jujucharms 238 ubuntu
glance 17.0.0 active 1 glance jujucharms 271 ubuntu
keystone 14.0.1 active 1 keystone jujucharms 288 ubuntu
mysql 5.7.20-29.24 active 1 percona-cluster jujucharms 272 ubuntu
neutron-api 13.0.2 active 1 neutron-api jujucharms 266 ubuntu
neutron-gateway 13.0.2 active 1 neutron-gateway jujucharms 256 ubuntu
neutron-openvswitch 13.0.2 active 3 neutron-openvswitch jujucharms 255 ubuntu
nova-cloud-controller 18.0.3 active 1 nova-cloud-controller jujucharms 316 ubuntu
nova-compute 18.0.3 active 3 nova-compute jujucharms 290 ubuntu
ntp 3.2 active 4 ntp jujucharms 31 ubuntu
openstack-dashboard 14.0.1 active 1 openstack-dashboard jujucharms 271 ubuntu
rabbitmq-server 3.6.10 active 1 rabbitmq-server jujucharms 82 ubuntu
Unit Workload Agent Machine Public address Ports Message
ceph-mon/0 active idle 1/lxd/0 10.20.81.4 Unit is ready and clustered
ceph-mon/1 active idle 2/lxd/0 10.20.81.8 Unit is ready and clustered
ceph-mon/2* active idle 3/lxd/0 10.20.81.5 Unit is ready and clustered
ceph-osd/0 active idle 1 10.20.81.23 Unit is ready (1 OSD)
ceph-osd/1 active idle 2 10.20.81.22 Unit is ready (1 OSD)
ceph-osd/2* active idle 3 10.20.81.24 Unit is ready (1 OSD)
ceph-radosgw/0* active idle 0/lxd/0 10.20.81.15 80/tcp Unit is ready
cinder/0* active idle 1/lxd/1 10.20.81.18 8776/tcp Unit is ready
cinder-ceph/0* active idle 10.20.81.18 Unit is ready
glance/0* active idle 2/lxd/1 10.20.81.6 9292/tcp Unit is ready
keystone/0* active idle 3/lxd/1 10.20.81.20 5000/tcp Unit is ready
mysql/0* active idle 0/lxd/1 10.20.81.17 3306/tcp Unit is ready
neutron-api/0* active idle 1/lxd/2 10.20.81.7 9696/tcp Unit is ready
neutron-gateway/0* active idle 0 10.20.81.21 Unit is ready
ntp/0* active idle 10.20.81.21 123/udp chrony: Ready
nova-cloud-controller/0* active idle 2/lxd/2 10.20.81.3 8774/tcp,8775/tcp,8778/tcp Unit is ready
nova-compute/0 active idle 1 10.20.81.23 Unit is ready
neutron-openvswitch/1 active idle 10.20.81.23 Unit is ready
ntp/2 active idle 10.20.81.23 123/udp chrony: Ready
nova-compute/1 active idle 2 10.20.81.22 Unit is ready
neutron-openvswitch/2 active idle 10.20.81.22 Unit is ready
ntp/3 active idle 10.20.81.22 123/udp chrony: Ready
nova-compute/2* active idle 3 10.20.81.24 Unit is ready
neutron-openvswitch/0* active idle 10.20.81.24 Unit is ready
ntp/1 active idle 10.20.81.24 123/udp chrony: Ready
openstack-dashboard/0* active idle 3/lxd/2 10.20.81.19 80/tcp,443/tcp Unit is ready
rabbitmq-server/0* active idle 0/lxd/2 10.20.81.16 5672/tcp Unit is ready
Machine State DNS Inst id Series AZ Message
0 started 10.20.81.21 nbe8q3 bionic Openstack Deployed
0/lxd/0 started 10.20.81.15 juju-26461e-0-lxd-0 bionic Openstack Container started
0/lxd/1 started 10.20.81.17 juju-26461e-0-lxd-1 bionic Openstack Container started
0/lxd/2 started 10.20.81.16 juju-26461e-0-lxd-2 bionic Openstack Container started
1 started 10.20.81.23 pdnc7c bionic Openstack Deployed
1/lxd/0 started 10.20.81.4 juju-26461e-1-lxd-0 bionic Openstack Container started
1/lxd/1 started 10.20.81.18 juju-26461e-1-lxd-1 bionic Openstack Container started
1/lxd/2 started 10.20.81.7 juju-26461e-1-lxd-2 bionic Openstack Container started
2 started 10.20.81.22 yxkyet bionic Openstack Deployed
2/lxd/0 started 10.20.81.8 juju-26461e-2-lxd-0 bionic Openstack Container started
2/lxd/1 started 10.20.81.6 juju-26461e-2-lxd-1 bionic Openstack Container started
2/lxd/2 started 10.20.81.3 juju-26461e-2-lxd-2 bionic Openstack Container started
3 started 10.20.81.24 bgqsdy bionic Openstack Deployed
3/lxd/0 started 10.20.81.5 juju-26461e-3-lxd-0 bionic Openstack Container started
3/lxd/1 started 10.20.81.20 juju-26461e-3-lxd-1 bionic Openstack Container started
3/lxd/2 started 10.20.81.19 juju-26461e-3-lxd-2 bionic Openstack Container started
Any suggestions, please? I am still unable to solve the problem. Thanks.
Answer
Update 26/03/19:
In the Juju GUI I had the following:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno2
neutron-api:
flat-network-providers: physnet1
Before deploying OpenStack, I changed data-port from br-ex:eno2 to br-ex:eno3:
neutron-gateway:
bridge-mappings: physnet1:br-ex
data-port: br-ex:eno3
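On an already-deployed model, the same change can be made from the CLI instead of redeploying (a sketch; the charm should reconfigure the bridge when the value changes):

$ juju config neutron-gateway data-port=br-ex:eno3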
The issue on eno2 has been resolved, but the instance still cannot be pinged.
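If the bridge is healthy but ping still fails, a common remaining cause is the project's default security group blocking ICMP and SSH. A hedged sketch with the standard openstack client (the group name "default" is the usual one; adjust to your project):

$ openstack security group rule create --protocol icmp default
$ openstack security group rule create --protocol tcp --dst-port 22 default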