OVS/OVN Command Cheat Sheet
This article documents OVN and OVS commands with examples. It's a living document that grows as I learn more.
Host Aliases
Most commands are run from an OpenStack controller. Because OVN runs inside the ovn_controller container, we need some setup first to allow commands to be run from the host.
RHOSP13 - RHOSP16:
# Pick the container runtime: docker if it is running, otherwise podman
if [ -f "/var/run/docker.pid" ]; then export containerTool=docker; else export containerTool=podman; fi
# The Southbound DB address is stored in the local OVS config; the Northbound DB listens on 6641 instead of 6642
export SBDB=$(ovs-vsctl get open . external_ids:ovn-remote | sed -e 's/\"//g')
export NBDB=$(ovs-vsctl get open . external_ids:ovn-remote | sed -e 's/\"//g' | sed -e 's/6642/6641/g')
alias ovn-sbctl='$containerTool exec ovn_controller ovn-sbctl --db=$SBDB'
alias ovn-nbctl='$containerTool exec ovn_controller ovn-nbctl --db=$NBDB'
alias ovn-trace='$containerTool exec ovn_controller ovn-trace --db=$SBDB'
alias ovn-appctl='$containerTool exec ovn_controller ovn-appctl'
# ovn-detrace reads a trace from stdin, so copy the input into the container first
alias ovn-detrace='cat >/tmp/trace && $containerTool cp /tmp/trace ovn_controller:/tmp/trace && $containerTool exec -it ovn_controller bash -c "ovn-detrace --ovnsb=$SBDB --ovnnb=$NBDB </tmp/trace"'
RHOSP17:
alias ovn-nbctl="podman exec -it ovn_cluster_north_db_server ovn-nbctl --no-leader-only"
alias ovn-sbctl="podman exec -it ovn_cluster_south_db_server ovn-sbctl --no-leader-only"
alias ovn-appctl="podman exec -it ovn_controller ovn-appctl"
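With the aliases defined, a quick sanity check confirms connectivity to both databases:
ovn-nbctl show
ovn-sbctl show
ovn-nbctl show lists the logical switches and routers from the Northbound DB, and ovn-sbctl show lists the chassis and their port bindings from the Southbound DB.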
ovs-vsctl show
0cb8a84f-9f0f-43ef-b622-0ae3ec9f67c4
    Bridge br-ex
        fail_mode: standalone [1]
        Port patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int
            Interface patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a}
        Port vlan10
            tag: 10
            Interface vlan10
                type: internal
        Port vlan11
            tag: 11
            Interface vlan11
                type: internal
        Port ens19
            Interface ens19
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        fail_mode: secure [1]
        Port br-int
            Interface br-int
                type: internal
        Port patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a
            Interface patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a
                type: patch
                options: {peer=patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int}
        Port ovn-107da9-0
            Interface ovn-107da9-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.242"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="0", forwarding="false", remote_diagnostic="No Diagnostic", remote_state=down, state=down}
    Bridge br-tenant
        fail_mode: standalone [1]
        Port br-tenant
            Interface br-tenant
                type: internal
        Port vlan704
            tag: 704
            Interface vlan704
                type: internal
    ovs_version: "2.13.4"
[1] Controller Failure Settings:
When a controller is configured, it is, ordinarily, responsible for setting up all flows on the switch. Thus, if the connection to the controller fails, no new network connections can be set up. If the connection to the controller stays down long enough, no packets can pass through the switch at all.
If the value is standalone, or if neither of these settings is set, ovs-vswitchd will take over responsibility for setting up flows when no message has been received from the controller for three times the inactivity probe interval. In this mode, ovs-vswitchd causes the datapath to act like an ordinary MAC-learning switch. ovs-vswitchd will continue to retry connecting to the controller in the background and, when the connection succeeds, it discontinues its standalone behavior.
If this option is set to secure, ovs-vswitchd will not set up flows on its own when the controller connection fails.
get-fail-mode bridge
    Prints the configured failure mode.
del-fail-mode bridge
    Deletes the configured failure mode.
set-fail-mode bridge standalone|secure
    Sets the configured failure mode.
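For example, to print the failure mode of br-int (set to secure in the output above) and set it explicitly:
ovs-vsctl get-fail-mode br-int
ovs-vsctl set-fail-mode br-int secure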
ovs-vsctl list bridge br-int
[root@overcloud-controller-0 heat-admin] ovs-vsctl list bridge br-int
_uuid : bf713c93-e17a-4cc8-8222-10bd804b9fb4
auto_attach : []
controller : []
datapath_id : "0000929535fc92c8"
datapath_type : ""
datapath_version : "<unknown>"
external_ids : {ct-zone-1129d490-07cc-402f-bbe9-24dc6700b2a4_dnat="7", ct-zone-1129d490-07cc-402f-bbe9-24dc6700b2a4_snat="3", ct-zone-5f2e8302-eb17-4490-bed5-b27ac889b969_dnat="1", ct-zone-5f2e8302-eb17-4490-bed5-b27ac889b969_snat="6", ct-zone-d18aca1a-dcd2-4a05-9827-c11fdb5c27d7_dnat="4", ct-zone-d18aca1a-dcd2-4a05-9827-c11fdb5c27d7_snat="2", ct-zone-provnet-23a82ee4-e05b-4832-8130-60c3db42448a="5", ovn-nb-cfg="74671"}
fail_mode : secure
flood_vlans : []
flow_tables : {}
ipfix : []
mcast_snooping_enable: false
mirrors : []
name : br-int
netflow : []
other_config : {disable-in-band="true", hwaddr="92:95:35:fc:92:c8"}
ports : [28f40cf8-9367-4a43-afec-0d04c940fa8e, a7f80b28-0ce6-48d5-919d-c5e5880d4d01, d0c2aca1-da6e-472e-805e-616d9691462f]
protocols : []
rstp_enable : false
rstp_status : {}
sflow : []
status : {}
stp_enable : false
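Individual columns can also be fetched directly with get, which is handy in scripts:
ovs-vsctl get bridge br-int fail_mode
ovs-vsctl get bridge br-int other_config:hwaddr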
ovs-ofctl dump-flows br-int
cookie=0x3e9c9b62, duration=1418815.342s, table=0, n_packets=17, n_bytes=1180, priority=150,in_port="patch-br-int-to",dl_vlan=11 actions=strip_vlan,load:0x5->NXM_NX_REG13[],load:0x7->NXM_NX_REG11[],load:0x3->NXM_NX_REG12[],load:0x2->OXM_OF_METADATA[],load:0x1->NXM_NX_REG14[],resubmit(,8)
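dump-flows also accepts a flow match expression, which helps when narrowing down a specific flow, for example by table or by cookie:
ovs-ofctl dump-flows br-int table=0
ovs-ofctl dump-flows br-int cookie=0x3e9c9b62/-1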
ovs-ofctl del-flows br-int
[root@overcloud-controller-0 heat-admin] ovs-ofctl dump-flows br-int | wc -l
534
[root@overcloud-controller-0 heat-admin] ovs-ofctl del-flows br-int
[root@overcloud-controller-0 heat-admin] ovs-ofctl dump-flows br-int | wc -l
1
[root@overcloud-controller-0 heat-admin] systemctl restart tripleo_ovn_controller.service
[root@overcloud-controller-0 heat-admin] ovs-ofctl dump-flows br-int | wc -l
534
INFO: We can see that after deleting the flows, we were able to simply restart ovn_controller to restore them.
ovn-appctl -t ovn-controller recompute
Triggers a full compute iteration in ovn-controller based on the contents of the Southbound database and the local OVS database.
This command is intended to be used only in the event of a bug in the incremental processing engine in ovn-controller, to avoid inconsistent states. It should therefore be used with care, as full recomputes are CPU intensive.
[root@overcloud-controller-0 heat-admin] ovn-appctl -t ovn-controller recompute
[no output]
Check that all versions match
Container versions:
source stackrc
(undercloud) [stack@director ~]$ tripleo-ansible-inventory --stack $(openstack stack list -c "Stack Name" -f value) --static-yaml-inventory inventory.yaml
(undercloud) [stack@director ~]$ ansible -i inventory.yaml overcloud -m shell -a 'for i in $(podman ps --filter name=ovn -q); do podman inspect $i | jq .[].Config.Labels.version -r; done' -b
RPM versions:
ansible -i inventory.yaml overcloud -m shell -a 'for i in $(podman ps --filter name=ovn -q); do podman exec $i rpm -qa | grep ovn; done' -b
neutron-ovn-db-sync-util
This synchronizes the Neutron database with the OVN Northbound database.
Generate Ansible inventory file if you don’t already have one:
source stackrc
tripleo-ansible-inventory --stack <stack_name> --static-yaml-inventory inventory.yaml
Add an iptables rule to block both Neutron API ports (9696 and 13696):
ansible -i inventory.yaml neutron_api -m shell -ba 'iptables -I INPUT 1 -p tcp -m multiport --dports 13696,9696 -m comment --comment "Blocking Neutron API" -j DROP'
Run the neutron-ovn-db-sync-util command on one of the nodes running neutron_api from the previous output; in this example it's controller-0:
ansible -i inventory.yaml controller-0 -m shell -ba 'podman exec -it neutron_api neutron-ovn-db-sync-util --config-file /usr/share/neutron/neutron-dist.conf --config-dir /usr/share/neutron/server --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugin.ini --config-dir /etc/neutron/conf.d/common --config-dir /etc/neutron/conf.d/neutron-server --ovn-neutron_sync_mode=repair'
Remove the rule added earlier:
ansible -i inventory.yaml neutron_api -m shell -ba 'iptables -D INPUT $(sudo iptables -L --line-numbers | grep -m 1 "Blocking Neutron API" | cut -d " " -f1)'
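With the rule removed, any read-only API call confirms Neutron is answering again, for example:
openstack network list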
Check CPU usage on the master node for the ovn_northd pacemaker service
grep -r "CPU usage" /var/log/containers/openvswitch/ovn-northd.log
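The same check can be run across all controllers with Ansible, mirroring the Southbound check below:
ansible -i inventory.yaml Controller -m shell -a 'grep "CPU usage" /var/log/containers/openvswitch/ovn-northd.log | wc -l' -b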
Check the CPU usage on the master node for the Southbound ovsdb database (the pattern below matches log lines reporting 90-99% CPU usage)
ansible -i inventory.yaml Controller -m shell -a 'grep "9.% CPU usage" /var/log/containers/openvswitch/ovsdb-server-sb.log | wc -l' -b
control01-beaker-tok02 | CHANGED | rc=0 >>
0
control02-beaker-tok02 | CHANGED | rc=0 >>
1763
control00-beaker-tok02 | CHANGED | rc=0 >>
0
Get a snapshot of OVN resources
ovn-nbctl list ACL | grep uuid | wc -l
ovn-nbctl list Logical_Switch | grep uuid | wc -l
ovn-nbctl list Logical_Router | grep uuid | wc -l
ovn-nbctl list Logical_Switch_Port | grep uuid | wc -l
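The same counts can be collected in one loop; a small convenience sketch over the tables above:
for table in ACL Logical_Switch Logical_Router Logical_Switch_Port; do
    echo -n "$table: "
    ovn-nbctl list $table | grep -c _uuid
done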
Applying QoS to a port:
Create a QoS policy and add bandwidth-limit rules to it (the policy shown below carries both an egress and an ingress rule):
$ openstack network qos policy create bw-limiter
$ openstack network qos rule create --max-kbps 5000 --max-burst-kbits 5000 bw-limiter --type bandwidth-limit --egress
$ openstack network qos rule create --max-kbps 5000 --max-burst-kbits 5000 bw-limiter --type bandwidth-limit --ingress
$ openstack network qos policy show bw-limiter -f yaml
description: ''
id: b6883855-d504-4a21-92bf-31fc4a83b093
is_default: false
location:
  cloud: ''
  project:
    domain_id: null
    domain_name: Default
    id: 723755564280469caac6ef55d8ef8e0d
    name: test
  region_name: regionOne
  zone: null
name: bw-limiter
project_id: 723755564280469caac6ef55d8ef8e0d
rules:
- direction: egress
  id: 211aaeab-f2c3-4645-9cf7-156f95f57530
  max_burst_kbps: 5000
  max_kbps: 5000
  qos_policy_id: b6883855-d504-4a21-92bf-31fc4a83b093
  type: bandwidth_limit
- direction: ingress
  id: 7d62812c-70f4-401e-91ba-406141f3f59a
  max_burst_kbps: 5000
  max_kbps: 5000
  qos_policy_id: b6883855-d504-4a21-92bf-31fc4a83b093
  type: bandwidth_limit
shared: false
tags: []
Find the port you would like to attach it to and set it:
$ openstack port list --fixed-ip ip-address=192.168.123.132 -f yaml
- Fixed IP Addresses:
  - ip_address: 192.168.123.132
    subnet_id: 930f58ff-8e1e-4b1f-a72c-b960dc110f6f
  ID: 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa
  MAC Address: fa:16:3e:71:ca:bf
  Name: ''
  Status: ACTIVE
$ openstack port set --qos-policy bw-limiter 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa
$ openstack port show 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa -c qos_policy_id -f yaml
qos_policy_id: b6883855-d504-4a21-92bf-31fc4a83b093
From the OVN controller container we can see the QoS policy on the port:
$ podman exec -it ovn_controller bash
First get the logical switch from the port ID:
$ ovn-nbctl lsp-get-ls 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa
1cccc492-8a8c-43e7-a658-81b791278ad9 (neutron-9fbf7c22-369f-4088-8d2a-0b506384f1e7)
We can then pass that to the qos-list command to list the QoS rules on the switch's ports:
$ ovn-nbctl qos-list 1cccc492-8a8c-43e7-a658-81b791278ad9 | grep 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa
from-lport 2002 (inport == "7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa") rate=5000 burst=5000
to-lport 2002 (outport == "7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa") rate=5000 burst=5000
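The policy can later be detached from the port again with port unset:
$ openstack port unset --qos-policy 7d7c7ab8-b198-4aa2-a8a6-8a3e291260fa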
Checking the number of connections to the southbound database:
ansible -i inventory.yaml Controller -bm shell -a 'podman exec -it $(podman ps -qf name=ovn-dbs-bundle) ovn-sbctl get Connection . status'
controller-1 | CHANGED | rc=0 >>
{bound_port="6642", sec_since_connect="0", sec_since_disconnect="0"}
controller-2 | CHANGED | rc=0 >>
{bound_port="6642", sec_since_connect="0", sec_since_disconnect="0"}
controller-0 | CHANGED | rc=0 >>
{bound_port="6642", n_connections="89", sec_since_connect="0", sec_since_disconnect="0"}
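Note that only controller-0, the node currently running the master copy of the database, reports n_connections here.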
Notes
ovn_controller programs the OVS instance running on each host using OpenFlow rules
ovn_controller runs in containers on the overcloud, while Open vSwitch itself runs on the host as a systemd service