Tracing Packets Out of an External Network With OVN


This article documents how to trace packets out of an external network in RHOSP16 with ml2_ovn

Tip
This document is WIP

Gather Information

First we need to collect some information about the environment, instance and network:

Instance info:

$ openstack server show \
  -c OS-EXT-SRV-ATTR:host \
  -c OS-EXT-SRV-ATTR:hostname \
  -c OS-EXT-SRV-ATTR:instance_name \
  -c addresses \
  -f yaml \
  cirros-external

OS-EXT-SRV-ATTR:host: overcloud-novacompute-0.localdomain
OS-EXT-SRV-ATTR:hostname: cirros-external
OS-EXT-SRV-ATTR:instance_name: instance-00000016
addresses: external-network=172.21.11.168

With the IP address we can get the network UUID and other details:

$ openstack network show \
  -c name \
  -c id \
  -c provider:segmentation_id \
  -c router:external \
  -f yaml \
  external-network

id: 88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c
name: external-network
provider:segmentation_id: 11
router:external: true

and the port information:

$ openstack port list \
  --fixed-ip ip-address=172.21.11.168 \
  -f yaml

- Fixed IP Addresses:
  - ip_address: 172.21.11.168
    subnet_id: 14b44337-1aa8-47ea-8bd0-e0d09ac010c7
  ID: ecd8fb74-f469-4c46-b202-dd807b9af37d
  MAC Address: fa:16:3e:fd:d3:b2
  Name: ''
  Status: ACTIVE

Finally, now that we have collected all the instance information, we can log in to the compute node hosting the instance and take a look at its interface(s):

$ ssh heat-admin@overcloud-novacompute-0.ctlplane
$ sudo su -
$ podman exec -it nova_libvirt bash
$ virsh list

 Id   Name                State
-----------------------------------
4    instance-00000016   running

$ virsh domiflist instance-00000016
 Interface        Type     Source   Model    MAC
----------------------------------------------------------------
 tapecd8fb74-f4   bridge   br-int   virtio   fa:16:3e:fd:d3:b2
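
As a quick aside, the tap device name is not random: it is normally "tap" plus the first 11 characters of the Neutron port UUID, so it can be derived without going through libvirt at all. A small sanity check using the port ID we collected earlier:

$ port_id=ecd8fb74-f469-4c46-b202-dd807b9af37d
$ echo "tap${port_id:0:11}"
tapecd8fb74-f4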
Note

Let's recap the information collected so far:

OS-EXT-SRV-ATTR:host: overcloud-novacompute-0.localdomain
OS-EXT-SRV-ATTR:hostname: cirros-external
OS-EXT-SRV-ATTR:instance_name: instance-00000016
addresses: external-network=172.21.11.168
id: 88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c
name: external-network
provider:segmentation_id: 11
router:external: true
- Fixed IP Addresses:
  - ip_address: 172.21.11.168
    subnet_id: 14b44337-1aa8-47ea-8bd0-e0d09ac010c7
  ID: ecd8fb74-f469-4c46-b202-dd807b9af37d
  MAC Address: fa:16:3e:fd:d3:b2
  Name: ''
  Status: ACTIVE
instance:tap-interface: tapecd8fb74-f4
instance:mac-address: fa:16:3e:fd:d3:b2
instance:bridge: br-int
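
To save retyping these long values in the commands that follow, it can be handy to stash them in shell variables on the compute node. A minimal sketch (the variable names are just a convenience for this article, nothing in the tooling expects them):

$ port_id=ecd8fb74-f469-4c46-b202-dd807b9af37d   # Neutron/OVN port UUID
$ net_id=88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c    # Neutron network UUID
$ mac=fa:16:3e:fd:d3:b2                          # instance MAC address
$ tap=tapecd8fb74-f4                             # tap device on br-int
$ vlan=11                                        # provider segmentation ID

The rest of the article spells the values out in full so the output is easier to follow.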

tcpdump the Tap Interface

With the information we have collected so far, we can start a ping from inside the instance out to 1.1.1.1 and see what we can learn. Using tcpdump on the hypervisor, targeting the instance's tap interface, we can see the packets making it out:

$ tcpdump -nnei tapecd8fb74-f4 icmp -c 2

06:28:44.111981 fa:16:3e:fd:d3:b2 > c8:d3:ff:a6:82:0f, ethertype IPv4 (0x0800), length 98: 172.21.11.168 > 1.1.1.1: ICMP echo request, id 62209, seq 31, length 64
06:28:44.116597 c8:d3:ff:a6:82:0f > fa:16:3e:fd:d3:b2, ethertype IPv4 (0x0800), length 98: 1.1.1.1 > 172.21.11.168: ICMP echo reply, id 62209, seq 31, length 64
2 packets captured
Note
Seeing packets on the tap interface is exactly the same as running tcpdump on the interface inside the instance; however, that's not always an option.
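
As an aside, the ping itself was started from inside the guest. If you only have hypervisor access, one rough way to do that (assuming the guest has a working serial console, as CirrOS images usually do) is:

$ virsh console instance-00000016   # run from inside the nova_libvirt container, as before
$ ping 1.1.1.1                      # inside the guest, after logging in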

Now that we can see packets are making it out of the instance and back again, how does that happen? Jumping forward a little, we can see the tap interface is part of the br-int bridge in OVS:

$ ovs-vsctl port-to-br tapecd8fb74-f4

br-int

We can also see that the bridge br-int contains some other tap interfaces used by other instances as well as “uplinks” to external networks:

$ ovs-vsctl list-ports br-int
ovn-4fdd83-0 <--- Tunnel to other network chassis
patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a <--- External network
patch-br-int-to-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49 <--- External network
tap1129d490-00 <--- Other instances 
tap570a190e-75 <--- Other instances
tapc51b6cd7-24 <--- Other instances
tapd18aca1a-d0 <--- Other instances
tapecd8fb74-f4  <--- Our tap interface

So if everything on this compute node is sitting on the same bridge, what prevents me from accessing other customers' instances running on the same compute node?

OVN-configured OpenFlow rules!!!
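
We won't read the raw OpenFlow tables by hand here, but if you want to eyeball the rules OVN has installed for our port, grepping the flow dump on br-int for the instance MAC is a reasonable starting point; a sketch (depending on the OpenFlow versions enabled on the bridge you may need to add -O OpenFlow13):

$ ovs-ofctl dump-flows br-int | grep fa:16:3e:fd:d3:b2 | head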

OVN Overview

Let's take a step back and check out the OVN configuration for our tap interface. We could look at all the resources in the northbound OVN database with ovn-nbctl show, but we already have all the details we need to look directly at the relevant port.

First we get the switch the Neutron port is connected to, then list the switch details:

$ ovn-nbctl lsp-get-ls ecd8fb74-f469-4c46-b202-dd807b9af37d
4f6d9037-6969-4086-9124-bc2c64ee9476 (neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c)

$ ovn-nbctl show 4f6d9037-6969-4086-9124-bc2c64ee9476
switch 4f6d9037-6969-4086-9124-bc2c64ee9476 (neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c) (aka external-network)
    port provnet-23a82ee4-e05b-4832-8130-60c3db42448a
        type: localnet
        tag: 11
        addresses: ["unknown"]
    port f51d83a3-3af7-4d15-a14c-8c4e73e21a03
        addresses: ["fa:16:3e:2f:73:80 172.21.11.144"]
    port ecd8fb74-f469-4c46-b202-dd807b9af37d                 <--- Here is our port information
        addresses: ["fa:16:3e:fd:d3:b2 172.21.11.168"]        <--- Here is our port information
    port 76de5eec-82cc-4600-990e-2dc95ea6e106
        type: router
        router-port: lrp-76de5eec-82cc-4600-990e-2dc95ea6e106
    port 4258bc9a-89d4-4e53-8213-340309de7787
        addresses: ["fa:16:3e:81:3d:e7 172.21.11.132"]
    port 4fdd36b4-b318-43c0-812d-69ba9eee5aa6
        type: localport
        addresses: ["fa:16:3e:0c:ff:3d 172.21.11.100"]

Looking at the southbound database we can see that the port is indeed bound to this compute node:

$ ovn-sbctl show 
Chassis "4fdd83d2-63b4-4a98-a24f-b1cdb7a57136"
    hostname: overcloud-controller-0.localdomain
    Encap geneve
        ip: "172.16.0.69"
        options: {csum="true"}
Chassis "107da918-4d12-4661-b363-88bc3da6f367"
    hostname: overcloud-novacompute-0.localdomain
    Encap geneve
        ip: "172.16.0.242"
        options: {csum="true"}
    Port_Binding "ecd8fb74-f469-4c46-b202-dd807b9af37d" <--- Here is our port
    Port_Binding "edd406e1-8035-4418-8f10-c0b89d7c4df2"
    Port_Binding "c51b6cd7-244f-4d2f-8281-ba7010ced1cf"
    Port_Binding "4fdd36b4-b318-43c0-812d-69ba9eee5aa6"
    Port_Binding "570a190e-75c8-4e6b-b6ee-8cfb05971240"

We can take a look at the OVN southbound logical flows, but they're really hard to read and there are tonnes of lines; luckily we have some tools to help:

$ ovn-sbctl lflow-list neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c |wc -l
182
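
Rather than reading all 182 lines, one trick is to grep the logical flows for our port UUID or the instance MAC so we only see the flows that reference them; a sketch:

$ ovn-sbctl lflow-list neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c \
    | grep -E 'ecd8fb74-f469-4c46-b202-dd807b9af37d|fa:16:3e:fd:d3:b2'

The real help, though, comes from ovn-trace, which walks these flows for us.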

Using ovn-trace

Let's create an ovn-trace command to see what the OVN logical flows want to do with our packet. For this we will need some of the information we have captured so far:

switch: OVN switch for our Neutron network
inport: OVN port of our instance
eth.src: Source MAC from tcpdump
eth.dst: Dest MAC from tcpdump
ip4.src: Source IP from tcpdump
ip4.dst: Destination IP from tcpdump
ip.ttl: Time to live for the packet

Now that we have gathered the required information, we can execute the ovn-trace command:

$ ovn-trace --summary neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c 'inport == "ecd8fb74-f469-4c46-b202-dd807b9af37d" && eth.src == fa:16:3e:fd:d3:b2 && eth.dst == c8:d3:ff:a6:82:0f && ip4.src == 172.21.11.168 && ip4.dst == 1.1.1.1 && ip.ttl == 32'
# ip,reg14=0x6,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=32
ingress(dp="external-network", inport="ecd8fb") {
    next;
    next;
    reg0[0] = 1;
    next;
    ct_next;
    ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
        reg0[8] = 1;
        reg0[10] = 1;
        next;
        next;
        outport = get_fdb(eth.dst);
        next;
        outport = "_MC_unknown";
        output;
        multicast(dp="external-network", mcgroup="_MC_unknown") {
            egress(dp="external-network", inport="ecd8fb", outport="provnet-23a82e") {
                next;
                next;
                reg0[8] = 1;
                reg0[10] = 1;
                next;
                output;
                /* output to "provnet-23a82e", type "localnet" */;
            };
        };
    };
};
Note
So now we know that the packet “should” go instance -> br-int -> br-ex -> provnet-23a82e; however, we are missing a physical interface in that flow. So far we have only looked at the OVN configuration; we will need to look at OVS to get the full picture.

We have a lot of output here; we can change the output flag from --summary to --minimal to show only where the packet would end up. If it wasn't clear before, now it is: the packet is going to head out provnet-23a82e.

$ ovn-trace --minimal neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c 'inport == "ecd8fb74-f469-4c46-b202-dd807b9af37d" && eth.src == fa:16:3e:fd:d3:b2 && eth.dst == c8:d3:ff:a6:82:0f && ip4.src == 172.21.11.168 && ip4.dst == 1.1.1.1 && ip.ttl == 32'
# ip,reg14=0x6,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_proto=0,nw_tos=0,nw_ecn=0,nw_ttl=32
ct_next(ct_state=est|trk /* default (use --ct to customize) */) {
    output("provnet-23a82e");
};

Looking at the ovn-nbctl show 4f6d9037-6969-4086-9124-bc2c64ee9476 output from above, we can see that this external network is using VLAN 11, which matches our external network configured in Neutron.

$ ovn-nbctl show 4f6d9037-6969-4086-9124-bc2c64ee9476
switch 4f6d9037-6969-4086-9124-bc2c64ee9476 (neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c) (aka external-network)
    port provnet-23a82ee4-e05b-4832-8130-60c3db42448a
        type: localnet
        tag: 11
        addresses: ["unknown"]
[...]

OVS Overview

We can confirm the physical side of provnet-23a82ee4-e05b-4832-8130-60c3db42448a. In the ovn-bridge-mappings we can see that our Neutron network is created using the datacentre physical network, which is mapped to br-ex:

$ ovs-vsctl get open . external_ids
{hostname=overcloud-novacompute-0.localdomain, ovn-bridge=br-int, ovn-bridge-mappings="datacentre:br-ex", ovn-encap-ip="172.16.0.242", ovn-encap-type=geneve, ovn-match-northd-version="true", ovn-openflow-probe-interval="60", ovn-remote="tcp:172.16.2.149:6642", ovn-remote-probe-interval="60000", rundir="/var/run/openvswitch", system-id="107da918-4d12-4661-b363-88bc3da6f367"}
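
If you only want the bridge mappings rather than the full external_ids dump, you can ask for just that key; a quick sketch:

$ ovs-vsctl get open . external_ids:ovn-bridge-mappings
"datacentre:br-ex"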

Here we can see that br-int has a patch port to our provnet, which lives in br-ex:

$ ovs-vsctl show
2f5ffadb-f5c7-4ce5-854e-e549abf3a6c2
    Bridge br-ex
        fail_mode: standalone
        Port ens19                                                                            <--- Here
            Interface ens19                                                                   <--- Here
        Port vlan10
            tag: 10
            Interface vlan10
                type: internal
        Port br-ex
            Interface br-ex
                type: internal
        Port patch-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49-to-br-int
            Interface patch-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49-to-br-int
                type: patch
                options: {peer=patch-br-int-to-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49}
        Port patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int                     <--- Here 
            Interface patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int            <--- Here
                type: patch                                                                   <--- Here
                options: {peer=patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a}  <--- Here
    Bridge br-int
        fail_mode: secure
        datapath_type: system
        Port patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a                     <--- Here 
            Interface patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a            <--- Here
                type: patch                                                                   <--- Here
                options: {peer=patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int}  <--- Here
        Port tap570a190e-75
            Interface tap570a190e-75
        Port tap1129d490-00
            Interface tap1129d490-00
        Port tapecd8fb74-f4
            Interface tapecd8fb74-f4
        Port tapc51b6cd7-24
            Interface tapc51b6cd7-24
        Port tapd18aca1a-d0
            Interface tapd18aca1a-d0
        Port br-int
            Interface br-int
                type: internal
        Port ovn-4fdd83-0
            Interface ovn-4fdd83-0
                type: geneve
                options: {csum="true", key=flow, remote_ip="172.16.0.69"}
                bfd_status: {diagnostic="No Diagnostic", flap_count="0", forwarding="false", remote_diagnostic="No Diagnostic", remote_state=down, state=down}
        Port patch-br-int-to-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49
            Interface patch-br-int-to-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49
                type: patch
                options: {peer=patch-provnet-fd0f005a-21fe-4baa-8122-bd56c3524e49-to-br-int}
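
A quicker way to confirm a patch port's peer, without scrolling through the whole ovs-vsctl show output, is to read the peer option directly; a sketch:

$ ovs-vsctl get Interface patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a options:peer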

Testing our Assumptions

Let’s use tcpdump to test along the path we think the packet is taking:

patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a in br-int:

$ ovs-tcpdump -i patch-br-int-to-provnet-23a82ee4-e05b-4832-8130-60c3db42448a icmp
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovsmi712109, link-type EN10MB (Ethernet), capture size 262144 bytes
01:50:32.483069 IP 172.21.2.142 > 172.21.11.222: ICMP echo request, id 6, seq 429, length 64
01:50:33.484223 IP 172.21.2.142 > 172.21.11.222: ICMP echo request, id 6, seq 430, length 64
01:50:34.485844 IP 172.21.2.142 > 172.21.11.222: ICMP echo request, id 6, seq 431, length 64

patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int in br-ex

$ ovs-tcpdump -i patch-provnet-23a82ee4-e05b-4832-8130-60c3db42448a-to-br-int icmp
dropped privs to tcpdump
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on ovsmi604600, link-type EN10MB (Ethernet), capture size 262144 bytes

<empty output>

ens19 in br-ex

$ tcpdump -nnei ens19 vlan and icmp
12:43:10.269439 fa:16:3e:fd:d3:b2 > c8:d3:ff:a6:82:0f, ethertype 802.1Q (0x8100), length 102: vlan 11, p 0, ethertype IPv4, 172.21.11.168 > 1.1.1.1: ICMP echo request, id 2562, seq 51683, length 64
12:43:10.274050 c8:d3:ff:a6:82:0f > fa:16:3e:fd:d3:b2, ethertype 802.1Q (0x8100), length 102: vlan 11, p 0, ethertype IPv4, 1.1.1.1 > 172.21.11.168: ICMP echo reply, id 2562, seq 51683, length 64

br-ex

$ tcpdump -nnei br-ex icmp
<empty output>
Note
The reason we can't see the packets on br-ex is that the br-ex device tcpdump attaches to is just the OVS bridge's internal port; our packets are handled inside OVS and forwarded straight to ens19, so they never take the standard kernel path through that device.
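
If you want to see what the datapath is actually doing with these packets, you can dump the kernel datapath flow cache while the ping is running and grep for the instance MAC; a sketch (the exact match fields and actions will vary by environment):

$ ovs-dpctl dump-flows | grep fa:16:3e:fd:d3:b2

The push_vlan and output actions in the matching flow should line up with what ofproto/trace shows us next.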

ovs-appctl ofproto/trace

Now that we have had a good look at ovn-trace and the OVS overview, let's look at ovs-appctl ofproto/trace. Remember, OVN is programming OVS, so the OVN logical flows could be fine while something is still missing in the OVS OpenFlow rules.

First we need the Open vSwitch test package, which provides the ovs-tcpundump command:

$ dnf install openvswitch2.15-test

With the Open vSwitch test package installed we now have access to the ovs-tcpundump command. Given the hex output of a packet from tcpdump, it converts it into a single hexadecimal string, which can then be used with ovs-appctl ofproto/trace:

$ flow=$(tcpdump -nXXi tapecd8fb74-f4 icmp and dst host 1.1.1.1 -c1 | ovs-tcpundump)

$ echo $flow
c8d3ffa6820ffa163efdd3b20800450000542df64000400152f4ac150ba801010101080054ec3602d4fcb651e1c200000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000

With the hexadecimal packet captured we can pass it into ovs-appctl ofproto/trace. I have snipped the steps in the middle, but where the packet ends up is the important thing:

$ ovs-appctl ofproto/trace br-int in_port=`ovs-vsctl get Interface tapecd8fb74-f4 ofport` $flow
Flow: icmp,in_port=10,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0

bridge("br-int")
----------------
 1. in_port=10, priority 100, cookie 0xe2ed22b8
    set_field:0xb->reg13
    set_field:0x4->reg11
    set_field:0x2->reg12
    set_field:0x2->metadata
    set_field:0x6->reg14
    resubmit(,8)

[...]

        65. reg15=0x1,metadata=0x2, priority 100, cookie 0x3e9c9b62
            push_vlan:0x8100
            set_field:4107->vlan_vid
            output:3

            bridge("br-ex")
            ---------------
                 0. priority 0
                    NORMAL
                     -> forwarding to learned port
            pop_vlan
    set_field:0x8001->reg15

Final flow: recirc_id=0x845b,eth,icmp,reg0=0x300,reg11=0x4,reg12=0x2,reg13=0x5,reg14=0x6,reg15=0x8001,metadata=0x2,in_port=10,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
Megaflow: recirc_id=0x845b,ct_state=+new-est-rel-rpl-inv+trk,ct_label=0/0x1,eth,ip,in_port=10,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_dst=0.0.0.0/5,nw_frag=no
Datapath actions: ct(commit,zone=11,label=0/0x1),push_vlan(vid=11,pcp=0),5

Looking at the output we can see that the packet has made it through br-int and on to br-ex:

        65. reg15=0x1,metadata=0x2, priority 100, cookie 0x3e9c9b62
            push_vlan:0x8100
            set_field:4107->vlan_vid
            output:3   <--- Here

We can confirm that by checking ovs-ofctl:

$ ovs-ofctl show br-int | grep '3('
 3(patch-br-int-to): addr:52:8c:6b:0b:62:d5

Now that the packet is in br-ex, we can see in the Datapath actions section at the bottom that we have also pushed the VLAN ID of 11 onto the packet and sent it out port 5, which is ens19, our physical interface:

Datapath actions: ct(commit,zone=11,label=0/0x1),push_vlan(vid=11,pcp=0),5  <--- Here

For the datapath actions port we need to check ovs-dpctl rather than ovs-ofctl like before:

$ ovs-dpctl show |grep "port 5"
  port 5: ens19

ovn-detrace

Sadly the podman version shipped with RHOSP16.2 does not contain the fix for having containers read from piped stdin (https://github.com/containers/podman/pull/4818):

$ podman -v
podman version 1.6.4
$ echo test | podman exec -i ovn_controller cat
Error: read unixpacket @->/var/run/libpod/socket/021a1345cec913023cebac34aae03b4b2dda6eeb86e7ad4f35e1431050b8c2ea/attach: read: connection reset by peer

Due to this we need a dirty hack in order to use the ovn-detrace package installed inside the ovn_controller container, which I have included in my OVS/OVN command Cheat Sheet:

alias ovn-detrace='cat >/tmp/trace && podman cp /tmp/trace ovn_controller:/tmp/trace && podman exec -it ovn_controller bash -c "ovn-detrace --ovnsb=$SBDB --ovnnb=$NBDB </tmp/trace"'

ovs-appctl ofproto/trace br-int in_port=`ovs-vsctl get Interface tapecd8fb74-f4 ofport` $flow | ovn-detrace

What the ovn-detrace command does is take the output from the OVS ofproto/trace and combine it with the OVN logical flow information; here is a snippet:

$ ovs-appctl ofproto/trace br-int in_port=`ovs-vsctl get Interface tapecd8fb74-f4 ofport` $flow | ovn-detrace
Flow: icmp,in_port=10,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0

bridge("br-int")
----------------
1. in_port=10, priority 100, cookie 0xe2ed22b8
set_field:0xb->reg13
set_field:0x4->reg11
set_field:0x2->reg12
set_field:0x2->metadata
set_field:0x6->reg14
resubmit(,8)
  *  Logical datapath: "neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c" (1129d490-07cc-402f-bbe9-24dc6700b2a4)
  *  Port Binding: logical_port "ecd8fb74-f469-4c46-b202-dd807b9af37d", tunnel_key 6, chassis-name "107da918-4d12-4661-b363-88bc3da6f367", chassis-str "overcloud-novacompute-0.localdomain"
8. reg14=0x6,metadata=0x2,dl_src=fa:16:3e:fd:d3:b2, priority 50, cookie 0xf8178df2
resubmit(,9)
  *  Logical datapaths:
  *      "neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c" (1129d490-07cc-402f-bbe9-24dc6700b2a4) [ingress]
  *  Logical flow: table=0 (ls_in_port_sec_l2), priority=50, match=(inport == "ecd8fb74-f469-4c46-b202-dd807b9af37d" && eth.src == {fa:16:3e:fd:d3:b2}), actions=(next;)
   *  Logical Switch Port: ecd8fb74-f469-4c46-b202-dd807b9af37d type  (addresses ['fa:16:3e:fd:d3:b2 172.21.11.168'], dynamic addresses [], security ['fa:16:3e:fd:d3:b2 172.21.11.168']

[...]

49. reg15=0x1,metadata=0x2, priority 50, cookie 0x5da7c46f
resubmit(,64)
  *  Logical datapaths:
  *      "neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c" (1129d490-07cc-402f-bbe9-24dc6700b2a4) [egress]
  *  Logical flow: table=9 (ls_out_port_sec_l2), priority=50, match=(outport == "provnet-23a82ee4-e05b-4832-8130-60c3db42448a), actions=(output;)
   *  Logical Switch Port: provnet-23a82ee4-e05b-4832-8130-60c3db42448a type localnet (addresses ['unknown'], dynamic addresses [], security []
64. priority 0
resubmit(,65)
65. reg15=0x1,metadata=0x2, priority 100, cookie 0x3e9c9b62
push_vlan:0x8100
set_field:4107->vlan_vid
output:14
  *  Logical datapath: "neutron-88ab458a-4a46-4bb8-b12f-f3c1f8b8bd2c" (1129d490-07cc-402f-bbe9-24dc6700b2a4)
  *  Port Binding: logical_port "provnet-23a82ee4-e05b-4832-8130-60c3db42448a", tunnel_key 1, 

bridge("br-ex")
---------------
0. priority 0
NORMAL
-> forwarding to learned port
pop_vlan
set_field:0x8001->reg15

Final flow: recirc_id=0x4,eth,icmp,reg0=0x300,reg11=0x4,reg12=0x2,reg13=0x5,reg14=0x6,reg15=0x8001,metadata=0x2,in_port=10,vlan_tci=0x0000,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_src=172.21.11.168,nw_dst=1.1.1.1,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0
Megaflow: recirc_id=0x4,ct_state=+new-est-rel-rpl-inv+trk,ct_label=0/0x1,eth,ip,in_port=10,dl_src=fa:16:3e:fd:d3:b2,dl_dst=c8:d3:ff:a6:82:0f,nw_dst=0.0.0.0/5,nw_frag=no
Datapath actions: ct(commit,zone=11,label=0/0x1),push_vlan(vid=11,pcp=0),5

This output should look familiar, as it is a blending of both the commands we have just looked at. It can be useful when you believe there is a mismatch between the OVN logical flows and the OVS OpenFlow rules.