Deploying OpenStack Operators on OpenShift

This document provides an overview of quickly deploying OpenStack operators on an OpenShift cluster for testing and development.

We will be using the install_yamls repo to deploy all the OpenStack operators. This is a follow-up to my last post, Deploying Virtual Multi-node OpenShift Cluster with Metal3.

Verify access

The install_yamls tool reads the currently sourced kubeconfig, so verify that the correct cluster is being targeted:

oc get nodes
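
If multiple clusters are in play, it also helps to confirm which kubeconfig is active before running any make targets. A minimal sketch (`oc` falls back to `~/.kube/config` when `KUBECONFIG` is unset):

```shell
# Show which kubeconfig oc (and install_yamls) will pick up.
# KUBECONFIG takes precedence; otherwise ~/.kube/config is the default.
echo "Using kubeconfig: ${KUBECONFIG:-$HOME/.kube/config}"
```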

Deploy Operators

Clone and cd into the install_yamls repo:

git clone https://github.com/openstack-k8s-operators/install_yamls.git && cd install_yamls

Create Persistent Volumes (PVs) for the operators' Persistent Volume Claims (PVCs) to bind to:

make crc_storage
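
Under the hood this target creates a set of hostPath-backed PVs on the node. A sketch of what one such PV looks like; the name, capacity, storage class, and path below are illustrative placeholders, not the exact values the Makefile uses:

```shell
# Print a sketch of a hostPath PersistentVolume like the ones crc_storage creates.
# Name, capacity, storageClassName, and path are placeholders.
cat << 'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage01
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  hostPath:
    path: /mnt/openstack/pv01
EOF
```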

Install the operators and verify:

make openstack
oc get csv -w

Create an instance of OpenStackControlPlane

make deploy_openstack
oc get pods -w
Running oc get all -n openstack is helpful for spotting issues early; sometimes jobs fail, and missing resources are easy to notice there.
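
A quick filter can surface anything unhealthy in the namespace. A sketch; the grep pattern assumes the usual happy-path pod statuses:

```shell
# Show pods that are neither Running nor Completed -- these usually point
# at a failing job or a missing resource. `|| true` keeps a healthy
# namespace from returning grep's non-zero exit status.
oc get pods -n openstack --no-headers | grep -Ev 'Running|Completed' || true
```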

Access the OpenStack control plane services

The OpenStack services are exposed via an OpenShift route:

❯ oc get route -n openstack
NAME               HOST/PORT   PATH   SERVICES           PORT               TERMINATION   WILDCARD
cinder-public                         cinder-public      cinder-public                    None
glance-public                         glance-public      glance-public                    None
keystone-public                       keystone-public    keystone-public                  None
neutron-public                        neutron-public     neutron-public                   None
nova-public                           nova-public        nova-public                      None
placement-public                      placement-public   placement-public                 None

These routes are backed by services:

❯ oc get service -n openstack | grep public
cinder-public      ClusterIP   <none>   8776/TCP   18h
glance-public      ClusterIP   <none>   9292/TCP   18h
keystone-public    ClusterIP   <none>   5000/TCP   18h
neutron-public     ClusterIP   <none>   9696/TCP   18h
nova-public        ClusterIP   <none>   8774/TCP   18h
placement-public   ClusterIP   <none>   8778/TCP   18h

To access these services, requests need to be tunneled into the hypervisor where OpenShift is running, and the correct host entries need to be added to your local machine's /etc/hosts file.

Resolving the hypervisor's hostname is the easiest way to find the address:

host has address

All the OpenStack services resolve to the same IP address, so a single line mapping each service hostname to that address needs to be added to your local /etc/hosts:
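
The entry can be generated rather than typed by hand. A sketch; the IP and apps domain below are placeholders to be replaced with the values from the `host` and `oc get route` output above, and the hostnames follow OpenShift's default `<route>-<namespace>.<apps-domain>` route naming convention:

```shell
# Placeholders -- substitute the hypervisor IP and your cluster's apps domain.
IP=192.168.111.1
DOMAIN=apps.example.test

# Build one /etc/hosts line covering every public endpoint.
line="$IP"
for svc in keystone glance cinder neutron nova placement; do
  line="$line ${svc}-public-openstack.${DOMAIN}"
done
echo "$line"   # append this line to /etc/hosts
```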

Now any request to one of these hostnames will go over the sshuttle tunnel to the hypervisor, which has access to the correct network.

Lastly, the OpenStack config needs to be written to your local system so the OpenStack client can reach these services:

mkdir -p ~/.config/openstack

cat > ~/.config/openstack/clouds.yaml << EOF
$(oc get cm openstack-config -o json | jq -r '.data["clouds.yaml"]')
EOF
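
A quick sanity check that the file landed and contains what the client needs (a sketch; it just looks for the cloud list and an auth endpoint):

```shell
# The config should define at least one cloud entry and an auth_url.
grep -E 'clouds:|auth_url' ~/.config/openstack/clouds.yaml
```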

Testing the deployment

Manual commands can be executed inside the containers as before:


oc exec -it pod/mariadb-openstack -- mysql -uroot -p12345678 -e "show databases;"


+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova_api           |
| nova_cell0         |
| nova_cell1         |
| performance_schema |
| placement          |
+--------------------+


oc exec pod/ovsdbserver-sb-0 -- ovn-sbctl show
Chassis "f97f0645-85b2-4e51-964e-0ad8311f7de4"
    hostname: worker-0
    Encap geneve
        ip: ""
        options: {csum="true"}
Chassis "67b5cec0-cd87-4364-9813-28784fa8aef4"
    hostname: worker-1
    Encap geneve
        ip: ""
        options: {csum="true"}

Using the OpenStack Client:

Set the OpenStack cloud and export the password:

export OS_CLOUD=default
export OS_PASSWORD=12345678
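
Hard-coding the password is fine for a throwaway environment, but it can also be read from the cluster. A sketch; the `osp-secret` name and `AdminPassword` key are assumed from install_yamls defaults and may differ in your deployment:

```shell
export OS_CLOUD=default
# Pull the admin password from the cluster secret instead of hard-coding it.
# Secret and key names are assumptions based on install_yamls defaults.
export OS_PASSWORD=$(oc get secret osp-secret -n openstack \
  -o jsonpath='{.data.AdminPassword}' | base64 -d)
```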


openstack token issue


expires: 2023-03-09T02:08:32+0000
id: gAAAAABkCTGQq2aHrscq21hiWTRAlDDEjekzDOmqbaZeEzsgSoaR4ZCrIKxpm3_rDzNVRbMC9p79HE8xgz1ZesDr47yRrGc1XakSjfxIEJJm6tnjzGjLeyGwvqg54cedUM0lkcR2D3SC2BheBh-KQVYRKBjfEwOjwhDq__CTZkfGoL29wcAVxjA
project_id: ff0adbc357f94aeaa2d54c600daaff1e
user_id: 7f8e535fc615404d8fb97c801d394e88

Using the OpenShift WebUI:

With the host entry added in the previous step, browsing to console-openshift-console.apps.<HostName> will go over the sshuttle tunnel.

To get the kubeadmin password, log in to the hypervisor and run the following command:

cat dev-scripts/ocp/ostest/auth/kubeadmin-password