Deploying OpenStack Operators on OpenShift


This document provides an overview of how to quickly deploy OpenStack operators on an OpenShift cluster for testing and development.


We will be using the install_yamls repo to deploy all the OpenStack operators. This is a follow-up to my last post, Deploying Virtual Multi-node OpenShift Cluster with Metal3.


Verify access

The install_yamls tooling reads the currently sourced kubeconfig, so verify that the correct cluster is being targeted:

oc get nodes
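
If the kubeconfig has not been sourced yet, export it first (the path below assumes a dev-scripts deployment like the one from the previous post; adjust it to your environment):

# Path assumes a dev-scripts install; adjust to your environment
export KUBECONFIG=~/dev-scripts/ocp/ostest/auth/kubeconfig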

Deploy Operators

Clone and cd into the install_yamls repo:

git clone https://github.com/openstack-k8s-operators/install_yamls.git && cd install_yamls

Create Persistent Volumes (PVs) for the operators' Persistent Volume Claims (PVCs) to bind to:

make crc_storage
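
The PVs can be checked before moving on:

# The cluster should now list PVs for the operator PVCs to bind to
oc get pv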

Install the operators and verify:

make openstack
oc get csv -l operators.coreos.com/openstack-operator.openstack -w
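
To confirm the operator pods themselves are running, check their namespace (install_yamls installs them into openstack-operators by default; adjust if your setup differs):

# Operator controller pods should all reach Running
oc get pods -n openstack-operators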

Create an instance of OpenStackControlPlane

make deploy_openstack
oc get pods -w
Note
oc get all -n openstack is helpful for spotting issues early: sometimes jobs fail, or you can spot missing resources.
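
If a job does fail, its logs usually point at the cause (the job name below is a placeholder; take it from oc get jobs -n openstack):

# Replace <job-name> with the failing job
oc logs -n openstack job/<job-name>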

Access the OpenStack control plane services

The OpenStack services are exposed via OpenShift routes:

❯ oc get route -n openstack
NAME               HOST/PORT                                                   PATH   SERVICES           PORT               TERMINATION   WILDCARD
cinder-public      cinder-public-openstack.apps.ostest.test.metalkube.org             cinder-public      cinder-public                    None
glance-public      glance-public-openstack.apps.ostest.test.metalkube.org             glance-public      glance-public                    None
keystone-public    keystone-public-openstack.apps.ostest.test.metalkube.org           keystone-public    keystone-public                  None
neutron-public     neutron-public-openstack.apps.ostest.test.metalkube.org            neutron-public     neutron-public                   None
nova-public        nova-public-openstack.apps.ostest.test.metalkube.org               nova-public        nova-public                      None
placement-public   placement-public-openstack.apps.ostest.test.metalkube.org          placement-public   placement-public                 None

Each of these routes points to a ClusterIP service:

❯ oc get service -n openstack | grep public
cinder-public                                                     ClusterIP   172.30.1.36      <none>        8776/TCP                       18h
glance-public                                                     ClusterIP   172.30.214.235   <none>        9292/TCP                       18h
keystone-public                                                   ClusterIP   172.30.62.172    <none>        5000/TCP                       18h
neutron-public                                                    ClusterIP   172.30.118.154   <none>        9696/TCP                       18h
nova-public                                                       ClusterIP   172.30.3.35      <none>        8774/TCP                       18h
placement-public                                                  ClusterIP   172.30.167.10    <none>        8778/TCP                       18h

To access these services, requests need to be tunneled through the hypervisor where OpenShift is running, and the correct host entries need to be added to your local machine's /etc/hosts file.
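
If a tunnel is not already running from the previous post's setup, sshuttle can provide one (the SSH user, host, and subnet below are examples; use your hypervisor's address and the network the routes resolve to):

# Forward the route network through the hypervisor
sshuttle -r root@<hypervisor> 192.168.111.0/24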

Resolving one of the route hostnames on the hypervisor is the easiest way to find the address:

host cinder-public-openstack.apps.ostest.test.metalkube.org
cinder-public-openstack.apps.ostest.test.metalkube.org has address 192.168.111.4

All the OpenStack services resolve to the same IP address, so the following line needs to be added to your local system's /etc/hosts file:

192.168.111.4 openshift-authentication-openshift-authentication.apps.ostest.test.metalkube.org api.ostest.test.metalkube.org prometheus-k8s-openshift-monitoring.apps.ostest.test.metalkube.org alertmanager-main-openshift-monitoring.apps.ostest.test.metalkube.org kubevirt-web-ui.apps.ostest.test.metalkube.org oauth-openshift.apps.ostest.test.metalkube.org grafana-openshift-monitoring.apps.ostest.test.metalkube.org glance-internal-openstack.apps.ostest.test.metalkube.org console-openshift-console.apps.ostest.test.metalkube.org console keystone-public-openstack.apps.ostest.test.metalkube.org cinder-public-openstack.apps.ostest.test.metalkube.org glance-public-openstack.apps.ostest.test.metalkube.org neutron-public-openstack.apps.ostest.test.metalkube.org nova-public-openstack.apps.ostest.test.metalkube.org placement-public-openstack.apps.ostest.test.metalkube.org

Now any request to, for example, keystone-public-openstack.apps.ostest.test.metalkube.org will go over the sshuttle tunnel to the hypervisor, which has access to the correct network.
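
A quick way to confirm the tunnel and host entries work is to request Keystone's version document over its route (plain HTTP, since the routes above have no TLS termination):

curl http://keystone-public-openstack.apps.ostest.test.metalkube.org/v3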

Lastly, the OpenStack configuration needs to be written to your local system so that the OpenStack client can access these services:

mkdir -p ~/.config/openstack

cat > ~/.config/openstack/clouds.yaml << EOF
$(oc get cm openstack-config -n openstack -o json | jq -r '.data["clouds.yaml"]')
EOF
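
A quick sanity check that the file was written:

# auth_url should point at the keystone-public route
grep auth_url ~/.config/openstack/clouds.yaml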

Testing the deployment

Commands can be executed manually inside the containers, as before:

MariaDB

oc exec -it pod/mariadb-openstack -n openstack -- mysql -uroot -p12345678 -e "show databases;"

Output:

+--------------------+
| Database           |
+--------------------+
| cinder             |
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova_api           |
| nova_cell0         |
| nova_cell1         |
| performance_schema |
| placement          |
+--------------------+

OVN

oc exec -n openstack pod/ovsdbserver-sb-0 -- ovn-sbctl show
Chassis "f97f0645-85b2-4e51-964e-0ad8311f7de4"
    hostname: worker-0
    Encap geneve
        ip: "10.131.0.80"
        options: {csum="true"}
Chassis "67b5cec0-cd87-4364-9813-28784fa8aef4"
    hostname: worker-1
    Encap geneve
        ip: "10.128.2.47"
        options: {csum="true"}
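
The northbound database can be inspected the same way (assuming the northbound server pod follows the same naming pattern):

oc exec -n openstack pod/ovsdbserver-nb-0 -- ovn-nbctl show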

Using the OpenStack Client:

Set the OpenStack cloud to use and export the password:

export OS_CLOUD=default
export OS_PASSWORD=12345678

Keystone

openstack token issue

Output:

expires: 2023-03-09T02:08:32+0000
id: gAAAAABkCTGQq2aHrscq21hiWTRAlDDEjekzDOmqbaZeEzsgSoaR4ZCrIKxpm3_rDzNVRbMC9p79HE8xgz1ZesDr47yRrGc1XakSjfxIEJJm6tnjzGjLeyGwvqg54cedUM0lkcR2D3SC2BheBh-KQVYRKBjfEwOjwhDq__CTZkfGoL29wcAVxjA
project_id: ff0adbc357f94aeaa2d54c600daaff1e
user_id: 7f8e535fc615404d8fb97c801d394e88
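
The service catalog can also be listed to confirm each deployed service registered its endpoints:

openstack catalog list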

Using the OpenShift WebUI:

The host entry was already added in the previous step, so browsing to console-openshift-console.apps.<HostName> will go over the sshuttle tunnel.

To get the kubeadmin password, log in to the hypervisor and run the following command:

cat dev-scripts/ocp/ostest/auth/kubeadmin-password