Deploying Virtual Multi-node OpenShift Cluster with Metal3

This document provides an overview for quickly deploying a reproducer/development lab using openshift-metal3.

At the end of this guide, you will have a single-server, virtualized, multi-node OpenShift deployment with 3 masters and 2 workers.

Get a server


For this guide you will just need a large server with the following specs:

Resource   Value
--------   --------
Memory     128G
CPU        12 cores
Storage    256G

Beaker node:

To get a Beaker node with sufficient compute, you can use the following XML file:

<job retention_tag="60days">
  <whiteboard>Provision Centos8 x86_64 on +128G RAM / +12 Cores / +256G HD

This is for openshift-metal3 testing.</whiteboard>
  <recipeSet priority="Normal">
    <recipe whiteboard="" role="RECIPE_MEMBERS" ks_meta="" kernel_options="" kernel_options_post="">
      <autopick random="false"/>
      <watchdog panic="ignore"/>
      <distroRequires>
        <and>
          <distro_name op="=" value="CentOS-8.2"/>
          <distro_arch op="=" value="x86_64"/>
        </and>
      </distroRequires>
      <hostRequires>
        <and>
          <memory op="&gt;" value="128000"/>
          <cpu><cores op="&gt;=" value="12"/></cpu>
          <disk><size op="&gt;" value="137438953472"/></disk>
          <arch op="=" value="x86_64"/>
          <system_type op="=" value="Machine"/>
          <key_value key="HVM" op="=" value="1"/>
        </and>
      </hostRequires>
      <task name="/distribution/install" role="STANDALONE"/>
      <task name="/distribution/reservesys" role="STANDALONE">
        <params>
          <param name="RESERVETIME" value="1296000"/>
        </params>
      </task>
    </recipe>
  </recipeSet>
</job>

Configure the OS

Add and configure m3 user

Once you have the server, you can login and add a user that will perform the deployment:

useradd m3
usermod -aG wheel m3
passwd m3

Password-less sudo

The user needs to be able to do password-less sudo, so add it to /etc/sudoers.d/m3:

echo "m3 ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/m3

Install the requirements

sudo dnf upgrade -y
sudo dnf install -y git make wget jq tmux

Clone the openshift-metal3/dev-scripts repo from GitHub:

su - m3
git clone https://github.com/openshift-metal3/dev-scripts.git

Modifying the config file

Create a copy

Create a copy of the example config file:

cd dev-scripts
cp config_example.sh config_$USER.sh

If the server you're using has all of its storage allocated to /home, export the following environment variable:

export WORKING_DIR=/home/dev-scripts

Get OpenShift CI Token

Visit the following site to get the CI token (internal Red Hat only): click on your name in the top right, copy the login command, and extract the token from it.

The token you get will look like this:

oc login --token=<TOKEN> --server=

<TOKEN> needs to be set in your config file like so (the set +x / set -x wrapping keeps the token out of any shell trace output):

# You can get this token by clicking on your name in the top right
# corner and copying the login command (the token is part of the
# command)
set +x
export CI_TOKEN='<TOKEN>'
set -x

Pull secret

Collect your pull secret from here and store it in a file called pull_secret.json.
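Since jq was installed in the requirements step, you can sanity-check the file before deploying; this sketch only assumes the standard pull-secret layout with a top-level "auths" map:

```shell
# Sanity check: pull_secret.json must be valid JSON with an "auths" map
if jq -e .auths pull_secret.json > /dev/null 2>&1; then
  echo "pull secret looks valid"
else
  echo "pull secret is missing or malformed"
fi
```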

Optional modifications

The number of workers that will be created:

# Indicate number of workers to deploy
export NUM_WORKERS=2

# Indicate number of extra VMs to create but not deploy
#export NUM_EXTRA_WORKERS=0

Specs for worker node VMs:

Doubling the memory is recommended if you're going to be deploying anything significant, like the OpenStack operators.
# Change VM resources for workers.
## Defaults:
#export WORKER_MEMORY=8192
#export WORKER_DISK=30
#export WORKER_VCPU=4

IPv4 or IPv6:

# IP stack for the cluster.
# Default: "v6"
# Choices: "v4", "v6", "v4v6"
#export IP_STACK=v4

Set the network type:

# Set the network type for the OpenShift cluster.
# The value selected is based off the value of IP_STACK.
# v4 Default:
#export NETWORK_TYPE="OpenShiftSDN"
# v6 Default:
#export NETWORK_TYPE="OVNKubernetes"
# v4v6 Default:
#export NETWORK_TYPE="OVNKubernetes"
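The defaults above can be sketched as a small shell snippet; this is an illustration of the mapping, not the actual dev-scripts code:

```shell
# Illustrative only: how the NETWORK_TYPE default follows IP_STACK
ip_stack="${IP_STACK:-v6}"          # the cluster defaults to the v6 stack
case "$ip_stack" in
  v4)       network_type="OpenShiftSDN"  ;;
  v6|v4v6)  network_type="OVNKubernetes" ;;
esac
echo "NETWORK_TYPE defaults to $network_type"
```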


Once everything has been configured, simply run make. It is best to do this inside tmux, so the deployment is not interrupted if your SSH session drops:

tmux
make


Cleanup the deployment

OpenShift Cluster

This will just clean up the OpenShift cluster in case you would like to redeploy without recreating the VMs


Virtual Machines

This will remove all the virtual machines created for the OpenShift nodes:


You can then quickly redeploy the OpenShift cluster by removing the generated ocp directory and running make again:

rm -fr ocp

Connecting to the cluster remotely

To connect to the OpenShift cluster externally, for example from your workstation or laptop, you can use a utility called sshuttle together with a hosts file entry.


Firstly, the kubeconfig file needs to be copied from the hypervisor to your local machine:

scp <user>@<hypervisor>:/home/m3/dev-scripts/ocp/ostest/auth/kubeconfig ~/.kube/kubeconfig

Hosts File

A hosts entry is required to direct all of the cluster hostnames to the hypervisor hosting the OpenShift cluster. Any additional routes you expose will need to be appended to the end of this line.
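As an illustration only: the ostest cluster name matches the kubeconfig path above, the test.metalkube.org domain is the dev-scripts default, and <API_VIP>/<INGRESS_VIP> are placeholders for the virtual IPs on the hypervisor; an entry could look like:

```
<API_VIP> api.ostest.test.metalkube.org
<INGRESS_VIP> console-openshift-console.apps.ostest.test.metalkube.org oauth-openshift.apps.ostest.test.metalkube.org
```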

Sshuttle Command

Now the hosts entry directs those names to addresses on the cluster network, but there will be no route to them from your local system, as that network only exists virtually on the hypervisor. We can use sshuttle to solve this by tunneling the traffic over SSH:

sshuttle -r <user>@<hypervisor> 192.168.111.0/24

Here 192.168.111.0/24 is the default external network used by dev-scripts; adjust it if you changed the subnet.


You can check the state of the system like any other OCP deployment at this point, except there are some new Custom Resource Definitions (CRDs), such as BareMetalHost (bmh):

export KUBECONFIG=$(find . -name kubeconfig)
oc get bmh -A

Check cluster version:

oc get clusterversions


Thanks to Brendan Shephard for writing the guide this was based on.

If anything seems outdated, check the official docs in the openshift-metal3/dev-scripts repo, from which this opinionated guide was written.