Deploying Virtual Multi-node OpenShift Cluster with Metal3
This document provides an overview of how to quickly deploy a reproducer/development lab using OpenShift-metal3.
At the end of this guide, you will have a single-server, virtualized multi-node OpenShift deployment with 3 masters and 2 workers.
Get a server
BYO
For this guide you will just need a large server with the following specs:
| Resource | Value    |
|----------|----------|
| Memory   | 128G     |
| CPU      | 12 cores |
| Storage  | 256G     |
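Before going further, it can help to sanity-check that the host actually meets these specs. A minimal sketch using only standard Linux tooling:
# Confirm the host matches the table above
free -g | awk '/^Mem:/ {print "Memory (GiB): " $2}'
echo "CPU cores: $(nproc)"
df -h / /home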
Beaker node:
To get a Beaker node with sufficient compute, you can use the following XML file:
<job retention_tag="60days">
<whiteboard>Provision CentOS 8 x86_64 with 128G+ RAM / 12+ cores / 256G+ HD
This is for openshift-metal3 testing.</whiteboard>
<recipeSet priority="Normal">
<recipe whiteboard="" role="RECIPE_MEMBERS" ks_meta="" kernel_options="" kernel_options_post="">
<autopick random="false"/>
<watchdog panic="ignore"/>
<packages/>
<ks_appends/>
<repos/>
<distroRequires>
<and>
<distro_name op="=" value="CentOS-8.2"/>
<distro_arch op="=" value="x86_64"/>
</and>
</distroRequires>
<hostRequires>
<and>
<system>
<memory op=">" value="128000"/>
</system>
<cpu>
<cores op="&gt;=" value="12"/>
</cpu>
<disk>
<size op="&gt;=" value="274877906944"/> <!-- 256 GiB in bytes -->
</disk>
<arch op="=" value="x86_64"/>
<system_type op="=" value="Machine"/>
<key_value key="HVM" op="=" value="1"/>
</and>
</hostRequires>
<partitions/>
<task name="/distribution/install" role="STANDALONE"/>
<task name="/distribution/reservesys" role="STANDALONE">
<params>
<param name="RESERVETIME" value="1296000"/>
</params>
</task>
</recipe>
</recipeSet>
</job>
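If you save the XML above to a file (metal3-job.xml is just an example name), it can be submitted with the beaker-client CLI, assuming bkr is installed and configured for your Beaker instance:
# Submit the job and note the returned job ID
bkr job-submit metal3-job.xml
# Follow the job's progress (substitute the real job ID)
bkr job-watch J:<job-id>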
Configure the OS
Add and configure m3 user
Once you have the server, you can log in and add a user that will perform the deployment:
useradd m3
usermod -aG wheel m3
passwd m3
Password-less sudo
The user needs to be able to do password-less sudo, so add it to /etc/sudoers.d/m3:
echo "m3 ALL=(root) NOPASSWD:ALL" > /etc/sudoers.d/m3
Install the requirements
sudo dnf upgrade -y
sudo dnf install -y git make wget jq tmux
Clone the openshift-metal3/dev-scripts repo from GitHub:
su - m3
git clone https://github.com/openshift-metal3/dev-scripts
Modify the config file
Create a copy
cd dev-scripts
cp config_example.sh config_m3.sh
If the server you're using has all of its storage allocated to /home, export the following environment variable:
export WORKING_DIR=/home/dev-scripts
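If you're not sure how the storage is laid out, a quick check before deciding (dev-scripts keeps VM images and other state under WORKING_DIR):
# See which filesystem actually holds the space
df -h / /home
# If /home wins, creating the directory up front and handing it to the
# m3 user is harmless, even if dev-scripts would create it itself
sudo mkdir -p /home/dev-scripts
sudo chown m3:m3 /home/dev-scripts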
Get OpenShift CI Token
Visit https://api.ci.openshift.org/ to get the CI token (internal Red Hat only): click on your name in the top right, copy the login command, and extract the token from that command.
The token you get will look like this:
oc login --token=<TOKEN> --server=https://api.ci.l2s4.p1.openshiftapps.com:6443
<TOKEN> needs to be set in the config_m3.sh file like so:
#!/bin/bash
# You can get this token from https://api.ci.openshift.org/ by
# clicking on your name in the top right corner and copying the login
# command (the token is part of the command)
set +x
export CI_TOKEN='<TOKEN>'
set -x
...
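If you'd rather not hand-edit the token, a small illustrative snippet that pulls it out of the copied login command (LOGIN_CMD here is a placeholder for whatever you copied):
# Hypothetical helper: extract the token from the pasted login command
LOGIN_CMD='oc login --token=<TOKEN> --server=https://api.ci.l2s4.p1.openshiftapps.com:6443'
CI_TOKEN=$(echo "$LOGIN_CMD" | sed -n 's/.*--token=\([^ ]*\).*/\1/p')
echo "CI_TOKEN is now: $CI_TOKEN"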
Pull secret
Collect your pull secret from the Red Hat OpenShift console and store it in a file called pull_secret.json.
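Since jq was installed earlier, you can confirm the file is valid JSON before kicking off a long deploy:
# jq -e exits non-zero if pull_secret.json isn't valid JSON
jq -e . pull_secret.json > /dev/null && echo "pull secret parses OK"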
Optional modifications
The number of workers that will be created:
# Indicate number of workers to deploy
export NUM_WORKERS=2
# Indicate number of extra VMs to create but not deploy
export NUM_EXTRA_WORKERS=2
Specs for worker node VMs:
# WORKER_MEMORY, WORKER_DISK, WORKER_VCPU -
# Change VM resources for workers.
## Defaults:
## WORKER_DISK=30
## WORKER_MEMORY=8192
## WORKER_VCPU=4
#
#export WORKER_MEMORY=8192
#export WORKER_DISK=30
#export WORKER_VCPU=4
IPv4 or IPv6:
# IP_STACK -
# IP stack for the cluster.
# Default: "v6"
# Choices: "v4", "v6", "v4v6"
#
#export IP_STACK=v4
Set the network type:
# NETWORK_TYPE -
# Set the network type for the OpenShift cluster.
# The value selected is based off the value of IP_STACK.
#
# v4 Default:
#export NETWORK_TYPE="OpenShiftSDN"
#
# v6 Default:
#export NETWORK_TYPE="OVNKubernetes"
#
# v4v6 Default:
#export NETWORK_TYPE="OVNKubernetes"
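Putting it together, a minimal sketch of what a finished config_m3.sh might look like for a small IPv4 lab (the values below are illustrative choices, not requirements):
#!/bin/bash
set +x
export CI_TOKEN='<TOKEN>'
set -x
# Illustrative overrides for a small IPv4 deployment
export WORKING_DIR=/home/dev-scripts
export NUM_WORKERS=2
export IP_STACK=v4
export NETWORK_TYPE="OpenShiftSDN"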
Deployment
Once everything has been configured, simply run make (it's best to do this in tmux):
tmux
make
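For example, a named session that also captures a log of the run (the session and log names here are arbitrary):
# Start a named session so it's easy to reattach after a dropped SSH connection
tmux new -s metal3
# Inside the session, keep a full log of the deploy
make 2>&1 | tee deploy.log
# Detach with Ctrl-b d; reattach later with:
tmux attach -t metal3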
Cleanup the deployment
OpenShift Cluster
This will clean up just the OpenShift cluster, in case you would like to redeploy without recreating the VMs:
./ocp_cleanup.sh
Virtual Machines
This will remove all the virtual machines created for the OpenShift nodes:
./host_cleanup.sh
You can quickly redeploy the OpenShift cluster:
./ocp_cleanup.sh
rm -fr ocp
./06_create_cluster.sh
Connecting to the cluster remotely
To connect to the OpenShift cluster externally, such as from your workstation or laptop, you can use a utility called sshuttle together with a hosts file entry.
Kubeconfig
First, the kubeconfig file needs to be copied from the hypervisor to your local machine:
scp <user>@<hypervisor>:/home/m3/dev-scripts/ocp/ostest/auth/kubeconfig ~/.kube/kubeconfig
Hosts File
A hosts entry is required to direct all of the hostnames to the hypervisor hosting the OpenShift cluster. This example is a good start, but any routes you add will need to be appended to the end of this line:
192.168.111.5 console-openshift-console.apps.ostest.test.metalkube.org console openshift-authentication-openshift-authentication.apps.ostest.test.metalkube.org api.ostest.test.metalkube.org prometheus-k8s-openshift-monitoring.apps.ostest.test.metalkube.org alertmanager-main-openshift-monitoring.apps.ostest.test.metalkube.org kubevirt-web-ui.apps.ostest.test.metalkube.org oauth-openshift.apps.ostest.test.metalkube.org grafana-openshift-monitoring.apps.ostest.test.metalkube.org
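One way to add this without opening an editor, assuming you have sudo on your workstation (the line below is the same example entry as above, shortened here for readability):
# Append the example entry to your workstation's /etc/hosts
echo '192.168.111.5 api.ostest.test.metalkube.org console-openshift-console.apps.ostest.test.metalkube.org oauth-openshift.apps.ostest.test.metalkube.org' | sudo tee -a /etc/hosts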
Sshuttle Command
Now the hosts entry directs api.ostest.test.metalkube.org to 192.168.111.5, for example, but there will be no route on your local system for this address, as it only exists virtually on the hypervisor. We can use sshuttle to solve this:
sshuttle -r <user>@<hypervisor> 192.168.111.0/24
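With the tunnel up, the API should now be reachable from your workstation. A quick check:
# Run from your workstation while sshuttle is active
export KUBECONFIG=~/.kube/kubeconfig
oc whoami --show-server
oc get nodes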
Verification
You can check the state of the system like any other OCP deployment at this point, except that there are some new Custom Resource Definitions (CRDs):
export KUBECONFIG=$(find . -name kubeconfig)
oc get bmh -A
Check cluster version:
oc get clusterversions
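A few more checks that apply to any fresh deployment (all standard oc commands):
# All five nodes should report Ready: 3 masters and 2 workers
oc get nodes
# Every cluster operator should eventually report Available=True
oc get clusteroperators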
Notes
Thanks to Brendan Shephard for writing the guide this was based on.
If anything seems outdated, check the official docs in the openshift-metal3/dev-scripts repo, from which this opinionated guide was written.