
Single Node OpenShift Deployment Guide


In this guide I will be deploying a Single Node OpenShift (SNO) environment onto a VM hosted on a Proxmox hypervisor, with an OPNsense router hosting an Unbound DNS server.


This type of deployment comes with a few caveats:

  • Upgrades are supported
  • Scale-outs are not supported

However, it is great for getting hands-on experience with OpenShift, as it only requires a single small node and simple networking compared with a full OpenShift deployment.


Prerequisites

Hardware

Single-node OpenShift requires the following minimum host resources; depending on your workload, these will need to be increased:

  • vCPU: 12
  • Memory: 33 GB
  • Storage: 120 GB
  • Network: 1 NIC that can route to the internet
Note
If deploying SNO onto a virtual machine, the host CPU will need to be passed through to the VM to enable nested virtualisation.
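One way to do this on Proxmox is to set the VM's CPU type to host, either in the web UI or with the qm CLI on the hypervisor (the VM ID below is just a placeholder):

root@pve-r720:~# qm set 100 --cpu host   # 100 is a placeholder VM ID; use your SNO VM's ID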

/posts/Single_node_OpenShift_deployment_guide/vm-info.png


DNS

Required DNS settings in OPNsense:

/posts/Single_node_OpenShift_deployment_guide/dns-records.png

Alternatively, the records can be added to your /etc/hosts file:

❯ grep "sno.ocp.home.lewisdenny.local" /etc/hosts
172.21.72.202 console-openshift-console.apps.sno.ocp.home.lewisdenny.local oauth-openshift.apps.sno.ocp.home.lewisdenny.local api.sno.ocp.home.lewisdenny.local
Note
Any app deployed to the OpenShift environment will need an $appName.apps.$hostName.$domainName DNS record manually added to your /etc/hosts file. Using a DNS server is recommended.
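For example, a hypothetical app exposed through a route named myapp would need a line like this (IP and domain taken from the records above):

172.21.72.202 myapp.apps.sno.ocp.home.lewisdenny.local

If your DNS server supports wildcard records, a single *.apps.sno.ocp.home.lewisdenny.local entry avoids having to add these one at a time.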

NFS Server (optional)

If you would like persistent storage for your image registry, pods, and virtual machines, you will need an external storage backend. An external NFS server will be used in this case; I will be using NFS shares served from my SNO VM's hypervisor. Once you have followed the steps for installing and configuring an NFS server for your distro, you will need to export two shares:

  • registry
  • nfs-sc
root@pve-r720:~# grep openshift /etc/exports 
/slow/storage/openshift/registry *(rw,sync,root_squash,no_subtree_check,no_wdelay)
/slow/storage/openshift/nfs-sc *(rw,sync,no_root_squash,no_subtree_check,no_wdelay)

root@pve-r720:~# exportfs -rv
exporting *:/slow/storage/openshift/nfs-sc
exporting *:/slow/storage/openshift/registry
Tip

This can be checked from another server on the network using showmount -e $NFSServerIPAddress

❯ showmount -e 172.21.77.99
Export list for 172.21.77.99:
/slow/storage/openshift/nfs-sc   *
/slow/storage/openshift/registry *
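To go one step further, an export can be temporarily mounted from another machine to confirm read/write access; this is an optional sanity check using my server's IP and paths, so adjust for your environment:

❯ sudo mount -t nfs 172.21.77.99:/slow/storage/openshift/nfs-sc /mnt
❯ sudo touch /mnt/write-test && sudo rm /mnt/write-test
❯ sudo umount /mnt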

Deploying the environment

Now that the prerequisites are taken care of, the deployment will be driven from console.redhat.com/openshift. This is very much point and click, but I have documented the steps you need to take below:

Cloud Deployment Steps

  1. Log in and select the “Create Cluster” button.
  2. Select “Datacenter” and then select “Create cluster” again underneath “Assisted Installer - Technology Preview”
  3. Enter a cluster name and add in the base domain.
Note
In the DNS example above the cluster name would be sno and the domain would be ocp.home.lewisdenny.local
  4. Select “Install single node OpenShift (SNO)”, as the default is to deploy a multi-node cluster.
  5. Read and accept the warnings about the availability, scalability and life cycle management limitations that currently apply to single node OpenShift while it is a technology preview.
  6. Select “Generate Discovery ISO.”
  7. Add your public SSH key and download the ISO.
  8. Attach this discovery ISO to the host you wish to install.
  9. Set the host to automatically boot from CDROM, and power the system up.
  10. After a few moments your host will show up in the Assisted Installer UI.
  11. The host will start reporting information about the system and network configuration.
  12. Select the subnet you want OpenShift to use.
Note
This is the network you will access your SNO environment on, both the apps and api endpoint will listen on the IP address assigned to this interface.
  13. Click next, review the settings and select “Install cluster”.
  14. View the installation progress until the installation is complete.

Verification

The installation will take around 50 minutes to complete, depending on your internet connection speed.

Once the installation is complete, the health of the environment can be verified.

Firstly, the kubeconfig needs to be downloaded from the completed install console page and copied to ~/.kube/config, where the oc command will be able to read it.

Note
The kubeconfig contains the location of the API endpoint as well as the authentication credentials needed to access it.
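A minimal sketch of that step, assuming the file was downloaded to ~/Downloads/kubeconfig:

❯ mkdir -p ~/.kube
❯ cp ~/Downloads/kubeconfig ~/.kube/config

Alternatively, point the KUBECONFIG environment variable at the downloaded file.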

Now oc commands can be run to check on the environment:

❯ oc get nodes
NAME   STATUS   ROLES           AGE    VERSION
sno    Ready    master,worker   136m   v1.21.1+9807387
❯ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.5     True        False         False      108m
baremetal                                  4.8.5     True        False         False      119m
cloud-credential                           4.8.5     True        False         False      123m
cluster-autoscaler                         4.8.5     True        False         False      119m
config-operator                            4.8.5     True        False         False      128m
console                                    4.8.5     True        False         False      108m
csi-snapshot-controller                    4.8.5     True        False         False      108m
dns                                        4.8.5     True        False         False      119m
etcd                                       4.8.5     True        False         False      123m
image-registry                             4.8.5     True        False         False      117m
ingress                                    4.8.5     True        False         False      117m
insights                                   4.8.5     True        False         False      122m
kube-apiserver                             4.8.5     True        False         False      118m
kube-controller-manager                    4.8.5     True        False         False      118m
kube-scheduler                             4.8.5     True        False         False      118m
kube-storage-version-migrator              4.8.5     True        False         False      127m
machine-api                                4.8.5     True        False         False      121m
machine-approver                           4.8.5     True        False         False      124m
machine-config                             4.8.5     True        False         False      112m
marketplace                                4.8.5     True        False         False      123m
monitoring                                 4.8.5     True        False         False      112m
network                                    4.8.5     True        False         False      129m
node-tuning                                4.8.5     True        False         False      122m
openshift-apiserver                        4.8.5     True        False         False      118m
openshift-controller-manager               4.8.5     True        False         False      117m
openshift-samples                          4.8.5     True        False         False      117m
operator-lifecycle-manager                 4.8.5     True        False         False      123m
operator-lifecycle-manager-catalog         4.8.5     True        False         False      123m
operator-lifecycle-manager-packageserver   4.8.5     True        False         False      119m
service-ca                                 4.8.5     True        False         False      128m
storage                                    4.8.5     True        False         False      122m
Note
As we can see in the output, all the cluster operators are in an available state, with none progressing or degraded.
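As an additional check, the overall cluster version status can be queried; it should report the installed version as Available and not Progressing:

❯ oc get clusterversion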

Storage

Configuring Image Registry NFS backend

After a successful deployment the image registry cluster operator's managementState will be set to Removed. This is required for the deployment to complete, however it now needs to be set to Managed so we can connect it to the registry NFS share 1:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'
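To confirm the patch applied, the field can be read back; it should now return Managed:

❯ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}{"\n"}'
Managed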

Now a PersistentVolumeClaim (PVC) needs to be created; we can do that with the following command, which creates an empty claim:

❯ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'

This will create the following PVC, which can be seen in a Pending state:

❯ oc get pvc -A
NAMESPACE                  NAME                     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
openshift-image-registry   image-registry-storage   Pending                                                     59s

Now that we have a PVC, we need a PersistentVolume (PV) for the PVC to bind to:

❯ curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/ocp-registry-pv.yaml | oc create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: registry-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /slow/storage/openshift/registry
    server: 172.21.77.99

We can see the status of the PVC has changed to Bound if everything is correct:

❯ oc get pvc -A
NAMESPACE                  NAME                     STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
openshift-image-registry   image-registry-storage   Bound    registry-pv   100Gi      RWX                           10m
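With the claim bound, the image registry operator will roll out a registry pod backed by the NFS share; it should appear in the openshift-image-registry namespace after a short while:

❯ oc get pods -n openshift-image-registry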

Configuring NFS client provisioner deployment

Firstly, we need to create a new namespace for the NFS client provisioner to be deployed in:

❯ oc create namespace nfs
namespace/nfs created

Next we will need to create a few resources. This deployment uses the nfs-subdir-external-provisioner image, and the default resource YAML files can be found on the nfs-subdir-external-provisioner GitHub page. Comparing the defaults with my examples below should make it quite clear what needs to be adjusted.

  • rbac.yaml
❯ curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/rbac.yaml | oc create -f -
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
  • deployment.yaml
❯ curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/deployment.yaml | oc create -f -
deployment.apps/nfs-client-provisioner created
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: nfs
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage 
            - name: NFS_SERVER
              value: 172.21.77.99
            - name: NFS_PATH
              value: /slow/storage/openshift/nfs-sc
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.21.77.99
            path: /slow/storage/openshift/nfs-sc
  • class.yaml
❯ curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/class.yaml | oc create -f -
storageclass.storage.k8s.io/managed-nfs-storage created
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"

Now that all the resources have been added, we can see there is an error with our newly created NFS deployment:

❯ oc get deployment nfs-client-provisioner -n nfs -ojsonpath='{.status.conditions[].message}'
'pods "nfs-client-provisioner-848967bfc9-" is forbidden: unable to validate against any security context constraint: [provider "anyuid": Forbidden: not usable by user or serviceaccount, provider "containerized-data-importer": Forbidden: not usable by user or serviceaccount, spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used, provider "nonroot": Forbidden: not usable by user or serviceaccount, provider "hostmount-anyuid": Forbidden: not usable by user or serviceaccount, provider "bridge-marker": Forbidden: not usable by user or serviceaccount, provider "machine-api-termination-handler": Forbidden: not usable by user or serviceaccount, provider "kubevirt-controller": Forbidden: not usable by user or serviceaccount, provider "hostnetwork": Forbidden: not usable by user or serviceaccount, provider "hostaccess": Forbidden: not usable by user or serviceaccount, provider "linux-bridge": Forbidden: not usable by user or serviceaccount, provider "nmstate": Forbidden: not usable by user or serviceaccount, provider "kubevirt-handler": Forbidden: not usable by user or serviceaccount, provider "node-exporter": Forbidden: not usable by user or serviceaccount, provider "privileged": Forbidden: not usable by user or serviceaccount]'

This is due to the service account used by the deployment not having access to the hostmount-anyuid security context constraint (SCC).

First, a new role needs to be created that allows use of the hostmount-anyuid SCC:

❯ oc create role use-scc-hostmount-anyuid --verb=use --resource=scc --resource-name=hostmount-anyuid -n nfs
❯ oc get roles -n nfs
NAME                                    CREATED AT
leader-locking-nfs-client-provisioner   2021-09-05T10:25:51Z
use-scc-hostmount-anyuid                2021-09-05T10:58:49Z

Next we need to check what service account the NFS deployment is using:

❯ oc get deployments -n nfs -o yaml | yq e '.' - | grep serviceAccount:
                  f:serviceAccount: {}
          serviceAccount: nfs-client-provisioner

Then we need to add the role to our service account, which in this example is nfs-client-provisioner:

❯ oc adm policy add-role-to-user use-scc-hostmount-anyuid -z nfs-client-provisioner --role-namespace nfs -n nfs
role.rbac.authorization.k8s.io/use-scc-hostmount-anyuid added: "nfs-client-provisioner"

Lastly, we need to scale the NFS deployment down and back up for the new configuration to take effect:

❯ oc scale deploy nfs-client-provisioner -n nfs --replicas 0
deployment.apps/nfs-client-provisioner scaled

❯ oc scale deploy nfs-client-provisioner -n nfs --replicas 1
deployment.apps/nfs-client-provisioner scaled
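With the role bound and the deployment scaled back up, the provisioner pod should now start cleanly, which can be confirmed with:

❯ oc get pods -n nfs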

Users

By default, only a kubeadmin user exists on your cluster. To specify an identity provider, you must create a Custom Resource (CR) that describes that identity provider and add it to the cluster. 2

Add HTPasswd identity provider

Create an htpasswd file that contains your hashed password:

htpasswd -c -B -b users.htpasswd user1 MyPassword!
Adding password for user user1
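Additional users can be appended to the same file by dropping the -c (create) flag; user2 is just a placeholder:

htpasswd -B -b users.htpasswd user2 AnotherPassword!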

To use the HTPasswd identity provider, you must define a secret that contains the HTPasswd user file:

oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd -n openshift-config
secret/htpass-secret created

Now we need to add an identity provider:

curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/identity-provider.yaml | oc apply -f -
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: my_htpasswd_provider 
    challenge: true 
    login: true 
    mappingMethod: claim 
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret 
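After the CR is applied, the authentication operator redeploys the OAuth pods to pick up the new provider; this can take a few minutes and can be watched with:

❯ oc get pods -n openshift-authentication
❯ oc get clusteroperator authentication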

Now you can test the account:

❯ oc login -u user1
Authentication required for https://api.ocp.home.lewisdenny.io:6443 (openshift)
Username: user1
Password: 
Login successful.

Add the user to a group with the cluster-admin role:

oc adm groups new mylocaladmins
oc adm groups add-users mylocaladmins user1
oc adm policy add-cluster-role-to-group cluster-admin mylocaladmins
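The group and its membership can be checked before logging in as the new user:

❯ oc get group mylocaladmins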

Testing it all together

❯ curl -s https://gitlab.com/lewisdenny/openshift/-/raw/main/install/nfs-provisioner-test.yaml |  oc create -f -
persistentvolumeclaim/test-nfs-provisioner created
pod/ubuntu-test-nfs created
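If everything is wired up correctly, the claim should bind and the pod should reach Running; both can be checked from the project the test resources were created in:

❯ oc get pvc test-nfs-provisioner
❯ oc get pod ubuntu-test-nfs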

Extra Information:
https://cloud.redhat.com/blog/deploy-openshift-at-the-edge-with-single-node-openshift

https://www.youtube.com/watch?v=QFf0yVAHQKc