Install a Kubernetes Cluster on RHEL 8 with Docker EE
In this article, we will walk through a multi-node Kubernetes cluster installation on RHEL 8. This time we are going to deploy with Docker EE.
Docker and containerd are the most popular container runtimes, and we have already covered containerd. For this cluster, we are going to use Docker as the container runtime, specifically Docker Enterprise Edition (Docker EE).
Prerequisites:
- Three RHEL 8 nodes, each with 2 CPU cores, 7.5 GB of RAM, and 25 GB of disk.
- Internet connectivity on all your nodes. We will be fetching Kubernetes and other required packages from the repository, so make sure that the DNF package manager is installed and can fetch packages remotely.
Installation of Kubernetes Cluster on Master-Node
Kubernetes makes use of various ports for communication and access, and these ports need to be accessible to Kubernetes and not blocked by the firewall. If your cluster behaves abnormally, configure firewall rules for these ports.
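As a sketch, the control-plane ports below are the ones listed in the upstream Kubernetes "ports and protocols" documentation for this release line; the firewalld commands assume the default RHEL 8 firewalld setup and a root shell.

```shell
# Control-plane ports from the upstream Kubernetes documentation:
# 6443 (API server), 2379-2380 (etcd), 10250 (kubelet),
# 10251 (kube-scheduler), 10252 (kube-controller-manager).
K8S_MASTER_PORTS="6443 2379-2380 10250 10251 10252"

# Open each port permanently, then reload firewalld
# (skipped if firewalld is not installed on this host).
if command -v firewall-cmd >/dev/null 2>&1; then
    for port in $K8S_MASTER_PORTS; do
        firewall-cmd --permanent --add-port="${port}/tcp"
    done
    firewall-cmd --reload
fi
```

Worker nodes need a smaller set: 10250 for the kubelet and 30000-32767 for NodePort services.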

Step 1: Log in to the node. Here I will be performing all operations as root.
[root@k8master ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.2 (Ootpa)
Install Docker EE
# Subscription to Docker EE
export DOCKERURL="https://storebits.docker.com/ee/rhel/sub-b752795b-73da-4bb0-9ca2-cea3cd04dacd"
sudo -E sh -c 'echo "$DOCKERURL/rhel" > /etc/yum/vars/dockerurl'
sudo sh -c 'echo "8" > /etc/yum/vars/dockerosversion'

# Install storage driver prerequisites
sudo yum install -y yum-utils wget device-mapper-persistent-data lvm2

# Add the Docker EE repository
sudo -E yum-config-manager --add-repo "$DOCKERURL/rhel/docker-ee.repo"

# Install Docker EE (installs the dependent container-selinux as well)
sudo yum -y install docker-ee docker-ee-cli containerd.io

# Enable and start the docker service
systemctl enable docker >/dev/null 2>&1
systemctl start docker
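One optional tweak, sketched here under the assumption of a stock dockerd configuration: the kubeadm documentation recommends the systemd cgroup driver so that Docker and the kubelet manage cgroups consistently on RHEL 8. The keys below are standard daemon.json options.

```shell
# Point Docker at the systemd cgroup driver (standard daemon.json key).
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# Restart docker to pick up the change (skipped if the unit is not running).
if systemctl is-active docker >/dev/null 2>&1; then
    systemctl restart docker
fi
```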

Install Kubernetes (Kubeadm, kubelet and kubectl) on RHEL 8
Next, you will need to add the Kubernetes repositories manually, as they do not come preconfigured on RHEL 8.
Kubeadm helps you bootstrap a Kubernetes cluster. With kubeadm, we are going to create a single-control-plane cluster.
Kubeadm also supports other cluster lifecycle functions, such as upgrades, downgrades, and managing bootstrap tokens.
# Add yum repo file for Kubernetes
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
#Install Kubernetes (kubeadm, kubelet and kubectl)
[root@k8master ~]# yum install -y kubeadm-1.17.0 kubelet-1.17.0 kubectl-1.17.0
When the installation completes successfully, enable and start the kubelet service:
[root@k8master ~]# systemctl enable kubelet
[root@k8master ~]# echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
[root@k8master ~]# systemctl start kubelet
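The --fail-swap-on=false flag above tells the kubelet to tolerate swap. The alternative, sketched below, is to disable swap entirely, which is what kubeadm expects by default; the sed pattern assumes standard /etc/fstab swap entries.

```shell
# Turn swap off immediately (prints a note if not permitted, e.g. non-root).
swapoff -a 2>/dev/null || echo "swapoff not permitted here (needs root)"

# Comment out any swap entries so the change survives a reboot.
if [ -f /etc/fstab ]; then
    sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab
fi
```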
Create a control-plane Master with kubeadm
[root@k8master]# kubeadm init --pod-network-cidr=10.244.0.0/16
--
--
--
Your Kubernetes control-plane has initialized successfully!
Next, copy the following join command and store it somewhere, as we will need to run it on the worker nodes later.
kubeadm join 10.128.0.30:6443 --token lytk96.6s4luv49sc4zp3zj \
    --discovery-token-ca-cert-hash sha256:22ea9ad42bea163d214d0914a47961117ed95b0d33f48d1efcd2447b196cf61c
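If the join command is lost, or the token has expired (kubeadm tokens default to a 24-hour TTL), a fresh one can be printed on the master at any time. A small guarded sketch:

```shell
# Prints a complete "kubeadm join ..." line with a newly created token.
JOIN_HELP_CMD="kubeadm token create --print-join-command"

# Only meaningful on a host where kubeadm is installed (the master).
if command -v kubeadm >/dev/null 2>&1; then
    $JOIN_HELP_CMD
fi
```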
Once Kubernetes has initialized successfully, you must enable your user to start using the cluster. In our scenario, we will be using the root user; a regular user with sudo privileges can run the same commands.
To start using your cluster, run the following:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Setup Your Pod Network
We are going to use Flannel as our CNI plugin, which provides the pod network.
[root@k8master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check the system pods; after some time, you should see all of them in the Running state.
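A guarded sketch of that check; listing pods in the kube-system namespace is the standard way to watch the control-plane and Flannel pods come up.

```shell
# List the control-plane and flannel pods; all should eventually be Running.
PODS_CMD="kubectl get pods -n kube-system -o wide"

# Only runs where kubectl is installed and configured (the master).
if command -v kubectl >/dev/null 2>&1; then
    $PODS_CMD
fi
```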

Now if you check the status of your master-node, it should be ‘Ready’.
[root@k8master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8master Ready master 43m v1.17.0
Join the Worker Node to the Kubernetes Cluster
You can follow the same steps as above, up through the installation of kubeadm, kubelet, and kubectl, on the worker nodes. Once everything is set up on a worker node, confirm that the Docker service is active (for example with systemctl status docker).

We now need the join token that kubeadm init generated. Copy and paste the join command on your worker node, using the copy you stored earlier.
[root@k8worker1 ~]# kubeadm join 10.128.0.30:6443 --token lytk96.6s4luv49sc4zp3zj \
> --discovery-token-ca-cert-hash sha256:22ea9ad42bea163d214d0914a47961117ed95b0d33f48d1efcd2447b196cf61c
After successful execution of the above command, you will see messages like the ones below.
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run ‘kubectl get nodes’ on the control-plane to see this node join the cluster.
Go back to your master node and verify that the worker k8worker1 has joined the cluster using the following command.
[root@k8master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8master Ready master 10m v1.17.0
k8worker1 Ready <none> 17s v1.17.0
If all the steps ran successfully, you should see k8worker1 in Ready status on the master node. At this point, you have successfully deployed a Kubernetes cluster on RHEL 8 with Docker EE.
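As a final, hypothetical smoke test (the deployment name and image here are arbitrary choices, not from the original steps), you can schedule a pod and confirm it lands on the worker:

```shell
# Create a throwaway nginx deployment and check where its pod runs.
SMOKE_DEPLOY="kubectl create deployment smoke-test --image=nginx"

if command -v kubectl >/dev/null 2>&1; then
    $SMOKE_DEPLOY
    kubectl get pods -o wide
    # Clean up afterwards:
    kubectl delete deployment smoke-test
fi
```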
For more info — https://kubernetes.io/