Deploy a Kubernetes Cluster

In order to leverage the Kubernetes platform for deploying containerized applications, we need to set up a Kubernetes cluster. In my home lab, I have used 2 Ubuntu 18.04.3 virtual machines to create a Kubernetes cluster. The Ubuntu machines should have Internet connectivity, name resolution should work between both machines, and the machines should be patched & updated.

Configuring a Kubernetes Cluster on Ubuntu:

Pre-requisites:

1. 2 X Ubuntu 18.04.3 VMs
2. Each VM should have a minimum of 2 vCPUs and 2 GB RAM
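As a quick sanity check, the vCPU and memory minimums can be verified on each VM with a short shell snippet. This is just a sketch mirroring the 2 vCPU / 2 GB minimum above, not part of any official tooling:

```shell
#!/bin/sh
# Check vCPU count and total memory against the kubeadm minimums (2 vCPUs / 2 GB).
cpus=$(nproc)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
mem_mb=$((mem_kb / 1024))
echo "vCPUs: $cpus, Memory: ${mem_mb} MB"
# Allow a little headroom below 2048 MB, since some RAM is reserved by firmware.
if [ "$cpus" -lt 2 ] || [ "$mem_mb" -lt 1900 ]; then
    echo "WARNING: this VM is below the recommended minimum for kubeadm"
fi
```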

Configuration Steps:

1. Install Docker on both the nodes:

sudo apt install docker.io

Run the below command to check the version of Docker:

docker --version

2. Enable Docker on both the nodes:

sudo systemctl enable docker

3. Install curl on both the VMs:

sudo apt install curl

4. Add the Kubernetes signing key on both the nodes:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

5. Add Xenial Kubernetes Repository on both the nodes:

sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"

6. Install kubeadm on both the nodes:

sudo apt install kubeadm

Verify the installation:

kubeadm version

7. Deploy Kubernetes Cluster:

Disable swap memory (if running) on both the nodes:

sudo swapoff -a
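Note that `swapoff -a` only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab also needs to be commented out. A sketch of that edit, demonstrated on a temporary copy with hypothetical contents so nothing real is touched:

```shell
# Work on a throwaway copy; on a real node you would edit /etc/fstab itself.
cat > /tmp/fstab.demo <<'EOF'
UUID=0a1b2c3d / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Comment out any line whose filesystem type field is "swap".
sed -i '/\sswap\s/ s/^/#/' /tmp/fstab.demo
cat /tmp/fstab.demo
```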

8. Give each node a unique hostname.

On the master node:

sudo hostnamectl set-hostname kube-master

On the worker node:

sudo hostnamectl set-hostname kube-node1
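The introduction assumed working name resolution between the two machines; if you don't run DNS in your lab, /etc/hosts entries on both nodes are enough. A sketch on a temporary copy (the IP addresses are hypothetical, adjust them to your lab):

```shell
# On a real node you would append these lines to /etc/hosts on BOTH machines.
cat > /tmp/hosts.demo <<'EOF'
192.168.0.10 kube-master
192.168.0.11 kube-node1
EOF
cat /tmp/hosts.demo
```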

9. Initialize Kubernetes on the master node:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

Note down the output of the above command; you'll need the discovery token and CA cert hash it prints to join the worker node to the cluster.
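If you redirect the `kubeadm init` output to a file, the token and CA cert hash can be pulled out of it later with grep. A sketch using a hypothetical saved output file (the token and hash values are just the examples from this post):

```shell
# Hypothetical saved output from `sudo kubeadm init ... | tee kubeadm-init.out`.
cat > /tmp/kubeadm-init.out <<'EOF'
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.10:6443 --token kjq37t.2nqh0m4y31ytnzt4 \
    --discovery-token-ca-cert-hash sha256:8da665906a2a1ff54ee50efa6c1d0088fb27399b2e649e4ca8cfe8ee115a9728
EOF

# Extract the values that the join step needs.
token=$(grep -oP -- '--token \K\S+' /tmp/kubeadm-init.out)
hash=$(grep -oP -- '--discovery-token-ca-cert-hash \K\S+' /tmp/kubeadm-init.out)
echo "token=$token"
echo "hash=$hash"
```

If the token is ever lost, running `kubeadm token create --print-join-command` on the master regenerates a complete join command.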

10. To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You can check the status of the master node by running the following command:

kubectl get nodes

11. Deploy a Pod Network through the master node:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Use the following command in order to view the status of the network:

kubectl get pods --all-namespaces

Use the following command to check the status of the master node:

kubectl get nodes

12. Create a Service Account to Access the Kubernetes Dashboard UI:

kubectl create serviceaccount dashboard -n default

Assign Cluster-Admin Privileges to the Service Account:

kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard

Generate a Bearer Token for the Service Account to Access the Dashboard UI:

kubectl get secret $(kubectl get serviceaccount dashboard -o jsonpath="{.secrets[0].name}") -o jsonpath="{.data.token}" | base64 --decode

Token:

eyJhbGciOiJSUzI1NiIsImtpZCI6Ik95R3ZaUXBCeWVYS01IQ3JfNms3eWJ0MlZrQzI3WHRH
NFhKUEtsc3VieEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJu
ZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRl
cy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImRhc2hib2FyZC10b2tlbi14Znp3
biIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbW
UiOiJkYXNoYm9hcmQiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtY
WNjb3VudC51aWQiOiIxMzk4ZWU3OS1hZjNmLTQ0YzctOTkxOC0zZTMxNWRkNWRlM2Ei
LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6ZGVmYXVsdDpkYXNoYm9hcmQifQ.LW
keqAnIsylBvH9fRL_e5Uabcdefashajshkjahsjkahsjabxabsnbavhgqw872y2hjbnasbnmasba
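The command above simply reads the service account's secret and base64-decodes its .data.token field; the decode step on its own works like this (sample value, not a real token):

```shell
# Kubernetes secrets store their data base64-encoded; this mimics the final decode step.
encoded=$(printf 'sample-bearer-token' | base64)
echo "encoded: $encoded"
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "decoded: $decoded"
```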

13. Deploying the Dashboard UI:

For those of you who didn't know, Kubernetes has a GUI-based Dashboard which can be used for almost every admin task once the cluster has been set up. Use the below steps to deploy the Dashboard UI:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml

You can now access the Dashboard using the kubectl command-line tool by running the following command:

kubectl proxy

kubectl will make the Dashboard available at:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

Access the Dashboard by entering the above URL in a web browser and log in using the Bearer Token:

Kubernetes UI Dashboard

14. Join the worker node to the cluster:

sudo kubeadm join Kube-Master_FQDN:6443 --token kjq37t.2nqh0m4y31ytnzt4 \
--discovery-token-ca-cert-hash sha256:8da665906a2a1ff54ee50efa6c1d0088fb27399b2e649e4ca8cfe8ee115a9728
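The sha256 value in the join command is a digest of the cluster CA's public key, so it can be recomputed on the master from /etc/kubernetes/pki/ca.crt if you lose the init output. The openssl pipeline below is demonstrated on a throwaway self-signed certificate so it can be run anywhere; on the master you would point it at ca.crt instead:

```shell
# Generate a throwaway CA certificate purely for demonstration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
    -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null

# Same pipeline kubeadm documents for /etc/kubernetes/pki/ca.crt:
# extract the public key, convert it to DER, and hash it.
hash=$(openssl x509 -pubkey -noout -in /tmp/demo-ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
```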

Use the following command on the master node to check the status of all the cluster nodes:

kubectl get nodes

If you have followed all the above steps, the output should be something like this:

Pro-Tip:
Some people like to use PyCharm for writing YAML files for Kubernetes, but I like to use Visual Studio Code for all my command-line stuff. You can install the Remote-SSH extension in Visual Studio Code and connect to your Kubernetes cluster using Visual Studio Code.

Enjoy your newly built Kubernetes Cluster. 🙂

Two-Factor Authentication for vRealize Automation


In vRealize Automation 7.x, VMware Identity Manager (vIDM) is embedded in the vRealize Automation appliance. Even though the UI for vIDM is disabled, we can still use its capability to let vRealize Automation authenticate against a RADIUS server. There is a very informative VMware blog on configuring vRA 7 for two-factor authentication posted by Jon Schulman back in February 2016, but a few things have changed since then.

I had to configure two-factor authentication for one of my customers, and that was when I realized that a few modifications are required to configure it in vRA. To provide a brief overview: we will use an Ubuntu machine and install Google Authenticator and a RADIUS server on it. The Ubuntu machine will integrate with Active Directory for authentication, and we will then configure vRealize Automation to use the RadiusAuthAdapter to authenticate users with "AD Password + Google Authenticator Passcode".

Configuring Two-Factor Authentication:

Pre-requisites:

1. 1 Ubuntu 18.04.3 VM
2. VM should have minimum of 2 vCPUs and 4 GB RAM
3. Active Directory
4. vRealize Automation 7.X
5. DNS Record for Ubuntu Machine

1. Configure Ubuntu Machine:

I am using an Ubuntu VM with 2 vCPUs, 4 GB of memory and a 20 GB HDD for this demo.

Patch/Update the Ubuntu machine and Install open-vm-tools using the below commands:

sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get install open-vm-tools

Install NTP and Open-SSH on the Ubuntu machine using the below commands:

sudo apt-get install ntp
sudo apt-get install openssh-server

Now would be a good time to verify DNS resolution for Active Directory and vRealize Automation.

Download the latest PowerBroker Identity Services package (pbis-open-9.1.0.551.linux.x86_64.deb.sh at the time of writing) from the BeyondTrust GitHub repo, make it executable and install it:

chmod +x ./Desktop/pbis-open-9.1.0.551.linux.x86_64.deb.sh
sudo ./Desktop/pbis-open-9.1.0.551.linux.x86_64.deb.sh

Once PowerBroker Identity Services has been installed, there is one more thing we need to do before we add the Ubuntu machine to Active Directory: uninstall the Avahi daemon.

sudo apt-get remove avahi-daemon

Now we can add our Ubuntu machine to Active Directory using the below command:

sudo /opt/pbis/bin/domainjoin-cli join vmlab.local administrator@vmlab.local

After you have successfully joined the system to Active Directory, login to Active Directory and verify that the Computer Object has been created.

Reboot your Ubuntu machine. Once the machine has been restarted, you should be able to login to the machine using Active Directory Credentials.

Now we will set the default domain, login shell, home directory template and domain shortname prefix for our Active Directory domain on the Ubuntu server by running the below commands:

sudo /opt/pbis/bin/config AssumeDefaultDomain true
sudo /opt/pbis/bin/config LoginShellTemplate /bin/bash
sudo /opt/pbis/bin/config HomeDirTemplate %H/%U
sudo /opt/pbis/bin/config UserDomainPrefix VMLAB

Now we need to add the below lines to the /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf file to allow users to log in to the Ubuntu machine with Active Directory credentials from the Ubuntu login screen:

sudo nano /usr/share/lightdm/lightdm.conf.d/50-ubuntu.conf


allow-guest=false
greeter-show-manual-login=true

Install Freeradius Server on your Ubuntu machine using the below command:

sudo apt install freeradius

Edit the file /etc/freeradius/3.0/radiusd.conf and replace the below entries:
Replace:
user = freerad
group = freerad

With:
user = root
group = root
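If you prefer not to make the edit by hand, the same substitution can be scripted with sed. It is demonstrated here on a sample copy so the real config stays untouched (the two-line sample mimics only the relevant stanza of radiusd.conf):

```shell
# Sample of the relevant lines; the real file is /etc/freeradius/3.0/radiusd.conf.
cat > /tmp/radiusd.conf.demo <<'EOF'
user = freerad
group = freerad
EOF

# Switch the daemon's user and group to root, as the PAM setup requires.
sed -i -e 's/user = freerad/user = root/' \
       -e 's/group = freerad/group = root/' /tmp/radiusd.conf.demo
cat /tmp/radiusd.conf.demo
```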

Run the below command to enable PAM modules:
ln -sf /etc/freeradius/3.0/mods-available/pam /etc/freeradius/3.0/mods-enabled/pam


Comment out all the lines in the file /etc/pam.d/radiusd and add the below lines:

auth requisite pam_google_authenticator.so forward_pass
account required pam_lsass.so use_first_pass

Add the below entry to the file /etc/freeradius/3.0/users:

DEFAULT Auth-Type := PAM



Modify the file /etc/freeradius/3.0/clients.conf to make an entry for vRealize Automation server in the Ubuntu machine:

client vra76.vmlab.local {
ipaddr = 192.168.0.55
secret = VMwar31!
shortname = vra76
}

Edit the file /etc/freeradius/3.0/sites-enabled/default and uncomment pam to enable Pluggable Authentication Modules:

Restart the FreeRADIUS service:

sudo systemctl restart freeradius

2. Install Google Authenticator:

Once your machine has been set up to authenticate using Active Directory, we will proceed with installing Google Authenticator and generating a Google Authenticator token, which users will use to generate passcodes with the Google Authenticator mobile application.

Use the below command to install Google Authenticator:

sudo apt install libpam-google-authenticator

We can now begin enrolling our Active Directory users in the Google Authenticator application on their mobile phones. Advise the users to visit their respective app stores (Google Play Store & Apple's App Store) to download and install the Google Authenticator app.



Assume the identity of the user you want to enable for login and run the google-authenticator command:

su demo@vmlab.local
google-authenticator

Type y to generate a time-based token; a QR code and secret key will be generated for the user. The user can scan the QR code with the Google Authenticator app, or enter the secret key to import the token into the app:

Five emergency scratch codes are also generated for the user; these are one-time codes to be used when the user needs to authenticate but doesn't have their mobile phone handy. Answer the remaining questions as per your environment. I would recommend answering the 3rd question with "yes", as it will allow a token to be used for up to 4 minutes; otherwise each token expires within 30 seconds.

Google Authenticator doesn't allow you to take a screenshot of an active token, hence the stock image, but a successfully imported token in Google Authenticator looks like this:

3. Configure vRealize Automation to Authenticate using Radius:

Login to vRealize Automation with Tenant Administrator credentials and click on the Connector under Directories Management section:

Click on the Auth Adapters and click on the RadiusAuthAdapter to enable Radius Authentication.

Enable the Radius adapter and enter the details of the Ubuntu machine. Set the Authentication Type to PAP, enter the number of attempts to the Radius server and the timeout value as per your requirements, and enter the shortname for your domain followed by a backslash "\":

If you want to enable high availability, you can set up another Ubuntu machine in a similar fashion, but for our demo we will not be enabling a secondary server. Enter a passphrase hint to tell users to enter their password followed by the passcode, then click Save to save the settings and enable the Radius Auth Adapter.

Click on Network Ranges and create a new network range. All clients with IP addresses in this range will be required to authenticate using Password + Passcode (two-factor authentication). If you are using a distributed vRealize Automation deployment, the start and end IP addresses of the network range should be the vRA portal VIP:

Now the last step in the configuration is to set up a policy rule to enforce Radius authentication. Create a policy rule by selecting the newly created network range, select All Device Types to access the content from, set the authentication method to Radius and click OK.

Drag and drop the newly created Network Range to the top of the list and click Save to save the configuration:

Now when you attempt to log in to the vRealize Automation portal from a client with an IP address in the defined network range, you'll be prompted with the new passphrase hint:

In order to log in to the vRA portal, enter your password followed by the Google Authenticator passcode:

BOOM!! You have managed to configure two-factor authentication for vRealize Automation.