If you are familiar with minikube, a lightweight implementation of the Kubernetes ecosystem, then you may have also heard of Minishift. Delivered as a single utility and designed as a development platform, Minishift is the all-in-one implementation of Red Hat OpenShift from Red Hat OKD (Origin Kubernetes Distribution, formerly called OpenShift Origin). It is highly versatile and can be deployed on a variety of platforms:
All of these examples have one thing in common: the Minishift utility deploys the ecosystem as an image onto a VM created from the host machine, where the host machine is not itself a virtual machine. When the host machine is a virtual machine (highlighted in bold above), the underlying hardware and virtualization platform must support nested virtualization: the ability to run one VM within another, or to run a hypervisor within a VM. This capability isn't always available.
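Before committing to a host, you can get a rough read on whether nested virtualization is enabled. The snippet below is a sketch for Linux hosts using KVM; the parameter path depends on whether the CPU is Intel or AMD, and other hypervisors expose this differently.

```shell
# Check whether the loaded KVM module reports nested virtualization
# support. "Y" or "1" means nested virtualization is enabled.
if [ -f /sys/module/kvm_intel/parameters/nested ]; then
    NESTED=$(cat /sys/module/kvm_intel/parameters/nested)
elif [ -f /sys/module/kvm_amd/parameters/nested ]; then
    NESTED=$(cat /sys/module/kvm_amd/parameters/nested)
else
    NESTED="unavailable (KVM module not loaded)"
fi
echo "Nested virtualization: ${NESTED}"
```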
Installation can become challenging when you don't have access to the host machine. This article walks through the process of installing Minishift directly onto an already existing VM. Although we discuss this in the context of an AWS Linux VM, the process can be applied to any environment.
We are going to set up an environment comprising two Linux VMs: the first is the control node, from which the installation will take place; the second hosts the Minishift environment. For compatibility reasons, we want the Minishift node to be either RHEL or one of its derivatives (CentOS, Fedora, Oracle Linux, etc.).
For the purposes of this tutorial, we are going with CentOS for the Minishift node and Amazon Linux 2 for the control node. You can use this CloudFormation template (CFT) to deploy the necessary AWS resources. Keep in mind that this template can only be run in us-east-1, so if you want to set this up in a different region, either modify the template with the appropriate AMIs or create the instances manually.
$ sudo yum -y update
$ ssh-keygen
$ sudo systemctl restart sshd
$ ssh root@<ip_address>
$ wget https://github.com/minishift/minishift/releases/download/v1.34.2/minishift-1.34.2-linux-amd64.tgz
$ tar zxvf minishift-1.34.2-linux-amd64.tgz
$ export PATH=$PATH:<path to minishift utility>
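The export above only lasts for the current shell session. One way to make it persistent is to append the same line to your shell profile; the path below assumes the tarball was extracted into the home directory, so adjust it to your actual location.

```shell
# Put the minishift binary on PATH for this shell and for future
# logins. MINISHIFT_DIR assumes the archive was extracted into the
# home directory; adjust to match where you unpacked it.
MINISHIFT_DIR="$HOME/minishift-1.34.2-linux-amd64"
export PATH="$PATH:$MINISHIFT_DIR"
echo "export PATH=\"\$PATH:$MINISHIFT_DIR\"" >> "$HOME/.bashrc"
```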
$ minishift config set vm-driver generic
$ minishift config view
$ minishift start --remote-ipaddress <minishift_ip> --remote-ssh-user root --remote-ssh-key /home/ec2-user/.ssh/id_rsa
$ minishift status
https://<minishift_node_ip>:8443/console
$ export PATH=$PATH:<path to oc utility>
$ oc status
Here is a simple “Hello, World!” example that you can use to test the installation.
$ oc new-app openshift/hello-openshift
Start by creating a new file that will hold the container manifest:
$ vi hello-openshift-deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-openshift
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-openshift
    spec:
      containers:
      - name: hello-openshift
        image: openshift/hello-openshift:latest
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
            memory: 256Mi
          limits:
            cpu: 300m
            memory: 512Mi
Deploy the file:
$ oc create -f hello-openshift-deployment.yml
Issue an oc get pods to see a list of active pods. If you can see the hello-openshift app, then you have succeeded in setting up your environment.
If you’ve been playing with Kubernetes for a while, then you know that setting the appropriate resource specs (highlighted above), and keeping them updated for a specific container over its lifecycle, can prove challenging. The difficulty lies not just in knowing what to set them to in the first place, but also in coming back and manually implementing changes. In the age of automation, manually keeping these specs up to date seems unnecessary. Is there a better way?
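Short of full automation, `oc set resources` at least lets you adjust a deployment's requests and limits in place, without editing and re-applying the manifest by hand. The sketch below mirrors the values from the manifest above; the guard simply makes it a no-op on a machine where `oc` isn't installed.

```shell
# Update the resource specs of the hello-openshift deployment in
# place. Values mirror the tutorial's manifest; the command only
# runs when oc is available on PATH.
RESOURCES_UPDATED="skipped (oc not on PATH)"
if command -v oc >/dev/null 2>&1; then
    oc set resources deployment hello-openshift \
        --requests=cpu=250m,memory=256Mi \
        --limits=cpu=300m,memory=512Mi && RESOURCES_UPDATED="updated"
fi
echo "resource spec change: ${RESOURCES_UPDATED}"
```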
Check out our upcoming tutorial, ‘Scaling OpenShift Container Resources using Ansible,’ where I will show you how to use an automation technology like Ansible to continuously and automatically keep your resource specifications optimized.