Building a Kubernetes Blueprint for vRealize Automation - Intro


In my efforts to learn more in the Cloud Native Apps space, I recently attended DockerCon 2018 and have been playing around with Docker a bit to get the basics down. Since VMware has Pivotal Container Service (PKS) for self-hosted, enterprise-ready Kubernetes and VMware Kubernetes Engine (VKE) as an upcoming SaaS offering, I figured it was time to start getting a feel for Kubernetes.

Docker Training

For the first two days of DockerCon, I opted for the Docker Fundamentals training as an add-on. I'm glad I did, because the hands-on exercises provided some great experience and taught me several new things. It was nice to have a controlled introduction to Docker, Swarm, and Kubernetes. The VMs used for the training were pre-configured with all the necessary tools, so no time was wasted there - which also meant that when it came time to set up my own "test" environment, I wasn't certain what needed to be installed. A few folks in the class had pre-registered for the certification exam before they had even completed the training; needless to say, the fundamentals-level training and their limited hands-on experience left them under-prepared, and they failed. I'm glad I anticipated as much and didn't bother with the certification exam!

My Goal

My initial goal for this blueprint was to provide a simple-to-consume, single-master, multi-node blueprint offering a choice between Flannel and Calico as the networking plug-in, so that I could re-create some of the hands-on exercises from the Docker Fundamentals training - in particular, using a NodePort so that every node in the cluster would serve the same content on the same port, regardless of which node was actually running the pods providing that content! Sounds simple enough...
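To make the NodePort behavior concrete, the exercise looks roughly like this with kubectl. The deployment name `hello-web` and the nginx image are just examples, and the commands assume a working cluster with kubectl configured:

```shell
# Create a simple deployment and expose it through a NodePort Service
kubectl create deployment hello-web --image=nginx
kubectl expose deployment hello-web --type=NodePort --port=80

# See which port was allocated from the NodePort range (30000-32767 by default)
kubectl get service hello-web

# With a healthy pod network, *every* node answers on that port,
# regardless of which node is actually running the nginx pod:
curl http://<any-node-ip>:<allocated-node-port>/
```

When the pod network is misconfigured, the curl above is exactly where things fall apart - some nodes answer and others time out.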

Existing Blueprint?

To save myself some time, I did a bit of Googling, checked the VMware Sample Exchange, and came across an older post from Ryan Kelly: http://www.vmtocloud.com/the-kubernetes-blueprint-for-vra7-is-here/. I didn't even see this post by Mark Brookfield - https://virtualhobbit.com/2018/02/05/deploying-kubernetes-with-vrealize-automation/ - until I was done with my initial work. Each of those articles took a different approach from mine.

These existing efforts are good starts, and I like what Mark did with the Dashboard bit, but neither fit my needs, so I opted to build from the ground up with CentOS 7 as my base.

Challenges

I feel like I'm a typical IT person - I try to get things up and running quickly with minimal reliance on "Documentation" ... It helps to gauge how intuitive and admin-friendly the stuff is. Sometimes this works rather well, other times, it causes me unnecessary heartache. Case in point: Networking for Kubernetes. When you "roll your own" Kubernetes cluster, you need to choose one of the many different networking/policy plug-ins, initialize your cluster, and apply the plug-in. Well, each plug-in has guidance on the pre-defined/preferred Pod Network CIDR that should be specified when initializing the Kubernetes cluster.
For example, Calico prefers _192.168.0.0/16_ while Flannel and some others document _10.244.0.0/16_, and others simply specify that you should be sure to include the *--pod-network-cidr=* switch when running _kubeadm init_. Since I didn't read the docs, I simply initialized my cluster, searched around a bit, plugged in one of the options, and tested a few deployments/pods in the cluster. At first glance, things *looked* OK, but when I applied a NodePort to the deployment, I got inconsistent results: either one or none of the nodes would serve up the desired content!
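In other words, the CIDR you pass to _kubeadm init_ has to line up with the plug-in you apply afterward. A sketch of the two combinations mentioned above (run on the intended master node; if you already initialized with the wrong CIDR, _kubeadm reset_ lets you start over):

```shell
# Calico: its manifests default to a pod network of 192.168.0.0/16
kubeadm init --pod-network-cidr=192.168.0.0/16

# Flannel: its stock manifest assumes 10.244.0.0/16
kubeadm init --pod-network-cidr=10.244.0.0/16
```

Pick one - the point is that the CIDR and the plug-in manifest must agree, or pod-to-pod routing across nodes breaks in the subtle ways described above.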

Results

[Image: media_1533750659448.png]

I'm satisfied with the end result of my work and hope that my next post on the detailed build-out helps others learn more about this as well. It gives us an opportunity to learn some of the basics of Kubernetes and the challenges of running it as a standalone cluster vs. an enterprise offering such as PKS and VKE from VMware, or the offerings from Google, Microsoft, Amazon, Red Hat, etc.

Blueprint Preview

[Image: media_1533750784047.png]

Request Form

[Image: media_1533750944436.png]

Network Plugin Options

[Image: media_1533750996631.png]

The drop-down is simple text - nothing special or dynamic here. The software component for the K8s master takes this string as an input, then pulls and applies the appropriate YAML to the cluster.
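A rough sketch of what that software-component logic might look like. The variable name and manifest URLs are illustrative (manifest locations vary by plug-in version), not the exact script from the blueprint:

```shell
#!/bin/bash
# NETWORK_PLUGIN carries the string selected on the vRA request form
NETWORK_PLUGIN="$1"   # "Calico" or "Flannel"

case "$NETWORK_PLUGIN" in
  Calico)
    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    ;;
  Flannel)
    kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    ;;
  *)
    echo "Unknown network plug-in: ${NETWORK_PLUGIN}" >&2
    exit 1
    ;;
esac
```

Keeping the form value a plain string means the blueprint only has to branch on it once, here, when the master applies the network manifest.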

Nodes joined to the cluster and routing between nodes ready

[Image: media_1533751632570.png]

Based on the earlier notes about my challenges, you can tell from this screenshot that I chose "Calico" for this deployment: each node in the cluster has routes to 192.168.x.x networks. Scaling out the nodes in vRA results in the new node(s) being automatically joined to the cluster and the additional routes being created.
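To verify the same thing on your own cluster, a quick check (run from the master, assuming kubectl is configured) is:

```shell
# Confirm all nodes have joined and are Ready
kubectl get nodes -o wide

# With Calico, each node learns routes into the other nodes' pod subnets;
# they show up as 192.168.x.x entries in the host routing table:
ip route | grep 192.168
```

One route per remote node's pod subnet is the healthy state; a missing route usually means that node's networking agent isn't running.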

Where to find the blueprint:

The blueprint has been released on VMware Code's Sample Exchange under vRA Blueprints.
Visit Sample Exchange
