
Test Cluster Deployment


Test Cluster Architecture

THIS DOCUMENT IS DEPRECATED, SEE Alpha Cluster Deployment

This document describes an architecture and deployment procedure for a Kubernetes cluster intended for testing purposes.

Overview

The Kubernetes architecture documentation should always be considered the authoritative reference, but this document briefly covers the Kubernetes components most pertinent to the test cluster deployment.

Containers in a Kubernetes cluster are deployed to and run on one of any number of nodes. The deployment of containers to the cluster is orchestrated by one or more master nodes. At a minimum, every Kubernetes cluster must deploy one of each class of infrastructure.

Container deployment to Kubernetes clusters is intended to be node-unaware: a container should not care which node it is actually scheduled to run on. This presents a practical challenge for containers that depend on persistent storage; should such a container need to change nodes for any reason, its persistent storage must travel with it. To help solve this problem, Kubernetes offers PersistentVolume and PersistentVolumeClaim resources for assigning persistent storage to containers. This test cluster calls for the deployment of an NFS server to provide backing storage for those cluster resources.
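As an illustration, the following sketch creates a PersistentVolume backed by an NFS export together with a matching PersistentVolumeClaim. The server address, export path, and resource names are hypothetical placeholders; adjust them to match your NFS server and exports.

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-configs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Hypothetical NFS server address and ZFS-backed export path
    server: 192.168.0.10
    path: /container-volumes/nginx-configs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-configs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF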

Details

This testing cluster essentially calls for the deployment of three infrastructure classes:

  1. Kubernetes master/control plane server (1)
  2. Kubernetes node/worker server (1 or more)
  3. NFS storage server (1 or more)

Kubernetes master/control plane server

The process described in this document performs Kubernetes master server deployment via the Ansible playbooks provided by the Kubernetes project. Kubernetes master servers are deployed on the Fedora operating system for compatibility with those playbooks.

Kubernetes node/worker server

As with master servers, the process covered by this document for deploying Kubernetes node servers utilizes the Ansible playbooks provided by the Kubernetes project, and as such is performed on top of a running Fedora server.

Post-deployment considerations

SELinux

By default, containers in the kube-system namespace (and potentially any container using external volumes) will be unable to start due to SELinux permissions on their external volumes. On the testing cluster this was handled by simply running setenforce 0, but this approach is strongly discouraged in production.
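For reference, the commands below check the current SELinux mode and switch it to permissive. Note that setenforce does not persist across reboots; making the change permanent would require editing /etc/selinux/config, which is deliberately omitted here given the caveat above.

getenforce     # report the current SELinux mode (Enforcing/Permissive/Disabled)
setenforce 0   # switch to permissive mode until the next reboot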

NFS storage server

The NFS storage server deployment process described by this document is performed manually. The server is provisioned on top of the openSUSE Leap 42.2 distribution for ease of access to ZFS kernel modules and utilities.

Deployment

Requirements

In addition to the aforementioned operating system requirements for each infrastructure class, the deployment machine (the machine from which the Ansible playbooks will be run) must have the following tools installed:

  • Ansible 1.9
  • python-netaddr
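On a Fedora deployment machine, for example, these could be installed as follows; the package names are assumed for Fedora's repositories, and since distribution repositories may ship a newer Ansible, pip can be used to pin the 1.9 series instead.

dnf install ansible python-netaddr

# Alternatively, pin the Ansible 1.9 series with pip
pip install 'ansible<2.0' netaddr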

Steps

Kubernetes nodes

The Kubernetes nodes should be deployed using the playbooks and instructions provided by the Kubernetes contrib Ansible repository, as sketched below.
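A rough sketch of that process follows. The repository layout and entry-point script have changed over time, and the hostnames and inventory contents here are hypothetical, so consult the contrib/ansible README for the current procedure.

git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible

# Hypothetical inventory listing the master, etcd, and node hosts
cat > inventory <<EOF
[masters]
kube-master.example.com

[etcd]
kube-master.example.com

[nodes]
kube-node-1.example.com
kube-node-2.example.com
EOF

# Assumed entry point; check the repository README for the current script name
./setup.sh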

NFS Server

After deploying the base operating system, the following provisioning steps must be performed.

1. Install ZFS and NFS

zypper ar obs://filesystems filesystems
zypper in zfs-kmp-default
zypper in zfs
zypper in yast2-nfs-server

2. Load ZFS kernel module, and set module to load at boot

modprobe zfs
echo zfs > /etc/modules-load.d/zfs.conf
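To confirm the module is loaded before creating any pools:

lsmod | grep zfs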

3. Create ZFS pool for container volumes

  • Replace /dev/sdX with the block device to dedicate to the pool; if necessary, create a raw disk image file to use as the zpool block device instead (see the sketch below)
zpool create container-volumes /dev/sdX
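For example, a sparse image file can stand in for a dedicated disk; the path and size here are illustrative, and file-backed pools are suitable for testing only.

truncate -s 50G /var/lib/container-volumes.img   # create a 50 GB sparse image file
zpool create container-volumes /var/lib/container-volumes.img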

4. Enable and start the NFS server

systemctl enable nfs-server
systemctl start nfs-server

5. Create and share ZFS volumes to be exported for containers

  • Replace 192.168.0.0/16 with the network from which your Kubernetes workers will connect to the NFS server
zfs create -o quota=5GB -o sharenfs='rw=192.168.0.0/16' container-volumes/nginx-configs
zfs share -a

6. Verify the volumes are being shared

showmount -e localhost
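Optionally, perform a manual test mount from any machine on the allowed network to confirm that workers will be able to mount the export (the server address is a placeholder):

mount -t nfs 192.168.0.10:/container-volumes/nginx-configs /mnt
umount /mnt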
