Test Cluster Deployment
THIS DOCUMENT IS DEPRECATED, SEE Alpha Cluster Deployment
This document describes an architecture and deployment procedure for a Kubernetes cluster which can be used for testing purposes.
The authoritative reference on Kubernetes architecture is the Kubernetes architecture documentation, but this document briefly covers the Kubernetes components most pertinent to the test cluster deployment.
Containers in a Kubernetes cluster are deployed to and run on one of any number of nodes. Deployment of containers to the cluster is orchestrated by one of any number of master nodes. At a very minimum, every Kubernetes cluster must deploy one server of each of these classes.
Container deployment to Kubernetes clusters is intended to be node-unaware: a container shouldn't care which node it is actually scheduled to run on. This presents a practical challenge for containers which depend on persistent storage; should such a container need to change nodes for any reason, its persistent storage must travel with it. To help solve this problem, Kubernetes offers PersistentVolume and PersistentVolumeClaim resources to assign persistent storage to containers. This test cluster calls for the deployment of an NFS server to provide backing storage for those cluster resources.
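As a minimal sketch of how that fits together, an NFS-backed PersistentVolume and a matching PersistentVolumeClaim can be created with kubectl once the NFS server described later in this document is exporting a path. The server address below is hypothetical, and the export path assumes the default mountpoint of the ZFS dataset created in the NFS section; adjust both for your environment.
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-configs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Hypothetical NFS server address; path is the ZFS dataset's default mountpoint
    server: 192.168.0.10
    path: /container-volumes/nginx-configs
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-configs
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
EOF
A pod can then reference the claim by name in its volumes section, regardless of which node it is scheduled to.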
This test cluster calls for the deployment of three infrastructure classes:
- Kubernetes master/control plane server (1)
- Kubernetes node/worker server (1 or more)
- NFS storage server (1 or more)
The process described in this document performs Kubernetes master server deployment via the Ansible playbooks provided by the Kubernetes project. Kubernetes master servers are deployed with the Fedora operating system for compatibility with those playbooks.
Like master servers, Kubernetes node servers are deployed using the Ansible playbooks provided by the Kubernetes project and, as such, run on top of a Fedora server.
By default, containers in the kube-system namespace (and potentially any container using external volumes) will be unable to start due to SELinux permissions on their external volumes. On the testing cluster this was handled by simply running setenforce 0, but this is highly discouraged in production.
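For reference, enforcement can be turned off immediately and kept off across reboots as follows; the sed line assumes the stock Fedora /etc/selinux/config layout, and again, this is only appropriate for a throwaway test cluster.
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config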
The NFS storage server deployment process described by this document is performed manually. The server is provisioned on top of the openSUSE Leap 42.2 distribution for ease of access to ZFS kernel modules and utilities.
In addition to the aforementioned operating system requirements for each infrastructure class, the deployment machine (the machine from which the Ansible playbooks will be run) must have the following tools installed:
- Ansible 1.9
- python-netaddr
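One way to satisfy these requirements on the deployment machine is with pip, pinning Ansible below 2.0; this is only a sketch, and the distribution's own ansible and python-netaddr packages work just as well if they provide a matching Ansible version.
pip install 'ansible>=1.9,<2.0' netaddr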
The Kubernetes nodes should be deployed using the playbooks and instructions provided by the Kubernetes contrib Ansible playbooks.
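A rough sketch of that workflow follows, assuming the playbooks are checked out from the kubernetes/contrib repository. The hostnames are placeholders, and the exact inventory layout and playbook entry point vary between contrib versions, so treat the contrib/ansible README as authoritative.
git clone https://github.com/kubernetes/contrib.git
cd contrib/ansible
- Describe the master, etcd, and node hosts in an Ansible inventory, for example:
[masters]
kube-master.example.com

[etcd]
kube-master.example.com

[nodes]
kube-node-01.example.com
kube-node-02.example.com
- Then run the cluster deployment playbook or script as documented in the contrib/ansible README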
Upon deploying the base operating system on the NFS storage server, the following provisioning steps must be performed.
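- Add the openSUSE filesystems repository, then install the ZFS kernel module, the ZFS userspace utilities, and NFS server support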
zypper ar obs://filesystems filesystems
zypper in zfs-kmp-default
zypper in zfs
zypper in yast2-nfs-server
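- Load the ZFS kernel module now and ensure it is loaded on every boot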
modprobe zfs
echo zfs > /etc/modules-load.d/zfs.conf
- If necessary, create a raw disk image file to use as the zpool block device
zpool create container-volumes /dev/sdX
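If no spare block device is available, a file-backed vdev can stand in for /dev/sdX; the path and size here are only an example, and a real disk should be preferred for anything beyond throwaway testing.
truncate -s 20G /var/lib/container-volumes.img
zpool create container-volumes /var/lib/container-volumes.img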
systemctl enable nfs-server
systemctl start nfs-server
- Replace 192.168.0.0/16 with the network your kubernetes workers will be connecting to the NFS server from
zfs create -o quota=5GB -o sharenfs='rw=192.168.0.0/16' container-volumes/nginx-configs
zfs share -a
showmount -e localhost
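To confirm that a worker will actually be able to reach the export, the share can be mounted by hand from any Kubernetes node (the NFS server address is hypothetical; nfs-utils must be installed on the node):
mount -t nfs 192.168.0.10:/container-volumes/nginx-configs /mnt
umount /mnt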