Setting up a development environment with Cluster API using Kind
Airship is a collection of loosely coupled, but interoperable open source tools that declaratively automates cloud provisioning. Airship is designed to make your cloud deployments simple, repeatable, and resilient.
The primary motivation for Airship 2.0 is the continued evolution of the control plane; by aligning with maturing CNCF projects, we can make Airship 2.0:
- More capable
- More secure
- More resilient
- Easier to operate
One such project is Cluster API, a Kubernetes project that brings declarative, Kubernetes-style APIs to cluster creation, configuration, and management. It provides optional, additive functionality on top of core Kubernetes to manage the lifecycle of a Kubernetes cluster.
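To make "declarative, Kubernetes-style APIs" concrete: once the Cluster API components are installed in a management cluster, a cluster itself is just another object you describe in YAML and apply with kubectl. Here is a minimal, illustrative sketch (the name and CIDR are placeholders; the real manifests in this walkthrough are generated by clusterctl, and the apiVersion reflects the v0.3.x / v1alpha3 releases used below):

```sh
# Illustrative only: a cluster declared like any other Kubernetes object
kubectl apply -f - <<EOF
apiVersion: cluster.x-k8s.io/v1alpha3
kind: Cluster
metadata:
  name: example-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
EOF
```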
In a previous blog post, Alan Meadows and Rodolfo Pacheco discussed the evolution of Airship 1.0 to Airship 2.0 and the relationship between Drydock and Cluster API. It's an interesting read, looking at how Cluster API will be used by Airship 2.0.
Today I will provide the documentation and my tested, step-by-step directions for creating a Cluster API development environment using Kind. This development environment will allow you to deploy virtual nodes as Docker containers in Kind, test out changes to the Cluster API codebase, and gain a better understanding of how Airship works at the component level to deploy Kubernetes clusters. These steps have all been tested in a virtual machine with the following configuration:
- Hypervisor: VirtualBox 6.1
- Operating System: Ubuntu 18.04 Desktop
- Memory: 8 GB
- Processor: 6 CPUs
- Networking: NAT
- Proxy: N/A
To begin, create a new virtual machine with the above configuration.
Next, we will work through the Cluster API Quick Start documentation, using the Docker provider and leveraging Kind to create clusters. What follows is a consolidated set of instructions from these resources.
- Update package manager and install common packages

```sh
sudo apt-get update && sudo apt-get dist-upgrade -y
sudo apt-get install -y gcc python git make
```
- Install golang (Documentation)

```sh
wget https://dl.google.com/go/go1.14.1.linux-amd64.tar.gz
sudo tar -C /usr/local -xzf go1.14.1.linux-amd64.tar.gz
rm go1.14.1.linux-amd64.tar.gz
```
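Before moving on, you can sanity-check the toolchain by calling the binary with its full path (the /etc/profile step below adds /usr/local/go/bin to your PATH permanently):

```sh
# Should report go1.14.1 for the tarball above
/usr/local/go/bin/go version
```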
- Install docker (Documentation)

```sh
sudo apt-get remove docker docker-engine docker.io containerd runc
sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo usermod -aG docker $USER
```
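If you want to verify the Docker installation right away, before the usermod group change takes effect at your next login, a quick smoke test with sudo works:

```sh
# Runs Docker's standard test image and removes the container afterwards
sudo docker run --rm hello-world
```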
- Update /etc/profile with necessary environment variables

```sh
sudo bash -c 'cat <<EOF >> /etc/profile
export PATH=\$PATH:/usr/local/go/bin
export DOCKER_POD_CIDRS=172.17.0.0/16
export DOCKER_SERVICE_CIDRS=10.0.0.0/24
export DOCKER_SERVICE_DOMAIN=cluster.local
EOF'
```
- Log out and log back in, or reboot your machine, for the user group and profile changes to take effect

```sh
sudo reboot now
```
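Once you're logged back in, confirm the profile changes took effect. The DOCKER_* variables matter later: they are the values clusterctl substitutes into the Docker provider's cluster template when rendering the workload cluster manifest:

```sh
# go should now resolve via /usr/local/go/bin
go version
# Should print the pod/service CIDRs and service domain configured above
env | grep ^DOCKER_
```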
- Install kustomize (Documentation)

```sh
git clone https://github.com/kubernetes-sigs/kustomize.git
cd kustomize/kustomize
go install .
sudo mv ~/go/bin/kustomize /usr/local/bin/
cd ~
```
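A quick check that the freshly built binary landed on your PATH:

```sh
kustomize version
```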
- Install kind (Documentation)

```sh
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/v0.7.0/kind-$(uname)-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
```
- Install kubectl (Documentation)

```sh
curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```
- Install clusterctl (Documentation)

```sh
curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v0.3.2/clusterctl-linux-amd64 -o clusterctl
chmod +x ./clusterctl
sudo mv ./clusterctl /usr/local/bin/clusterctl
```
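With the three downloaded binaries in place, it's worth a quick sanity check that everything resolves from your PATH at the expected versions:

```sh
kind version
kubectl version --client
clusterctl version
```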
- Set up Cluster API using the Docker provider (Documentation)

```sh
git clone https://github.com/kubernetes-sigs/cluster-api.git
cd cluster-api

# Tell the local-overrides script which providers to generate manifests for
cat > clusterctl-settings.json <<EOF
{
  "providers": ["cluster-api","bootstrap-kubeadm","control-plane-kubeadm","infrastructure-docker"],
  "provider_repos": []
}
EOF

# Build the Docker provider (CAPD) manager image and generate its manifests
make -C test/infrastructure/docker docker-build REGISTRY=gcr.io/k8s-staging-capi-docker
make -C test/infrastructure/docker generate-manifests REGISTRY=gcr.io/k8s-staging-capi-docker
./cmd/clusterctl/hack/local-overrides.py

# Point clusterctl at the locally built Docker provider components
cat > ~/.cluster-api/clusterctl.yaml <<EOF
providers:
- name: docker
  url: $HOME/.cluster-api/overrides/infrastructure-docker/latest/infrastructure-components.yaml
  type: InfrastructureProvider
EOF

# The management cluster mounts the host Docker socket so CAPD can create node containers
cat > kind-cluster-with-extramounts.yaml <<EOF
kind: Cluster
apiVersion: kind.sigs.k8s.io/v1alpha3
nodes:
- role: control-plane
  extraMounts:
    - hostPath: /var/run/docker.sock
      containerPath: /var/run/docker.sock
EOF

cp cmd/clusterctl/test/testdata/docker/v0.3.0/cluster-template.yaml ~/.cluster-api/overrides/infrastructure-docker/v0.3.0/

# Create the management cluster and load the CAPD manager image into it
kind create cluster --config ./kind-cluster-with-extramounts.yaml --name clusterapi
kind load docker-image gcr.io/k8s-staging-capi-docker/capd-manager-amd64:dev --name clusterapi

# Install the core, bootstrap, control-plane, and infrastructure providers
clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm:v0.3.0 --control-plane kubeadm:v0.3.0 --infrastructure docker:v0.3.0

# Render the workload cluster manifest, apply it, retrieve its kubeconfig, and install a CNI
clusterctl config cluster work-cluster --kubernetes-version 1.17.0 > work-cluster.yaml
kubectl apply -f work-cluster.yaml
kubectl --namespace=default get secret/work-cluster-kubeconfig -o jsonpath={.data.value} | base64 --decode > ./work-cluster.kubeconfig
kubectl --kubeconfig=./work-cluster.kubeconfig apply -f https://docs.projectcalico.org/v3.12/manifests/calico.yaml
```
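At this point the kind cluster is acting as your Cluster API management cluster, and each node of work-cluster runs as a Docker container on the host. Provisioning takes a few minutes, and you can watch the workload cluster converge from both perspectives:

```sh
# Cluster API's view: Cluster and Machine objects reconciling
kubectl get clusters,machines --all-namespaces
# The Docker provider's view: each machine is a container next to the kind node
docker ps
```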
- Interact with your cluster

```sh
kubectl --kubeconfig=./work-cluster.kubeconfig get nodes
```
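Because the workload cluster is represented entirely by Cluster API objects in the management cluster, day-two operations are ordinary kubectl commands. As a sketch: scaling workers means scaling the cluster's MachineDeployment (the work-cluster-md-0 name below assumes the template's &lt;cluster-name&gt;-md-0 default, so confirm it with the first command), and teardown is two deletes:

```sh
# Find the MachineDeployment generated from the cluster template
kubectl get machinedeployments
# Assumed name; adjust to whatever the previous command prints
kubectl scale machinedeployment work-cluster-md-0 --replicas=2

# Cleanup: delete the workload cluster, then the kind management cluster
kubectl delete cluster work-cluster
kind delete cluster --name clusterapi
```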
That's all there is to it! If you made it this far, you should have a working CAPD (Cluster API Provider for Docker) environment to develop in.
I'd like to thank Michael McCune and the rest of the Cluster API community for helping me troubleshoot my setup so that I could share these steps with you. The Cluster API community is available on Slack in the #cluster-api channel.