| Component | Purpose | Key Facts |
|---|---|---|
| kube-apiserver | REST API frontend for all cluster operations | All cluster communication goes through API server |
| etcd | Distributed key-value store for cluster state | Backup etcd regularly - it's the source of truth |
| kube-scheduler | Assigns pods to nodes based on constraints | Considers resources, affinity, taints/tolerations |
| kube-controller-manager | Runs controller processes (node, replication) | Ensures desired state matches actual state |
Know the location of static pod manifests: `/etc/kubernetes/manifests/`. On kubeadm clusters, the control plane components (kube-apiserver, etcd, kube-scheduler, kube-controller-manager) run as static pods managed directly by the kubelet from that directory.
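As a sketch of how static pods work: the kubelet watches the manifest directory and runs any pod spec dropped into it, no API server involvement required. The file name and pod below are hypothetical examples, not part of a kubeadm install:

```yaml
# /etc/kubernetes/manifests/static-web.yaml (hypothetical example)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
```

The kubelet starts the pod automatically; a read-only "mirror pod" then appears in `kubectl get pods`, named with the node name as a suffix. Deleting the file stops the pod.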
```bash
# Initialize control plane
kubeadm init --pod-network-cidr=10.244.0.0/16

# Set up kubectl for current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Join worker nodes
kubeadm join <control-plane>:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>

# Generate a new join token (prints the full join command)
kubeadm token create --print-join-command
```
etcd backup/restore is heavily tested on the CKA exam!
```bash
# Take a snapshot (cert paths match the kubeadm etcd static pod)
ETCDCTL_API=3 etcdctl snapshot save backup.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify backup
ETCDCTL_API=3 etcdctl snapshot status backup.db
```
```bash
# Restore to a new data directory
ETCDCTL_API=3 etcdctl snapshot restore backup.db \
  --data-dir=/var/lib/etcd-restored

# Then point etcd at the new data-dir:
# edit /etc/kubernetes/manifests/etcd.yaml
```
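One common way to make that manifest edit is to change only the `hostPath` volume so the container still sees its usual mount path; this is a sketch assuming the default kubeadm volume name `etcd-data`:

```yaml
# /etc/kubernetes/manifests/etcd.yaml (fragment)
  volumes:
  - hostPath:
      path: /var/lib/etcd-restored   # was /var/lib/etcd
      type: DirectoryOrCreate
    name: etcd-data
```

Because etcd is a static pod, saving the file is enough: the kubelet notices the change and restarts etcd against the restored data.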
```bash
# Create Role
kubectl create role pod-reader --verb=get,list,watch --resource=pods

# Create RoleBinding
kubectl create rolebinding read-pods --role=pod-reader --user=jane

# Check permissions
kubectl auth can-i create pods --as=jane
kubectl auth can-i --list --as=system:serviceaccount:default:mysa
```
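The same Role and RoleBinding can be written declaratively; a sketch for the `default` namespace, applied with `kubectl apply -f`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]            # "" = core API group (pods live here)
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:                     # roleRef is immutable after creation
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The YAML form is easier to review and version-control; `kubectl create role ... --dry-run=client -o yaml` will generate it for you.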