What you need:
- ca.key and ca.crt (the cluster CA)
- an external etcd cluster
- HAProxy in front of the API servers, used as controlPlaneEndpoint (port 6443)
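As an illustration, a minimal haproxy.cfg for the load balancer might look like the sketch below. The backend/frontend names and the master IPs (taken from the node list later in this post; only s1 exists at this point, s3 joins later) are assumptions, adjust them for your topology:

```
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    option tcp-check
    server s1 192.19.14.181:6443 check
    server s3 192.19.14.183:6443 check
```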
After bootstrapping the initial cluster (one master, one worker), dump the current kubeadm configuration:
$ kubectl -n kube-system get cm kubeadm-config -o yaml > config.yml
Edit config.yml and insert "controlPlaneEndpoint: haproxy-ip:6443". Note that controlPlaneEndpoint is a top-level ClusterConfiguration field, so it goes at the same indentation level as "apiServer", not nested under it:
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
controlPlaneEndpoint: haproxy-ip:6443
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
...
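If you prefer to script the edit, a sed one-liner can insert the field. This is just a sketch: it assumes controlPlaneEndpoint is not already present and that a top-level certificatesDir line exists; the printf here only fabricates a stand-in for the real config.yml dumped above:

```shell
# Demo stand-in for the config.yml dumped from the kubeadm-config ConfigMap
# (on a real master you would already have this file from kubectl).
printf 'apiServer:\n  timeoutForControlPlane: 4m0s\napiVersion: kubeadm.k8s.io/v1beta2\ncertificatesDir: /etc/kubernetes/pki\n' > config.yml

# Insert controlPlaneEndpoint as a top-level field, just above certificatesDir.
sed -i '/^certificatesDir:/i controlPlaneEndpoint: haproxy-ip:6443' config.yml
```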
Then run:
# kubeadm init phase upload-certs --upload-certs --config config.yml
Result:
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
3a02b15ec3893cf243710c4d8b2ere58ee9b2d1e89e0cc3976d69ecd980526ec
Now go to another master node, copy the external etcd certificates to /etc/etcd, and run:
# kubeadm join haproxy-ip:6443 --token oykpxx.89hlaw4na67yg5c0 --discovery-token-ca-cert-hash sha256:1f484b29987de44e090e4b234c9f9f6887a3e7awed16b445313ce7ec3021b25f --control-plane --certificate-key 3a02b15ec3893cf243710c4d8b2ere58ee9b2d1e89e0cc3976d69ecd980526ec
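If the original kubeadm init output with the join command is gone, the --discovery-token-ca-cert-hash value can be recomputed from the CA certificate with openssl. A sketch: on a real master you would point CA_CRT at /etc/kubernetes/pki/ca.crt; the throwaway self-signed cert generated here only stands in for it:

```shell
# Stand-in CA certificate for illustration; on a master use
# CA_CRT=/etc/kubernetes/pki/ca.crt instead.
CA_CRT=./demo-ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=kubernetes" \
    -keyout ./demo-ca.key -out "$CA_CRT" 2>/dev/null

# sha256 hash of the CA's public key, as expected by
# --discovery-token-ca-cert-hash sha256:<hash>
openssl x509 -pubkey -in "$CA_CRT" \
    | openssl pkey -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
```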
Check the nodes:
$ kubectl get node -o wide
NAME   STATUS   ROLES    AGE   VERSION    INTERNAL-IP
s1     Ready    master   17m   v1.16.15   192.19.14.181
s2     Ready    <none>   16m   v1.16.15   192.19.14.182
s3     Ready    master   13m   v1.16.15   192.19.14.183
With external etcd, the plain certificate expiry check does not work, because kubeadm expects the etcd client certificate under /etc/kubernetes/pki:
# kubeadm alpha certs check-expiration
failed to load existing certificate apiserver-etcd-client: open /etc/kubernetes/pki/apiserver-etcd-client.crt: no such file or directory
To see the stack trace of this error execute with --v=5 or higher
Pass the cluster configuration explicitly instead:
# kubeadm alpha certs check-expiration --config=config.yml
Done.
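The same --config trick should apply when renewing certificates; with the v1.16-era syntax used in this post that would be something like (newer kubeadm versions moved this out of the alpha subcommand):

```
# kubeadm alpha certs renew all --config=config.yml
```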