Kubernetes Cluster Upgrade

K8s Upgrade

There are three common strategies for upgrading the nodes in a cluster:
  1. Upgrade all nodes at once (fastest, but workloads are down for the duration).
  2. Upgrade one node at a time, letting workloads shift to the remaining nodes.
  3. Add new nodes running the newer version, move workloads over, then remove the old nodes.

Version skew policy between the components:
  • Kube apiserver: Should always be at the highest version among the K8s control plane components.
  • Controller Manager: Can be one minor version lower than kube-apiserver. If the apiserver is v1.10, the controller manager can be v1.9 or v1.10.
  • Kube scheduler: Can be one minor version lower than kube-apiserver. If the apiserver is v1.10, the scheduler can be v1.9 or v1.10.
  • Kubelet: Can be up to two minor versions lower than kube-apiserver. If the apiserver is v1.10, the kubelet can be v1.8, v1.9, or v1.10.
  • Kube proxy: Can be up to two minor versions lower than kube-apiserver. If the apiserver is v1.10, kube-proxy can be v1.8, v1.9, or v1.10.
  • Kubectl: Can be one minor version higher, the same version, or one minor version lower than the apiserver. In this example, it can be v1.9, v1.10, or v1.11.
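Before planning an upgrade it is worth checking which versions are actually running, so the skew rules above hold; a quick check with standard commands (output varies per cluster):

# Client (kubectl) and server (kube-apiserver) versions
$ kubectl version

# The VERSION column is the kubelet version reported by each node
$ kubectl get nodes

# kubelet and kubeadm versions on the node itself
$ kubelet --version
$ kubeadm version -o short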

STEPS TO UPGRADE KUBERNETES CONTROL PLANE & WORKER NODES

Plan and apply the upgrade on the control plane node
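Before applying, kubeadm itself is normally upgraded on the control plane node and kubeadm upgrade plan is run to confirm the versions that can be upgraded to; a minimal sketch, using the same 1.19.x-0 placeholder package version as the rest of this post:

# Upgrade the kubeadm package on the control plane node first
$ sudo yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes

# List current and target versions for the control plane components
$ sudo kubeadm upgrade plan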
# Replace x with the patch version you picked for this upgrade
$ sudo kubeadm upgrade apply v1.19.x
After the command is executed, we will see output similar to the following:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.0"
[upgrade/versions] Cluster version: v1.18.4
[upgrade/versions] kubeadm version: v1.19.0
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.0"...
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
Static pod: etcd-kind-control-plane hash: 171c56cd0e81c0db85e65d70361ceddf
Static pod: etcd-kind-control-plane hash: 59e40b2aab1cd7055e64450b5ee438f0
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests999800980"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: b4c8effe84b4a70031f9a49a20c8b003
Static pod: kube-apiserver-kind-control-plane hash: f717874150ba572f020dcd89db8480fc
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-kind-control-plane hash: 9ac092f0ca813f648c61c4d5fcbf39f2
Static pod: kube-controller-manager-kind-control-plane hash: b155b63c70e798b806e64a866e297dd0
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-13-16-24-16/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-kind-control-plane hash: 7da02f2c78da17af7c2bf1533ecf8c9a
Static pod: kube-scheduler-kind-control-plane hash: 260018ac854dbf1c9fe82493e88aec31
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
W0713 16:26:14.074656 2986 dns.go:282] the CoreDNS Configuration will not be migrated due to unsupported version of CoreDNS. The existing CoreDNS Corefile configuration and deployment has been retained.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.0". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

Upgrade kubelet and kubectl on control plane nodes.

# Replace x in 1.19.x-0 with the latest patch version
$ yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
# Restart the kubelet service
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
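The upstream kubeadm procedure also drains the control plane node before its kubelet is upgraded and uncordons it afterwards; a sketch, with <cp-node> as a placeholder for your control plane node name:

# Cordon and drain the control plane node before upgrading its kubelet
$ kubectl drain <cp-node> --ignore-daemonsets

# After the kubelet restart above, bring the node back into scheduling
$ kubectl uncordon <cp-node>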

# Upgrade worker nodes

# Upgrade kubeadm

# Replace x in 1.19.x-0 with the latest patch version
$ yum install -y kubeadm-1.19.x-0 --disableexcludes=kubernetes
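A quick check that the package upgrade took effect on the worker node before moving on:

# Confirm the new kubeadm version on this node
$ kubeadm version -o short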

# Upgrade the kubelet configuration

$ sudo kubeadm upgrade node
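On a worker node this step only refreshes the local kubelet configuration (there are no control plane static pods to upgrade); the regenerated file can be inspected at the same path shown in the control plane output above:

# kubelet configuration written by kubeadm
$ sudo ls -l /var/lib/kubelet/config.yaml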

# Drain the node

# Replace <node-to-drain> with the name of the node you are draining.
$ kubectl drain <node-to-drain> --ignore-daemonsets
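To confirm the node is empty of regular workloads before touching the kubelet (DaemonSet pods stay behind by design), the pods still scheduled on it can be listed:

# Show any pods still running on the drained node
$ kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-to-drain>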

# Upgrade kubelet and kubectl

# Replace x in 1.19.x-0 with the latest patch version
$ yum install -y kubelet-1.19.x-0 kubectl-1.19.x-0 --disableexcludes=kubernetes
# Restart the kubelet
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet

# Uncordon the node

# Replace <node-to-drain> with the name of your node
$ kubectl uncordon <node-to-drain>

# Verify the status of the cluster

$ kubectl get nodes
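Once every node reports Ready with the new version, it is also worth confirming that the control plane and add-on pods came back up cleanly:

# All kube-system pods should be Running
$ kubectl get pods -n kube-system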

Stay hungry; Stay Foolish!!
