Administration with kubeadm
- 1: Certificate Management with kubeadm
- 2: Configuring a cgroup driver
- 3: Reconfiguring a kubeadm cluster
- 4: Upgrading kubeadm clusters
- 5: Upgrading Linux nodes
- 6: Upgrading Windows nodes
- 7: Changing The Kubernetes Package Repository
1 - Certificate Management with kubeadm
Kubernetes v1.15 [stable]
Client certificates generated by kubeadm expire after 1 year. This page explains how to manage certificate renewals with kubeadm. It also covers other tasks related to kubeadm certificate management.
Before you begin
You should be familiar with PKI certificates and requirements in Kubernetes.
Using custom certificates
By default, kubeadm generates all the certificates needed for a cluster to run. You can override this behavior by providing your own certificates.
To do so, you must place them in whatever directory is specified by the --cert-dir flag or the certificatesDir field of kubeadm's ClusterConfiguration. By default this is /etc/kubernetes/pki.
If a given certificate and private key pair exists before running kubeadm init, kubeadm does not overwrite them. This means you can, for example, copy an existing CA into /etc/kubernetes/pki/ca.crt and /etc/kubernetes/pki/ca.key, and kubeadm will use this CA for signing the rest of the certificates.
External CA mode
It is also possible to provide only the ca.crt file and not the ca.key file (this is only available for the root CA file, not other cert pairs). If all other certificates and kubeconfig files are in place, kubeadm recognizes this condition and activates the "External CA" mode. kubeadm will proceed without the CA key on disk.
Instead, run the controller-manager standalone with --controllers=csrsigner and point to the CA certificate and key.
PKI certificates and requirements includes guidance on setting up a cluster to use an external CA.
Check certificate expiration
You can use the check-expiration subcommand to check when certificates expire:
kubeadm certs check-expiration
The output is similar to this:
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Dec 30, 2020 23:36 UTC 364d no
apiserver Dec 30, 2020 23:36 UTC 364d ca no
apiserver-etcd-client Dec 30, 2020 23:36 UTC 364d etcd-ca no
apiserver-kubelet-client Dec 30, 2020 23:36 UTC 364d ca no
controller-manager.conf Dec 30, 2020 23:36 UTC 364d no
etcd-healthcheck-client Dec 30, 2020 23:36 UTC 364d etcd-ca no
etcd-peer Dec 30, 2020 23:36 UTC 364d etcd-ca no
etcd-server Dec 30, 2020 23:36 UTC 364d etcd-ca no
front-proxy-client Dec 30, 2020 23:36 UTC 364d front-proxy-ca no
scheduler.conf Dec 30, 2020 23:36 UTC 364d no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Dec 28, 2029 23:36 UTC 9y no
etcd-ca Dec 28, 2029 23:36 UTC 9y no
front-proxy-ca Dec 28, 2029 23:36 UTC 9y no
The command shows expiration/residual time for the client certificates in the /etc/kubernetes/pki folder and for the client certificate embedded in the kubeconfig files used by kubeadm (admin.conf, controller-manager.conf and scheduler.conf).
Additionally, kubeadm informs the user if the certificate is externally managed; in this case, the user should take care of managing certificate renewal manually/using other tools. kubeadm cannot manage certificates signed by an external CA.
kubelet.conf is not included in the list above because kubeadm configures kubelet for automatic certificate renewal with rotatable certificates under /var/lib/kubelet/pki. To repair an expired kubelet client certificate see Kubelet client certificate rotation fails.
On nodes created with kubeadm init, prior to kubeadm version 1.17, there is a bug where you manually have to modify the contents of kubelet.conf. After kubeadm init finishes, you should update kubelet.conf to point to the rotated kubelet client certificates, by replacing client-certificate-data and client-key-data with:
client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
client-key: /var/lib/kubelet/pki/kubelet-client-current.pem
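For context, a minimal sketch of how the affected user entry in kubelet.conf looks after this change; the user name shown here (default-auth) is an assumption and may differ on your system:
users:
- name: default-auth  # hypothetical user name; check your own kubelet.conf
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem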
Automatic certificate renewal
kubeadm renews all the certificates during control plane upgrade.
This feature is designed for addressing the simplest use cases; if you don't have specific requirements on certificate renewal and perform Kubernetes version upgrades regularly (less than 1 year in between each upgrade), kubeadm will take care of keeping your cluster up to date and reasonably secure.
If you have more complex requirements for certificate renewal, you can opt out from the default behavior by passing --certificate-renewal=false to kubeadm upgrade apply or to kubeadm upgrade node.
Note: prior to kubeadm version 1.17, the default value of --certificate-renewal is false for the kubeadm upgrade node command. In that case, you should explicitly set --certificate-renewal=true.
Manual certificate renewal
You can renew your certificates manually at any time with the kubeadm certs renew command, with the appropriate command line options. This command performs the renewal using the CA (or front-proxy-CA) certificate and key stored in /etc/kubernetes/pki.
After running the command you should restart the control plane Pods. This is required since dynamic certificate reload is currently not supported for all components and certificates. Static Pods are managed by the local kubelet and not by the API Server, thus kubectl cannot be used to delete and restart them. To restart a static Pod you can temporarily remove its manifest file from /etc/kubernetes/manifests/ and wait for 20 seconds (see the fileCheckFrequency value in the KubeletConfiguration struct). The kubelet will terminate the Pod if it's no longer in the manifest directory. You can then move the file back and after another fileCheckFrequency period, the kubelet will recreate the Pod and the certificate renewal for the component can complete.
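As an illustration, a minimal sketch of this manifest shuffle for the kube-apiserver static Pod; the temporary location is an arbitrary choice:
sudo mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/
# wait for the kubelet to notice the removal and stop the Pod (fileCheckFrequency is 20s by default)
sleep 20
sudo mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/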
certs renew uses the existing certificates as the authoritative source for attributes (Common Name, Organization, SAN, etc.) instead of the kubeadm-config ConfigMap. It is strongly recommended to keep them both in sync.
kubeadm certs renew can renew any specific certificate or, with the subcommand all, it can renew all of them, as shown below:
kubeadm certs renew all
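To renew a single certificate instead, name it explicitly; for example, to renew only the API server serving certificate:
kubeadm certs renew apiserver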
Clusters built with kubeadm often copy the admin.conf certificate into $HOME/.kube/config, as instructed in Creating a cluster with kubeadm. On such a system, to update the contents of $HOME/.kube/config after renewing the admin.conf, you must run the following commands:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Renew certificates with the Kubernetes certificates API
This section provides more details about how to execute manual certificate renewal using the Kubernetes certificates API.
Set up a signer
The Kubernetes Certificate Authority does not work out of the box. You can configure an external signer such as cert-manager, or you can use the built-in signer.
The built-in signer is part of kube-controller-manager.
To activate the built-in signer, you must pass the --cluster-signing-cert-file and --cluster-signing-key-file flags.
If you're creating a new cluster, you can use a kubeadm configuration file:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cluster-signing-cert-file: /etc/kubernetes/pki/ca.crt
    cluster-signing-key-file: /etc/kubernetes/pki/ca.key
Create certificate signing requests (CSR)
See Create CertificateSigningRequest for creating CSRs with the Kubernetes API.
Renew certificates with external CA
This section provides more details about how to execute manual certificate renewal using an external CA.
To better integrate with external CAs, kubeadm can also produce certificate signing requests (CSRs). A CSR represents a request to a CA for a signed certificate for a client. In kubeadm terms, any certificate that would normally be signed by an on-disk CA can be produced as a CSR instead. A CA, however, cannot be produced as a CSR.
Create certificate signing requests (CSR)
You can create certificate signing requests with kubeadm certs renew --csr-only. Both the CSR and the accompanying private key are given in the output. You can pass in a directory with --csr-dir to output the CSRs to the specified location. If --csr-dir is not specified, the default certificate directory (/etc/kubernetes/pki) is used.
Certificates can be renewed with kubeadm certs renew --csr-only. As with kubeadm init, an output directory can be specified with the --csr-dir flag.
A CSR contains a certificate's name, domains, and IPs, but it does not specify usages. It is the responsibility of the CA to specify the correct cert usages when issuing a certificate.
- In openssl this is done with the openssl ca command.
- In cfssl you specify usages in the config file.
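For illustration only, a minimal sketch that signs one of the generated CSRs with a one-off openssl x509 -req call instead of a fully configured openssl ca setup; the file names and the usages.cnf extensions file are hypothetical:
# usages.cnf is a hypothetical extensions file. Because x509 -req does not copy
# extensions from the CSR, it must declare the required usages and any SANs, e.g.:
#   [ v3_ext ]
#   keyUsage = digitalSignature, keyEncipherment
#   extendedKeyUsage = serverAuth
#   subjectAltName = DNS:kube-apiserver, IP:10.96.0.1
openssl x509 -req -in apiserver.csr \
  -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial \
  -out apiserver.crt -days 365 \
  -extfile usages.cnf -extensions v3_ext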
After a certificate is signed using your preferred method, the certificate and the private key must be copied to the PKI directory (by default /etc/kubernetes/pki).
Certificate authority (CA) rotation
Kubeadm does not support rotation or replacement of CA certificates out of the box.
For more information about manual rotation or replacement of CA, see manual rotation of CA certificates.
Enabling signed kubelet serving certificates
By default the kubelet serving certificate deployed by kubeadm is self-signed. This means a connection from external services like the metrics-server to a kubelet cannot be secured with TLS.
To configure the kubelets in a new kubeadm cluster to obtain properly signed serving certificates you must pass the following minimal configuration to kubeadm init:
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
serverTLSBootstrap: true
If you have already created the cluster you must adapt it by doing the following:
- Find and edit the kubelet-config ConfigMap in the kube-system namespace. In that ConfigMap, the kubelet key has a KubeletConfiguration document as its value. Edit the KubeletConfiguration document to set serverTLSBootstrap: true.
- On each node, add the serverTLSBootstrap: true field in /var/lib/kubelet/config.yaml and restart the kubelet with systemctl restart kubelet
The field serverTLSBootstrap: true will enable the bootstrap of kubelet serving certificates by requesting them from the certificates.k8s.io API. One known limitation is that the CSRs (Certificate Signing Requests) for these certificates cannot be automatically approved by the default signer in the kube-controller-manager - kubernetes.io/kubelet-serving. This will require action from the user or a third party controller.
These CSRs can be viewed using:
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-9wvgt 112s kubernetes.io/kubelet-serving system:node:worker-1 Pending
csr-lz97v 1m58s kubernetes.io/kubelet-serving system:node:control-plane-1 Pending
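Before approving a CSR you may want to inspect it and confirm that the requested names and IP addresses match the node it came from; for example, using one of the names from the sample output above:
kubectl get csr csr-9wvgt -o yaml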
To approve them you can do the following:
kubectl certificate approve <CSR-name>
By default, these serving certificates will expire after one year. Kubeadm sets the KubeletConfiguration field rotateCertificates to true, which means that close to expiration a new set of CSRs for the serving certificates will be created and must be approved to complete the rotation. To understand more see Certificate Rotation.
If you are looking for a solution for automatic approval of these CSRs it is recommended that you contact your cloud provider and ask if they have a CSR signer that verifies the node identity with an out-of-band mechanism. Third party custom controllers can also be used.
Such a controller is not a secure mechanism unless it not only verifies the CommonName in the CSR but also verifies the requested IPs and domain names. This would prevent a malicious actor that has access to a kubelet client certificate from creating CSRs requesting serving certificates for any IP or domain name.
Generating kubeconfig files for additional users
During cluster creation, kubeadm signs the certificate in the admin.conf to have Subject: O = system:masters, CN = kubernetes-admin. system:masters is a break-glass, super user group that bypasses the authorization layer (for example, RBAC). Sharing the admin.conf with additional users is not recommended!
Instead, you can use the kubeadm kubeconfig user command to generate kubeconfig files for additional users. The command accepts a mixture of command line flags and kubeadm configuration options. The generated kubeconfig will be written to stdout and can be piped to a file using kubeadm kubeconfig user ... > somefile.conf.
Example configuration file that can be used with --config:
# example.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
# Will be used as the target "cluster" in the kubeconfig
clusterName: "kubernetes"
# Will be used as the "server" (IP or DNS name) of this cluster in the kubeconfig
controlPlaneEndpoint: "some-dns-address:6443"
# The cluster CA key and certificate will be loaded from this local directory
certificatesDir: "/etc/kubernetes/pki"
Make sure that these settings match the desired target cluster settings. To see the settings of an existing cluster use:
kubectl get cm kubeadm-config -n kube-system -o=jsonpath="{.data.ClusterConfiguration}"
The following example will generate a kubeconfig file with credentials valid for 24 hours for a new user johndoe that is part of the appdevs group:
kubeadm kubeconfig user --config example.yaml --org appdevs --client-name johndoe --validity-period 24h
The following example will generate a kubeconfig file with administrator credentials valid for 1 week:
kubeadm kubeconfig user --config example.yaml --client-name admin --validity-period 168h
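Keep in mind that a user generated this way has no permissions until you grant them via RBAC. As an illustration, a hypothetical binding that gives the appdevs group read-only access to the cluster could look like this:
kubectl create clusterrolebinding appdevs-view --clusterrole=view --group=appdevs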
2 - Configuring a cgroup driver
This page explains how to configure the kubelet's cgroup driver to match the container runtime cgroup driver for kubeadm clusters.
Before you begin
You should be familiar with the Kubernetes container runtime requirements.
Configuring the container runtime cgroup driver
The Container runtimes page explains that the systemd driver is recommended for kubeadm based setups instead of the kubelet's default cgroupfs driver, because kubeadm manages the kubelet as a systemd service. The page also provides details on how to set up a number of different container runtimes with the systemd driver by default.
Configuring the kubelet cgroup driver
kubeadm allows you to pass a KubeletConfiguration structure during kubeadm init. This KubeletConfiguration can include the cgroupDriver field which controls the cgroup driver of the kubelet.
In v1.22 and later, if the user does not set the cgroupDriver field under KubeletConfiguration, kubeadm defaults it to systemd.
In Kubernetes v1.28, you can enable automatic detection of the cgroup driver as an alpha feature. See systemd cgroup driver for more details.
A minimal example of configuring the field explicitly:
# kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.21.0
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd
Such a configuration file can then be passed to the kubeadm command:
kubeadm init --config kubeadm-config.yaml
Kubeadm uses the same KubeletConfiguration for all nodes in the cluster. The KubeletConfiguration is stored in a ConfigMap object under the kube-system namespace.
Executing the sub commands init, join and upgrade would result in kubeadm writing the KubeletConfiguration as a file under /var/lib/kubelet/config.yaml and passing it to the local node kubelet.
Using the cgroupfs driver
To use cgroupfs and to prevent kubeadm upgrade from modifying the KubeletConfiguration cgroup driver on existing setups, you must be explicit about its value. This applies to a case where you do not wish future versions of kubeadm to apply the systemd driver by default.
See the below section on "Modify the kubelet ConfigMap" for details on how to be explicit about the value.
If you wish to configure a container runtime to use the cgroupfs driver, you must refer to the documentation of the container runtime of your choice.
Migrating to the systemd driver
To change the cgroup driver of an existing kubeadm cluster from cgroupfs to systemd in-place, a similar procedure to a kubelet upgrade is required. This must include both steps outlined below.
Note: alternatively, it is possible to replace the old nodes in the cluster with new nodes that use the systemd driver. This requires executing only the first step below before joining the new nodes and ensuring the workloads can safely move to the new nodes before deleting the old nodes.
Modify the kubelet ConfigMap
- Call kubectl edit cm kubelet-config -n kube-system.
- Either modify the existing cgroupDriver value or add a new field that looks like this: cgroupDriver: systemd. This field must be present under the kubelet: section of the ConfigMap.
Update the cgroup driver on all nodes
For each node in the cluster:
- Drain the node using kubectl drain <node-name> --ignore-daemonsets
- Stop the kubelet using systemctl stop kubelet
- Stop the container runtime
- Modify the container runtime cgroup driver to systemd
- Set cgroupDriver: systemd in /var/lib/kubelet/config.yaml
- Start the container runtime
- Start the kubelet using systemctl start kubelet
- Uncordon the node using kubectl uncordon <node-name>
Execute these steps on nodes one at a time to ensure workloads have sufficient time to schedule on different nodes.
Once the process is complete ensure that all nodes and workloads are healthy.
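For reference, a minimal sketch of this per-node sequence, assuming containerd as the container runtime and a hypothetical node name node-1:
kubectl drain node-1 --ignore-daemonsets
sudo systemctl stop kubelet
sudo systemctl stop containerd
# in /etc/containerd/config.toml, set SystemdCgroup = true under
# [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# in /var/lib/kubelet/config.yaml, set cgroupDriver: systemd
sudo systemctl start containerd
sudo systemctl start kubelet
kubectl uncordon node-1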
3 - Reconfiguring a kubeadm cluster
kubeadm does not support automated ways of reconfiguring components that were deployed on managed nodes. One way of automating this would be by using a custom operator.
To modify a component's configuration you must manually edit associated cluster objects and files on disk.
This guide shows the correct sequence of steps that need to be performed to achieve kubeadm cluster reconfiguration.
Before you begin
- You need a cluster that was deployed using kubeadm
- Have administrator credentials (/etc/kubernetes/admin.conf) and network connectivity to a running kube-apiserver in the cluster from a host that has kubectl installed
- Have a text editor installed on all hosts
Reconfiguring the cluster
kubeadm writes a set of cluster wide component configuration options in ConfigMaps and other objects. These objects must be manually edited. The command kubectl edit can be used for that.
The kubectl edit command will open a text editor where you can edit and save the object directly.
You can use the environment variables KUBECONFIG and KUBE_EDITOR to specify the location of the kubectl consumed kubeconfig file and preferred text editor.
For example:
KUBECONFIG=/etc/kubernetes/admin.conf KUBE_EDITOR=nano kubectl edit <parameters>
Applying cluster configuration changes
Updating the ClusterConfiguration
During cluster creation and upgrade, kubeadm writes its ClusterConfiguration in a ConfigMap called kubeadm-config in the kube-system namespace.
To change a particular option in the ClusterConfiguration you can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kubeadm-config
The configuration is located under the data.ClusterConfiguration key.
ClusterConfiguration includes a variety of options that affect the configuration of individual components such as kube-apiserver, kube-scheduler, kube-controller-manager, CoreDNS, etcd and kube-proxy. Changes to the configuration must be reflected on node components manually.
Reflecting ClusterConfiguration changes on control plane nodes
kubeadm manages the control plane components as static Pod manifests located in the directory /etc/kubernetes/manifests.
Any changes to the ClusterConfiguration under the apiServer, controllerManager, scheduler or etcd keys must be reflected in the associated files in the manifests directory on a control plane node. Such changes may include:
- extraArgs - requires updating the list of flags passed to a component container
- extraMounts - requires updating the volume mounts for a component container
- *SANs - requires writing new certificates with updated Subject Alternative Names
Before proceeding with these changes, make sure you have backed up the directory /etc/kubernetes/.
To write new certificates you can use:
kubeadm init phase certs <component-name> --config <config-file>
To write new manifest files in /etc/kubernetes/manifests you can use:
kubeadm init phase control-plane <component-name> --config <config-file>
The <config-file> contents must match the updated ClusterConfiguration. The <component-name> value must be the name of the component.
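For example, to regenerate the kube-apiserver manifest after editing a hypothetical kubeadm-config.yaml that holds the updated ClusterConfiguration:
kubeadm init phase control-plane apiserver --config kubeadm-config.yaml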
Note: updating a manifest file in /etc/kubernetes/manifests will tell the kubelet to restart the static Pod for the corresponding component.
Try doing these changes one node at a time to leave the cluster without downtime.
Applying kubelet configuration changes
Updating the KubeletConfiguration
During cluster creation and upgrade, kubeadm writes its KubeletConfiguration in a ConfigMap called kubelet-config in the kube-system namespace.
You can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kubelet-config
The configuration is located under the data.kubelet key.
Reflecting the kubelet changes
To reflect the change on kubeadm nodes you must do the following:
- Log in to a kubeadm node
- Run kubeadm upgrade node phase kubelet-config to download the latest kubelet-config ConfigMap contents into the local file /var/lib/kubelet/config.yaml
- Edit the file /var/lib/kubelet/kubeadm-flags.env to apply additional configuration with flags
- Restart the kubelet service with systemctl restart kubelet
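Put together, the per-node commands are simply the following (a minimal sketch; run them on one node at a time):
sudo kubeadm upgrade node phase kubelet-config
sudo systemctl restart kubelet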
Note: during kubeadm upgrade, kubeadm downloads the KubeletConfiguration from the kubelet-config ConfigMap and overwrites the contents of /var/lib/kubelet/config.yaml. This means that node local configuration must be applied either by flags in /var/lib/kubelet/kubeadm-flags.env or by manually updating the contents of /var/lib/kubelet/config.yaml after kubeadm upgrade, and then restarting the kubelet.
Applying kube-proxy configuration changes
Updating the KubeProxyConfiguration
During cluster creation and upgrade, kubeadm writes its KubeProxyConfiguration in a ConfigMap in the kube-system namespace called kube-proxy. This ConfigMap is used by the kube-proxy DaemonSet in the kube-system namespace.
To change a particular option in the KubeProxyConfiguration, you can edit the ConfigMap with this command:
kubectl edit cm -n kube-system kube-proxy
The configuration is located under the data.config.conf key.
Reflecting the kube-proxy changes
Once the kube-proxy ConfigMap is updated, you can restart all kube-proxy Pods:
Obtain the Pod names:
kubectl get po -n kube-system | grep kube-proxy
Delete a Pod with:
kubectl delete po -n kube-system <pod-name>
New Pods that use the updated ConfigMap will be created.
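Alternatively, since kube-proxy runs as a DaemonSet, a rolling restart achieves the same result in one step:
kubectl -n kube-system rollout restart daemonset kube-proxy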
Applying CoreDNS configuration changes
Updating the CoreDNS Deployment and Service
kubeadm deploys CoreDNS as a Deployment called coredns and with a Service kube-dns, both in the kube-system namespace.
To update any of the CoreDNS settings, you can edit the Deployment and Service objects:
kubectl edit deployment -n kube-system coredns
kubectl edit service -n kube-system kube-dns
Reflecting the CoreDNS changes
Once the CoreDNS changes are applied you can delete the CoreDNS Pods:
Obtain the Pod names:
kubectl get po -n kube-system | grep coredns
Delete a Pod with:
kubectl delete po -n kube-system <pod-name>
New Pods with the updated CoreDNS configuration will be created.
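As with kube-proxy, a rolling restart of the Deployment is a possible alternative to deleting the Pods by hand:
kubectl -n kube-system rollout restart deployment coredns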
Note: after you run kubeadm upgrade apply, your changes to the CoreDNS objects will be lost and must be reapplied.
Persisting the reconfiguration
During the execution of kubeadm upgrade on a managed node, kubeadm might overwrite configuration that was applied after the cluster was created (reconfiguration).
Persisting Node object reconfiguration
kubeadm writes Labels, Taints, CRI socket and other information on the Node object for a particular Kubernetes node. To change any of the contents of this Node object you can use:
kubectl edit no <node-name>
During kubeadm upgrade the contents of such a Node might get overwritten. If you would like to persist your modifications to the Node object after upgrade, you can prepare a kubectl patch and apply it to the Node object:
kubectl patch no <node-name> --patch-file <patch-file>
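As an illustration, a hypothetical patch file (for example node-patch.yaml) that re-applies a custom label could look like this:
metadata:
  labels:
    environment: production
It would then be applied with kubectl patch no node-1 --patch-file node-patch.yaml, where both the node name and the label are placeholders.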
Persisting control plane component reconfiguration
The main source of control plane configuration is the ClusterConfiguration object stored in the cluster. To extend the static Pod manifests configuration, patches can be used.
These patch files must remain as files on the control plane nodes to ensure that they can be used by the kubeadm upgrade ... --patches <directory>.
If reconfiguration is done to the ClusterConfiguration and static Pod manifests on disk, the set of node specific patches must be updated accordingly.
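As a sketch of what such a patch directory could contain, assuming the file naming convention documented for kubeadm's --patches flag, a file named kube-apiserver+strategic.yaml that raises the API server's memory request might look like this (the resource values are purely illustrative):
spec:
  containers:
  - name: kube-apiserver
    resources:
      requests:
        memory: "512Mi"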
Persisting kubelet reconfiguration
Any changes to the KubeletConfiguration stored in /var/lib/kubelet/config.yaml will be overwritten on kubeadm upgrade by downloading the contents of the cluster wide kubelet-config ConfigMap. To persist kubelet node specific configuration either the file /var/lib/kubelet/config.yaml has to be updated manually post-upgrade or the file /var/lib/kubelet/kubeadm-flags.env can include flags. The kubelet flags override the associated KubeletConfiguration options, but note that some of the flags are deprecated.
A kubelet restart will be required after changing /var/lib/kubelet/config.yaml or /var/lib/kubelet/kubeadm-flags.env.
What's next
4 - Upgrading kubeadm clusters
This page explains how to upgrade a Kubernetes cluster created with kubeadm from version 1.27.x to version 1.28.x, and from version 1.28.x to 1.28.y (where y > x). Skipping MINOR versions when upgrading is unsupported. For more details, please visit Version Skew Policy.
To see information about upgrading clusters created using older versions of kubeadm, please refer to the following pages instead:
- Upgrading a kubeadm cluster from 1.26 to 1.27
- Upgrading a kubeadm cluster from 1.25 to 1.26
- Upgrading a kubeadm cluster from 1.24 to 1.25
- Upgrading a kubeadm cluster from 1.23 to 1.24
The upgrade workflow at high level is the following:
- Upgrade a primary control plane node.
- Upgrade additional control plane nodes.
- Upgrade worker nodes.
Before you begin
- Make sure you read the release notes carefully.
- The cluster should use a static control plane and etcd pods or external etcd.
- Make sure to back up any important components, such as app-level state stored in a database. kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but backups are always a best practice.
- Swap must be disabled.
Additional information
- The instructions below outline when to drain each node during the upgrade process. If you are performing a minor version upgrade for any kubelet, you must first drain the node (or nodes) that you are upgrading. In the case of control plane nodes, they could be running CoreDNS Pods or other critical workloads. For more information see Draining nodes.
- All containers are restarted after upgrade, because the container spec hash value is changed.
- To verify that the kubelet service has successfully restarted after the kubelet has been upgraded, you can execute systemctl status kubelet or view the service logs with journalctl -xeu kubelet.
- Usage of the --config flag of kubeadm upgrade with kubeadm configuration API types with the purpose of reconfiguring the cluster is not recommended and can have unexpected results. Follow the steps in Reconfiguring a kubeadm cluster instead.
Changing the package repository
If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in the Changing the Kubernetes package repository document.
Note: the legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
Determine which version to upgrade to
Find the latest patch release for Kubernetes 1.28 using the OS package manager:
# Debian-based distributions (apt):
# Find the latest 1.28 version in the list.
# It should look like 1.28.x-*, where x is the latest patch.
apt update
apt-cache madison kubeadm
# Red Hat-based distributions (yum):
# Find the latest 1.28 version in the list.
# It should look like 1.28.x-*, where x is the latest patch.
yum list --showduplicates kubeadm --disableexcludes=kubernetes
Upgrading control plane nodes
The upgrade procedure on control plane nodes should be executed one node at a time.
Pick a control plane node that you wish to upgrade first. It must have the /etc/kubernetes/admin.conf file.
Call "kubeadm upgrade"
For the first control plane node
- Upgrade kubeadm:
# Debian-based distributions (apt):
# replace x in 1.28.x-* with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.28.x-*' && \
apt-mark hold kubeadm
# Red Hat-based distributions (yum):
# replace x in 1.28.x-* with the latest patch version
yum install -y kubeadm-'1.28.x-*' --disableexcludes=kubernetes
- Verify that the download works and has the expected version:
kubeadm version
- Verify the upgrade plan:
kubeadm upgrade plan
This command checks that your cluster can be upgraded, and fetches the versions you can upgrade to. It also shows a table with the component config version states.
Note: kubeadm upgrade also automatically renews the certificates that it manages on this node. To opt-out of certificate renewal the flag --certificate-renewal=false can be used. For more information see the certificate management guide.
Note: if kubeadm upgrade plan shows any component configs that require manual upgrade, users must provide a config file with replacement configs to kubeadm upgrade apply via the --config command line flag. Failing to do so will cause kubeadm upgrade apply to exit with an error and not perform an upgrade.
- Choose a version to upgrade to, and run the appropriate command. For example:
# replace x with the patch version you picked for this upgrade
sudo kubeadm upgrade apply v1.28.x
Once the command finishes you should see:
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.28.x". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Note: For versions earlier than v1.28, kubeadm defaulted to a mode that upgrades the addons (including CoreDNS and kube-proxy) immediately during kubeadm upgrade apply, regardless of whether there are other control plane instances that have not been upgraded. This may cause compatibility problems. Since v1.28, kubeadm defaults to a mode that checks whether all the control plane instances have been upgraded before starting to upgrade the addons. You must perform control plane instances upgrade sequentially, or at least ensure that the last control plane instance upgrade is not started until all the other control plane instances have been upgraded completely, and the addons upgrade will be performed after the last control plane instance is upgraded. If you want to keep the old upgrade behavior, please enable the UpgradeAddonsBeforeControlPlane feature gate with kubeadm upgrade apply --feature-gates=UpgradeAddonsBeforeControlPlane=true. The Kubernetes project does not in general recommend enabling this feature gate; you should instead change your upgrade process or cluster addons so that you do not need to enable the legacy behavior. The UpgradeAddonsBeforeControlPlane feature gate will be removed in a future release.
- Manually upgrade your CNI provider plugin.
Your Container Network Interface (CNI) provider may have its own upgrade instructions to follow. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
This step is not required on additional control plane nodes if the CNI provider runs as a DaemonSet.
For the other control plane nodes
Same as the first control plane node but use:
sudo kubeadm upgrade node
instead of:
sudo kubeadm upgrade apply
Also calling kubeadm upgrade plan and upgrading the CNI provider plugin is no longer needed.
Drain the node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade kubelet and kubectl
- Upgrade the kubelet and kubectl:
# Debian-based distributions (apt):
# replace x in 1.28.x-* with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.28.x-*' kubectl='1.28.x-*' && \
apt-mark hold kubelet kubectl
# Red Hat-based distributions (yum):
# replace x in 1.28.x-* with the latest patch version
yum install -y kubelet-'1.28.x-*' kubectl-'1.28.x-*' --disableexcludes=kubernetes
- Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the node
Bring the node back online by marking it schedulable:
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
Upgrade worker nodes
The upgrade procedure on worker nodes should be executed one node at a time, or a few nodes at a time, without compromising the minimum required capacity for running your workloads.
The following pages show how to upgrade Linux and Windows worker nodes:
Verify the status of the cluster
After the kubelet is upgraded on all nodes verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:
kubectl get nodes
The STATUS column should show Ready for all your nodes, and the version number should be updated.
Recovering from a failure state
If kubeadm upgrade fails and does not roll back, for example because of an unexpected shutdown during execution, you can run kubeadm upgrade again. This command is idempotent and eventually makes sure that the actual state is the desired state you declare.
To recover from a bad state, you can also run kubeadm upgrade apply --force without changing the version that your cluster is running.
During upgrade kubeadm writes the following backup folders under /etc/kubernetes/tmp:
kubeadm-backup-etcd-<date>-<time>
kubeadm-backup-manifests-<date>-<time>
kubeadm-backup-etcd contains a backup of the local etcd member data for this control plane Node. In case of an etcd upgrade failure and if the automatic rollback does not work, the contents of this folder can be manually restored in /var/lib/etcd. In case external etcd is used this backup folder will be empty.
kubeadm-backup-manifests contains a backup of the static Pod manifest files for this control plane Node. In case of an upgrade failure and if the automatic rollback does not work, the contents of this folder can be manually restored in /etc/kubernetes/manifests. If for some reason there is no difference between a pre-upgrade and post-upgrade manifest file for a certain component, a backup file for it will not be written.
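As a rough sketch of such a manual restore on an affected control plane node, with the timestamped folder name taken from your own /etc/kubernetes/tmp directory:
sudo cp /etc/kubernetes/tmp/kubeadm-backup-manifests-<date>-<time>/* /etc/kubernetes/manifests/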
How it works
kubeadm upgrade apply does the following:
- Checks that your cluster is in an upgradeable state:
  - The API server is reachable
  - All nodes are in the Ready state
  - The control plane is healthy
- Enforces the version skew policies.
- Makes sure the control plane images are available or available to pull to the machine.
- Generates replacements and/or uses user supplied overwrites if component configs require version upgrades.
- Upgrades the control plane components or rolls back if any of them fails to come up.
- Applies the new CoreDNS and kube-proxy manifests and makes sure that all necessary RBAC rules are created.
- Creates new certificate and key files of the API server and backs up old files if they're about to expire in 180 days.
kubeadm upgrade node does the following on additional control plane nodes:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Optionally backs up the kube-apiserver certificate.
- Upgrades the static Pod manifests for the control plane components.
- Upgrades the kubelet configuration for this node.
kubeadm upgrade node does the following on worker nodes:
- Fetches the kubeadm ClusterConfiguration from the cluster.
- Upgrades the kubelet configuration for this node.
5 - Upgrading Linux nodes
This page explains how to upgrade Linux worker nodes created with kubeadm.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
To check the version, enter kubectl version.
- Familiarize yourself with the process for upgrading the rest of your kubeadm cluster. You will want to upgrade the control plane nodes before upgrading your Linux worker nodes.
Changing the package repository
If you're using the community-owned package repositories (pkgs.k8s.io), you need to enable the package repository for the desired Kubernetes minor release. This is explained in the Changing the Kubernetes package repository document.
Note: the legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
Upgrading worker nodes
Upgrade kubeadm
Upgrade kubeadm:
# Debian-based distributions (apt):
# replace x in 1.28.x-* with the latest patch version
apt-mark unhold kubeadm && \
apt-get update && apt-get install -y kubeadm='1.28.x-*' && \
apt-mark hold kubeadm
# Red Hat-based distributions (yum):
# replace x in 1.28.x-* with the latest patch version
yum install -y kubeadm-'1.28.x-*' --disableexcludes=kubernetes
Call "kubeadm upgrade"
For worker nodes this upgrades the local kubelet configuration:
sudo kubeadm upgrade node
Drain the node
Prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
Upgrade kubelet and kubectl
- Upgrade the kubelet and kubectl:
# Debian-based distributions (apt):
# replace x in 1.28.x-* with the latest patch version
apt-mark unhold kubelet kubectl && \
apt-get update && apt-get install -y kubelet='1.28.x-*' kubectl='1.28.x-*' && \
apt-mark hold kubelet kubectl
# Red Hat-based distributions (yum):
# replace x in 1.28.x-* with the latest patch version
yum install -y kubelet-'1.28.x-*' kubectl-'1.28.x-*' --disableexcludes=kubernetes
- Restart the kubelet:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Uncordon the node
Bring the node back online by marking it schedulable:
# replace <node-to-uncordon> with the name of your node
kubectl uncordon <node-to-uncordon>
What's next
- See how to Upgrade Windows nodes.
6 - Upgrading Windows nodes
Kubernetes v1.18 [beta]
This page explains how to upgrade a Windows node created with kubeadm.
Before you begin
You need to have a Kubernetes cluster, and the kubectl command-line tool must be configured to communicate with your cluster. It is recommended to run this tutorial on a cluster with at least two nodes that are not acting as control plane hosts. If you do not already have a cluster, you can create one by using minikube or you can use one of these Kubernetes playgrounds:
Your Kubernetes server must be at or later than version 1.17. To check the version, enter kubectl version.
- Familiarize yourself with the process for upgrading the rest of your kubeadm cluster. You will want to upgrade the control plane nodes before upgrading your Windows nodes.
Upgrading worker nodes
Upgrade kubeadm
- From the Windows node, upgrade kubeadm:
# replace 1.28.4 with your desired version
curl.exe -Lo <path-to-kubeadm.exe> "https://dl.k8s.io/v1.28.4/bin/windows/amd64/kubeadm.exe"
Drain the node
- From a machine with access to the Kubernetes API, prepare the node for maintenance by marking it unschedulable and evicting the workloads:
# replace <node-to-drain> with the name of your node you are draining
kubectl drain <node-to-drain> --ignore-daemonsets
You should see output similar to this:
node/ip-172-31-85-18 cordoned
node/ip-172-31-85-18 drained
Upgrade the kubelet configuration
- From the Windows node, call the following command to sync new kubelet configuration:
kubeadm upgrade node
Upgrade kubelet and kube-proxy
- From the Windows node, upgrade and restart the kubelet:
stop-service kubelet
curl.exe -Lo <path-to-kubelet.exe> "https://dl.k8s.io/v1.28.4/bin/windows/amd64/kubelet.exe"
restart-service kubelet
- From the Windows node, upgrade and restart the kube-proxy.
stop-service kube-proxy
curl.exe -Lo <path-to-kube-proxy.exe> "https://dl.k8s.io/v1.28.4/bin/windows/amd64/kube-proxy.exe"
restart-service kube-proxy
Uncordon the node
- From a machine with access to the Kubernetes API, bring the node back online by marking it schedulable:
# replace <node-to-drain> with the name of your node
kubectl uncordon <node-to-drain>
What's next
- See how to Upgrade Linux nodes.
7 - Changing The Kubernetes Package Repository
This page explains how to enable a package repository for the desired Kubernetes minor release upon upgrading a cluster. This is only needed for users of the community-owned package repositories hosted at pkgs.k8s.io. Unlike the legacy package repositories, the community-owned package repositories are structured in a way that there's a dedicated package repository for each Kubernetes minor version.
Before you begin
This document assumes that you're already using the community-owned package repositories (pkgs.k8s.io). If that's not the case, it's strongly recommended to migrate to the community-owned package repositories as described in the official announcement.
Note: the legacy package repositories (apt.kubernetes.io and yum.kubernetes.io) have been deprecated and frozen starting from September 13, 2023. Using the new package repositories hosted at pkgs.k8s.io is strongly recommended and required in order to install Kubernetes versions released after September 13, 2023. The deprecated legacy repositories, and their contents, might be removed at any time in the future and without a further notice period. The new package repositories provide downloads for Kubernetes versions starting with v1.24.0.
Verifying if the Kubernetes package repositories are used
If you're unsure whether you're using the community-owned package repositories or the legacy package repositories, take the following steps to verify:
Print the contents of the file that defines the Kubernetes apt repository:
# On your system, this configuration file could have a different name
pager /etc/apt/sources.list.d/kubernetes.list
If you see a line similar to:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
Print the contents of the file that defines the Kubernetes yum repository:
# On your system, this configuration file could have a different name
cat /etc/yum.repos.d/kubernetes.repo
If you see a baseurl similar to the baseurl in the output below:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
Print the contents of the file that defines the Kubernetes zypper repository:
# On your system, this configuration file could have a different name
cat /etc/zypp/repos.d/kubernetes.repo
If you see a baseurl similar to the baseurl in the output below:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl
You're using the Kubernetes package repositories and this guide applies to you. Otherwise, it's strongly recommended to migrate to the Kubernetes package repositories as described in the official announcement.
The URL used for the Kubernetes package repositories is not limited to pkgs.k8s.io; it can also be one of:
pkgs.k8s.io
pkgs.kubernetes.io
packages.kubernetes.io
Switching to another Kubernetes package repository
This step should be done upon upgrading from one to another Kubernetes minor release in order to get access to the packages of the desired Kubernetes minor version.
- Open the file that defines the Kubernetes apt repository using a text editor of your choice:
nano /etc/apt/sources.list.d/kubernetes.list
You should see a single line with the URL that contains your current Kubernetes minor version. For example, if you're using v1.27, you should see this:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.27/deb/ /
- Change the version in the URL to the next available minor release, for example:
deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /
- Save the file and exit your text editor. Continue following the relevant upgrade instructions.
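If you prefer a non-interactive edit, a one-line sketch that performs the same version switch (adjust the versions and the file name to your setup):
sudo sed -i 's|/core:/stable:/v1.27/|/core:/stable:/v1.28/|' /etc/apt/sources.list.d/kubernetes.list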
- Open the file that defines the Kubernetes yum repository using a text editor of your choice:
nano /etc/yum.repos.d/kubernetes.repo
You should see a file with two URLs that contain your current Kubernetes minor version. For example, if you're using v1.27, you should see this:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
- Change the version in these URLs to the next available minor release, for example:
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.28/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
- Save the file and exit your text editor. Continue following the relevant upgrade instructions.
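Again, as a non-interactive alternative, a one-line sketch that updates both URLs at once (adjust the versions to your setup):
sudo sed -i 's|/core:/stable:/v1.27/|/core:/stable:/v1.28/|g' /etc/yum.repos.d/kubernetes.repo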
What's next
- See how to Upgrade Linux nodes.
- See how to Upgrade Windows nodes.