A writeup for the 2026 K8S Deploy study series.
kubeadm upgrade : control-plane node
Upgrade the cluster from the currently installed 1.32 to 1.33.
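kubeadm only supports moving up one minor release per upgrade (1.32 → 1.33 here, never 1.32 → 1.34 in a single step), so it is worth sanity-checking the jump before touching any packages. A minimal bash sketch; the version strings are hard-coded assumptions matching this lab:

```shell
#!/usr/bin/env bash
# Guard against skipping a minor release: kubeadm upgrades one minor
# version at a time. Versions below are assumptions for this lab.
current="1.32.11"
target="1.33.7"

cur_minor=$(echo "$current" | cut -d. -f2)
tgt_minor=$(echo "$target"  | cut -d. -f2)

if (( tgt_minor - cur_minor > 1 )); then
  echo "refusing: $current -> $target skips a minor version"
  exit 1
fi
echo "ok: $current -> $target is a supported single-minor jump"
```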
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
dnf makecache
The exclude= line pins kubelet/kubeadm/kubectl (plus cri-tools and kubernetes-cni) so a routine dnf update cannot bump them unexpectedly; the commands below pass --disableexcludes=kubernetes to override that pin on purpose.
(⎈|HomeLab:default) root@k8s-ctr:~# dnf list --showduplicates kubeadm --disableexcludes=kubernetes
Last metadata expiration check: 0:00:05 ago on Sun 25 Jan 2026 04:13:40 AM KST.
Installed Packages
kubeadm.aarch64 1.32.11-150500.1.1 @kubernetes
Available Packages
kubeadm.aarch64 1.33.0-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.0-150500.1.1 kubernetes
kubeadm.s390x 1.33.0-150500.1.1 kubernetes
kubeadm.src 1.33.0-150500.1.1 kubernetes
kubeadm.x86_64 1.33.0-150500.1.1 kubernetes
kubeadm.aarch64 1.33.1-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.1-150500.1.1 kubernetes
kubeadm.s390x 1.33.1-150500.1.1 kubernetes
kubeadm.src 1.33.1-150500.1.1 kubernetes
kubeadm.x86_64 1.33.1-150500.1.1 kubernetes
kubeadm.aarch64 1.33.2-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.2-150500.1.1 kubernetes
kubeadm.s390x 1.33.2-150500.1.1 kubernetes
kubeadm.src 1.33.2-150500.1.1 kubernetes
kubeadm.x86_64 1.33.2-150500.1.1 kubernetes
kubeadm.aarch64 1.33.3-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.3-150500.1.1 kubernetes
kubeadm.s390x 1.33.3-150500.1.1 kubernetes
kubeadm.src 1.33.3-150500.1.1 kubernetes
kubeadm.x86_64 1.33.3-150500.1.1 kubernetes
kubeadm.aarch64 1.33.4-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.4-150500.1.1 kubernetes
kubeadm.s390x 1.33.4-150500.1.1 kubernetes
kubeadm.src 1.33.4-150500.1.1 kubernetes
kubeadm.x86_64 1.33.4-150500.1.1 kubernetes
kubeadm.aarch64 1.33.5-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.5-150500.1.1 kubernetes
kubeadm.s390x 1.33.5-150500.1.1 kubernetes
kubeadm.src 1.33.5-150500.1.1 kubernetes
kubeadm.x86_64 1.33.5-150500.1.1 kubernetes
kubeadm.aarch64 1.33.6-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.6-150500.1.1 kubernetes
kubeadm.s390x 1.33.6-150500.1.1 kubernetes
kubeadm.src 1.33.6-150500.1.1 kubernetes
kubeadm.x86_64 1.33.6-150500.1.1 kubernetes
kubeadm.aarch64 1.33.7-150500.1.1 kubernetes
kubeadm.ppc64le 1.33.7-150500.1.1 kubernetes
kubeadm.s390x 1.33.7-150500.1.1 kubernetes
kubeadm.src 1.33.7-150500.1.1 kubernetes
kubeadm.x86_64 1.33.7-150500.1.1 kubernetes
dnf install -y --disableexcludes=kubernetes kubeadm-1.33.7-150500.1.1
(⎈|HomeLab:default) root@k8s-ctr:~# which kubeadm && kubeadm version -o yaml
/usr/bin/kubeadm
clientVersion:
buildDate: "2025-12-09T14:41:01Z"
compiler: gc
gitCommit: a7245cdf3f69e11356c7e8f92b3e78ca4ee4e757
gitTreeState: clean
gitVersion: v1.33.7
goVersion: go1.24.11
major: "1"
minor: "33"
platform: linux/arm64
(⎈|HomeLab:default) root@k8s-ctr:~# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade/config] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade/config] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: 1.32.11
[upgrade/versions] kubeadm version: v1.33.7
I0125 04:14:27.484269 34595 version.go:261] remote version is much newer: v1.35.0; falling back to: stable-1.33
[upgrade/versions] Target version: v1.33.7
[upgrade/versions] Latest version in the v1.32 series: v1.32.11
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT NODE CURRENT TARGET
kubelet k8s-ctr v1.32.11 v1.33.7
kubelet k8s-w1 v1.32.11 v1.33.7
kubelet k8s-w2 v1.32.11 v1.33.7
Upgrade to the latest stable version:
COMPONENT NODE CURRENT TARGET
kube-apiserver k8s-ctr v1.32.11 v1.33.7
kube-controller-manager k8s-ctr v1.32.11 v1.33.7
kube-scheduler k8s-ctr v1.32.11 v1.33.7
kube-proxy 1.32.11 v1.33.7
CoreDNS v1.11.3 v1.12.0
etcd k8s-ctr 3.5.24-0 3.5.24-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.33.7
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
kubelet.config.k8s.io v1beta1 v1beta1 no
_____________________________________________________________________
upgrade apply
kubeadm config images pull
(⎈|HomeLab:default) root@k8s-ctr:~# kubeadm config images pull
I0125 04:16:12.035485 35334 version.go:261] remote version is much newer: v1.35.0; falling back to: stable-1.33
[config/images] Pulled registry.k8s.io/kube-apiserver:v1.33.7
[config/images] Pulled registry.k8s.io/kube-controller-manager:v1.33.7
[config/images] Pulled registry.k8s.io/kube-scheduler:v1.33.7
[config/images] Pulled registry.k8s.io/kube-proxy:v1.33.7
[config/images] Pulled registry.k8s.io/coredns/coredns:v1.12.0
[config/images] Pulled registry.k8s.io/pause:3.10
[config/images] Pulled registry.k8s.io/etcd:3.5.24-0
crictl pull registry.k8s.io/kube-proxy:v1.33.7
crictl pull registry.k8s.io/coredns/coredns:v1.12.0
(⎈|HomeLab:default) root@k8s-ctr:~# kubeadm upgrade apply v1.33.7
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade/preflight] Running preflight checks
[upgrade] Running cluster health checks
[upgrade/preflight] You have chosen to upgrade the cluster version to "v1.33.7"
[upgrade/versions] Cluster version: v1.32.11
[upgrade/versions] kubeadm version: v1.33.7
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[upgrade/control-plane] Upgrading your static Pod-hosted control plane to version "v1.33.7" (timeout: 5m0s)...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1264508871"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-01-25-04-17-09/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
...
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moving new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backing up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2026-01-25-04-17-09/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This can take up to 5m0s
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0125 04:20:06.669406 35713 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config1825126381 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config1825126381/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[upgrade/bootstrap-token] Configuring bootstrap token and cluster-info RBAC rules
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade] SUCCESS! A control plane node of your cluster was upgraded to "v1.33.7".
[upgrade] Now please proceed with upgrading the rest of the nodes by following the right order.
Check cluster status
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 88m v1.32.11 192.168.10.100 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w1 Ready <none> 46m v1.32.11 192.168.10.101 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w2 Ready <none> 46m v1.32.11 192.168.10.102 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
# kubelet has not been upgraded yet
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl describe node k8s-ctr | grep 'Kubelet Version:'
Kubelet Version: v1.32.11
(⎈|HomeLab:default) root@k8s-ctr:~# ls -l /etc/kubernetes/manifests/
cat /etc/kubernetes/manifests/*.yaml | grep -i image:
total 16
-rw-------. 1 root root 2549 Jan 25 04:17 etcd.yaml
-rw-------. 1 root root 3602 Jan 25 04:17 kube-apiserver.yaml
-rw-------. 1 root root 3103 Jan 25 04:17 kube-controller-manager.yaml
-rw-------. 1 root root 1656 Jan 25 04:17 kube-scheduler.yaml
image: registry.k8s.io/etcd:3.5.24-0
image: registry.k8s.io/kube-apiserver:v1.33.7
image: registry.k8s.io/kube-controller-manager:v1.33.7
image: registry.k8s.io/kube-scheduler:v1.33.7
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pods -n kube-system -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{range .spec.containers[*]} - {.name}: {.image}{"\n"}{end}{"\n"}{end}'
coredns-674b8bbfcf-5mfmw
- coredns: registry.k8s.io/coredns/coredns:v1.12.0
coredns-674b8bbfcf-svcdl
- coredns: registry.k8s.io/coredns/coredns:v1.12.0
etcd-k8s-ctr
- etcd: registry.k8s.io/etcd:3.5.24-0
kube-apiserver-k8s-ctr
- kube-apiserver: registry.k8s.io/kube-apiserver:v1.33.7
kube-controller-manager-k8s-ctr
- kube-controller-manager: registry.k8s.io/kube-controller-manager:v1.33.7
kube-proxy-76rfv
- kube-proxy: registry.k8s.io/kube-proxy:v1.33.7
kube-proxy-lfwv2
- kube-proxy: registry.k8s.io/kube-proxy:v1.33.7
kube-proxy-xhddm
- kube-proxy: registry.k8s.io/kube-proxy:v1.33.7
kube-scheduler-k8s-ctr
- kube-scheduler: registry.k8s.io/kube-scheduler:v1.33.7
metrics-server-5dd7b49d79-5q6zx
- metrics-server: registry.k8s.io/metrics-server/metrics-server:v0.8.0
Upgrade kubelet / kubectl
dnf list --showduplicates kubectl --disableexcludes=kubernetes
dnf install -y --disableexcludes=kubernetes kubelet-1.33.7-150500.1.1 kubectl-1.33.7-150500.1.1
(⎈|HomeLab:default) root@k8s-ctr:~# which kubectl && kubectl version --client=true
/usr/bin/kubectl
Client Version: v1.33.7
Kustomize Version: v5.6.0
(⎈|HomeLab:default) root@k8s-ctr:~# which kubelet && kubelet --version
/usr/bin/kubelet
Kubernetes v1.33.7
systemctl daemon-reload
systemctl restart kubelet
# The k8s API server is unreachable for about 10 seconds after the kubelet restart
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get nodes -o wide
The connection to the server 192.168.10.100:6443 was refused - did you specify the right host or port?
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 91m v1.33.7 192.168.10.100 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w1 Ready <none> 49m v1.32.11 192.168.10.101 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w2 Ready <none> 49m v1.32.11 192.168.10.102 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl describe node k8s-ctr | grep 'Kubelet Version:'
Kubelet Version: v1.33.7
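The brief connection refusal above clears on its own once the static pods come back. Rather than retrying by hand, the wait can be scripted. In this sketch probe() is a local stand-in that succeeds on the third attempt so the loop is runnable anywhere; against a real cluster its body would be something like `kubectl get --raw /readyz >/dev/null 2>&1`:

```shell
# Wait for the API server to answer again after restarting kubelet
# (the log above shows roughly a 10-second gap). probe() is a stand-in
# so this snippet is self-contained; replace its body with a real
# readiness call, e.g.: kubectl get --raw /readyz >/dev/null 2>&1
attempts=0
probe() { [ "$attempts" -ge 3 ]; }   # pretend the API recovers on try 3

until probe; do
  attempts=$((attempts + 1))
  sleep 1
done
echo "API server reachable after ${attempts} attempts"
```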
kubeadm upgrade : worker node
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -A -owide | grep k8s-w1
default webpod-697b545f57-69cxx 1/1 Running 0 21m 10.244.1.8 k8s-w1 <none> <none>
kube-flannel kube-flannel-ds-p8scw 1/1 Running 0 51m 192.168.10.101 k8s-w1 <none> <none>
kube-system coredns-674b8bbfcf-svcdl 1/1 Running 0 5m19s 10.244.1.9 k8s-w1 <none> <none>
kube-system kube-proxy-76rfv 1/1 Running 0 5m19s 192.168.10.101 k8s-w1 <none> <none>
monitoring alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 0 46m 10.244.1.6 k8s-w1 <none> <none>
monitoring kube-prometheus-stack-kube-state-metrics-7846957b5b-rcgj2 1/1 Running 7 (6m41s ago) 46m 10.244.1.3 k8s-w1 <none> <none>
monitoring kube-prometheus-stack-operator-584f446c98-5kl6m 1/1 Running 0 46m 10.244.1.4 k8s-w1 <none> <none>
monitoring kube-prometheus-stack-prometheus-node-exporter-66mg2 1/1 Running 0 46m 192.168.10.101 k8s-w1 <none> <none>
monitoring x509-certificate-exporter-nodes-vz6bf 1/1 Running 0 25m 10.244.1.7 k8s-w1 <none> <none>
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -A -owide | grep k8s-w2
default webpod-697b545f57-djhsz 1/1 Running 0 21m 10.244.2.6 k8s-w2 <none> <none>
kube-flannel kube-flannel-ds-bfcb7 1/1 Running 0 51m 192.168.10.102 k8s-w2 <none> <none>
kube-system coredns-674b8bbfcf-5mfmw 1/1 Running 0 5m23s 10.244.2.7 k8s-w2 <none> <none>
kube-system kube-proxy-xhddm 1/1 Running 0 5m19s 192.168.10.102 k8s-w2 <none> <none>
kube-system metrics-server-5dd7b49d79-5q6zx 1/1 Running 0 47m 10.244.2.2 k8s-w2 <none> <none>
monitoring kube-prometheus-stack-grafana-5cb7c586f9-h2xdw 3/3 Running 0 46m 10.244.2.3 k8s-w2 <none> <none>
monitoring kube-prometheus-stack-prometheus-node-exporter-hq4nl 1/1 Running 0 46m 192.168.10.102 k8s-w2 <none> <none>
monitoring prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 46m 10.244.2.4 k8s-w2 <none> <none>
monitoring x509-certificate-exporter-nodes-bnlk6 1/1 Running 0 26m 10.244.2.5 k8s-w2 <none> <none>
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get sts -A
NAMESPACE NAME READY AGE
monitoring alertmanager-kube-prometheus-stack-alertmanager 1/1 46m
monitoring prometheus-kube-prometheus-stack-prometheus 1/1 46m
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pv,pvc -A
No resources found
# PodDisruptionBudget for the webpod deployment
(⎈|HomeLab:default) root@k8s-ctr:~# cat <<EOF | kubectl apply -f -
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: webpod
namespace: default
spec:
maxUnavailable: 0
selector:
matchLabels:
app: webpod
EOF
poddisruptionbudget.policy/webpod created
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pdb
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
webpod N/A 0 0 4s
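The ALLOWED DISRUPTIONS column reads 0 because, with maxUnavailable: 0, the budget can never tolerate losing a pod. Roughly, the disruption controller computes the following; the replica counts are assumptions matching this lab's 2-replica webpod deployment:

```shell
# How ALLOWED DISRUPTIONS falls out of a maxUnavailable PDB (sketch):
#   desiredHealthy = replicas - maxUnavailable
#   allowed        = currentHealthy - desiredHealthy   (floored at 0)
# Values below are assumptions for this lab's 2-replica webpod.
replicas=2
max_unavailable=0
current_healthy=2

desired_healthy=$(( replicas - max_unavailable ))
allowed=$(( current_healthy - desired_healthy ))
echo "allowed disruptions: $allowed"   # 0 -> eviction requests are refused
```

With allowed at 0, `kubectl drain` cannot evict the webpod pods, which is why the PDB is deleted before the drain is retried below.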
Drain worker node 2 (k8s-w2)
- Restarting kubelet affects every Pod on that node.
- Performing the upgrade without draining first can cause:
- abrupt Pod termination
- delayed Service endpoint updates
- real traffic loss
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl drain k8s-w2
node/k8s-w2 cordoned
error: unable to drain node "k8s-w2" due to error: [cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-bfcb7, kube-system/kube-proxy-xhddm, monitoring/kube-prometheus-stack-prometheus-node-exporter-hq4nl, monitoring/x509-certificate-exporter-nodes-bnlk6, cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-5dd7b49d79-5q6zx, monitoring/kube-prometheus-stack-grafana-5cb7c586f9-h2xdw, monitoring/prometheus-kube-prometheus-stack-prometheus-0], continuing command...
There are pending nodes to be drained:
k8s-w2
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-flannel/kube-flannel-ds-bfcb7, kube-system/kube-proxy-xhddm, monitoring/kube-prometheus-stack-prometheus-node-exporter-hq4nl, monitoring/x509-certificate-exporter-nodes-bnlk6
cannot delete Pods with local storage (use --delete-emptydir-data to override): kube-system/metrics-server-5dd7b49d79-5q6zx, monitoring/kube-prometheus-stack-grafana-5cb7c586f9-h2xdw, monitoring/prometheus-kube-prometheus-stack-prometheus-0
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-ctr Ready control-plane 95m v1.33.7
k8s-w1 Ready <none> 52m v1.32.11
k8s-w2 Ready,SchedulingDisabled <none> 52m v1.32.11
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -A -owide |grep k8s-w2
default webpod-697b545f57-djhsz 1/1 Running 0 23m 10.244.2.6 k8s-w2 <none> <none>
kube-flannel kube-flannel-ds-bfcb7 1/1 Running 0 52m 192.168.10.102 k8s-w2 <none> <none>
kube-system coredns-674b8bbfcf-5mfmw 1/1 Running 0 7m15s 10.244.2.7 k8s-w2 <none> <none>
kube-system kube-proxy-xhddm 1/1 Running 0 7m11s 192.168.10.102 k8s-w2 <none> <none>
kube-system metrics-server-5dd7b49d79-5q6zx 1/1 Running 0 49m 10.244.2.2 k8s-w2 <none> <none>
monitoring kube-prometheus-stack-grafana-5cb7c586f9-h2xdw 3/3 Running 0 48m 10.244.2.3 k8s-w2 <none> <none>
monitoring kube-prometheus-stack-prometheus-node-exporter-hq4nl 1/1 Running 0 48m 192.168.10.102 k8s-w2 <none> <none>
monitoring prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 48m 10.244.2.4 k8s-w2 <none> <none>
monitoring x509-certificate-exporter-nodes-bnlk6 1/1 Running 0 27m 10.244.2.5 k8s-w2 <none> <none>
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl delete pdb webpod
poddisruptionbudget.policy "webpod" deleted
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl drain k8s-w2 --ignore-daemonsets --delete-emptydir-data
node/k8s-w2 already cordoned
Warning: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-bfcb7, kube-system/kube-proxy-xhddm, monitoring/kube-prometheus-stack-prometheus-node-exporter-hq4nl, monitoring/x509-certificate-exporter-nodes-bnlk6
evicting pod default/webpod-697b545f57-djhsz
pod/webpod-697b545f57-djhsz evicted
node/k8s-w2 drained
Normal NodeNotSchedulable 76s kubelet Node k8s-w2 status is now: NodeNotSchedulable
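The whole per-worker pass (drain, upgrade kubeadm, kubeadm upgrade node, upgrade kubelet/kubectl, restart, uncordon) can be read as one sequence. A dry-run sketch: run() only echoes each step, the ssh wrapping is an illustrative assumption (in this lab the commands are run directly on the worker), and the closing kubectl uncordon is the step that lets the node schedule pods again:

```shell
# Dry-run sketch of the per-worker upgrade order used in this section.
# run() echoes instead of executing, so this is safe anywhere; swap its
# body for "$@" to execute for real. The ssh wrapping is illustrative --
# in this lab the node-local commands are run directly on the worker.
node=k8s-w2
version=1.33.7-150500.1.1
plan=""
run() { plan+="+ $*"$'\n'; echo "+ $*"; }

run kubectl drain "$node" --ignore-daemonsets --delete-emptydir-data
run ssh "$node" dnf install -y --disableexcludes=kubernetes "kubeadm-$version"
run ssh "$node" kubeadm upgrade node
run ssh "$node" dnf install -y --disableexcludes=kubernetes "kubelet-$version" "kubectl-$version"
run ssh "$node" systemctl daemon-reload
run ssh "$node" systemctl restart kubelet
run kubectl uncordon "$node"   # without this the node stays SchedulingDisabled
```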
Update the repo and install on worker node 2 (k8s-w2)
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
dnf makecache
dnf install -y --disableexcludes=kubernetes kubeadm-1.33.7-150500.1.1
root@k8s-w2:~# which kubeadm && kubeadm version -o yaml
/usr/bin/kubeadm
clientVersion:
buildDate: "2025-12-09T14:41:01Z"
compiler: gc
gitCommit: a7245cdf3f69e11356c7e8f92b3e78ca4ee4e757
gitTreeState: clean
gitVersion: v1.33.7
goVersion: go1.24.11
major: "1"
minor: "33"
platform: linux/arm64
(⎈|HomeLab:default) root@k8s-ctr:~# kubeadm upgrade node
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade/preflight] Running pre-flight checks
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[upgrade/control-plane] Upgrading your Static Pod-hosted control plane instance to version "v1.33.7"...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1787272466"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0125 04:29:13.994627 42764 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config2004289552 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config2004289552/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
dnf install -y --disableexcludes=kubernetes kubelet-1.33.7-150500.1.1 kubectl-1.33.7-150500.1.1
root@k8s-w2:~# which kubectl && kubectl version --client=true
which kubelet && kubelet --version
/usr/bin/kubectl
Client Version: v1.33.7
Kustomize Version: v5.6.0
/usr/bin/kubelet
Kubernetes v1.33.7
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet --no-pager
root@k8s-w2:~# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
7068507da5faf 78ccb937011a5 9 minutes ago Running kube-proxy 0 50d16979fb12c kube-proxy-xhddm kube-system
a9db646e5b868 11873b3fefc46 30 minutes ago Running x509-certificate-exporter 0 a700751a3c1f2 x509-certificate-exporter-nodes-bnlk6 monitoring
177837127ba1b 6b5bc413b280c 50 minutes ago Running node-exporter 0 c1a410d18fdd5 kube-prometheus-stack-prometheus-node-exporter-hq4nl monitoring
6ccbdcc3de3e7 d84558c0144bc 55 minutes ago Running kube-flannel 0 25d5799fe45f6 kube-flannel-ds-bfcb7 kube-flannel
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.34/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
dnf makecache
(⎈|HomeLab:default) root@k8s-ctr:~# kubeadm upgrade node
[upgrade] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[upgrade] Use 'kubeadm init phase upload-config --config your-config-file' to re-upload it.
[upgrade/preflight] Running pre-flight checks
[upgrade/preflight] Pulling images required for setting up a Kubernetes cluster
[upgrade/preflight] This might take a minute or two, depending on the speed of your internet connection
[upgrade/preflight] You can also perform this action beforehand using 'kubeadm config images pull'
[upgrade/control-plane] Upgrading your Static Pod-hosted control plane instance to version "v1.33.7"...
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1244878104"
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Current and new manifests of kube-apiserver are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Current and new manifests of kube-controller-manager are equal, skipping upgrade
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Current and new manifests of kube-scheduler are equal, skipping upgrade
[upgrade/control-plane] The control plane instance for this node was successfully upgraded!
[upgrade/kubeconfig] The kubeconfig files for this node were successfully upgraded!
W0125 04:30:49.956448 43195 postupgrade.go:117] Using temporary directory /etc/kubernetes/tmp/kubeadm-kubelet-config162224862 for kubelet config. To override it set the environment variable KUBEADM_UPGRADE_DRYRUN_DIR
[upgrade] Backing up kubelet config file to /etc/kubernetes/tmp/kubeadm-kubelet-config162224862/config.yaml
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade/kubelet-config] The kubelet configuration for this node was successfully upgraded!
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctr Ready control-plane 99m v1.33.7 192.168.10.100 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w1 Ready <none> 57m v1.32.11 192.168.10.101 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-w2 Ready <none> 57m v1.34.3 192.168.10.102 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
(⎈|HomeLab:default) root@k8s-ctr:~# kubectl get pod -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
webpod-697b545f57-69cxx 1/1 Running 0 27m 10.244.1.8 k8s-w1 <none> <none>
webpod-697b545f57-6zwb9 1/1 Running 0 3s 10.244.2.8 k8s-w2 <none> <none>