This is a K8s deployment study write-up from 2026.
RKE2 Lab Environment Setup
RKE2 is a CNCF-certified Kubernetes distribution from Rancher: a Kubernetes platform that ships with security hardening (against the CIS benchmark) and operational automation by default.
Put simply, you can think of it as a kubeadm-style environment with security and automation built in.
Because it is a hardened distribution, RKE2 runs the control plane as static pods, deploys add-ons via Helm, and bundles containerd and etcd, which lowers operational complexity; CIS-aligned security settings and etcd secrets encryption are applied by default, making it a good fit for enterprise environments.
[root@k8s-node1 ~]# curl -sfL https://get.rke2.io --output install.sh
chmod +x install.sh
INSTALL_RKE2_CHANNEL=v1.33 ./install.sh
[INFO] using stable RPM repositories
[INFO] using 1.33 series from channel stable
================================================================================================================================================
Package Architecture Version Repository Size
================================================================================================================================================
Installing:
rke2-server aarch64 1.33.8~rke2r1-0.el9 rancher-rke2-1.33-stable 8.3 k
Installing dependencies:
rke2-common aarch64 1.33.8~rke2r1-0.el9 rancher-rke2-1.33-stable 25 M
rke2-selinux noarch 0.22-1.el9 rancher-rke2-common-stable 22 k
Transaction Summary
================================================================================================================================================
Install 3 Packages
Total download size: 25 M
Installed size: 113 M
Downloading Packages:
(1/3): rke2-selinux-0.22-1.el9.noarch.rpm 42 kB/s | 22 kB 00:00
(2/3): rke2-server-1.33.8~rke2r1-0.el9.aarch64.rpm 15 kB/s | 8.3 kB 00:00
(3/3): rke2-common-1.33.8~rke2r1-0.el9.aarch64.rpm 11 MB/s | 25 MB 00:02
------------------------------------------------------------------------------------------------------------------------------------------------
.. 3/3
Installed:
rke2-common-1.33.8~rke2r1-0.el9.aarch64 rke2-selinux-0.22-1.el9.noarch rke2-server-1.33.8~rke2r1-0.el9.aarch64
Complete!
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# dnf repolist
repo id repo name
appstream Rocky Linux 9 - AppStream
baseos Rocky Linux 9 - BaseOS
extras Rocky Linux 9 - Extras
rancher-rke2-1.33-stable Rancher RKE2 1.33 (v1.33)
rancher-rke2-common-stable Rancher RKE2 Common (v1.33)
[root@k8s-node1 ~]# cat /etc/yum.repos.d/rancher-rke2.repo
[rancher-rke2-common-stable]
name=Rancher RKE2 Common (v1.33)
baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[rancher-rke2-1.33-stable]
name=Rancher RKE2 1.33 (v1.33)
baseurl=https://rpm.rancher.io/rke2/stable/1.33/centos/9/aarch64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://rpm.rancher.io/public.key
[root@k8s-node1 ~]# tree /etc/rancher
/etc/rancher
└── rke2
1 directory, 0 files
[root@k8s-node1 ~]# tree /var/lib/rancher/
/var/lib/rancher/
└── rke2
    ├── agent
    │   ├── containerd
    │   │   └── io.containerd.snapshotter.v1.overlayfs
    │   │       └── snapshots
    │   └── logs
    ├── data
    └── server
8 directories, 0 files
[root@k8s-node1 ~]# cat << EOF > /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: "0644"
debug: true
cni: canal
bind-address: 192.168.10.11
advertise-address: 192.168.10.11
node-ip: 192.168.10.11
disable-cloud-controller: true
disable:
  - servicelb
  - rke2-coredns-autoscaler
  - rke2-ingress-nginx
  - rke2-snapshot-controller
  - rke2-snapshot-controller-crd
  - rke2-snapshot-validation-webhook
EOF
[root@k8s-node1 ~]# mkdir -p /var/lib/rancher/rke2/server/manifests/
cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-canal-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-canal
  namespace: kube-system
spec:
  valuesContent: |-
    flannel:
      iface: "enp0s9"
EOF
[root@k8s-node1 ~]# cat << EOF > /var/lib/rancher/rke2/server/manifests/rke2-coredns-config.yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-coredns
  namespace: kube-system
spec:
  valuesContent: |-
    autoscaler:
      enabled: false
EOF
[root@k8s-node1 ~]# systemctl enable --now rke2-server.service
systemctl status rke2-server --no-pager
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-server.service → /usr/lib/systemd/system/rke2-server.service.
● rke2-server.service - Rancher Kubernetes Engine v2 (server)
Loaded: loaded (/usr/lib/systemd/system/rke2-server.service; enabled; preset: disabled)
Active: active (running) since Mon 2026-02-23 02:35:47 KST; 6ms ago
Docs: https://github.com/rancher/rke2#readme
Process: 6343 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 6344 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Main PID: 6345 (rke2)
Tasks: 94
Memory: 1.6G
CPU: 28.095s
CGroup: /system.slice/rke2-server.service
             ├─6345 "/usr/bin/rke2 server"
             ├─6367 containerd -c /var/lib/rancher/rke2/agent/etc/con…
             ├─6481 kubelet --volume-plugin-dir=/var/lib/kubelet/volu…
             ├─6543 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b28723…
             ├─6550 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b28723…
             ├─6708 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b28723…
             ├─6953 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b28723…
             └─6961 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b28723…
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…nt"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…3]"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…te"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…ly"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…ng"
Feb 23 02:35:47 k8s-node1 systemd[1]: Started Rancher Kubernetes E…r).
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…es"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…nt"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…l]"
Feb 23 02:35:47 k8s-node1 rke2[6345]: time="2026-02-23T02:35:47+09…er"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8s-node1 ~]# pstree -a | grep -v color | grep 'rke2$' -A5
|-rke2
| |-containerd -c /var/lib/rancher/rke2/agent/etc/containerd/config.toml
| | `-10*[{containerd}]
| |-kubelet --volume-plugin-dir=/var/lib/kubelet/volumeplugins --file-check-frequency=5s --sync-frequency=30s...
| | `-17*[{kubelet}]
| `-11*[{rke2}]
[root@k8s-node1 ~]# mkdir ~/.kube
ls -l /etc/rancher/rke2/rke2.yaml
cp /etc/rancher/rke2/rke2.yaml ~/.kube/config
-rw-r--r--. 1 root root 2965 Feb 23 02:35 /etc/rancher/rke2/rke2.yaml
[root@k8s-node1 ~]# ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
ln -s /var/lib/rancher/rke2/bin/kubectl /usr/local/bin/kubectl
ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
ln -s /var/lib/rancher/rke2/bin/runc /usr/local/bin/runc
ln -s /var/lib/rancher/rke2/bin/ctr /usr/local/bin/ctr
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
[root@k8s-node1 ~]# source <(kubectl completion bash)
alias k=kubectl
complete -F __start_kubectl k
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
[root@k8s-node1 ~]# kubectl cluster-info -v=6
I0223 02:48:44.286842 20076 loader.go:402] Config loaded from file: /root/.kube/config
I0223 02:48:44.287224 20076 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0223 02:48:44.287244 20076 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0223 02:48:44.287253 20076 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0223 02:48:44.287263 20076 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0223 02:48:44.287267 20076 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0223 02:48:44.289555 20076 round_trippers.go:632] "Response" verb="GET" url="https://192.168.10.11:6443/api?timeout=32s" status="200 OK" milliseconds=2
I0223 02:48:44.290839 20076 round_trippers.go:632] "Response" verb="GET" url="https://192.168.10.11:6443/apis?timeout=32s" status="200 OK" milliseconds=0
I0223 02:48:44.294824 20076 round_trippers.go:632] "Response" verb="GET" url="https://192.168.10.11:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue" status="200 OK" milliseconds=1
Kubernetes control plane is running at https://192.168.10.11:6443
CoreDNS is running at https://192.168.10.11:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-node1 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane,etcd,master 13m v1.33.8+rke2r1 192.168.10.11 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
[root@k8s-node1 ~]# helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
rke2-canal kube-system 1 2026-02-22 17:35:55.80529804 +0000 UTC deployed rke2-canal-v3.31.3-build2026020600 v3.31.3
rke2-coredns kube-system 1 2026-02-22 17:35:55.804501882 +0000 UTC deployed rke2-coredns-1.45.201 1.13.1
rke2-metrics-server kube-system 1 2026-02-22 17:36:18.479568302 +0000 UTC deployed rke2-metrics-server-3.13.007 0.8.0
rke2-runtimeclasses kube-system 1 2026-02-22 17:36:19.430256394 +0000 UTC deployed rke2-runtimeclasses-0.1.000 0.1.0
[root@k8s-node1 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-k8s-node1 1/1 Running 0 13m
kube-system helm-install-rke2-canal-rws6r 0/1 Completed 0 13m
kube-system helm-install-rke2-coredns-d7gv2 0/1 Completed 0 13m
kube-system helm-install-rke2-metrics-server-kpm8k 0/1 Completed 0 13m
kube-system helm-install-rke2-runtimeclasses-xwhjz 0/1 Completed 0 13m
kube-system kube-apiserver-k8s-node1 1/1 Running 0 13m
kube-system kube-controller-manager-k8s-node1 1/1 Running 0 13m
kube-system kube-proxy-k8s-node1 1/1 Running 0 13m
kube-system kube-scheduler-k8s-node1 1/1 Running 0 13m
kube-system rke2-canal-lcxlg 2/2 Running 0 13m
kube-system rke2-coredns-rke2-coredns-559595db99-rx8ch 1/1 Running 0 13m
kube-system rke2-metrics-server-fdcdf575d-gkhg2 1/1 Running 0 12m
RKE2 Internal Directory Layout
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2 -L 1
/var/lib/rancher/rke2
├── agent
├── bin -> /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b2872361ec5/bin
├── data
└── server
4 directories, 0 files
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2/server/ -L 1
/var/lib/rancher/rke2/server/
├── agent-token -> /var/lib/rancher/rke2/server/token
├── cred
├── db
├── etc
├── manifests
├── node-token -> /var/lib/rancher/rke2/server/token
├── tls
└── token
5 directories, 3 files
The server directory holds the control-plane pieces: etcd data, certificates, and the Helm-based add-ons. The default add-ons are deployed automatically in Helm form via the manifests directory.
manifests -> Helm chart structure
[root@k8s-node1 ~]# kubectl get crd | grep -E 'helm|addon'
addons.k3s.cattle.io 2026-02-22T17:35:44Z
helmchartconfigs.helm.cattle.io 2026-02-22T17:35:44Z
helmcharts.helm.cattle.io 2026-02-22T17:35:44Z
[root@k8s-node1 ~]# kubectl get helmcharts.helm.cattle.io -n kube-system -owide
NAME REPO CHART VERSION TARGETNAMESPACE BOOTSTRAP FAILED JOB
rke2-canal true False helm-install-rke2-canal
rke2-coredns true False helm-install-rke2-coredns
rke2-metrics-server False helm-install-rke2-metrics-server
rke2-runtimeclasses False helm-install-rke2-runtimeclasses
[root@k8s-node1 ~]# kubectl get job -n kube-system
NAME STATUS COMPLETIONS DURATION AGE
helm-install-rke2-canal Complete 1/1 10s 30m
helm-install-rke2-coredns Complete 1/1 9s 30m
helm-install-rke2-metrics-server Complete 1/1 32s 30m
helm-install-rke2-runtimeclasses Complete 1/1 33s 30m
[root@k8s-node1 ~]# kubectl get helmchartconfigs -n kube-system
NAME AGE
rke2-canal 30m
rke2-coredns 30m
RKE2 installs its add-ons automatically through HelmChart CRs rather than plain static YAML.
/var/lib/rancher/rke2/server/manifests
↓
HelmChart CR created
↓
helm-install Job runs
↓
Add-ons installed (coredns, canal, etc.)
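The same mechanism works for user-supplied charts: any HelmChart CR dropped into the manifests directory is picked up and installed by a helm-install Job. A sketch of what such a CR could look like — the chart name, repo, and values below are hypothetical, not part of this cluster:

```yaml
# Hypothetical example of a user HelmChart CR placed in
# /var/lib/rancher/rke2/server/manifests/ — chart/repo/values are illustrative only.
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: demo-nginx
  namespace: kube-system        # HelmChart CRs are processed from kube-system
spec:
  repo: https://charts.bitnami.com/bitnami
  chart: nginx
  targetNamespace: demo         # where the chart's resources are installed
  createNamespace: true
  valuesContent: |-
    replicaCount: 1
```

Once the file lands in the manifests directory, a helm-install-demo-nginx Job should appear in kube-system, mirroring the built-in add-on flow above.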
Agent Directory
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2/agent/ -L 3
/var/lib/rancher/rke2/agent/
├── client-ca.crt
├── client-kubelet.crt
├── client-kubelet.key
├── client-kube-proxy.crt
├── client-kube-proxy.key
├── client-rke2-controller.crt
├── client-rke2-controller.key
├── containerd
│   ├── bin
│   ├── containerd.log
│   ├── io.containerd.content.v1.content
│   │   ├── blobs
│   │   └── ingest
│   ├── io.containerd.grpc.v1.cri
│   │   ├── containers
│   │   └── sandboxes
│   ├── io.containerd.grpc.v1.introspection
│   │   └── uuid
│   ├── io.containerd.metadata.v1.bolt
│   │   └── meta.db
│   ├── io.containerd.runtime.v2.task
│   │   └── k8s.io
│   ├── io.containerd.sandbox.controller.v1.shim
│   ├── io.containerd.snapshotter.v1.blockfile
│   ├── io.containerd.snapshotter.v1.btrfs
│   ├── io.containerd.snapshotter.v1.erofs
│   ├── io.containerd.snapshotter.v1.fuse-overlayfs
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.native
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.overlayfs
│   │   ├── metadata.db
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.stargz
│   │   ├── snapshotter
│   │   └── stargz
│   ├── lib
│   └── tmpmounts
├── etc
│   ├── containerd
│   │   └── config.toml
│   ├── crictl.yaml
│   └── kubelet.conf.d
│       └── 00-rke2-defaults.conf
├── images
│   ├── etcd-image.txt
│   ├── kube-apiserver-image.txt
│   ├── kube-controller-manager-image.txt
│   ├── kube-proxy-image.txt
│   ├── kube-scheduler-image.txt
│   └── runtime-image.txt
├── kubelet.kubeconfig
├── kubeproxy.kubeconfig
├── logs
│   └── kubelet.log
├── pod-manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   ├── kube-proxy.yaml
│   └── kube-scheduler.yaml
├── rke2controller.kubeconfig
├── server-ca.crt
├── serving-kubelet.crt
└── serving-kubelet.key
33 directories, 32 files
The agent side manages the kubelet, containerd, and static pods; on a server node the control-plane components run as static pods from here.
How the RKE2 Control Plane Works
[root@k8s-node1 ~]# tree /var/lib/rancher/rke2/agent/pod-manifests
/var/lib/rancher/rke2/agent/pod-manifests
├── etcd.yaml
├── kube-apiserver.yaml
├── kube-controller-manager.yaml
├── kube-proxy.yaml
└── kube-scheduler.yaml
0 directories, 5 files
RKE2 runs the control-plane components as static pods, managed directly by the kubelet.
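The kubelet learns about this directory through its configuration; conceptually the relevant setting is staticPodPath. An illustrative fragment only — the actual 00-rke2-defaults.conf drop-in shown in the tree above contains considerably more than this:

```yaml
# Illustrative KubeletConfiguration fragment (not the literal RKE2 file).
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
staticPodPath: /var/lib/rancher/rke2/agent/pod-manifests
```

Any manifest written into that path is started by the kubelet itself, with no API server involvement, which is what lets etcd and kube-apiserver bootstrap each other.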
cat /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml
# Insecure settings are disabled by default; RBAC and NodeRestriction are applied out of the box.
--anonymous-auth=false
--authorization-mode=Node,RBAC
--enable-admission-plugins=NodeRestriction
# Encryption at rest is enabled by default for Secret data stored in etcd.
--encryption-provider-config=/var/lib/rancher/rke2/server/cred/encryption-config.json
# All inter-component communication is TLS-based.
--client-ca-file
--tls-cert-file
--tls-private-key-file
# etcd runs as a local static pod, reached over loopback.
--etcd-servers=https://127.0.0.1:2379
The kube-apiserver therefore ships with strong security defaults (RBAC, NodeRestriction, anonymous auth disabled), and Secret data stored in etcd is encrypted with an AES-based provider. etcd itself runs as a local static pod communicating over TLS, and hostPath mounts are restricted to individual files to further harden the node.
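The file passed to --encryption-provider-config follows the standard Kubernetes EncryptionConfiguration schema. In outline it looks roughly like this — the exact provider order and key material vary per install, and the key is redacted here:

```yaml
# Schematic EncryptionConfiguration (not the literal RKE2-generated file).
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:                # encrypt new writes with AES-CBC
          keys:
            - name: aescbckey
              secret: <base64-encoded 32-byte key>
      - identity: {}           # still able to read legacy plaintext data
```

The first provider in the list is used for writes; the rest are tried in order for reads, which is how key rotation works.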
Adding a Worker Node
[root@k8s-node1 ~]# cat /var/lib/rancher/rke2/server/node-token
K10f9339365cfc8369c242f8b87c8e2287a32f9b0a001cc0b1e0edb01aa7047cf7a::server:4aa040c17d99e5830cdcceef49f911ab
[root@k8s-node1 ~]# ss -tnlp | grep 9345
LISTEN 0 4096 192.168.10.11:9345 0.0.0.0:* users:(("rke2",pid=6345,fd=6))
LISTEN 0 4096 127.0.0.1:9345 0.0.0.0:* users:(("rke2",pid=6345,fd=7))
LISTEN 0 4096 [::1]:9345 [::]:* users:(("rke2",pid=6345,fd=8))
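The node token above has two parts separated by ::: a K10-prefixed hash that pins the server's CA certificate, and the credential the agent presents when joining over port 9345. A quick shell sketch splitting it (the format is inferred from the value shown above):

```shell
# Split the RKE2 node token into its parts (format: K10<ca-hash>::<user>:<password>).
TOKEN='K10f9339365cfc8369c242f8b87c8e2287a32f9b0a001cc0b1e0edb01aa7047cf7a::server:4aa040c17d99e5830cdcceef49f911ab'
CA_HASH="${TOKEN%%::*}"   # pins the cluster CA the agent expects to see
CREDS="${TOKEN#*::}"      # user:password pair used for the join request
echo "CA hash : $CA_HASH"
echo "Creds   : $CREDS"
```

Because the CA hash is embedded, an agent given the full token will refuse to join a server whose CA does not match — which is why copying the whole token (not just the credential half) matters.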
[root@k8s-node2 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
[INFO] using stable RPM repositories
[INFO] using 1.33 series from channel stable
Rancher RKE2 Common (v1.33) 714 B/s | 659 B 00:00
Rancher RKE2 Common (v1.33) 4.8 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Rancher RKE2 Common (v1.33) 1.8 kB/s | 2.6 kB 00:01
Rancher RKE2 1.33 (v1.33) 676 B/s | 659 B 00:00
Rancher RKE2 1.33 (v1.33) 5.3 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Rancher RKE2 1.33 (v1.33) 4.2 kB/s | 5.9 kB 00:01
Dependencies resolved.
======================================================================
Package Arch Version Repository Size
======================================================================
Installing:
rke2-agent aarch64 1.33.8~rke2r1-0.el9
rancher-rke2-1.33-stable 8.3 k
Installing dependencies:
rke2-common aarch64 1.33.8~rke2r1-0.el9
rancher-rke2-1.33-stable 25 M
rke2-selinux noarch 0.22-1.el9 rancher-rke2-common-stable 22 k
Transaction Summary
======================================================================
Install 3 Packages
Total download size: 25 M
Installed size: 113 M
Downloading Packages:
(1/3): rke2-selinux-0.22-1.el9.noarch 44 kB/s | 22 kB 00:00
(2/3): rke2-agent-1.33.8~rke2r1-0.el9 14 kB/s | 8.3 kB 00:00
(3/3): rke2-common-1.33.8~rke2r1-0.el 12 MB/s | 25 MB 00:02
----------------------------------------------------------------------
Total 11 MB/s | 25 MB 00:02
Rancher RKE2 Common (v1.33) 5.1 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 1/3
Installing : rke2-selinux-0.22-1.el9.noarch 1/3
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 1/3
Installing : rke2-common-1.33.8~rke2r1-0.el9.aarch64 2/3
Installing : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 3/3
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Verifying : rke2-selinux-0.22-1.el9.noarch 1/3
Verifying : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 2/3
Verifying : rke2-common-1.33.8~rke2r1-0.el9.aarch64 3/3
Installed:
rke2-agent-1.33.8~rke2r1-0.el9.aarch64
rke2-common-1.33.8~rke2r1-0.el9.aarch64
rke2-selinux-0.22-1.el9.noarch
Complete!
[root@k8s-node2 ~]# TOKEN=K10f9339365cfc8369c242f8b87c8e2287a32f9b0a001cc0b1e0edb01aa7047cf7a::server:4aa040c17d99e5830cdcceef49f911ab
[root@k8s-node2 ~]# mkdir -p /etc/rancher/rke2/
cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
[root@k8s-node2 ~]# cat /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: K10f9339365cfc8369c242f8b87c8e2287a32f9b0a001cc0b1e0edb01aa7047cf7a
[root@k8s-node2 ~]# systemctl status rke2-agent.service
● rke2-agent.service - Rancher Kubernetes Engine v2 (agent)
Loaded: loaded (/usr/lib/systemd/system/rke2-agent.service; enabl>
Active: active (running) since Mon 2026-02-23 03:48:55 KST; 1s ago
Docs: https://github.com/rancher/rke2#readme
Process: 6642 ExecStartPre=/sbin/modprobe br_netfilter (code=exite>
Process: 6643 ExecStartPre=/sbin/modprobe overlay (code=exited, st>
Main PID: 6644 (rke2)
Tasks: 53
Memory: 1.5G
CPU: 9.514s
CGroup: /system.slice/rke2-agent.service
             ├─6644 "/usr/bin/rke2 agent"
             ├─6663 containerd -c /var/lib/rancher/rke2/agent/etc/cont>
             ├─6722 kubelet --volume-plugin-dir=/var/lib/kubelet/volum>
             ├─6795 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b287236>
             └─6798 /var/lib/rancher/rke2/data/v1.33.8-rke2r1-1b287236>
[root@k8s-node1 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane,etcd,master 73m v1.33.8+rke2r1 192.168.10.11 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
k8s-node2 Ready <none> 23s v1.33.8+rke2r1 192.168.10.12 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
[root@k8s-node1 ~]# kubectl get pod -n kube-system -owide | grep k8s-node2
kube-proxy-k8s-node2 1/1 Running 0 28s 192.168.10.12 k8s-node2 <none> <none>
rke2-canal-brhcq 2/2 Running 0 28s 192.168.10.12 k8s-node2 <none> <none>
node2 Setup
[root@k8s-node2 ~]# ln -s /var/lib/rancher/rke2/bin/containerd /usr/local/bin/containerd
ln -s /var/lib/rancher/rke2/bin/crictl /usr/local/bin/crictl
ln -s /var/lib/rancher/rke2/agent/etc/crictl.yaml /etc/crictl.yaml
[root@k8s-node2 ~]# tree /etc/rancher/
/etc/rancher/
├── node
│   └── password
└── rke2
    ├── config.yaml
    └── rke2-pss.yaml
2 directories, 3 files
[root@k8s-node2 ~]# tree /var/lib/rancher/rke2/agent/ -L 3
/var/lib/rancher/rke2/agent/
├── client-ca.crt
├── client-kubelet.crt
├── client-kubelet.key
├── client-kube-proxy.crt
├── client-kube-proxy.key
├── client-rke2-controller.crt
├── client-rke2-controller.key
├── containerd
│   ├── bin
│   ├── containerd.log
│   ├── io.containerd.content.v1.content
│   │   ├── blobs
│   │   └── ingest
│   ├── io.containerd.grpc.v1.cri
│   │   ├── containers
│   │   └── sandboxes
│   ├── io.containerd.grpc.v1.introspection
│   │   └── uuid
│   ├── io.containerd.metadata.v1.bolt
│   │   └── meta.db
│   ├── io.containerd.runtime.v2.task
│   │   └── k8s.io
│   ├── io.containerd.sandbox.controller.v1.shim
│   ├── io.containerd.snapshotter.v1.blockfile
│   ├── io.containerd.snapshotter.v1.btrfs
│   ├── io.containerd.snapshotter.v1.erofs
│   ├── io.containerd.snapshotter.v1.fuse-overlayfs
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.native
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.overlayfs
│   │   ├── metadata.db
│   │   └── snapshots
│   ├── io.containerd.snapshotter.v1.stargz
│   │   ├── snapshotter
│   │   └── stargz
│   ├── lib
│   └── tmpmounts
├── etc
│   ├── containerd
│   │   └── config.toml
│   ├── crictl.yaml
│   ├── kubelet.conf.d
│   │   └── 00-rke2-defaults.conf
│   ├── rke2-agent-load-balancer.json
│   └── rke2-api-server-agent-load-balancer.json
├── images
│   ├── kube-proxy-image.txt
│   └── runtime-image.txt
├── kubelet.kubeconfig
├── kubeproxy.kubeconfig
├── logs
│   └── kubelet.log
├── pod-manifests
│   └── kube-proxy.yaml
├── rke2controller.kubeconfig
├── server-ca.crt
├── serving-kubelet.crt
└── serving-kubelet.key
33 directories, 26 files
[root@k8s-node2 ~]# cat /var/lib/rancher/rke2/agent/etc/rke2-agent-load-balancer.json | jq
{
"ServerURL": "https://192.168.10.11:9345",
"ServerAddresses": [
"192.168.10.11:9345"
]
}
[root@k8s-node2 ~]# cat /var/lib/rancher/rke2/agent/etc/rke2-api-server-agent-load-balancer.json | jq
{
"ServerURL": "https://192.168.10.11:6443",
"ServerAddresses": [
"192.168.10.11:6443"
]
}
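These two files back a client-side load balancer inside the agent: every outbound connection to the supervisor (9345) and the API server (6443) goes through it. With a single server both lists hold one entry; in an HA setup, ServerAddresses would list every control-plane node. A hypothetical example (the second address is illustrative, not part of this lab):

```json
{
  "ServerURL": "https://192.168.10.11:9345",
  "ServerAddresses": [
    "192.168.10.11:9345",
    "192.168.10.13:9345"
  ]
}
```

This is why agents keep working when one server goes down in an HA cluster: the local balancer simply fails over to the next address in the list.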
Removing a Node
[root@k8s-node1 ~]# kubectl drain k8s-node2 --ignore-daemonsets --delete-emptydir-data
node/k8s-node2 cordoned
Warning: ignoring DaemonSet-managed Pods: kube-system/rke2-canal-brhcq
node/k8s-node2 drained
[root@k8s-node1 ~]# kubectl delete node k8s-node2
node "k8s-node2" deleted
[root@k8s-node2 ~]# systemctl stop rke2-agent
[root@k8s-node2 ~]# ls -l /usr/bin/rke2*
-rwxr-xr-x. 1 root root 118800736 Feb 14 04:04 /usr/bin/rke2
-rwxr-xr-x. 1 root root 3373 Feb 18 02:48 /usr/bin/rke2-killall.sh
-rwxr-xr-x. 1 root root 5606 Feb 18 02:48 /usr/bin/rke2-uninstall.sh
[root@k8s-node2 ~]# cat /usr/bin/rke2-uninstall.sh
#!/bin/sh
set -ex

# helper function for timestamped logging
log() {
    echo "[$(date +'%Y-%m-%d %H:%M:%S')] $*"
}

# helper function for logging error and exiting with a message
error() {
    log "ERROR: $*" >&2
    exit 1
}

# make sure we run as root
if [ ! $(id -u) -eq 0 ]; then
    error "$(basename "$0"): must be run as root"
fi

# check_target_mountpoint return success if the target directory is on a dedicated mount point
check_target_mountpoint() {
    mountpoint -q "$1"
}

# check_target_ro returns success if the target directory is read-only
check_target_ro() {
    touch "$1"/.rke2-ro-test && rm -rf -- "$1"/.rke2-ro-test
    test $? -ne 0
}

RKE2_DATA_DIR=${RKE2_DATA_DIR:-/var/lib/rancher/rke2}

. /etc/os-release
if [ -r /etc/redhat-release ] || [ -r /etc/centos-release ] || [ -r /etc/oracle-release ] || [ -r /etc/system-release ]; then
    # If redhat/oracle family os is detected, double check whether installation mode is yum or tar.
    # yum method assumes installation root under /usr
    # tar method assumes installation root under /usr/local
    if rpm -q rke2-common >/dev/null 2>&1; then
        : "${INSTALL_RKE2_ROOT:="/usr"}"
    else
        : "${INSTALL_RKE2_ROOT:="/usr/local"}"
    fi
elif [ "${ID_LIKE%%[ ]*}" = "suse" ]; then
    if rpm -q rke2-common >/dev/null 2>&1; then
        : "${INSTALL_RKE2_ROOT:="/usr"}"
        if [ -x /usr/sbin/transactional-update ]; then
            transactional_update="transactional-update -c --no-selfupdate -d run"
        fi
    elif check_target_mountpoint "/usr/local" || check_target_ro "/usr/local"; then
        # if /usr/local is mounted on a specific mount point or read-only then
        # install we assume that installation happened in /opt/rke2
        : "${INSTALL_RKE2_ROOT:="/opt/rke2"}"
    else
        : "${INSTALL_RKE2_ROOT:="/usr/local"}"
    fi
else
    : "${INSTALL_RKE2_ROOT:="/usr/local"}"
fi

uninstall_killall()
{
    _killall="$(dirname "$0")/rke2-killall.sh"
    log "Running killall script"
    if [ -e "${_killall}" ]; then
        eval "${_killall}"
    fi
}

uninstall_disable_services()
{
    log "Disabling rke2 services"
    if command -v systemctl >/dev/null 2>&1; then
        systemctl disable rke2-server || true
        systemctl disable rke2-agent || true
        systemctl reset-failed rke2-server || true
        systemctl reset-failed rke2-agent || true
        systemctl daemon-reload
    fi
}

uninstall_remove_files()
{
    if [ -r /etc/redhat-release ] || [ -r /etc/centos-release ] || [ -r /etc/oracle-release ] || [ -r /etc/system-release ]; then
        yum remove -y "rke2-*"
        rm -f /etc/yum.repos.d/rancher-rke2*.repo
    fi
    if [ "${ID_LIKE%%[ ]*}" = "suse" ]; then
        if rpm -q rke2-common >/dev/null 2>&1; then
            log "Removing rke2 packages using zypper"
            # rke2 rpm detected
            uninstall_cmd="zypper remove -y rke2-server rke2-agent rke2-common rke2-selinux"
            if [ "${TRANSACTIONAL_UPDATE=false}" != "true" ] && [ -x /usr/sbin/transactional-update ]; then
                uninstall_cmd="transactional-update -c --no-selfupdate -d run $uninstall_cmd"
            fi
            set +e
            $uninstall_cmd
            zypper_exit_code=$?
            set -e
            # Ignore 104 - ZYPPER_EXIT_INF_CAP_NOT_FOUND, which indicates that the package was not found
            if [ $zypper_exit_code -ne 0 ] && [ $zypper_exit_code -ne 104 ]; then
                exit $zypper_exit_code
            fi
            rm -f /etc/zypp/repos.d/rancher-rke2*.repo
        fi
    fi

    log "Removing rke2 files"
    $transactional_update find "${INSTALL_RKE2_ROOT}/lib/systemd/system" -name "rke2-*.service" -delete
    $transactional_update find "${INSTALL_RKE2_ROOT}/lib/systemd/system" -name "rke2-*.env" -delete
    find /etc/systemd/system -name "rke2-*.service" -delete
    $transactional_update rm -f -- "${INSTALL_RKE2_ROOT}/bin/rke2"
    $transactional_update rm -f -- "${INSTALL_RKE2_ROOT}/bin/rke2-killall.sh"
    $transactional_update rm -rf -- "${INSTALL_RKE2_ROOT}/share/rke2"
    rm -rf /etc/rancher/rke2 /etc/rancher/node /etc/cni /opt/cni/bin /var/lib/cni/ /var/log/pods/ /var/log/containers /var/log/calico
    rm -d /etc/rancher || true
    rm --one-file-system -rf /var/lib/kubelet || true
    rm -rf -- "${RKE2_DATA_DIR}" || error "Failed to remove ${RKE2_DATA_DIR}"
    rm -d /var/lib/rancher || true

    if type fapolicyd >/dev/null 2>&1; then
        log "Removing fapolicyd rules"
        if [ -f /etc/fapolicyd/rules.d/80-rke2.rules ]; then
            rm -f /etc/fapolicyd/rules.d/80-rke2.rules
        fi
        fagenrules --load
        systemctl try-restart fapolicyd
    fi
}

uninstall_remove_self()
{
    cleanup
    log "Removing uninstall script"
    $transactional_update rm -f -- "${INSTALL_RKE2_ROOT}/bin/rke2-uninstall.sh"
}

# Define a cleanup function that triggers on exit
cleanup() {
    # Check if last command's exit status was not equal to 0
    if [ $? -ne 0 ]; then
        if [ -n "$NO_COLOR" ]; then # Disable color code for error message if NO_COLOR env variable is passed
            echo -e "Cleanup didn't complete successfully"
        else
            echo -e "\e[31mCleanup didn't complete successfully\e[0m"
        fi
    fi
}

uninstall_remove_policy()
{
    log "Removing SELinux policy"
    semodule -r rke2 || true
}

# Set a trap to log an error if the script exits unexpectedly
trap cleanup EXIT
uninstall_killall
trap - EXIT

trap uninstall_remove_self EXIT
uninstall_disable_services
uninstall_remove_files
uninstall_remove_policy
log "Cleanup completed successfully"
[root@k8s-node2 ~]# rke2-uninstall.sh
...
+ echo '[2026-02-23 03:51:30] Removing uninstall script'
[2026-02-23 03:51:30] Removing uninstall script
+ rm -f -- /usr/bin/rke2-uninstall.sh
Re-adding the Worker Node
[root@k8s-node2 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE="agent" INSTALL_RKE2_CHANNEL=v1.33 sh -
[INFO] using stable RPM repositories
[INFO] using 1.33 series from channel stable
Rancher RKE2 Common (v1.33) 711 B/s | 659 B 00:00
Rancher RKE2 1.33 (v1.33) 686 B/s | 659 B 00:00
Dependencies resolved.
======================================================================
Package Arch Version Repository Size
======================================================================
Installing:
rke2-agent aarch64 1.33.8~rke2r1-0.el9
rancher-rke2-1.33-stable 8.3 k
Installing dependencies:
rke2-common aarch64 1.33.8~rke2r1-0.el9
rancher-rke2-1.33-stable 25 M
rke2-selinux noarch 0.22-1.el9 rancher-rke2-common-stable 22 k
Transaction Summary
======================================================================
Install 3 Packages
Total download size: 25 M
Installed size: 113 M
Downloading Packages:
(1/3): rke2-agent-1.33.8~rke2r1-0.el9 18 kB/s | 8.3 kB 00:00
(2/3): rke2-selinux-0.22-1.el9.noarch 48 kB/s | 22 kB 00:00
(3/3): rke2-common-1.33.8~rke2r1-0.el 12 MB/s | 25 MB 00:02
----------------------------------------------------------------------
Total 12 MB/s | 25 MB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 1/3
Installing : rke2-selinux-0.22-1.el9.noarch 1/3
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 1/3
Installing : rke2-common-1.33.8~rke2r1-0.el9.aarch64 2/3
Installing : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Running scriptlet: rke2-selinux-0.22-1.el9.noarch 3/3
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/3
Verifying : rke2-selinux-0.22-1.el9.noarch 1/3
Verifying : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 2/3
Verifying : rke2-common-1.33.8~rke2r1-0.el9.aarch64 3/3
Installed:
rke2-agent-1.33.8~rke2r1-0.el9.aarch64
rke2-common-1.33.8~rke2r1-0.el9.aarch64
rke2-selinux-0.22-1.el9.noarch
Complete!
[root@k8s-node2 ~]# mkdir -p /etc/rancher/rke2/
cat << EOF > /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: $TOKEN
EOF
cat /etc/rancher/rke2/config.yaml
server: https://192.168.10.11:9345
token: K10f9339365cfc8369c242f8b87c8e2287a32f9b0a001cc0b1e0edb01aa7047cf7a::server:4aa040c17d99e5830cdcceef49f911ab
[root@k8s-node2 ~]# systemctl enable --now rke2-agent.service
Created symlink /etc/systemd/system/multi-user.target.wants/rke2-agent.service → /usr/lib/systemd/system/rke2-agent.service.
Deploying a Sample Application
Verify that external traffic entering the cluster through a NodePort Service is load-balanced across multiple Pods.
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - webpod    # must match the Pod label, or the anti-affinity never applies
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
  type: NodePort
EOF
[root@k8s-node1 ~]# kubectl get deploy,pod,svc,ep -owide
Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 2/2 2 2 6s webpod traefik/whoami app=webpod
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/webpod-697b545f57-tkjl5 1/1 Running 0 6s 10.42.2.2 k8s-node2 <none> <none>
pod/webpod-697b545f57-vvrkx 1/1 Running 0 6s 10.42.0.6 k8s-node1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 78m <none>
service/webpod NodePort 10.43.232.139 <none> 80:30000/TCP 6s app=webpod
NAME ENDPOINTS AGE
endpoints/kubernetes 192.168.10.11:6443 78m
endpoints/webpod 10.42.0.6:80,10.42.2.2:80 6s
[root@k8s-node1 ~]# while true; do curl -s http://192.168.10.12:30000 | grep Hostname; date; sleep 1; done
Hostname: webpod-697b545f57-vvrkx
Mon Feb 23 03:54:50 AM KST 2026
Hostname: webpod-697b545f57-vvrkx
Mon Feb 23 03:54:51 AM KST 2026
Hostname: webpod-697b545f57-tkjl5
Mon Feb 23 03:54:52 AM KST 2026
Hostname: webpod-697b545f57-vvrkx
Mon Feb 23 03:54:53 AM KST 2026
Hostname: webpod-697b545f57-vvrkx
Mon Feb 23 03:54:54 AM KST 2026
^C
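Eyeballing the loop output works for a few requests; to see the balance over a larger sample, the responses can be tallied per Pod. A small sketch; `count_hostnames` is a hypothetical helper and the NodePort endpoint is the one from this lab:

```shell
# Tally which Pod served each response: reads "Hostname: <pod>" lines
# on stdin and prints per-Pod request counts, highest first.
count_hostnames() {
  grep -o 'Hostname: .*' | sort | uniq -c | sort -rn
}

# Live usage against the lab's NodePort (uncomment on the cluster):
# for i in $(seq 1 100); do curl -s http://192.168.10.12:30000; done | count_hostnames

# Offline demo of the tally itself:
printf 'Hostname: webpod-a\nHostname: webpod-b\nHostname: webpod-a\n' | count_hostnames
```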
To summarize the actual traffic flow: the client sends a request to port 30000 on any node; kube-proxy rules on that node (iptables mode by default) DNAT the connection to one of the Service endpoints (10.42.0.6:80 or 10.42.2.2:80), which is why consecutive requests above alternate between the two Pods.
Certificate management
Checking current certificate expiration
[root@k8s-node2 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME SUBJECT USAGES EXPIRES RESIDUAL TIME STATUS
-------- ------- ------ ------- ------------- ------
client-rke2-controller.crt system:rke2-controller ClientAuth Feb 22, 2027 18:52 UTC 1 year OK
client-rke2-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kube-proxy.crt system:kube-proxy ClientAuth Feb 22, 2027 18:52 UTC 1 year OK
client-kube-proxy.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kubelet.crt system:node:k8s-node2 ClientAuth Feb 22, 2027 18:52 UTC 1 year OK
client-kubelet.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
serving-kubelet.crt k8s-node2 ServerAuth Feb 22, 2027 18:52 UTC 1 year OK
serving-kubelet.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
[root@k8s-node1 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME SUBJECT USAGES EXPIRES RESIDUAL TIME STATUS
-------- ------- ------ ------- ------------- ------
client.crt etcd-client ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client.crt etcd-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
server-client.crt etcd-server ServerAuth,ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
server-client.crt etcd-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
peer-server-client.crt etcd-peer ServerAuth,ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
peer-server-client.crt etcd-peer-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-supervisor.crt system:rke2-supervisor ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-supervisor.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kube-proxy.crt system:kube-proxy ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-kube-proxy.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kube-apiserver.crt system:apiserver ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-kube-apiserver.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
serving-kube-apiserver.crt kube-apiserver ServerAuth Feb 22, 2027 17:35 UTC 1 year OK
serving-kube-apiserver.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-auth-proxy.crt system:auth-proxy ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-auth-proxy.crt rke2-request-header-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-controller.crt system:kube-controller-manager ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
kube-controller-manager.crt kube-controller-manager ServerAuth Feb 22, 2027 17:35 UTC 1 year OK
kube-controller-manager.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-scheduler.crt system:kube-scheduler ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-scheduler.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
kube-scheduler.crt kube-scheduler ServerAuth Feb 22, 2027 17:35 UTC 1 year OK
kube-scheduler.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kubelet.crt system:node:k8s-node1 ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-kubelet.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
serving-kubelet.crt k8s-node1 ServerAuth Feb 22, 2027 17:35 UTC 1 year OK
serving-kubelet.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-rke2-controller.crt system:rke2-controller ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-rke2-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-admin.crt system:admin ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-admin.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-rke2-cloud-controller.crt rke2-cloud-controller-manager ClientAuth Feb 22, 2027 17:35 UTC 1 year OK
client-rke2-cloud-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
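Beyond `rke2 certificate check`, an individual certificate file can also be inspected directly with openssl. A minimal sketch, assuming the default RKE2 server data directory; `cert_enddate` is a hypothetical helper:

```shell
# Print the notAfter (expiry) timestamp of a PEM certificate file.
cert_enddate() {
  openssl x509 -in "$1" -noout -enddate
}

# On a server node (default RKE2 data dir assumed):
# cert_enddate /var/lib/rancher/rke2/server/tls/serving-kube-apiserver.crt
```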
Rotating certificates manually
[root@k8s-node1 ~]# systemctl stop rke2-server
[root@k8s-node1 ~]# rke2 certificate rotate
INFO[0000] Server detected, rotating agent and server certificates
INFO[0000] Rotating dynamic listener certificate
INFO[0000] Rotating certificates for kubelet
INFO[0000] Rotating certificates for api-server
INFO[0000] Rotating certificates for controller-manager
INFO[0000] Rotating certificates for etcd
INFO[0000] Rotating certificates for kube-proxy
INFO[0000] Rotating certificates for rke2-controller
INFO[0000] Rotating certificates for admin
INFO[0000] Rotating certificates for auth-proxy
INFO[0000] Rotating certificates for cloud-controller
INFO[0000] Rotating certificates for scheduler
INFO[0000] Rotating certificates for supervisor
INFO[0000] Successfully backed up certificates to /var/lib/rancher/rke2/server/tls-1771786701, please restart rke2 server or agent to rotate certificates
[root@k8s-node1 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME SUBJECT USAGES EXPIRES RESIDUAL TIME STATUS
-------- ------- ------ ------- ------------- ------
[root@k8s-node1 ~]# systemctl start rke2-server
[root@k8s-node1 ~]# rke2 certificate check --output table
INFO[0000] Server detected, checking agent and server certificates
FILENAME SUBJECT USAGES EXPIRES RESIDUAL TIME STATUS
-------- ------- ------ ------- ------------- ------
client-kube-apiserver.crt system:apiserver ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-kube-apiserver.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
serving-kube-apiserver.crt kube-apiserver ServerAuth Feb 22, 2027 18:59 UTC 1 year OK
serving-kube-apiserver.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-auth-proxy.crt system:auth-proxy ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-auth-proxy.crt rke2-request-header-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-rke2-cloud-controller.crt rke2-cloud-controller-manager ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-rke2-cloud-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-controller.crt system:kube-controller-manager ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
kube-controller-manager.crt kube-controller-manager ServerAuth Feb 22, 2027 18:59 UTC 1 year OK
kube-controller-manager.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client.crt etcd-client ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client.crt etcd-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
server-client.crt etcd-server ServerAuth,ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
server-client.crt etcd-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
peer-server-client.crt etcd-peer ServerAuth,ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
peer-server-client.crt etcd-peer-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-scheduler.crt system:kube-scheduler ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-scheduler.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
kube-scheduler.crt kube-scheduler ServerAuth Feb 22, 2027 18:59 UTC 1 year OK
kube-scheduler.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-supervisor.crt system:rke2-supervisor ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-supervisor.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-admin.crt system:admin ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-admin.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kube-proxy.crt system:kube-proxy ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-kube-proxy.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-kubelet.crt system:node:k8s-node1 ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-kubelet.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
serving-kubelet.crt k8s-node1 ServerAuth Feb 22, 2027 18:59 UTC 1 year OK
serving-kubelet.crt rke2-server-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
client-rke2-controller.crt system:rke2-controller ClientAuth Feb 22, 2027 18:59 UTC 1 year OK
client-rke2-controller.crt rke2-client-ca@1771781703 CertSign Feb 20, 2036 17:35 UTC 10 years OK
[root@k8s-node1 ~]# diff /etc/rancher/rke2/rke2.yaml ~/.kube/config
# check which fields changed after the rotation
[root@k8s-node1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.10.11:6443
CoreDNS is running at https://192.168.10.11:6443/api/v1/namespaces/kube-system/services/rke2-coredns-rke2-coredns:udp-53/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Upgrading nodes from 1.33 to 1.34
Check the current version
[root@k8s-node1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane,etcd,master 85m v1.33.8+rke2r1
k8s-node2 Ready <none> 8m44s v1.33.8+rke2r1
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.33.8+rke2r1 (eb75e3c1774cee5a584259d6fee77eb8cfa9b430)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# curl -s https://update.rke2.io/v1-release/channels | jq .data
[
{
"id": "stable",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/stable"
},
"name": "stable",
"latest": "v1.34.4+rke2r1"
},
{
"id": "latest",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/latest"
},
"name": "latest",
"latest": "v1.35.1+rke2r1",
"latestRegexp": ".*",
"excludeRegexp": "(^[^+]+-|v1\\.25\\.5\\+rke2r1|v1\\.26\\.0\\+rke2r1)"
},
{
"id": "v1.18",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.18"
},
"name": "v1.18",
"latest": "v1.18.20+rke2r1",
"latestRegexp": "v1\\.18\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.19",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.19"
},
"name": "v1.19",
"latest": "v1.19.16+rke2r1",
"latestRegexp": "v1\\.19\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.19\\.13\\+rke2r1)"
},
{
"id": "testing",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/testing"
},
"name": "testing",
"latest": "v1.18.9-beta22+rke2",
"latestRegexp": "-(alpha|beta|rc)"
},
{
"id": "v1.20",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.20"
},
"name": "v1.20",
"latest": "v1.20.15+rke2r2",
"latestRegexp": "v1\\.20\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.20\\.9\\+rke2r1)"
},
{
"id": "v1.21",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.21"
},
"name": "v1.21",
"latest": "v1.21.14+rke2r1",
"latestRegexp": "v1\\.21\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.21\\.3\\+rke2r2)"
},
{
"id": "v1.22",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.22"
},
"name": "v1.22",
"latest": "v1.22.17+rke2r1",
"latestRegexp": "v1\\.22\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.23",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.23"
},
"name": "v1.23",
"latest": "v1.23.17+rke2r1",
"latestRegexp": "v1\\.23\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.24",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.24"
},
"name": "v1.24",
"latest": "v1.24.17+rke2r1",
"latestRegexp": "v1\\.24\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.24\\.9\\+rke2r1)"
},
{
"id": "v1.25",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.25"
},
"name": "v1.25",
"latest": "v1.25.16+rke2r2",
"latestRegexp": "v1\\.25\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.25\\.5\\+rke2r1)"
},
{
"id": "v1.26",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.26"
},
"name": "v1.26",
"latest": "v1.26.15+rke2r1",
"latestRegexp": "v1\\.26\\..*",
"excludeRegexp": "(^[^+]+-|v1\\.26\\.0\\+rke2r1)"
},
{
"id": "v1.27",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.27"
},
"name": "v1.27",
"latest": "v1.27.16+rke2r2",
"latestRegexp": "v1\\.27\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.28",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.28"
},
"name": "v1.28",
"latest": "v1.28.15+rke2r1",
"latestRegexp": "v1\\.28\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.29",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.29"
},
"name": "v1.29",
"latest": "v1.29.15+rke2r1",
"latestRegexp": "v1\\.29\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.30",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.30"
},
"name": "v1.30",
"latest": "v1.30.14+rke2r4",
"latestRegexp": "v1\\.30\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.31",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.31"
},
"name": "v1.31",
"latest": "v1.31.14+rke2r1",
"latestRegexp": "v1\\.31\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.32",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.32"
},
"name": "v1.32",
"latest": "v1.32.12+rke2r1",
"latestRegexp": "v1\\.32\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.33",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.33"
},
"name": "v1.33",
"latest": "v1.33.8+rke2r1",
"latestRegexp": "v1\\.33\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.34",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.34"
},
"name": "v1.34",
"latest": "v1.34.4+rke2r1",
"latestRegexp": "v1\\.34\\..*",
"excludeRegexp": "^[^+]+-"
},
{
"id": "v1.35",
"type": "channel",
"links": {
"self": "https://update.rke2.io/v1-release/channels/v1.35"
},
"name": "v1.35",
"latest": "v1.35.1+rke2r1",
"latestRegexp": "v1\\.35\\..*",
"excludeRegexp": "^[^+]+-"
}
]
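The channel list above is long; a small jq filter pulls out just the release you care about. A sketch using the same endpoint as above; `latest_for_channel` is a hypothetical helper:

```shell
# Extract one channel's latest release from the RKE2 channel JSON on stdin.
latest_for_channel() {
  jq -r --arg ch "$1" '.data[] | select(.id == $ch) | .latest'
}

# Live usage on a node with outbound access:
# curl -s https://update.rke2.io/v1-release/channels | latest_for_channel v1.34
```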
Running the upgrade
[root@k8s-node1 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_CHANNEL=v1.34 sh -
[INFO] using stable RPM repositories
[INFO] using 1.34 series from channel stable
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
Rancher RKE2 1.34 (v1.34) 722 B/s | 659 B 00:00
Rancher RKE2 1.34 (v1.34) 5.2 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Rancher RKE2 1.34 (v1.34) 2.2 kB/s | 3.6 kB 00:01
Package rke2-server-1.33.8~rke2r1-0.el9.aarch64 is already installed.
Dependencies resolved.
=======================================================================
Package Arch Version Repository Size
=======================================================================
Upgrading:
rke2-common
aarch64 1.34.4~rke2r1-0.el9 rancher-rke2-1.34-stable 25 M
rke2-server
aarch64 1.34.4~rke2r1-0.el9 rancher-rke2-1.34-stable 8.3 k
Transaction Summary
=======================================================================
Upgrade 2 Packages
Total download size: 25 M
Downloading Packages:
(1/2): rke2-server-1.34.4~rke2r1-0.el9 15 kB/s | 8.3 kB 00:00
(2/2): rke2-common-1.34.4~rke2r1-0.el9 11 MB/s | 25 MB 00:02
-----------------------------------------------------------------------
Total 11 MB/s | 25 MB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Upgrading : rke2-common-1.34.4~rke2r1-0.el9.aarch64 1/4
Upgrading : rke2-server-1.34.4~rke2r1-0.el9.aarch64 2/4
Running scriptlet: rke2-server-1.34.4~rke2r1-0.el9.aarch64 2/4
Running scriptlet: rke2-server-1.33.8~rke2r1-0.el9.aarch64 3/4
Cleanup : rke2-server-1.33.8~rke2r1-0.el9.aarch64 3/4
Running scriptlet: rke2-server-1.33.8~rke2r1-0.el9.aarch64 3/4
Running scriptlet: rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Cleanup : rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Running scriptlet: rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Verifying : rke2-common-1.34.4~rke2r1-0.el9.aarch64 1/4
Verifying : rke2-common-1.33.8~rke2r1-0.el9.aarch64 2/4
Verifying : rke2-server-1.34.4~rke2r1-0.el9.aarch64 3/4
Verifying : rke2-server-1.33.8~rke2r1-0.el9.aarch64 4/4
Upgraded:
rke2-common-1.34.4~rke2r1-0.el9.aarch64
rke2-server-1.34.4~rke2r1-0.el9.aarch64
Complete!
[root@k8s-node1 ~]# rke2 --version
rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
go version go1.24.12 X:boringcrypto
[root@k8s-node1 ~]# kubectl get pod -n kube-system --sort-by=.metadata.creationTimestamp | tac
helm-install-rke2-canal-bvpfs 0/1 Completed 0 8s
helm-install-rke2-coredns-26vr4 0/1 Completed 0 8s
helm-install-rke2-metrics-server-lsrb6 0/1 Completed 0 8s
helm-install-rke2-runtimeclasses-c8g48 0/1 Completed 0 8s
kube-scheduler-k8s-node1 1/1 Running 0 24s
kube-controller-manager-k8s-node1 1/1 Running 0 25s
kube-proxy-k8s-node1 1/1 Running 0 79s
kube-apiserver-k8s-node1 1/1 Running 2 (79s ago) 79s
etcd-k8s-node1 1/1 Running 0 79s
kube-proxy-k8s-node2 1/1 Running 0 11m
rke2-canal-29csn 2/2 Running 0 11m
rke2-metrics-server-fdcdf575d-gkhg2 1/1 Running 0 87m
rke2-canal-lcxlg 2/2 Running 0 88m
rke2-coredns-rke2-coredns-559595db99-rx8ch 1/1 Running 0 88m
NAME READY STATUS RESTARTS AGE
[root@k8s-node1 ~]# dnf repolist
repo id repo name
appstream Rocky Linux 9 - AppStream
baseos Rocky Linux 9 - BaseOS
extras Rocky Linux 9 - Extras
rancher-rke2-1.34-stable Rancher RKE2 1.34 (v1.34)
rancher-rke2-common-stable Rancher RKE2 Common (v1.34)
[root@k8s-node1 ~]# cat /etc/yum.repos.d/rancher-rke2.repo | grep -iE 'name|baseurl'
name=Rancher RKE2 Common (v1.34)
baseurl=https://rpm.rancher.io/rke2/stable/common/centos/9/noarch
name=Rancher RKE2 1.34 (v1.34)
baseurl=https://rpm.rancher.io/rke2/stable/1.34/centos/9/aarch64
[root@k8s-node1 ~]# kubectl get pods -n kube-system -o custom-columns="POD_NAME:.metadata.name,IMAGES:.spec.containers[*].image"
kubectl get pods -n kube-system \
-o custom-columns=\
POD:.metadata.name,\
CONTAINERS:.spec.containers[*].name,\
IMAGES:.spec.containers[*].image
POD_NAME IMAGES
etcd-k8s-node1 index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
helm-install-rke2-canal-bvpfs rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-coredns-26vr4 rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-metrics-server-lsrb6 rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-runtimeclasses-c8g48 rancher/klipper-helm:v0.9.14-build20260210
kube-apiserver-k8s-node1 index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-controller-manager-k8s-node1 index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-proxy-k8s-node1 index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-proxy-k8s-node2 index.docker.io/rancher/hardened-kubernetes:v1.33.8-rke2r1-build20260210
kube-scheduler-k8s-node1 index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
rke2-canal-29csn rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
rke2-canal-lcxlg rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
rke2-coredns-rke2-coredns-559595db99-rx8ch rancher/hardened-coredns:v1.14.1-build20260206
rke2-metrics-server-fdcdf575d-gkhg2 rancher/hardened-k8s-metrics-server:v0.8.1-build20260206
POD CONTAINERS IMAGES
etcd-k8s-node1 etcd index.docker.io/rancher/hardened-etcd:v3.6.7-k3s1-build20260126
helm-install-rke2-canal-bvpfs helm rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-coredns-26vr4 helm rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-metrics-server-lsrb6 helm rancher/klipper-helm:v0.9.14-build20260210
helm-install-rke2-runtimeclasses-c8g48 helm rancher/klipper-helm:v0.9.14-build20260210
kube-apiserver-k8s-node1 kube-apiserver index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-controller-manager-k8s-node1 kube-controller-manager index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-proxy-k8s-node1 kube-proxy index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
kube-proxy-k8s-node2 kube-proxy index.docker.io/rancher/hardened-kubernetes:v1.33.8-rke2r1-build20260210
kube-scheduler-k8s-node1 kube-scheduler index.docker.io/rancher/hardened-kubernetes:v1.34.4-rke2r1-build20260210
rke2-canal-29csn calico-node,kube-flannel rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
rke2-canal-lcxlg calico-node,kube-flannel rancher/hardened-calico:v3.31.3-build20260206,rancher/hardened-flannel:v0.28.1-build20260206
rke2-coredns-rke2-coredns-559595db99-rx8ch coredns rancher/hardened-coredns:v1.14.1-build20260206
rke2-metrics-server-fdcdf575d-gkhg2 metrics-server rancher/hardened-k8s-metrics-server:v0.8.1-build20260206
[root@k8s-node1 ~]# systemctl restart rke2-server
[root@k8s-node1 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane,etcd,master 90m v1.34.4+rke2r1 192.168.10.11 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
k8s-node2 Ready <none> 12m v1.34.4+rke2r1 192.168.10.12 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
Upgrading the worker node
[root@k8s-node2 ~]# curl -sfL https://get.rke2.io | INSTALL_RKE2_TYPE=agent INSTALL_RKE2_CHANNEL=v1.34 sh -
[INFO] using stable RPM repositories
[INFO] using 1.34 series from channel stable
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Error: Failed to download metadata for repo 'rancher-rke2-1.34-stable': repomd.xml GPG signature verification error: Bad GPG signature
Rancher RKE2 1.34 (v1.34) 753 B/s | 659 B 00:00
Rancher RKE2 1.34 (v1.34) 5.3 kB/s | 2.4 kB 00:00
Importing GPG key 0xE257814A:
Userid : "Rancher (CI) <ci@rancher.com>"
Fingerprint: C8CF F216 4551 26E9 B9C9 18BE 925E A29A E257 814A
From : https://rpm.rancher.io/public.key
Rancher RKE2 1.34 (v1.34) 2.7 kB/s | 3.6 kB 00:01
Package rke2-agent-1.33.8~rke2r1-0.el9.aarch64 is already installed.
Dependencies resolved.
======================================================================
Package Arch Version Repository Size
======================================================================
Upgrading:
rke2-agent
aarch64 1.34.4~rke2r1-0.el9 rancher-rke2-1.34-stable 8.3 k
rke2-common
aarch64 1.34.4~rke2r1-0.el9 rancher-rke2-1.34-stable 25 M
Transaction Summary
======================================================================
Upgrade 2 Packages
Total download size: 25 M
Downloading Packages:
(1/2): rke2-agent-1.34.4~rke2r1-0.el9 15 kB/s | 8.3 kB 00:00
(2/2): rke2-common-1.34.4~rke2r1-0.el 11 MB/s | 25 MB 00:02
----------------------------------------------------------------------
Total 11 MB/s | 25 MB 00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1
Upgrading : rke2-common-1.34.4~rke2r1-0.el9.aarch64 1/4
Upgrading : rke2-agent-1.34.4~rke2r1-0.el9.aarch64 2/4
Running scriptlet: rke2-agent-1.34.4~rke2r1-0.el9.aarch64 2/4
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/4
Cleanup : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/4
Running scriptlet: rke2-agent-1.33.8~rke2r1-0.el9.aarch64 3/4
Running scriptlet: rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Cleanup : rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Running scriptlet: rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Verifying : rke2-agent-1.34.4~rke2r1-0.el9.aarch64 1/4
Verifying : rke2-agent-1.33.8~rke2r1-0.el9.aarch64 2/4
Verifying : rke2-common-1.34.4~rke2r1-0.el9.aarch64 3/4
Verifying : rke2-common-1.33.8~rke2r1-0.el9.aarch64 4/4
Upgraded:
rke2-agent-1.34.4~rke2r1-0.el9.aarch64
rke2-common-1.34.4~rke2r1-0.el9.aarch64
Complete!
[root@k8s-node2 ~]# rke2 --version
rke2 version v1.34.4+rke2r1 (c6b97dc03cefec17e8454a6f45b29f4e3d0a81d6)
go version go1.24.12 X:boringcrypto
[root@k8s-node2 ~]# dnf repolist
repo id repo name
appstream Rocky Linux 9 - AppStream
baseos Rocky Linux 9 - BaseOS
extras Rocky Linux 9 - Extras
rancher-rke2-1.34-stable Rancher RKE2 1.34 (v1.34)
rancher-rke2-common-stable Rancher RKE2 Common (v1.34)
[root@k8s-node2 ~]# systemctl restart rke2-agent
[root@k8s-node1 ~]# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane,etcd,master 91m v1.34.4+rke2r1 192.168.10.11 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
k8s-node2 Ready <none> 14m v1.34.4+rke2r1 192.168.10.12 <none> Rocky Linux 9.6 (Blue Onyx) 5.14.0-570.52.1.el9_6.aarch64 containerd://2.1.5-k3s1
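As a final check after both nodes restart, the kubelet versions can be compared against the upgrade target programmatically instead of reading the table. A sketch; `all_at_version` is a hypothetical helper, and the target version is this lab's:

```shell
# Succeed only if every supplied node version equals the target.
all_at_version() {
  target="$1"; shift
  for v in "$@"; do
    [ "$v" = "$target" ] || return 1
  done
}

# On the cluster, feed it the live node versions:
# all_at_version v1.34.4+rke2r1 \
#   $(kubectl get nodes -o jsonpath='{.items[*].status.nodeInfo.kubeletVersion}') \
#   && echo "all nodes upgraded"
```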