This post is a write-up of a Kubernetes deployment done in 2026.
Deploying k8s with Kubespray
Kubespray variable precedence
Kubespray is built on Ansible, and most of its configuration is controlled through variables.
Configuration is therefore governed by Ansible's variable precedence: Ansible defines a clear precedence for every place a variable can be declared, and higher-numbered sources win.
According to the official Ansible documentation, variable precedence has the following structure:
- A lower-precedence variable can be overridden at any time by a higher-precedence one
- The highest precedence always belongs to extra vars (-e)
If the same variable is defined in several places, the value defined last, at the highest-precedence level, is the one actually applied.
(1) Role defaults (lowest precedence)
roles/*/defaults/main.yml
- Default values shipped with Kubespray
- Applied when the user configures nothing else
- Loaded first and easily overridden
(2) Inventory group_vars / host_vars (where you actually set values)
inventory/mycluster/group_vars/all.yml
inventory/mycluster/group_vars/k8s_cluster.yml
inventory/mycluster/host_vars/k8s-node1.yml
- The area users edit in Kubespray
- The primary means of overriding role defaults
- Allows cluster-wide / per-group / per-host control
(3) Play vars / vars inside a playbook (forced settings)
- name: Install etcd
  vars:
    etcd_cluster_setup: false
    etcd_events_cluster_setup: false
  import_playbook: install_etcd.yml
- vars: defined at the play level are Play vars
- They take precedence over inventory variables
- Even if the user changes the value in the inventory, it is ignored inside this playbook
(4) Extra vars (-e)
ansible-playbook cluster.yml -e etcd_cluster_setup=true
- Highest precedence of all
- Used for debugging, forced overrides, and temporary tests
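The four layers above can be modeled as a lookup chain, highest precedence first (a simplified sketch — real Ansible defines 22 precedence levels, and the variable values here are illustrative, not Kubespray's actual defaults):

```python
from collections import ChainMap

# Illustrative values only; not Kubespray's real defaults.
role_defaults = {"etcd_cluster_setup": True, "kube_version": "1.32.9"}
inventory_vars = {"kube_version": "1.32.8"}      # group_vars / host_vars
play_vars      = {"etcd_cluster_setup": False}   # vars: inside a play
extra_vars     = {}                              # -e on the CLI

# ChainMap resolves each key from the first mapping that defines it.
resolved = ChainMap(extra_vars, play_vars, inventory_vars, role_defaults)

print(resolved["kube_version"])        # inventory overrides role defaults
print(resolved["etcd_cluster_setup"])  # play vars override role defaults

# Passing -e overrides everything, including play vars:
extra_vars["etcd_cluster_setup"] = True
print(resolved["etcd_cluster_setup"])
```

ChainMap is a live view of the underlying dicts, which mirrors how `-e` beats every other source at resolution time.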
root@admin-lb:~# cd /root/kubespray/
root@admin-lb:~/kubespray# cat /root/kubespray/inventory/mycluster/inventory.ini
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12 etcd_member_name=etcd2
k8s-node3 ansible_host=192.168.10.13 ip=192.168.10.13 etcd_member_name=etcd3
[etcd:children]
kube_control_plane
[kube_node]
k8s-node4 ansible_host=192.168.10.14 ip=192.168.10.14
#k8s-node5 ansible_host=192.168.10.15 ip=192.168.10.15
Tracing where a variable is used
root@admin-lb:~/kubespray# grep -Rn "allow_unsupported_distribution_setup" inventory/mycluster/ playbooks/ roles/ -A1 -B1
inventory/mycluster/group_vars/all/all.yml-141-## If enabled it will allow kubespray to attempt setup even if the distribution is not supported. For unsupported distributions this can lead to unexpected failures in some cases.
inventory/mycluster/group_vars/all/all.yml:142:allow_unsupported_distribution_setup: false
--
roles/kubernetes/preinstall/tasks/0040-verify-settings.yml-22- assert:
roles/kubernetes/preinstall/tasks/0040-verify-settings.yml:23: that: (allow_unsupported_distribution_setup | default(false)) or ansible_distribution in supported_os_distributions
roles/kubernetes/preinstall/tasks/0040-verify-settings.yml-24- msg: "{{ ansible_distribution }} is not a known OS"
Actual deployment
root@admin-lb:~/kubespray# cat /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
# turn on stats unix socket
stats socket /var/lib/haproxy/stats
# utilize system-wide crypto-policies
ssl-default-bind-ciphers PROFILE=SYSTEM
ssl-default-server-ciphers PROFILE=SYSTEM
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
mode http
log global
option httplog
option tcplog
option dontlognull
option http-server-close
#option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
# ---------------------------------------------------------------------
# Kubernetes API Server Load Balancer Configuration
# ---------------------------------------------------------------------
frontend k8s-api
bind *:6443
mode tcp
option tcplog
default_backend k8s-api-backend
backend k8s-api-backend
mode tcp
option tcp-check
option log-health-checks
timeout client 3h
timeout server 3h
balance roundrobin
server k8s-node1 192.168.10.11:6443 check check-ssl verify none inter 10000
server k8s-node2 192.168.10.12:6443 check check-ssl verify none inter 10000
server k8s-node3 192.168.10.13:6443 check check-ssl verify none inter 10000
# ---------------------------------------------------------------------
# HAProxy Stats Dashboard - http://192.168.10.10:9000/haproxy_stats
# ---------------------------------------------------------------------
listen stats
bind *:9000
mode http
stats enable
stats uri /haproxy_stats
stats realm HAProxy\ Statistic
stats admin if TRUE
# ---------------------------------------------------------------------
# Configure the Prometheus exporter - curl http://192.168.10.10:8405/metrics
# ---------------------------------------------------------------------
frontend prometheus
bind *:8405
mode http
http-request use-service prometheus-exporter if { path /metrics }
no log
The stats dashboard is reachable at http://192.168.10.10:9000/haproxy_stats
root@admin-lb:~/kubespray# ansible-inventory -i /root/kubespray/inventory/mycluster/inventory.ini --graph
@all:
|--@ungrouped:
|--@etcd:
| |--@kube_control_plane:
| | |--k8s-node1
| | |--k8s-node2
| | |--k8s-node3
|--@kube_node:
| |--k8s-node4
ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml -e kube_version="1.32.9" | tee kubespray_install.log
...
Sunday 08 February 2026 00:43:34 +0900 (0:00:00.043) 0:05:33.369 *******
===============================================================================
kubernetes/kubeadm : Join to cluster if needed ------------------------- 16.06s
kubernetes/control-plane : Joining control plane node to the cluster. -- 14.54s
download : Download_container | Download image if required ------------- 11.63s
download : Download_container | Download image if required ------------- 11.35s
kubernetes/control-plane : Kubeadm | Initialize first control plane node (1st try) --- 7.24s
download : Download_container | Download image if required -------------- 7.20s
download : Download_file | Download item -------------------------------- 6.75s
download : Download_container | Download image if required -------------- 6.66s
download : Download_container | Download image if required -------------- 6.41s
download : Download_file | Download item -------------------------------- 6.23s
system_packages : Manage packages --------------------------------------- 5.71s
container-engine/containerd : Download_file | Download item ------------- 5.61s
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence --- 5.49s
etcd : Restart etcd ----------------------------------------------------- 5.41s
download : Download_container | Download image if required -------------- 5.21s
download : Download_container | Download image if required -------------- 5.17s
etcd : Configure | Check if etcd cluster is healthy --------------------- 5.15s
container-engine/crictl : Download_file | Download item ----------------- 4.59s
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes --- 4.56s
container-engine/runc : Download_file | Download item ------------------- 4.48s
The whole deployment took about 10 minutes.
API server
root@admin-lb:~/kubespray# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
E0208 01:32:25.680343 15124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"http://localhost:8080/api?timeout=32s\": dial tcp [::1]:8080: connect: connection refused"
Right after deployment, configure the kubeconfig so it can also be used from the admin-lb node.
The error message above appeared because this had not been set up yet.
root@admin-lb:~/kubespray# mkdir /root/.kube
scp k8s-node1:/root/.kube/config /root/.kube/
cat /root/.kube/config | grep server
config 100% 5665 5.4MB/s 00:00
server: https://127.0.0.1:6443
root@admin-lb:~/kubespray# sed -i 's/127.0.0.1/192.168.10.11/g' /root/.kube/config
Replace the API server endpoint so the kubeconfig, which was written for local use on a control-plane node, can be used from the external admin node (admin-lb), pointing it at a real network address.
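The same rewrite can be sketched in Python (illustrative only — the kubeconfig snippet is abbreviated). As a design note, pointing server: at the haproxy frontend on admin-lb (192.168.10.10:6443, configured earlier) instead of a single control-plane node would keep kubectl working if k8s-node1 goes down:

```python
import re

# Abbreviated kubeconfig fragment for illustration.
kubeconfig = """\
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: cluster.local
"""

# Same edit as the sed one-liner: swap the loopback endpoint
# for an address reachable from admin-lb.
rewritten = re.sub(r"server: https://127\.0\.0\.1:6443",
                   "server: https://192.168.10.11:6443", kubeconfig)
print(rewritten)
```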
root@admin-lb:~/kubespray# kubectl describe node | grep -E 'Name:|Taints'
Name: k8s-node1
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Name: k8s-node2
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Name: k8s-node3
Taints: node-role.kubernetes.io/control-plane:NoSchedule
Name: k8s-node4
Taints: <none>
root@admin-lb:~/kubespray# kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'
k8s-node1 10.233.64.0/24
k8s-node2 10.233.65.0/24
k8s-node3 10.233.66.0/24
k8s-node4 10.233.67.0/24
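A quick sanity check on the podCIDR allocations above — each node gets a /24 (per --node-cidr-mask-size-ipv4=24) carved out of the 10.233.64.0/18 cluster CIDR:

```python
import ipaddress

# Cluster CIDR handed to the controller-manager (--cluster-cidr).
cluster_cidr = ipaddress.ip_network("10.233.64.0/18")

# Per-node podCIDRs as reported by `kubectl get nodes` above.
node_cidrs = {
    "k8s-node1": "10.233.64.0/24",
    "k8s-node2": "10.233.65.0/24",
    "k8s-node3": "10.233.66.0/24",
    "k8s-node4": "10.233.67.0/24",
}

for name, cidr in node_cidrs.items():
    net = ipaddress.ip_network(cidr)
    assert net.subnet_of(cluster_cidr), name
    print(name, cidr, "OK")

# A /18 split into /24s supports at most 2**(24-18) = 64 nodes.
print("max nodes:", 2 ** (24 - 18))
```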
Checking etcd
root@admin-lb:~/kubespray# ssh k8s-node1 etcdctl.sh member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 8b0ca30665374b0 | started | etcd3 | https://192.168.10.13:2380 | https://192.168.10.13:2379 | false |
| 2106626b12a4099f | started | etcd2 | https://192.168.10.12:2380 | https://192.168.10.12:2379 | false |
| c6702130d82d740f | started | etcd1 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i etcdctl.sh endpoint status -w table; echo; done
>> k8s-node1 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | c6702130d82d740f | 3.5.25 | 6.4 MB | true | false | 5 | 11188 | 11188 | |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
>> k8s-node2 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 2106626b12a4099f | 3.5.25 | 6.5 MB | false | false | 5 | 11189 | 11189 | |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
>> k8s-node3 <<
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 8b0ca30665374b0 | 3.5.25 | 6.4 MB | false | false | 5 | 11189 | 11189 | |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Shell completion and aliases
source <(kubectl completion bash)
alias k=kubectl
alias kc=kubecolor
complete -F __start_kubectl k
echo 'source <(kubectl completion bash)' >> /etc/profile
echo 'alias k=kubectl' >> /etc/profile
echo 'alias kc=kubecolor' >> /etc/profile
echo 'complete -F __start_kubectl k' >> /etc/profile
Checking control-plane components
kube-apiserver.yaml
root@admin-lb:~/kubespray# ssh k8s-node1 cat /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.10.11
    - --allow-privileged=true
    - --anonymous-auth=True
    - --apiserver-count=3
    - --authorization-mode=Node,RBAC
    - '--bind-address=::'
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --default-not-ready-toleration-seconds=300
    - --default-unreachable-toleration-seconds=300
    - --enable-admission-plugins=NodeRestriction
    - --enable-aggregator-routing=False
    - --enable-bootstrap-token-auth=true
    - --endpoint-reconciler-type=lease
    - --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem
    - --etcd-certfile=/etc/ssl/etcd/ssl/node-k8s-node1.pem
    - --etcd-compaction-interval=5m0s
    - --etcd-keyfile=/etc/ssl/etcd/ssl/node-k8s-node1-key.pem
    - --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379
kube-controller-manager
root@admin-lb:~/kubespray# ssh k8s-node1 cat /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - '--bind-address=::'
    - --client-ca-file=/etc/kubernetes/ssl/ca.crt
    - --cluster-cidr=10.233.64.0/18
    - --cluster-name=cluster.local
    - --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/ssl/ca.key
    - --configure-cloud-routes=false
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --leader-elect-lease-duration=15s
    - --leader-elect-renew-deadline=10s
    - --node-cidr-mask-size-ipv4=24
    - --node-monitor-grace-period=40s
    - --node-monitor-period=5s
    - --profiling=False
    - --requestheader-client-ca-file=/etc/kubernetes/ssl/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/ssl/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/ssl/sa.key
    - --service-cluster-ip-range=10.233.0.0/18
    - --terminated-pod-gc-threshold=12500
    - --use-service-account-credentials=true
kube-scheduler
root@admin-lb:~/kubespray# ssh k8s-node1 cat /etc/kubernetes/manifests/kube-scheduler.yaml
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - '--bind-address=::'
    - --config=/etc/kubernetes/kubescheduler-config.yaml
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --leader-elect-lease-duration=15s
    - --leader-elect-renew-deadline=10s
    - --profiling=False
Checking certificate info
There are three control-plane nodes, but checking the certificate info shows that super-admin.conf exists only on node 1.
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i ls -l /etc/kubernetes/super-admin.conf ; echo; done
>> k8s-node1 <<
-rw-------. 1 root root 5689 Feb 8 00:42 /etc/kubernetes/super-admin.conf
>> k8s-node2 <<
ls: cannot access '/etc/kubernetes/super-admin.conf': No such file or directory
>> k8s-node3 <<
ls: cannot access '/etc/kubernetes/super-admin.conf': No such file or directory
- k8s-node1
  - /etc/kubernetes/super-admin.conf exists
  - Created with root-only permissions (600)
- k8s-node2, k8s-node3
  - The file does not exist at the same path
/etc/kubernetes/super-admin.conf is a superuser kubeconfig generated by Kubespray. It carries cluster-admin-level (or higher) privileges against the Kubernetes API and, unlike the everyday admin.conf, is intended for automation, bootstrap, and internal control.
Its default user belongs to the system:masters group.
In a multi-control-plane setup, Kubespray does not distribute every admin kubeconfig to every node. Instead it designates one control-plane node as the primary (first master) and creates the files below only there:
- /etc/kubernetes/admin.conf
- /etc/kubernetes/super-admin.conf
- /root/.kube/config
Certificate expiration check
root@admin-lb:~/kubespray# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i kubeadm certs check-expiration ; echo; done
>> k8s-node1 <<
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
W0208 02:19:43.293312 51059 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [10.233.0.3]
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Feb 07, 2027 15:42 UTC 364d ca no
apiserver Feb 07, 2027 15:42 UTC 364d ca no
apiserver-kubelet-client Feb 07, 2027 15:42 UTC 364d ca no
controller-manager.conf Feb 07, 2027 15:42 UTC 364d ca no
front-proxy-client Feb 07, 2027 15:42 UTC 364d front-proxy-ca no
scheduler.conf Feb 07, 2027 15:42 UTC 364d ca no
super-admin.conf Feb 07, 2027 15:42 UTC 364d ca no
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Feb 05, 2036 15:42 UTC 9y no
front-proxy-ca Feb 05, 2036 15:42 UTC 9y no
>> k8s-node2 <<
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
W0208 02:19:43.751491 50100 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [10.233.0.3]
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Feb 07, 2027 15:42 UTC 364d ca no
apiserver Feb 07, 2027 15:42 UTC 364d ca no
apiserver-kubelet-client Feb 07, 2027 15:42 UTC 364d ca no
controller-manager.conf Feb 07, 2027 15:42 UTC 364d ca no
front-proxy-client Feb 07, 2027 15:42 UTC 364d front-proxy-ca no
scheduler.conf Feb 07, 2027 15:42 UTC 364d ca no
!MISSING! super-admin.conf
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Feb 05, 2036 15:42 UTC 9y no
front-proxy-ca Feb 05, 2036 15:42 UTC 9y no
>> k8s-node3 <<
[check-expiration] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
[check-expiration] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
W0208 02:19:44.231769 50364 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [10.233.0.3]
CERTIFICATE EXPIRES RESIDUAL TIME CERTIFICATE AUTHORITY EXTERNALLY MANAGED
admin.conf Feb 07, 2027 15:42 UTC 364d ca no
apiserver Feb 07, 2027 15:42 UTC 364d ca no
apiserver-kubelet-client Feb 07, 2027 15:42 UTC 364d ca no
controller-manager.conf Feb 07, 2027 15:42 UTC 364d ca no
front-proxy-client Feb 07, 2027 15:42 UTC 364d front-proxy-ca no
scheduler.conf Feb 07, 2027 15:42 UTC 364d ca no
!MISSING! super-admin.conf
CERTIFICATE AUTHORITY EXPIRES RESIDUAL TIME EXTERNALLY MANAGED
ca Feb 05, 2036 15:42 UTC 9y no
front-proxy-ca Feb 05, 2036 15:42 UTC 9y no
Every control-plane node shares the cluster-wide configuration, and the certificates are managed under the same CA hierarchy.
The certificate details are as follows:
- Expiry: 2027-02-07
- Residual time: about 364 days
- CA: ca or front-proxy-ca
- Externally managed: no
The !MISSING! super-admin.conf entry on nodes 2 and 3 simply reflects that node 1 is the primary node.
The clusterDNS address kubeadm recommends sits near the start of the Service CIDR (the .10 address by default), but the kubelet configuration here uses 10.233.0.3. Kubespray deliberately places the CoreDNS Service IP at an address such as .3 (or .10, depending on configuration).
This is an internal network design choice made for compatibility: it preserves continuity with older kube-dns-style layouts in existing clusters.
root@admin-lb:~/kubespray# kubectl get svc -n kube-system coredns
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 102m
root@admin-lb:~/kubespray# kubectl get cm -n kube-system kubelet-config -o yaml | grep clusterDNS -A2
clusterDNS:
- 10.233.0.3
clusterDomain: cluster.local
coredns is the Service responsible for in-cluster DNS in Kubernetes. Its Service type is ClusterIP, a virtual IP reachable only from inside the cluster, and that ClusterIP is exactly 10.233.0.3.
Ports 53/UDP and 53/TCP serve ordinary DNS queries, while 9153/TCP exposes Prometheus metrics, so CoreDNS acts as both the DNS server and an observability target.
Checking the cluster IP from a pod
root@admin-lb:~/kubespray# kubectl exec -it -n kube-system nginx-proxy-k8s-node4 -- cat /etc/resolv.conf
search kube-system.svc.cluster.local svc.cluster.local cluster.local default.svc.cluster.local
nameserver 10.233.0.3
options ndots:5
In this cluster, CoreDNS serves at 10.233.0.3.
All DNS queries from this nginx pod are sent to 10.233.0.3, which is the ClusterIP of the CoreDNS Service.
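The ndots:5 option in the pod's resolv.conf controls how short names are expanded before queries are sent to 10.233.0.3. A rough model of glibc's behavior (simplified — real resolvers also handle timeouts, retries, and negative caching):

```python
# Search domains from the pod's /etc/resolv.conf shown above.
search = ["kube-system.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "default.svc.cluster.local"]

def candidate_queries(name: str, search_domains, ndots: int = 5):
    """Return the DNS names tried, in order, for a given lookup."""
    if name.endswith("."):           # fully qualified: ask as-is
        return [name]
    queries = []
    if name.count(".") < ndots:      # "relative" name: search list first
        queries += [f"{name}.{d}" for d in search_domains]
    queries.append(name)             # finally the literal name
    return queries

print(candidate_queries("kubernetes.default", search))
```

This is why in-cluster names like `kubernetes.default` resolve quickly while external names with few dots generate several extra queries against CoreDNS.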
Comparing DNS settings on a joined node versus a non-joined node
root@admin-lb:~/kubespray# ssh k8s-node1 cat /etc/resolv.conf
# Generated by NetworkManager
search default.svc.cluster.local svc.cluster.local
nameserver 10.233.0.3
nameserver 168.126.63.1
nameserver 168.126.63.2
options ndots:2 timeout:2 attempts:2
root@admin-lb:~/kubespray# ssh k8s-node5 cat /etc/resolv.conf
# Generated by NetworkManager
search lan
nameserver 168.126.63.1
nameserver 168.126.63.2
Node 5, which has not joined the cluster, still has the default DNS configuration.
Checking the kubeadm-config ConfigMap
Control Plane Endpoint
controlPlaneEndpoint: 192.168.10.11:6443
This entry is the kube-apiserver entry point used when accessing the cluster from outside.
- Current setting: 192.168.10.11:6443
- Used as the LB VIP or a representative control-plane node address
- It matches exactly the address we rewrote into server: in the kubeconfig on admin-lb earlier
API server certificate SAN configuration
This is the list of endpoints (SANs) the kube-apiserver TLS certificate is valid for:
- Service DNS name: kubernetes.default.svc.cluster.local
- Service ClusterIP: 10.233.0.1
- Local access: 127.0.0.1, ::1
- control-plane node names: k8s-node1~3
- LB DNS / IP:
  - lb-apiserver.kubernetes.local
  - 192.168.10.11~13
Thanks to this list, access from inside pods, from control-plane nodes, from admin-lb, and through the LB all work without TLS errors.
API server main extraArgs
apiserver-count: "3"
authorization-mode: Node,RBAC
bind-address: '::'
service-cluster-ip-range: 10.233.0.0/18
- apiserver-count: 3: declares that there are three control-plane nodes
- authorization-mode: Node,RBAC: the standard Kubernetes authorization model
- bind-address: '::': listens on both IPv4 and IPv6 (dual-stack)
- service-cluster-ip-range
  - Service CIDR = 10.233.0.0/18
  - The CoreDNS Service IP (10.233.0.3) falls inside this range
Networking settings
networking:
  dnsDomain: cluster.local
  podSubnet: 10.233.64.0/18
  serviceSubnet: 10.233.0.0/18
- Service network: 10.233.0.0/18
- Pod network: 10.233.64.0/18
- DNS domain: cluster.local
Summing up what we have confirmed so far:
- CoreDNS Service IP = 10.233.0.3
- kubelet clusterDNS = 10.233.0.3
- Pod /etc/resolv.conf nameserver = 10.233.0.3
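These values can be cross-checked programmatically — the CoreDNS IP must fall inside the Service CIDR, and the Service and Pod CIDRs must not overlap:

```python
import ipaddress

# CIDRs from the kubeadm-config ConfigMap above.
service_cidr = ipaddress.ip_network("10.233.0.0/18")
pod_cidr     = ipaddress.ip_network("10.233.64.0/18")

coredns_ip   = ipaddress.ip_address("10.233.0.3")
apiserver_ip = ipaddress.ip_address("10.233.0.1")  # kubernetes.default Service

assert coredns_ip in service_cidr and coredns_ip not in pod_cidr
assert apiserver_ip in service_cidr
# The two /18 ranges must be disjoint, or Service routing would clash with pod IPs.
assert not service_cidr.overlaps(pod_cidr)
print("CIDR layout consistent")
```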
The dns section: what disabled: true means
dns:
  disabled: true
This disables kubeadm's built-in DNS installation.
kubeadm's own CoreDNS addon is not used; instead, Kubespray deploys and manages CoreDNS separately via Helm/manifests and owns the DNS lifecycle.
etcd configuration (External etcd)
etcd:
  external:
    endpoints:
    - https://192.168.10.11:2379
    - https://192.168.10.12:2379
    - https://192.168.10.13:2379
etcd does not run as control-plane static pods; it is configured as an external three-node HA etcd cluster, accessed over TLS with client certificates.
This cleanly separates the roles of the control plane and etcd.
Certificate validity settings
certificateValidityPeriod: 8760h0m0s
caCertificateValidityPeriod: 87600h0m0s
This matches the kubeadm certs check-expiration output: the certificate lifecycle is consistent from configuration through to the runtime state.
- Regular certificates: about 1 year
- CA certificates: about 10 years
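The hour values in the config convert exactly to the lifetimes seen in the check-expiration output:

```python
from datetime import timedelta

cert_validity = timedelta(hours=8760)   # certificateValidityPeriod
ca_validity   = timedelta(hours=87600)  # caCertificateValidityPeriod

print(cert_validity.days)  # 365  -> ~1 year,  matching the "364d" residual
print(ca_validity.days)    # 3650 -> ~10 years, matching the "9y" residual
```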
K8S API endpoint

Let's look at the actual path worker nodes use to reach the Kubernetes API, and at Kubespray's client-side load-balancing design.
Checking worker node info
root@admin-lb:~/kubespray# ssh k8s-node4 crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD NAMESPACE
468e938c98d5c bc6c1e09a843d 2 hours ago Running metrics-server 0 09ff933093700 metrics-server-65fdf69dcb-jnvbd kube-system
1b68d0c87c769 2f6c962e7b831 2 hours ago Running coredns 0 b0027b8776163 coredns-664b99d7c7-5drtx kube-system
37d48e61e4df1 cadcae92e6360 2 hours ago Running kube-flannel 0 c8cdc9b9aff8e kube-flannel-ds-arm64-wkwrg kube-system
8117e271262f3 72b57ec14d31e 2 hours ago Running kube-proxy 0 c02b63ef148dc kube-proxy-bjqdx kube-system
6a22218ae0325 5a91d90f47ddf 2 hours ago Running nginx-proxy 0 83316bf2859bd nginx-proxy-k8s-node4 kube-system
nginx config
root@admin-lb:~/kubespray# ssh k8s-node4 cat /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;
events {
multi_accept on;
use epoll;
worker_connections 16384;
}
stream {
upstream kube_apiserver {
least_conn;
server 192.168.10.11:6443;
server 192.168.10.12:6443;
server 192.168.10.13:6443;
}
server {
listen 127.0.0.1:6443;
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
http {
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 5m;
keepalive_requests 100;
reset_timedout_connection on;
server_tokens off;
autoindex off;
server {
listen 8081;
location /healthz {
access_log off;
return 200;
}
location /stub_status {
stub_status on;
access_log off;
}
}
}
nginx-proxy is neither a Deployment nor a DaemonSet: it is a static pod that kubelet creates and manages directly by watching the manifests under /etc/kubernetes/manifests.
With this nginx setup, worker nodes always talk to localhost, and nginx takes responsibility for spreading traffic across the control plane.
kubelet / kube-proxy
↓
https://127.0.0.1:6443
↓
nginx-proxy (local)
↓
192.168.10.11~13:6443 (control-plane)
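The least_conn directive in the upstream block picks the backend with the fewest in-flight connections, which matters here because API-server connections (watches) are long-lived. A toy model of that selection:

```python
# Toy model of nginx's least_conn upstream selection for the three
# control-plane backends in the generated nginx.conf.
active = {"192.168.10.11:6443": 0,
          "192.168.10.12:6443": 0,
          "192.168.10.13:6443": 0}

def pick_upstream():
    # choose the server with the fewest in-flight connections
    return min(active, key=active.get)

for _ in range(6):
    srv = pick_upstream()
    active[srv] += 1  # connection opens and stays open (long-lived watch)

print(active)  # long-lived watches end up spread evenly
```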
The nginx config is rendered from a Jinja2 template, so a per-node nginx.conf is generated automatically.
root@admin-lb:~/kubespray# tree roles/kubernetes/node/tasks/loadbalancer
roles/kubernetes/node/tasks/loadbalancer
โโโ haproxy.yml
โโโ kube-vip.yml
โโโ nginx-proxy.yml
1 directory, 3 files
root@admin-lb:~/kubespray# cat roles/kubernetes/node/tasks/loadbalancer/nginx-proxy.yml
---
- name: Haproxy | Cleanup potentially deployed haproxy
  file:
    path: "{{ kube_manifest_dir }}/haproxy.yml"
    state: absent

- name: Nginx-proxy | Make nginx directory
  file:
    path: "{{ nginx_config_dir }}"
    state: directory
    mode: "0700"
    owner: root

- name: Nginx-proxy | Write nginx-proxy configuration
  template:
    src: "loadbalancer/nginx.conf.j2"
    dest: "{{ nginx_config_dir }}/nginx.conf"
    owner: root
    mode: "0755"
    backup: true

- name: Nginx-proxy | Get checksum from config
  stat:
    path: "{{ nginx_config_dir }}/nginx.conf"
    get_attributes: false
    get_checksum: true
    get_mime: false
  register: nginx_stat

- name: Nginx-proxy | Write static pod
  template:
    src: manifests/nginx-proxy.manifest.j2
    dest: "{{ kube_manifest_dir }}/nginx-proxy.yml"
    mode: "0640"
root@admin-lb:~/kubespray# cat roles/kubernetes/node/templates/loadbalancer/nginx.conf.j2
error_log stderr notice;
worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;
events {
multi_accept on;
use epoll;
worker_connections 16384;
}
stream {
upstream kube_apiserver {
least_conn;
{% for host in groups['kube_control_plane'] -%}
server {{ hostvars[host]['main_access_ip'] | ansible.utils.ipwrap }}:{{ kube_apiserver_port }};
{% endfor -%}
}
server {
listen 127.0.0.1:{{ loadbalancer_apiserver_port|default(kube_apiserver_port) }};
{% if ipv6_stack -%}
listen [::1]:{{ loadbalancer_apiserver_port|default(kube_apiserver_port) }};
{% endif -%}
proxy_pass kube_apiserver;
proxy_timeout 10m;
proxy_connect_timeout 1s;
}
}
http {
aio threads;
aio_write on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout {{ loadbalancer_apiserver_keepalive_timeout }};
keepalive_requests 100;
reset_timedout_connection on;
server_tokens off;
autoindex off;
{% if loadbalancer_apiserver_healthcheck_port is defined -%}
server {
listen {{ loadbalancer_apiserver_healthcheck_port }};
{% if ipv6_stack -%}
listen [::]:{{ loadbalancer_apiserver_healthcheck_port }};
{% endif -%}
location /healthz {
access_log off;
return 200;
}
location /stub_status {
stub_status on;
access_log off;
}
}
{% endif %}
}
root@admin-lb:~/kubespray# cat roles/kubernetes/node/templates/manifests/nginx-proxy.manifest.j2
apiVersion: v1
kind: Pod
metadata:
  name: {{ loadbalancer_apiserver_pod_name }}
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
  annotations:
    nginx-cfg-checksum: "{{ nginx_stat.stat.checksum }}"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: {{ nginx_image_repo }}:{{ nginx_image_tag }}
    imagePullPolicy: {{ k8s_image_pull_policy }}
    resources:
      requests:
        cpu: {{ loadbalancer_apiserver_cpu_requests }}
        memory: {{ loadbalancer_apiserver_memory_requests }}
{% if loadbalancer_apiserver_healthcheck_port is defined -%}
    livenessProbe:
      httpGet:
        path: /healthz
        port: {{ loadbalancer_apiserver_healthcheck_port }}
    readinessProbe:
      httpGet:
        path: /healthz
        port: {{ loadbalancer_apiserver_healthcheck_port }}
{% endif -%}
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: {{ nginx_config_dir }}
Verifying API calls
root@admin-lb:~/kubespray# ssh k8s-node4 curl -s localhost:8081/healthz -I
HTTP/1.1 200 OK
Server: nginx
Date: Sat, 07 Feb 2026 18:12:23 GMT
Content-Type: text/plain
Content-Length: 0
Connection: keep-alive
root@admin-lb:~/kubespray# ssh k8s-node4 curl -sk https://127.0.0.1:6443/version | grep Version
"gitVersion": "v1.32.9",
"goVersion": "go1.23.12",
root@admin-lb:~/kubespray# ssh k8s-node4 ss -tnlp | grep nginx
LISTEN 0 511 0.0.0.0:8081 0.0.0.0:* users:(("nginx",pid=17561,fd=6),("nginx",pid=17560,fd=6),("nginx",pid=17532,fd=6))
LISTEN 0 511 127.0.0.1:6443 0.0.0.0:* users:(("nginx",pid=17561,fd=5),("nginx",pid=17560,fd=5),("nginx",pid=17532,fd=5))
The worker node sends its requests to localhost (127.0.0.1), but the responses actually come from the control-plane API servers.
This confirms that client-side load balancing through nginx-proxy is working as designed.
The endpoint kubelet points at
root@admin-lb:~/kubespray# ssh k8s-node4 cat /etc/kubernetes/kubelet.conf | grep server
server: https://localhost:6443
kube-proxy configuration
root@admin-lb:~/kubespray# kubectl get cm -n kube-system kube-proxy -o yaml | grep 'kubeconfig.conf:' -A18
  kubeconfig.conf: |-
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://127.0.0.1:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
Troubleshooting an error in the nginx-proxy logs
root@admin-lb:~/kubespray# kubectl logs -n kube-system nginx-proxy-k8s-node4
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2026/02/07 15:42:53 [notice] 1#1: using the "epoll" event method
2026/02/07 15:42:53 [notice] 1#1: nginx/1.28.0
2026/02/07 15:42:53 [notice] 1#1: built by gcc 14.2.0 (Alpine 14.2.0)
2026/02/07 15:42:53 [notice] 1#1: OS: Linux 6.12.0-55.39.1.el10_0.aarch64
2026/02/07 15:42:53 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65535:65535
2026/02/07 15:42:53 [notice] 1#1: start worker processes
2026/02/07 15:42:53 [notice] 1#1: start worker process 20
2026/02/07 15:42:53 [notice] 1#1: start worker process 21
2026/02/07 15:42:53 [alert] 20#20: setrlimit(RLIMIT_NOFILE, 130048) failed (1: Operation not permitted)
2026/02/07 15:42:53 [alert] 21#21: setrlimit(RLIMIT_NOFILE, 130048) failed (1: Operation not permitted)
2026/02/07 15:43:33 [error] 20#20: *37 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "192.168.10.12:6443", bytes from/to client:1489/0, bytes from/to upstream:0/1489
2026/02/07 15:43:33 [error] 21#21: *31 recv() failed (104: Connection reset by peer) while proxying and reading from upstream, client: 127.0.0.1, server: 127.0.0.1:6443, upstream: "192.168.10.12:6443", bytes from/to client:1489/0, bytes from/to upstream:0/1489
At startup nginx tried to raise its file descriptor limit to 130048, but the container runtime environment refused the call, so it failed.
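The limit a running process actually ended up with, the same value nginx reads via getrlimit(RLIMIT_NOFILE), can be read from /proc. Shown here for the current shell so the sketch is runnable anywhere; on the node you would substitute an nginx worker PID (e.g. 17560 above) for "self":

```shell
# Inspect the soft/hard open-files limits of a process from its /proc entry.
grep "Max open files" /proc/self/limits
```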
Checking the containerd configuration on node 4
root@admin-lb:~/kubespray# ssh k8s-node4 cat /etc/containerd/cri-base.json | jq | grep rlimits -A 6
"rlimits": [
{
"type": "RLIMIT_NOFILE",
"hard": 65535,
"soft": 65535
}
],
containerd has already pinned RLIMIT_NOFILE to 65535 at the OCI Runtime Spec level, so nginx's own setting (worker_rlimit_nofile 130048) is structurally bound to be rejected at runtime.
root@admin-lb:~/kubespray# ssh k8s-node4 crictl inspect --name nginx-proxy | grep rlimits -A6
"rlimits": [
{
"hard": 65535,
"soft": 65535,
"type": "RLIMIT_NOFILE"
}
],
Kubespray installs containerd with containerd_base_runtime_spec_rlimit_nofile: 65535 as the default.
This setting exists to prevent node-wide resource exhaustion from excessive FD usage while providing a default that is sufficient for most workloads.
For connection-heavy infrastructure components like nginx-proxy, however, this cap can collide with the component's own configuration.
Fixing the OCI spec and running a scoped playbook
root@admin-lb:~/kubespray# cat << EOF >> inventory/mycluster/group_vars/all/containerd.yml
containerd_default_base_runtime_spec_patch:
  process:
    rlimits: []
EOF
grep "^[^#]" inventory/mycluster/group_vars/all/containerd.yml
---
containerd_default_base_runtime_spec_patch:
  process:
    rlimits: []
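Conceptually, the group_vars patch above deep-merges over the base runtime spec so that rlimits becomes an empty list. A minimal sketch using jq (which the walkthrough already uses) against a hypothetical, trimmed stand-in for cri-base.json — jq's `*` operator performs exactly this recursive object merge, with arrays replaced outright:

```shell
# Hypothetical, heavily trimmed stand-in for /etc/containerd/cri-base.json.
cat > /tmp/cri-base.json <<'EOF'
{"ociVersion":"1.0.2","process":{"user":{"uid":0,"gid":0},
 "rlimits":[{"type":"RLIMIT_NOFILE","hard":65535,"soft":65535}]}}
EOF
# Deep-merge the patch over the base spec: rlimits is emptied,
# everything else (user, ociVersion) is preserved.
jq '. * {"process": {"rlimits": []}}' /tmp/cri-base.json
```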
root@k8s-node4:~# journalctl -u containerd.service -f
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "containerd" --limit k8s-node4 -e kube_version="1.32.9"

While monitoring the containerd service on node 4 during the run, you can see containerd being restarted.
When containerd restarts, every pod scheduled on that node can be temporarily affected as follows:
- NotReady
- ContainerCreating
- CrashLoopBackOff (in some cases)
- static pods (nginx-proxy, etc.) are immediately recreated by kubelet
So verify the blast radius of a scoped playbook run in a beta environment first, and
kubectl cordon k8s-node4
kubectl drain k8s-node4 --ignore-daemonsets --delete-emptydir-data
it is safer to isolate the pods first by cordoning and draining the target node before making the change!
After the playbook run
root@admin-lb:~/kubespray# ssh k8s-node4 cat /etc/containerd/cri-base.json | jq | grep rlimits
"rlimits": [],
root@admin-lb:~/kubespray# ssh k8s-node4 crictl inspect --name nginx-proxy | grep rlimits -A6
"rlimits": [
{
"hard": 65535,
"soft": 65535,
"type": "RLIMIT_NOFILE"
}
],
cri-base.json is the base runtime spec applied only to newly created containers, so the nginx-proxy Pod that is already running keeps its existing runtime spec; restarting containerd alone does not pick up the change.
root@k8s-node4:~# crictl pods --namespace kube-system --name 'nginx-proxy-*' -q | xargs crictl rmp -f
Stopped sandbox 83316bf2859bd6177593dd4b04a662af5f14891602c4af42b887199638229e66
Removed sandbox 83316bf2859bd6177593dd4b04a662af5f14891602c4af42b887199638229e66
nginx-proxy is a static pod that kubelet watches via /etc/kubernetes/manifests,
so deleting the Pod makes kubelet recreate it immediately with the new OCI runtime spec applied.
root@admin-lb:~/kubespray# ssh k8s-node4 crictl inspect --name nginx-proxy | grep rlimits -A6
root@admin-lb:~/kubespray# kubectl logs -n kube-system nginx-proxy-k8s-node4 -f
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: /etc/nginx/conf.d/default.conf is not a file or does not exist
/docker-entrypoint.sh: Sourcing /docker-entrypoint.d/15-local-resolvers.envsh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2026/02/07 18:30:26 [notice] 1#1: using the "epoll" event method
2026/02/07 18:30:26 [notice] 1#1: nginx/1.28.0
2026/02/07 18:30:26 [notice] 1#1: built by gcc 14.2.0 (Alpine 14.2.0)
2026/02/07 18:30:26 [notice] 1#1: OS: Linux 6.12.0-55.39.1.el10_0.aarch64
2026/02/07 18:30:26 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2026/02/07 18:30:26 [notice] 1#1: start worker processes
2026/02/07 18:30:26 [notice] 1#1: start worker process 20
2026/02/07 18:30:26 [notice] 1#1: start worker process 21
By removing the RLIMIT_NOFILE entry from the containerd base runtime spec and then recreating the nginx-proxy static pod, nginx now inherits the host ulimit as-is and runs without warnings.
Control-plane nodes -> K8s API endpoint
Installing kube-ops-view
root@admin-lb:~/kubespray# helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
"geek-cookbook" has been added to your repositories
root@admin-lb:~/kubespray# helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 \
--set service.main.type=NodePort,service.main.ports.http.nodePort=30000 \
--set env.TZ="Asia/Seoul" --namespace kube-system \
--set image.repository="abihf/kube-ops-view" --set image.tag="latest"
NAME: kube-ops-view
LAST DEPLOYED: Sun Feb 8 03:40:35 2026
NAMESPACE: kube-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get the application URL by running these commands:
export NODE_PORT=$(kubectl get --namespace kube-system -o jsonpath="{.spec.ports[0].nodePort}" services kube-ops-view)
export NODE_IP=$(kubectl get nodes --namespace kube-system -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT

Currently only node 1 receives traffic; change this so all three nodes can receive it.
Deploying a sample app
# Deploy a sample application
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webpod
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webpod
  template:
    metadata:
      labels:
        app: webpod
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - sample-app
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: webpod
        image: traefik/whoami
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: webpod
  labels:
    app: webpod
spec:
  selector:
    app: webpod
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30003
  type: NodePort
EOF
root@admin-lb:~/kubespray# kubectl get deploy,svc,ep webpod -owide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/webpod 2/2 2 2 23s webpod traefik/whoami app=webpod
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/webpod NodePort 10.233.2.178 <none> 80:30003/TCP 23s app=webpod
NAME ENDPOINTS AGE
endpoints/webpod 10.233.67.5:80,10.233.67.6:80 23s
Simulating a failure

Power off node 1 to simulate a failure.
Since two backend API servers remain, however, requests made from node 2 are still handled normally.
while true; do curl -sk https://127.0.0.1:6443/version | grep gitVersion ; date; sleep 1; echo ; done

external LB -> pointing at all three HA control-plane nodes
sed -i 's/192.168.10.12/192.168.10.10/g' /root/.kube/config
root@admin-lb:~/kubespray# ssh k8s-node1 kubectl get cm -n kube-system kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - kubernetes
      - kubernetes.default
      - kubernetes.default.svc
      - kubernetes.default.svc.cluster.local
      - 10.233.0.1
      - localhost
      - 127.0.0.1
      - ::1
      - k8s-node1
      - k8s-node2
      - k8s-node3
      - lb-apiserver.kubernetes.local
      - 192.168.10.11
      - 192.168.10.12
      - 192.168.10.13
      - 10.0.2.15
      - fd17:625c:f037:2:a00:27ff:fe90:eaeb
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "control-plane" --limit kube_control_plane -e kube_version="1.32.9"
...
Sunday 08 February 2026 04:25:31 +0900 (0:00:00.033) 0:00:20.897 *******
===============================================================================
Gather minimal facts ---------------------------------------------------- 1.99s
kubernetes/control-plane : Kubeadm | Check apiserver.crt SAN hosts ------ 1.36s
kubernetes/control-plane : Backup old certs and keys -------------------- 1.29s
kubernetes/control-plane : Install | Copy kubectl binary from download dir --- 1.24s
kubernetes/control-plane : Kubeadm | Check apiserver.crt SAN IPs -------- 1.07s
Gather necessary facts (hardware) --------------------------------------- 0.98s
kubernetes/preinstall : Create other directories of root owner ---------- 0.74s
kubernetes/preinstall : Create kubernetes directories ------------------- 0.72s
kubernetes/control-plane : Backup old confs ----------------------------- 0.70s
kubernetes/control-plane : Update server field in component kubeconfigs --- 0.67s
win_nodes/kubernetes_patch : debug -------------------------------------- 0.66s
kubernetes/control-plane : Kubeadm | Create kubeadm config -------------- 0.64s
kubernetes/control-plane : Kubeadm | regenerate apiserver cert 2/2 ------ 0.42s
kubernetes/control-plane : Install kubectl bash completion -------------- 0.37s
kubernetes/control-plane : Create kube-scheduler config ----------------- 0.36s
kubernetes/control-plane : Renew K8S control plane certificates monthly 2/2 --- 0.35s
Gather necessary facts (network) ---------------------------------------- 0.34s
kubernetes/control-plane : Kubeadm | aggregate all SANs ----------------- 0.34s
kubernetes/control-plane : Set kubectl bash completion file permissions --- 0.34s
kubernetes/control-plane : Install script to renew K8S control plane certificates --- 0.30s
kubectl edit cm -n kube-system kubeadm-config
data:
  ClusterConfiguration: |
    apiServer:
      certSANs:
      - k8s-api-srv.admin-lb.com
      - 192.168.10.10
      - k8s-node1
      - k8s-node2
      - k8s-node3
      - kubernetes
      - kubernetes.default
      - kubernetes.default.svc
      - kubernetes.default.svc.cluster.local
      - lb-apiserver.kubernetes.local
      - localhost
      - 127.0.0.1
      - ::1
      - 10.233.0.1
      - 192.168.10.11
      - 192.168.10.12
      - 192.168.10.13
      - 10.0.2.15
    controlPlaneEndpoint: 192.168.10.10:6443
The kubeadm-config ConfigMap must declare the API server's official access endpoints (IP/DNS) via controlPlaneEndpoint and apiServer.certSANs; these are used as the source of truth when renewing certificates and upgrading the cluster.
Therefore the VIP or DNS endpoint actually in use must be added explicitly.
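The SAN requirement can be reproduced locally with a throwaway self-signed certificate: if the VIP/DNS endpoint that clients use is missing from subjectAltName, TLS verification fails. The values below mirror this cluster but are only illustrative, and -addext needs OpenSSL 1.1.1 or newer:

```shell
# Issue a one-day throwaway cert whose SANs include the LB VIP and DNS name.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/apiserver.key \
  -out /tmp/apiserver.crt -days 1 -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=DNS:k8s-api-srv.admin-lb.com,IP:192.168.10.10" \
  2>/dev/null
# Print only the SAN extension, the same check done against apiserver.crt above.
openssl x509 -in /tmp/apiserver.crt -noout -ext subjectAltName
```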
root@admin-lb:~/kubespray# kubectl get node -v=6
I0208 04:26:22.096297 21384 loader.go:402] Config loaded from file: /root/.kube/config
I0208 04:26:22.097979 21384 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0208 04:26:22.097992 21384 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0208 04:26:22.097995 21384 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0208 04:26:22.097997 21384 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0208 04:26:22.111997 21384 round_trippers.go:560] GET https://192.168.10.11:6443/api/v1/nodes?limit=500 200 OK in 7 milliseconds
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 3h44m v1.32.9
k8s-node2 Ready control-plane 3h43m v1.32.9
k8s-node3 Ready control-plane 3h43m v1.32.9
k8s-node4 Ready <none> 3h43m v1.32.9
sed -i 's/192.168.10.10/k8s-api-srv.admin-lb.com/g' /root/.kube/config
root@admin-lb:~/kubespray# ssh k8s-node1 cat /etc/kubernetes/ssl/apiserver.crt | openssl x509 -text -noout
...
X509v3 Subject Alternative Name:
DNS:k8s-api-srv.admin-lb.com, DNS:k8s-node1, DNS:k8s-node2, DNS:k8s-node3, DNS:kubernetes, DNS:kubernetes.default, DNS:kubernetes.default.svc, DNS:kubernetes.default.svc.cluster.local, DNS:lb-apiserver.kubernetes.local, DNS:localhost, IP Address:10.233.0.1, IP Address:192.168.10.11, IP Address:127.0.0.1, IP Address:0:0:0:0:0:0:0:1, IP Address:192.168.10.10, IP Address:192.168.10.12, IP Address:192.168.10.13, IP Address:10.0.2.15, IP Address:FD17:625C:F037:2:A00:27FF:FE90:EAEB

Node 1 was killed, but traffic can be seen being distributed to nodes 2 and 3!

Start node 1 again in VirtualBox.
Node management
Adding a node
root@admin-lb:~/kubespray# cat << EOF > /root/kubespray/inventory/mycluster/inventory.ini
[kube_control_plane]
k8s-node1 ansible_host=192.168.10.11 ip=192.168.10.11 etcd_member_name=etcd1
k8s-node2 ansible_host=192.168.10.12 ip=192.168.10.12 etcd_member_name=etcd2
k8s-node3 ansible_host=192.168.10.13 ip=192.168.10.13 etcd_member_name=etcd3
[etcd:children]
kube_control_plane
[kube_node]
k8s-node4 ansible_host=192.168.10.14 ip=192.168.10.14
k8s-node5 ansible_host=192.168.10.15 ip=192.168.10.15
EOF
root@admin-lb:~/kubespray# ansible-inventory -i /root/kubespray/inventory/mycluster/inventory.ini --graph
@all:
|--@ungrouped:
|--@etcd:
| |--@kube_control_plane:
| | |--k8s-node1
| | |--k8s-node2
| | |--k8s-node3
|--@kube_node:
| |--k8s-node4
| |--k8s-node5
root@admin-lb:~/kubespray# ansible -i inventory/mycluster/inventory.ini k8s-node5 -m ping
[WARNING]: Platform linux on host k8s-node5 is using the discovered Python
interpreter at /usr/bin/python3.12, but future installation of another Python
interpreter could change the meaning of that path. See
https://docs.ansible.com/ansible-
core/2.17/reference_appendices/interpreter_discovery.html for more information.
k8s-node5 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python3.12"
},
"changed": false,
"ping": "pong"
}
Added node 5 to the inventory and verified connectivity with a ping.
ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v scale.yml --limit=k8s-node5 -e kube_version="1.32.9" | tee kubespray_add_worker_node.log
...
PLAY RECAP *********************************************************************
k8s-node5 : ok=411 changed=87 unreachable=0 failed=0 skipped=591 rescued=0 ignored=0
Sunday 08 February 2026 04:45:41 +0900 (0:00:00.014) 0:02:21.385 *******
===============================================================================
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence -- 30.24s
download : Download_container | Download image if required -------------- 6.15s
system_packages : Manage packages --------------------------------------- 6.13s
download : Download_container | Download image if required -------------- 5.89s
download : Download_container | Download image if required -------------- 4.63s
container-engine/containerd : Download_file | Download item ------------- 3.54s
download : Download_file | Download item -------------------------------- 3.21s
container-engine/crictl : Download_file | Download item ----------------- 3.15s
container-engine/runc : Download_file | Download item ------------------- 3.12s
download : Download_container | Download image if required -------------- 2.54s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 2.48s
container-engine/nerdctl : Download_file | Download item ---------------- 2.37s
download : Download_file | Download item -------------------------------- 2.04s
download : Download_file | Download item -------------------------------- 1.95s
container-engine/containerd : Containerd | Unpack containerd archive ---- 1.91s
download : Download_container | Download image if required -------------- 1.78s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 1.66s
container-engine/crictl : Extract_file | Unpacking archive -------------- 1.65s
etcd : Check_certs | Register certs that have already been generated on first etcd node --- 1.60s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 1.43s
root@admin-lb:~/kubespray# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane 4h4m v1.32.9 192.168.10.11 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node2 Ready control-plane 4h3m v1.32.9 192.168.10.12 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node3 Ready control-plane 4h3m v1.32.9 192.168.10.13 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node4 Ready <none> 4h3m v1.32.9 192.168.10.14 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node5 Ready <none> 82s v1.32.9 192.168.10.15 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
Removing a node
A caution here: removing a node outside the proper procedure can leave the cluster broken, and the cluster-reset playbook reset.yml deletes the entire k8s cluster... it cannot be recovered once run, so its use is discouraged. On a cluster that is in operation, it is safest to delete that script outright.
https://github.com/kubernetes-sigs/kubespray/blob/master/playbooks/remove_node.yml
The playbook to use for node removal is remove-node.yml.
This script is meant to be run while the node being removed is still healthy.
To prevent cluster inconsistency during node removal, the playbook is designed to validate its input, require user confirmation, and perform the removal in stages (pre-remove -> reset -> post-remove) that account for etcd and control-plane roles.
This script is safe because it requires the user to type yes once more before it runs, and because it follows a strict order — detach the node from the cluster first, then reset the node, and finally clean up the metadata — keeping the cluster state consistent.
It also runs a separate removal procedure for etcd members so the etcd quorum is not broken, preserving data integrity. It is very conservatively and carefully designed!
Validate nodes for removal
- Validates in advance that the removal targets are explicitly specified, preventing accidents where a wrong invocation affects the whole cluster.
Common tasks for every playbooks
- Performs the basic settings and environment initialization shared by every playbook.
Confirm node removal
- Requires explicit user confirmation before the actual removal starts, blocking unintended destructive runs one more time.
Gather facts
- Gathers the information needed to determine the roles (control-plane, worker, etcd, etc.) of the nodes being removed.
Reset node
- Safely detaches the node from the cluster, then resets the node's Kubernetes components and state.
- If the node is an etcd member, removes it from the etcd cluster before resetting.
- Runs kubeadm reset and removes the CNI, kubelet / container runtime configuration, certificates, and kubeconfig, fully wiping the node.
Post node removal
- Cleans up cluster metadata and leftover configuration after the removal, ensuring state consistency
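The "Confirm node removal" gate behaves roughly like this tiny sketch (illustrative only, not Kubespray's actual code): anything other than an explicit "yes" aborts the destructive run.

```shell
# Hypothetical stand-in for the confirmation pause in remove-node.yml.
confirm_removal() {
  [ "$1" = "yes" ]
}

confirm_removal "yes" && echo "confirmed"
confirm_removal "y"   || echo "aborted: partial answers are rejected"
confirm_removal ""    || echo "aborted: Enter alone does not confirm"
```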
ansible-playbook -i inventory/mycluster/inventory.ini -v remove-node.yml -e node=k8s-node5
> yes
...
Sunday 08 February 2026 04:58:58 +0900 (0:00:00.252) 0:00:38.802 *******
===============================================================================
reset : Reset | delete some files and directories ----------------------- 7.35s
system_packages : Manage packages --------------------------------------- 6.35s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 2.71s
bootstrap_os : Fetch /etc/os-release ------------------------------------ 1.54s
Gather necessary facts (hardware) --------------------------------------- 1.20s
Confirm Execution ------------------------------------------------------- 1.11s
reset : Reset | stop services ------------------------------------------- 1.07s
reset : Gather active network services ---------------------------------- 1.04s
reset : Reset | stop containerd and etcd services ----------------------- 1.00s
reset : Reset | remove containerd binary files -------------------------- 0.97s
Gather information about installed services ----------------------------- 0.96s
reset : Reset | remove services ----------------------------------------- 0.70s
bootstrap_os : Assign inventory name to unconfigured hostnames (non-CoreOS, non-Flatcar, Suse and ClearLinux, non-Fedora) --- 0.68s
bootstrap_os : Gather host facts to get ansible_distribution_version ansible_distribution_major_version --- 0.67s
reset : Flush iptables -------------------------------------------------- 0.45s
reset : Reset | systemctl daemon-reload --------------------------------- 0.45s
reset : Restart active network services --------------------------------- 0.42s
system_packages : Gather OS information --------------------------------- 0.41s
Gather necessary facts (network) ---------------------------------------- 0.40s
reset : Reset | stop all cri containers --------------------------------- 0.36s
root@admin-lb:~/kubespray# kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-node1 Ready control-plane 4h17m v1.32.9 192.168.10.11 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node2 Ready control-plane 4h16m v1.32.9 192.168.10.12 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node3 Ready control-plane 4h16m v1.32.9 192.168.10.13 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
k8s-node4 Ready <none> 4h16m v1.32.9 192.168.10.14 <none> Rocky Linux 10.0 (Red Quartz) 6.12.0-55.39.1.el10_0.aarch64 containerd://2.1.5
Monitoring setup
NFS provisioner
root@admin-lb:~/kubespray# kubectl create ns nfs-provisioner
helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner/
helm install nfs-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner -n nfs-provisioner \
--set nfs.server=192.168.10.10 \
--set nfs.path=/srv/nfs/share \
--set storageClass.defaultClass=true
namespace/nfs-provisioner created
"nfs-subdir-external-provisioner" has been added to your repositories
NAME: nfs-provisioner
LAST DEPLOYED: Sun Feb 8 05:08:57 2026
NAMESPACE: nfs-provisioner
STATUS: deployed
REVISION: 1
TEST SUITE: None
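Since storageClass.defaultClass=true makes the NFS class the cluster default, a claim that omits storageClassName should bind through this provisioner. A minimal, hypothetical smoke-test claim:

```yaml
# Hypothetical PVC: no storageClassName, so the default (NFS) class is used.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-test
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 1Gi
```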
Prometheus
root@admin-lb:~/kubespray# helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
"prometheus-community" has been added to your repositories
root@admin-lb:~/kubespray# cat <<EOT > monitor-values.yaml
prometheus:
  prometheusSpec:
    scrapeInterval: "20s"
    evaluationInterval: "20s"
    storageSpec:
      volumeClaimTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 10Gi
    additionalScrapeConfigs:
    - job_name: 'haproxy-metrics'
      static_configs:
      - targets:
        - '192.168.10.10:8405'
    externalLabels:
      cluster: "myk8s-cluster"
  service:
    type: NodePort
    nodePort: 30001
grafana:
  defaultDashboardsTimezone: Asia/Seoul
  adminPassword: prom-operator
  service:
    type: NodePort
    nodePort: 30002
alertmanager:
  enabled: false
defaultRules:
  create: false
kubeProxy:
  enabled: false
EOT
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 \
-f monitor-values.yaml --create-namespace --namespace monitoring
Downloading Grafana dashboards
curl -o 12693_rev12.json https://grafana.com/api/dashboards/12693/revisions/12/download
curl -o 15661_rev2.json https://grafana.com/api/dashboards/15661/revisions/2/download
curl -o k8s-system-api-server.json https://raw.githubusercontent.com/dotdc/grafana-dashboards-kubernetes/refs/heads/master/dashboards/k8s-system-api-server.json
sed -i -e 's/${DS_PROMETHEUS}/prometheus/g' 12693_rev12.json
sed -i -e 's/${DS__VICTORIAMETRICS-PROD-ALL}/prometheus/g' 15661_rev2.json
sed -i -e 's/${DS_PROMETHEUS}/prometheus/g' k8s-system-api-server.json
kubectl create configmap my-dashboard --from-file=12693_rev12.json --from-file=15661_rev2.json --from-file=k8s-system-api-server.json -n monitoring
kubectl label configmap my-dashboard grafana_dashboard="1" -n monitoring
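For reference, the two kubectl commands above amount to applying a ConfigMap shaped like this (trimmed sketch — the Grafana sidecar in kube-prometheus-stack only loads ConfigMaps carrying the grafana_dashboard label):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"   # sidecar discovery label
data:
  12693_rev12.json: |
    { "title": "...", "panels": [] }   # full dashboard JSON goes here
```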
root@admin-lb:~# kubectl exec -it -n monitoring deploy/kube-prometheus-stack-grafana -- ls -l /tmp/dashboards
total 976
-rw-r--r-- 1 grafana 472 333790 Feb 7 20:13 12693_rev12.json
-rw-r--r-- 1 grafana 472 198839 Feb 7 20:13 15661_rev2.json
-rw-r--r-- 1 grafana 472 12367 Feb 7 20:11 apiserver.json
-rw-r--r-- 1 grafana 472 15598 Feb 7 20:11 cluster-total.json
-rw-r--r-- 1 grafana 472 8600 Feb 7 20:11 controller-manager.json
-rw-r--r-- 1 grafana 472 8340 Feb 7 20:11 etcd.json
-rw-r--r-- 1 grafana 472 7282 Feb 7 20:11 grafana-overview.json
-rw-r--r-- 1 grafana 472 25210 Feb 7 20:11 k8s-coredns.json
-rw-r--r-- 1 grafana 472 26811 Feb 7 20:11 k8s-resources-cluster.json
-rw-r--r-- 1 grafana 472 9837 Feb 7 20:11 k8s-resources-multicluster.json
-rw-r--r-- 1 grafana 472 27310 Feb 7 20:11 k8s-resources-namespace.json
-rw-r--r-- 1 grafana 472 11208 Feb 7 20:11 k8s-resources-node.json
-rw-r--r-- 1 grafana 472 25881 Feb 7 20:11 k8s-resources-pod.json
-rw-r--r-- 1 grafana 472 24677 Feb 7 20:11 k8s-resources-workload.json
-rw-r--r-- 1 grafana 472 27747 Feb 7 20:11 k8s-resources-workloads-namespace.json
-rw-r--r-- 1 grafana 472 35173 Feb 7 20:13 k8s-system-api-server.json
-rw-r--r-- 1 grafana 472 19009 Feb 7 20:11 kubelet.json
-rw-r--r-- 1 grafana 472 11767 Feb 7 20:11 namespace-by-pod.json
-rw-r--r-- 1 grafana 472 19013 Feb 7 20:11 namespace-by-workload.json
-rw-r--r-- 1 grafana 472 8222 Feb 7 20:11 node-cluster-rsrc-use.json
-rw-r--r-- 1 grafana 472 7833 Feb 7 20:11 node-rsrc-use.json
-rw-r--r-- 1 grafana 472 9750 Feb 7 20:11 nodes-aix.json
-rw-r--r-- 1 grafana 472 10987 Feb 7 20:11 nodes-darwin.json
-rw-r--r-- 1 grafana 472 10339 Feb 7 20:11 nodes.json
-rw-r--r-- 1 grafana 472 5971 Feb 7 20:11 persistentvolumesusage.json
-rw-r--r-- 1 grafana 472 6775 Feb 7 20:11 pod-total.json
-rw-r--r-- 1 grafana 472 11115 Feb 7 20:11 prometheus.json
-rw-r--r-- 1 grafana 472 9731 Feb 7 20:11 scheduler.json
-rw-r--r-- 1 grafana 472 11025 Feb 7 20:11 workload-total.json
Adding etcd metrics collection settings
cat << EOF >> inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml
etcd_metrics: true
etcd_listen_metrics_urls: "http://0.0.0.0:2381"
EOF
ansible-playbook -i inventory/mycluster/inventory.ini -v cluster.yml --tags "etcd" --limit etcd -e kube_version="1.32.9"
cat <<EOF > monitor-add-values.yaml
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: 'etcd'
      metrics_path: /metrics
      static_configs:
      - targets:
        - '192.168.10.11:2381'
        - '192.168.10.12:2381'
        - '192.168.10.13:2381'
EOF
helm get values -n monitoring kube-prometheus-stack
helm upgrade kube-prometheus-stack prometheus-community/kube-prometheus-stack --version 80.13.3 \
--reuse-values -f monitor-add-values.yaml --namespace monitoring
k8s upgrade (1.32.9 -> 1.32.10)
upgrade_cluster.yml
https://github.com/kubernetes-sigs/kubespray/blob/master/playbooks/upgrade_cluster.yml
Common tasks for every playbooks
- name: Common tasks for every playbooks
import_playbook: boilerplate.yml
- Loads the settings common to every Kubespray playbook first.
- Proxies, default variables, and environment settings are initialized here.
- Guarantees that every subsequent play runs against the same baseline environment.
- Prevents exceptions caused by environment drift during the upgrade.
Gather facts
- name: Gather facts
import_playbook: internal_facts.yml
- Collects the internal facts needed to determine node roles (control-plane, worker, etcd, calico_rr, etc.).
- The facts gathered here are the data that lets group targeting such as hosts: kube_control_plane, kube_node, etcd work correctly.
Download images to ansible host cache via first kube_control_plane node
- hosts: kube_control_plane[0]
roles:
- kubespray_defaults
- kubernetes/preinstall
- download
- Downloads images/binaries in advance using only the first control-plane node.
- Avoids duplicate downloads via the download_run_once condition.
- Prevents failures caused by network issues mid-upgrade.
- Subsequent nodes use the local cache, which improves speed and stability.
Prepare nodes for upgrade
- hosts: k8s_cluster:etcd:calico_rr
roles:
- kubespray_defaults
- kubernetes/preinstall
- download
- Prepares every node in the cluster for the upgrade.
- This is the OS settings, required packages, and image preparation stage.
- Eliminates failures caused by environment differences before the actual upgrade.
Upgrade container engine on non-cluster nodes
- hosts: etcd:calico_rr:!k8s_cluster
serial: 20%
roles:
- container-engine
- Upgrades the container runtime first on nodes that do not run Kubernetes workloads directly.
- Separates them from the core cluster nodes to minimize the impact radius.
- Uses serial to process only a fraction at a time, lowering risk.
Install etcd
- name: Install etcd
import_playbook: install_etcd.yml
- Upgrades or reconfigures the etcd cluster.
- Handled before, and independently of, the other components.
Handle upgrades to control plane components first
- hosts: kube_control_plane
serial: 1
roles:
- upgrade/pre-upgrade
- upgrade/system-upgrade
- kubernetes/control-plane
- kubernetes/client
- Upgrades the control-plane nodes one at a time (serial: 1).
- Includes kube-apiserver, controller-manager, and scheduler.
- Upgrading one node at a time avoids breaking API compatibility; touching multiple control-plane nodes simultaneously could break cluster consistency.
Upgrade calico and external cloud provider
- hosts: kube_control_plane:calico_rr:kube_node
roles:
- kubernetes-apps/external_cloud_controller
- network_plugin
- Applies the network plugin (Calico) and the cloud controller across all nodes.
- Runs after the control-plane upgrade to minimize the chance of a network outage.
Finally handle worker upgrades
- hosts: kube_node:calico_rr:!kube_control_plane
serial: 20%
roles:
- upgrade/pre-upgrade
- kubernetes/node
- kubernetes/kubeadm
- Upgrades the worker nodes sequentially in batches.
- Never upgrades everything at once, so service availability is preserved.
- The control plane is already in a stable state at this point.
Patch Kubernetes for Windows
- hosts: kube_control_plane[0]
roles:
- win_nodes/kubernetes_patch
- Windows nodes are accounted for as well — neat
Install Calico Route Reflector
- hosts: calico_rr
roles:
- network_plugin/calico/rr
- Configures Route Reflectors for BGP-based network scalability.
- Ensures network performance and stability in large clusters.
Install Kubernetes apps
- hosts: kube_control_plane
roles:
- kubernetes-apps/ingress_controller
- kubernetes-apps
- Installs/upgrades supplementary cluster applications such as Ingress and CSI.
Apply resolv.conf changes now that cluster DNS is up
- hosts: k8s_cluster
roles:
- kubernetes/preinstall
- Applies the final DNS settings only after CoreDNS is running properly.
- Applying DNS settings too early can cause name resolution failures during the upgrade.
Running the playbook
ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "kube_control_plane:etcd" | tee kubespray_upgrade.log


You can watch the upgrade proceed sequentially, one node at a time.
PLAY RECAP *********************************************************************
k8s-node1 : ok=533 changed=49 unreachable=0 failed=0 skipped=1091 rescued=0 ignored=0
k8s-node2 : ok=483 changed=35 unreachable=0 failed=0 skipped=994 rescued=0 ignored=0
k8s-node3 : ok=489 changed=35 unreachable=0 failed=0 skipped=1026 rescued=0 ignored=0
Sunday 08 February 2026 05:23:39 +0900 (0:00:00.041) 0:11:35.319 *******
===============================================================================
kubernetes/control-plane : Kubeadm | Upgrade first control plane node to 1.32.10 - 105.01s
kubernetes/control-plane : Kubeadm | Upgrade other control plane nodes to 1.32.10 -- 92.19s
kubernetes/control-plane : Kubeadm | Upgrade other control plane nodes to 1.32.10 -- 81.87s
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence -- 15.86s
kubernetes/control-plane : Control plane | wait for the apiserver to be running -- 10.46s
download : Download_container | Download image if required -------------- 9.82s
upgrade/pre-upgrade : Drain node ---------------------------------------- 6.83s
download : Download_file | Download item -------------------------------- 6.78s
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes --- 6.59s
download : Download_container | Download image if required -------------- 5.67s
system_packages : Manage packages --------------------------------------- 5.54s
download : Download_container | Download image if required -------------- 5.50s
download : Download_file | Download item -------------------------------- 5.29s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 5.20s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 5.13s
download : Download_container | Download image if required -------------- 5.00s
kubernetes/control-plane : Backup old certs and keys -------------------- 4.85s
container-engine/containerd : Containerd | Unpack containerd archive ---- 4.83s
kubernetes/control-plane : Kubeadm | Check apiserver.crt SAN IPs -------- 4.17s
etcdctl_etcdutl : Extract_file | Unpacking archive ---------------------- 4.06s
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 4h45m v1.32.10
k8s-node2 Ready control-plane 4h45m v1.32.10
k8s-node3 Ready control-plane 4h45m v1.32.10
k8s-node4 Ready <none> 4h44m v1.32.9
k8s-node5 Ready <none> 26m v1.32.9
The upgrade completed in about 11 minutes.
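Since the run was tee'd into kubespray_upgrade.log, that total can be pulled back out of the recap footer mechanically; the footer carries two durations (last task in parentheses, cumulative total last). A sketch with the sample line inlined:

```shell
# Extract the cumulative elapsed time (the last H:MM:SS.mmm on the footer line).
# In practice: grep the same pattern out of kubespray_upgrade.log.
line='Sunday 08 February 2026  05:23:39 +0900 (0:00:00.041)       0:11:35.319 *******'
echo "$line" | grep -oE '[0-9]+:[0-9]{2}:[0-9]{2}\.[0-9]+' | tail -1
# prints: 0:11:35.319
```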
root@admin-lb:~# ssh k8s-node1 crictl images
IMAGE TAG IMAGE ID SIZE
docker.io/flannel/flannel-cni-plugin v1.7.1-flannel1 e5bf9679ea8c3 5.14MB
docker.io/flannel/flannel v0.27.3 cadcae92e6360 33.1MB
quay.io/prometheus/node-exporter v1.10.2 6b5bc413b280c 12.1MB
registry.k8s.io/coredns/coredns v1.11.3 2f6c962e7b831 16.9MB
registry.k8s.io/kube-apiserver v1.32.10 03aec5fd5841e 26.4MB
registry.k8s.io/kube-apiserver v1.32.9 02ea53851f07d 26.4MB
registry.k8s.io/kube-controller-manager v1.32.10 66490a6490dde 24.2MB
registry.k8s.io/kube-controller-manager v1.32.9 f0bcbad5082c9 24.1MB
registry.k8s.io/kube-proxy v1.32.10 8b57c1f8bd2dd 27.6MB
registry.k8s.io/kube-proxy v1.32.9 72b57ec14d31e 27.4MB
registry.k8s.io/kube-scheduler v1.32.10 fcf368a1abd0b 19.2MB
registry.k8s.io/kube-scheduler v1.32.9 1d625baf81b59 19.1MB
registry.k8s.io/metrics-server/metrics-server v0.8.0 bc6c1e09a843d 20.6MB
registry.k8s.io/pause 3.10 afb61768ce381 268kB
Verifying etcd
root@admin-lb:~# ssh k8s-node1 systemctl status etcd --no-pager | grep active
Active: active (running) since Sun 2026-02-08 04:40:35 KST; 47min ago
root@admin-lb:~# ssh k8s-node1 etcdctl.sh member list -w table
+------------------+---------+-------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------+----------------------------+----------------------------+------------+
| 8b0ca30665374b0 | started | etcd3 | https://192.168.10.13:2380 | https://192.168.10.13:2379 | false |
| 2106626b12a4099f | started | etcd2 | https://192.168.10.12:2380 | https://192.168.10.12:2379 | false |
| c6702130d82d740f | started | etcd1 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false |
+------------------+---------+-------+----------------------------+----------------------------+------------+
root@admin-lb:~# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i etcdctl.sh endpoint status -w table; echo; done
>> k8s-node1 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | c6702130d82d740f | 3.5.25 | 23 MB | false | false | 6 | 55240 | 55240 | |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
>> k8s-node2 <<
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 2106626b12a4099f | 3.5.25 | 22 MB | true | false | 6 | 55243 | 55243 | |
+----------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
>> k8s-node3 <<
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 127.0.0.1:2379 | 8b0ca30665374b0 | 3.5.25 | 22 MB | false | false | 6 | 55243 | 55243 | |
+----------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
root@admin-lb:~# for i in {1..3}; do echo ">> k8s-node$i <<"; ssh k8s-node$i tree /var/backups; echo; done  # check etcd backups
>> k8s-node1 <<
/var/backups
โโโ etcd-2026-02-08_00:41:43
โโโ member
โ โโโ snap
โ โ โโโ db
โ โโโ wal
โ โโโ 0000000000000000-0000000000000000.wal
โโโ snapshot.db
5 directories, 3 files
>> k8s-node2 <<
/var/backups
โโโ etcd-2026-02-08_00:41:43
โโโ member
โ โโโ snap
โ โ โโโ db
โ โโโ wal
โ โโโ 0000000000000000-0000000000000000.wal
โโโ snapshot.db
5 directories, 3 files
>> k8s-node3 <<
/var/backups
โโโ etcd-2026-02-08_00:41:43
โโโ member
โ โโโ snap
โ โ โโโ db
โ โโโ wal
โ โโโ 0000000000000000-0000000000000000.wal
โโโ snapshot.db
5 directories, 3 files
This confirms that etcd was unaffected by the upgrade.
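If a rollback were ever needed, the snapshot.db written above could be fed to `etcdutl snapshot restore` (installed by Kubespray's etcdctl_etcdutl role). This is a sketch only, not run here: the backup path matches the tree output, the member names/URLs come from the earlier member list, and the etcd service would have to be stopped before swapping the restored data dir into place.

```
# On k8s-node1, with the etcd service stopped (sketch; repeat per member with its own name/peer URL):
etcdutl snapshot restore /var/backups/etcd-2026-02-08_00:41:43/snapshot.db \
  --name etcd1 \
  --initial-cluster etcd1=https://192.168.10.11:2380,etcd2=https://192.168.10.12:2380,etcd3=https://192.168.10.13:2380 \
  --initial-advertise-peer-urls https://192.168.10.11:2380 \
  --data-dir /var/lib/etcd-restored
```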
Worker node upgrade
root@admin-lb:~/kubespray# kubectl get pod -A -owide | grep node4
default webpod-697b545f57-m95gv 1/1 Running 0 81m 10.233.67.6 k8s-node4 <none> <none>
default webpod-697b545f57-prbjp 1/1 Running 0 81m 10.233.67.5 k8s-node4 <none> <none>
kube-system coredns-664b99d7c7-5drtx 1/1 Running 0 4h47m 10.233.67.2 k8s-node4 <none> <none>
kube-system kube-flannel-ds-arm64-wkwrg 1/1 Running 0 4h47m 192.168.10.14 k8s-node4 <none> <none>
kube-system kube-ops-view-8484bdc5df-8xf2c 1/1 Running 0 109m 10.233.67.4 k8s-node4 <none> <none>
kube-system kube-proxy-qw84v 1/1 Running 0 12m 192.168.10.14 k8s-node4 <none> <none>
kube-system metrics-server-65fdf69dcb-jnvbd 1/1 Running 0 4h47m 10.233.67.3 k8s-node4 <none> <none>
kube-system nginx-proxy-k8s-node4 1/1 Running 1 4h47m 192.168.10.14 k8s-node4 <none> <none>
monitoring kube-prometheus-stack-prometheus-node-exporter-8rcnt 1/1 Running 0 19m 192.168.10.14 k8s-node4 <none> <none>
monitoring prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 19m 10.233.67.8 k8s-node4 <none> <none>
root@admin-lb:~/kubespray# kubectl get pod -A -owide | grep node5
kube-system kube-flannel-ds-arm64-4plpp 1/1 Running 1 (28m ago) 29m 192.168.10.15 k8s-node5 <none> <none>
kube-system kube-proxy-5lzz6 1/1 Running 0 12m 192.168.10.15 k8s-node5 <none> <none>
kube-system nginx-proxy-k8s-node5 1/1 Running 0 29m 192.168.10.15 k8s-node5 <none> <none>
monitoring kube-prometheus-stack-grafana-5cb7c586f9-bnfd2 3/3 Running 0 19m 10.233.69.6 k8s-node5 <none> <none>
monitoring kube-prometheus-stack-kube-state-metrics-7846957b5b-fqbf2 1/1 Running 0 19m 10.233.69.5 k8s-node5 <none> <none>
monitoring kube-prometheus-stack-operator-584f446c98-tjxfj 1/1 Running 0 19m 10.233.69.4 k8s-node5 <none> <none>
monitoring kube-prometheus-stack-prometheus-node-exporter-pkxgl 1/1 Running 0 19m 192.168.10.15 k8s-node5 <none> <none>
nfs-provisioner nfs-provisioner-nfs-subdir-external-provisioner-b549b9dff-tbddr 1/1 Running 0 21m 10.233.69.2 k8s-node5 <none> <none>
root@admin-lb:~/kubespray# ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "k8s-node5"
...
PLAY RECAP *********************************************************************
k8s-node5 : ok=369 changed=20 unreachable=0 failed=0 skipped=608 rescued=0 ignored=0
Sunday 08 February 2026 05:33:05 +0900 (0:00:00.049) 0:02:15.924 *******
===============================================================================
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence --- 5.23s
system_packages : Manage packages --------------------------------------- 5.04s
container-engine/containerd : Containerd | Unpack containerd archive ---- 4.89s
container-engine/containerd : Download_file | Download item ------------- 4.71s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 3.37s
upgrade/pre-upgrade : Drain node ---------------------------------------- 3.37s
container-engine/nerdctl : Download_file | Download item ---------------- 3.21s
container-engine/containerd : Containerd | Copy containerd config file --- 2.21s
kubernetes/kubeadm : Restart all kube-proxy pods to ensure that they load the new configmap --- 2.20s
container-engine/runc : Download_file | Download item ------------------- 2.17s
download : Download_file | Download item -------------------------------- 2.09s
download : Download_file | Download item -------------------------------- 2.05s
container-engine/crictl : Download_file | Download item ----------------- 1.90s
container-engine/crictl : Extract_file | Unpacking archive -------------- 1.88s
container-engine/containerd : Download_file | Create dest directory on node --- 1.73s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 1.62s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 1.53s
container-engine/containerd : Extract_file | Unpacking archive ---------- 1.50s
kubernetes/node : Install | Copy kubeadm binary from download dir ------- 1.27s
container-engine/crictl : Download_file | Create dest directory on node --- 1.25s
root@admin-lb:~/kubespray# ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.32.10" --limit "k8s-node4"
...
PLAY RECAP *********************************************************************
k8s-node4 : ok=369 changed=19 unreachable=0 failed=0 skipped=608 rescued=0 ignored=0
Sunday 08 February 2026 05:36:05 +0900 (0:00:00.015) 0:02:11.744 *******
===============================================================================
upgrade/pre-upgrade : Drain node --------------------------------------- 12.62s
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence --- 5.34s
system_packages : Manage packages --------------------------------------- 5.26s
download : Download_file | Download item -------------------------------- 3.45s
download : Download_file | Download item -------------------------------- 3.23s
container-engine/nerdctl : Download_file | Download item ---------------- 2.87s
kubernetes/preinstall : NetworkManager | Prevent NetworkManager from managing K8S interfaces (kube-ipvs0/nodelocaldns) --- 2.78s
container-engine/runc : Download_file | Download item ------------------- 2.65s
container-engine/validate-container-engine : Populate service facts ----- 2.56s
container-engine/crictl : Download_file | Download item ----------------- 2.50s
container-engine/containerd : Download_file | Download item ------------- 2.42s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 2.00s
container-engine/crictl : Extract_file | Unpacking archive -------------- 1.91s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 1.91s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 1.85s
container-engine/containerd : Containerd | Unpack containerd archive ---- 1.64s
container-engine/crictl : Download_file | Create dest directory on node --- 1.27s
download : Download | Download files / images --------------------------- 1.17s
kubernetes/preinstall : Stop if access_ip is not pingable --------------- 1.15s
container-engine/runc : Copy runc binary from download dir -------------- 1.05s
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 4h51m v1.32.10
k8s-node2 Ready control-plane 4h51m v1.32.10
k8s-node3 Ready control-plane 4h51m v1.32.10
k8s-node4 Ready <none> 4h50m v1.32.9
k8s-node5 Ready <none> 32m v1.32.10
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 4h54m v1.32.10
k8s-node2 Ready control-plane 4h54m v1.32.10
k8s-node3 Ready control-plane 4h54m v1.32.10
k8s-node4 Ready <none> 4h53m v1.32.10
k8s-node5 Ready <none> 35m v1.32.10
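Rather than eyeballing the VERSION column after each pass, stragglers can be flagged mechanically. A sketch that filters `kubectl get nodes --no-headers` output for nodes still off the target version (sample lines are inlined here from the intermediate state above; in practice, pipe the live kubectl output in):

```shell
# Print every node whose kubelet version ($NF, the last column) differs from the target.
awk -v want=v1.32.10 '$NF != want {print $1, $NF}' <<'EOF'
k8s-node1   Ready   control-plane   4h51m   v1.32.10
k8s-node4   Ready   <none>          4h50m   v1.32.9
EOF
# prints: k8s-node4 v1.32.9
```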
K8s upgrade (1.32.10 -> 1.33.7)
Control plane
ANSIBLE_FORCE_COLOR=true ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.33.7" --limit "kube_control_plane:etcd" | tee kubespray_upgrade-2.log
PLAY RECAP *********************************************************************
k8s-node1 : ok=536 changed=50 unreachable=0 failed=0 skipped=1126 rescued=0 ignored=0
k8s-node2 : ok=482 changed=36 unreachable=0 failed=0 skipped=995 rescued=0 ignored=0
k8s-node3 : ok=483 changed=36 unreachable=0 failed=0 skipped=994 rescued=0 ignored=1
Sunday 08 February 2026 05:48:01 +0900 (0:00:00.031) 0:11:02.537 *******
===============================================================================
kubernetes/control-plane : Kubeadm | Upgrade first control plane node to 1.33.7 -- 81.21s
kubernetes/control-plane : Kubeadm | Upgrade other control plane nodes to 1.33.7 -- 77.53s
kubernetes/control-plane : Kubeadm | Upgrade other control plane nodes to 1.33.7 -- 72.58s
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence -- 15.65s
upgrade/pre-upgrade : Drain node --------------------------------------- 14.99s
download : Download_file | Download item ------------------------------- 11.00s
download : Download_container | Download image if required ------------- 10.31s
download : Download_container | Download image if required -------------- 9.18s
etcd : Gen_certs | Write etcd member/admin and kube_control_plane client certs to other etcd nodes --- 7.03s
kubernetes/control-plane : Control plane | wait for the apiserver to be running --- 6.79s
download : Download_file | Download item -------------------------------- 6.16s
etcdctl_etcdutl : Extract_file | Unpacking archive ---------------------- 5.95s
system_packages : Manage packages --------------------------------------- 5.79s
download : Download_file | Download item -------------------------------- 5.63s
download : Download_container | Download image if required -------------- 5.39s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 4.85s
download : Download_container | Download image if required -------------- 4.76s
container-engine/containerd : Containerd | Unpack containerd archive ---- 4.75s
download : Download_container | Download image if required -------------- 4.72s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 4.51s
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 5h6m v1.33.7
k8s-node2 Ready control-plane 5h5m v1.33.7
k8s-node3 Ready control-plane 5h5m v1.33.7
k8s-node4 Ready <none> 5h5m v1.32.10
k8s-node5 Ready <none> 47m v1.32.10
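Throughout these runs the target version came from `-e kube_version=...`, i.e. the extra-vars level at the very top of the Ansible precedence chain covered earlier. To make the target version persistent across runs, it could instead be pinned at the inventory level, which extra vars would still override when needed. A sketch, with the path following Kubespray's sample inventory layout:

```yaml
# inventory/mycluster/group_vars/k8s_cluster/k8s-cluster.yml (sketch)
kube_version: "1.33.7"
```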
Worker nodes
ansible-playbook -i inventory/mycluster/inventory.ini -v upgrade-cluster.yml -e kube_version="1.33.7" --limit "kube_node"
PLAY RECAP *********************************************************************
k8s-node4 : ok=370 changed=23 unreachable=0 failed=0 skipped=609 rescued=0 ignored=0
k8s-node5 : ok=348 changed=20 unreachable=0 failed=0 skipped=580 rescued=0 ignored=0
Sunday 08 February 2026 05:52:08 +0900 (0:00:00.039) 0:03:26.512 *******
===============================================================================
upgrade/pre-upgrade : Drain node --------------------------------------- 19.77s
network_plugin/flannel : Flannel | Wait for flannel subnet.env file presence -- 10.67s
download : Download_file | Download item -------------------------------- 5.86s
system_packages : Manage packages --------------------------------------- 5.02s
container-engine/validate-container-engine : Populate service facts ----- 4.69s
download : Download_file | Download item -------------------------------- 3.68s
container-engine/containerd : Containerd | Unpack containerd archive ---- 3.62s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 3.38s
network_plugin/cni : CNI | Copy cni plugins ----------------------------- 2.91s
container-engine/nerdctl : Download_file | Download item ---------------- 2.30s
kubernetes/node : Install | Copy kubelet binary from download dir ------- 2.27s
kubernetes/preinstall : NetworkManager | Prevent NetworkManager from managing K8S interfaces (kube-ipvs0/nodelocaldns) --- 2.25s
container-engine/nerdctl : Extract_file | Unpacking archive ------------- 1.97s
container-engine/containerd : Download_file | Download item ------------- 1.89s
container-engine/runc : Download_file | Download item ------------------- 1.87s
container-engine/crictl : Download_file | Download item ----------------- 1.85s
container-engine/crictl : Extract_file | Unpacking archive -------------- 1.84s
kubernetes/node : Install | Copy kubeadm binary from download dir ------- 1.75s
container-engine/runc : Download_file | Download item ------------------- 1.69s
container-engine/crictl : Extract_file | Unpacking archive -------------- 1.68s
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready control-plane 5h10m v1.33.7
k8s-node2 Ready control-plane 5h10m v1.33.7
k8s-node3 Ready control-plane 5h10m v1.33.7
k8s-node4 Ready <none> 5h10m v1.33.7
k8s-node5 Ready <none> 51m v1.33.7