Let's find a specific pod's PID on its node and check the values assigned to its cgroup.
Checking a pod's PID and its cgroup settings
1. Connect to the node where the pod is running
2. List the containers and find the container ID by pod name
# crictl ps
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
68aa4891cf93f 05455a08881ea About a minute ago Running nsenter 0 962745781cdb8 nsenter-a1brmb
acbde9dbbb13d e4720093a3c13 4 days ago Running simple-http 0 1bfcc1fdab823 nginx-deployment-test-1-6fbcdc88c9-8mxr9
452eb6a07d711 6a3226f0df713 4 days ago Running liveness-probe 0 46d1a323fb500 ebs-csi-node-zmdgs
e69ee3ec6602e 52b705756054c 4 days ago Running node-driver-registrar 0 46d1a323fb500 ebs-csi-node-zmdgs
94142eaaa7592 e4720093a3c13 4 days ago Running simple-http 0 436e3dc0f27e6 nginx-deployment-test-1-6fbcdc88c9-6f89c
...
# crictl ps | grep [pod name]
94142eaaa7592 e4720093a3c13 4 days ago Running simple-http 0 436e3dc0f27e6 nginx-deployment-test-1-6fbcdc88c9-6f89c
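As a shortcut, crictl can do the filtering itself. A small convenience sketch, assuming the container name simple-http and the pod name from the output above; flag behavior may differ slightly between crictl versions:
# Print only the ID of the container named "simple-http"
# crictl ps --name simple-http -q
# Or resolve the pod sandbox ID by pod name first, then list its containers
# crictl ps -q --pod "$(crictl pods --name nginx-deployment-test-1-6fbcdc88c9-6f89c -q)"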
3. Check the container's PID
# crictl inspect 94142eaaa7592 | grep -i pid
WARN[0000] runtime connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
"pid": 5171,
"pid": 1
"type": "pid"
4. Check the container's resources
# crictl inspect 94142eaaa7592
...
"resources": {
"linux": {
"cpuPeriod": "100000",
"cpuQuota": "100000",
"cpuShares": "1024",
"cpusetCpus": "",
"cpusetMems": "",
"hugepageLimits": [],
"memoryLimitInBytes": "2147483648",
"memorySwapLimitInBytes": "2147483648",
"oomScoreAdj": "-997",
"unified": {}
},
"windows": null
}
...
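The resources block can also be pulled out with jq. Because the exact JSON path varies by crictl and runtime version, the sketch below simply prints every "resources" object it finds anywhere in the output:
# crictl inspect 94142eaaa7592 | jq '.. | .resources? // empty'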
Note) The pod's resource spec
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: "1"
memory: 2Gi
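For comparison, the same requests/limits can be read from the API server without logging in to the node (pod name and container index taken from the example above):
# kubectl get pod nginx-deployment-test-1-6fbcdc88c9-6f89c -o jsonpath='{.spec.containers[0].resources}'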
5. Check the cgroup
# cd /proc/5171
# cat /proc/5171/cgroup
11:net_cls,net_prio:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
10:devices:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
9:memory:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
8:freezer:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
7:perf_event:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
6:hugetlb:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
5:blkio:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
4:cpuset:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
3:pids:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
2:cpu,cpuacct:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
1:name=systemd:/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
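The long slice path shown above does not have to be typed by hand; it can be derived from /proc/<pid>/cgroup. A cgroup v1 sketch (on cgroup v2 there is a single unified 0:: entry and a different file layout):
# MEM_CG=$(grep ':memory:' /proc/5171/cgroup | cut -d: -f3)
# cd /sys/fs/cgroup/memory${MEM_CG}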
# Check the memory limit
# cd /sys/fs/cgroup/memory/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
# ls | grep limit
# cat memory.limit_in_bytes
2147483648
# cat memory.soft_limit_in_bytes
9223372036854771712
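The soft limit above is not derived from the pod spec; it is the kernel's page-aligned maximum, i.e. the "no soft limit set" default on cgroup v1:
# printf '%x\n' 9223372036854771712
7ffffffffffff000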
# Check the CPU limit
# cd /sys/fs/cgroup/cpu/kubepods.slice/kubepods-podcf56787e_9c22_4764_90c1_d5e0e24e2e61.slice/cri-containerd-94142eaaa7592a451b8117caf51e74ddb9d4f89e4d8e1598c20a8ef372dc6842.scope
# cat cpu.cfs_quota_us
100000
# cat cpu.cfs_period_us
100000
# cat cpu.shares
1024
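As a quick sanity check, the CPU limit in cores is cfs_quota_us divided by cfs_period_us; here 100000 / 100000 = 1 core, which matches limits.cpu: "1" (bc is assumed to be available):
# echo "scale=2; $(cat cpu.cfs_quota_us) / $(cat cpu.cfs_period_us)" | bc
1.00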
Comparing cgroup values for different pod requests/limits
Let's see how the resources section set on a pod is actually reflected in the cgroup.
Case 1)
Settings
resources:
limits:
cpu: "1"
memory: 2Gi
requests:
cpu: "1"
memory: 2Gi
crictl inspect output
"resources": {
"linux": {
"cpuPeriod": "100000",
"cpuQuota": "100000",
"cpuShares": "1024",
"cpusetCpus": "",
"cpusetMems": "",
"hugepageLimits": [],
"memoryLimitInBytes": "2147483648",
"memorySwapLimitInBytes": "2147483648",
"oomScoreAdj": "-997",
"unified": {}
},
"windows": null
}
Case 2)
Settings
resources:
limits:
cpu: 800m
memory: 1Gi
requests:
cpu: 400m
memory: 512Mi
crictl inspect output
"resources": {
"linux": {
"cpuPeriod": "100000",
"cpuQuota": "80000",
"cpuShares": "409",
"cpusetCpus": "",
"cpusetMems": "",
"hugepageLimits": [],
"memoryLimitInBytes": "1073741824",
"memorySwapLimitInBytes": "1073741824",
"oomScoreAdj": "968",
"unified": {}
},
cgroup values
# cat memory.soft_limit_in_bytes
9223372036854771712
# cat memory.limit_in_bytes
1073741824
# cat cpu.cfs_quota_us
80000
# cat cpu.cfs_period_us
100000
# cat cpu.shares
409
Comparing the two cases, we can see that a pod's memory request is not written to the cgroup as a separate setting (memory.soft_limit_in_bytes stays at its default).
For CPU, the request value is reflected in cpu.shares.
- Formula: requests.cpu / 1000 * 1024
(1 core = 1000m)
e.g.) requests.cpu: 400m -> cpu.shares: 400 / 1000 * 1024 = 409.6, which is truncated to 409
The memory limit, on the other hand, is applied as the hard limit (memory.limit_in_bytes).
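The numbers above can be reproduced with plain shell arithmetic (multiplying before dividing so that integer division truncates exactly as the kernel stores the value):
# requests.cpu 400m -> cpu.shares
# echo $(( 400 * 1024 / 1000 ))
409
# limits.memory 1Gi -> memoryLimitInBytes
# echo $(( 1024 * 1024 * 1024 ))
1073741824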
References)
- Installing crictl:
- Kubernetes Container Resource Requirements — Part 2: CPU:
- Kubernetes에서의 cpu requests, cpu limits는 어떻게 적용될까: