
Amazon EKS: Traffic Flow per Configuration with the VPC CNI + AWS LB Controller

백곰곰 2024. 11. 2. 22:13

I'm taking part in Gasida's Kubernetes Advanced Networking Study and am writing up the topics covered in each session.
Week 9 was about the VPC CNI + AWS LB Controller.

In this post, I'll look at the traffic flow when accessing a Service while using the VPC CNI together with the AWS LB Controller.

 

Lab Environment

Creating the EKS cluster

bash
# Download the YAML template
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/kans/eks-oneclick.yaml

# Deploy the CloudFormation stack
# aws cloudformation deploy --template-file eks-oneclick.yaml --stack-name myeks --parameter-overrides KeyName=<my SSH key name> SgIngressSshCidr=<my home public IP address>/32 MyIamUserAccessKeyID=<IAM user access key> MyIamUserSecretAccessKey=<IAM user secret key> ClusterBaseName='<EKS cluster name>' --region ap-northeast-2
# Example:
aws cloudformation deploy --template-file eks-oneclick.yaml --stack-name myeks --parameter-overrides KeyName=my-key SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 MyIamUserAccessKeyID=AKIA5... MyIamUserSecretAccessKey='CVNa2...' ClusterBaseName=myeks --region ap-northeast-2

# SSH into the bastion host
ssh -i ~/.ssh/my-key.pem ec2-user@$(aws cloudformation describe-stacks --stack-name myeks --query 'Stacks[*].Outputs[0].OutputValue' --output text)

# Follow the cloud-init logs
tail -f /var/log/cloud-init-output.log

# After cloud-init completes, follow the eksctl logs
tail -f /root/create-eks.log

# Switch to the default namespace
kubectl ns default

Installing the AWS LB Controller

bash
helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTER_NAME

Verification

bash
(test@myeks:N/A) [root@myeks-bastion ~]# k get crd
NAME                                         CREATED AT
cninodes.vpcresources.k8s.aws                2024-11-02T06:33:35Z
eniconfigs.crd.k8s.amazonaws.com             2024-11-02T06:35:58Z
ingressclassparams.elbv2.k8s.aws             2024-11-02T06:46:40Z
policyendpoints.networking.k8s.aws           2024-11-02T06:33:35Z
securitygrouppolicies.vpcresources.k8s.aws   2024-11-02T06:33:35Z
targetgroupbindings.elbv2.k8s.aws            2024-11-02T06:46:40Z

(test@myeks:N/A) [root@myeks-bastion ~]# k get po -A | grep load
kube-system   aws-load-balancer-controller-7dd9db8f7d-kcq9w   1/1   Running   0   114s
kube-system   aws-load-balancer-controller-7dd9db8f7d-r4mgc   1/1   Running   0   114s

Service - LoadBalancer

Creating the NLB + Deployment

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-echo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: deploy-websrv
  template:
    metadata:
      labels:
        app: deploy-websrv
    spec:
      terminationGracePeriodSeconds: 0
      containers:
      - name: akos-websrv
        image: k8s.gcr.io/echoserver:1.5
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: svc-nlb-ip-type
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8080"
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
spec:
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: LoadBalancer
  loadBalancerClass: service.k8s.aws/nlb
  selector:
    app: deploy-websrv

Checking the resources

bash
(test@myeks:N/A) [root@myeks-bastion ~]# kubectl get svc,ep,targetgroupbindings
NAME                      TYPE           CLUSTER-IP     EXTERNAL-IP                                                                          PORT(S)        AGE
service/kubernetes        ClusterIP      10.100.0.1     <none>                                                                               443/TCP        18m
service/svc-nlb-ip-type   LoadBalancer   10.100.143.3   k8s-default-svcnlbip-abf77b6f4f-84ddc7bd7ad79e40.elb.ap-northeast-2.amazonaws.com   80:32131/TCP   64s

NAME                        ENDPOINTS                             AGE
endpoints/kubernetes        192.168.2.238:443,192.168.3.161:443   18m
endpoints/svc-nlb-ip-type   192.168.1.40:8080,192.168.3.17:8080   64s

NAME                                                               SERVICE-NAME      SERVICE-PORT   TARGET-TYPE   AGE
targetgroupbinding.elbv2.k8s.aws/k8s-default-svcnlbip-26a90e8aa8   svc-nlb-ip-type   80             ip            60s

(test@myeks:N/A) [root@myeks-bastion ~]# kubectl get deploy,pod
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/deploy-echo   2/2     2            2           69s

NAME                               READY   STATUS    RESTARTS   AGE
pod/deploy-echo-857b6cfb88-9fh2r   1/1     Running   0          69s
pod/deploy-echo-857b6cfb88-xbpw7   1/1     Running   0          69s

Let's take a look at the targetgroupbinding that gets created automatically here.

In its spec you can see the associated Service, the TargetGroup ARN, and so on.

bash
(test@myeks:N/A) [root@myeks-bastion ~]# kubectl get targetgroupbindings -o json | jq
{
  "apiVersion": "v1",
  "items": [
    {
      "apiVersion": "elbv2.k8s.aws/v1beta1",
      "kind": "TargetGroupBinding",
      "metadata": {
        "annotations": {
          "elbv2.k8s.aws/checkpoint": "Chsuw6QUPo4Fq9rPZndP8nVZS3_wcI-voDX6afk2L3M/moDelbtAk26FWe2YZF1WgXLdqFAMtAWts8HQjXb6I4Q",
          "elbv2.k8s.aws/checkpoint-timestamp": "1730530233"
        },
        "creationTimestamp": "2024-11-02T06:50:29Z",
        "finalizers": [
          "elbv2.k8s.aws/resources"
        ],
        "generation": 1,
        "labels": {
          "service.k8s.aws/stack-name": "svc-nlb-ip-type",
          "service.k8s.aws/stack-namespace": "default"
        },
        "name": "k8s-default-svcnlbip-26a90e8aa8",
        "namespace": "default",
        "resourceVersion": "4972",
        "uid": "4b08b283-4a0a-4891-92bd-f83eac11fbb0"
      },
      "spec": {
        "ipAddressType": "ipv4",
        "networking": {
          "ingress": [
            {
              "from": [
                {
                  "securityGroup": {
                    "groupID": "sg-00c8de0f83dbca3b8"
                  }
                }
              ],
              "ports": [
                {
                  "port": 8080,
                  "protocol": "TCP"
                }
              ]
            }
          ]
        },
        "serviceRef": {
          "name": "svc-nlb-ip-type",
          "port": 80
        },
        "targetGroupARN": "arn:aws:elasticloadbalancing:ap-northeast-2:111111111111:targetgroup/k8s-default-svcnlbip-26a90e8aa8/c9dc4d30b1da2d93",
        "targetType": "ip",
        "vpcID": "vpc-0c0d603ed1c5134fa"
      },
      "status": {
        "observedGeneration": 1
      }
    }
  ],
  "kind": "List",
  "metadata": {
    "resourceVersion": ""
  }
}

This is the same TargetGroup that is actually attached to the ELB.

In other words, when using an NLB or ALB, pods are registered in the TargetGroup in the same way EC2 instances would be.

For reference, it's also possible to create a TargetGroupBinding yourself and manually attach the resulting TargetGroup to an ELB.
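
As a minimal sketch, a manually created TargetGroupBinding could look like the following; the name and the target group ARN are placeholders, and the spec fields simply mirror the auto-generated object above:

yaml
# Hypothetical example: bind an existing Service to a pre-created target group.
# metadata.name and targetGroupARN are placeholders for this sketch.
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: my-manual-tgb
  namespace: default
spec:
  serviceRef:
    name: svc-nlb-ip-type   # existing Service to register as targets
    port: 80
  targetGroupARN: arn:aws:elasticloadbalancing:ap-northeast-2:111111111111:targetgroup/my-manual-tg/0123456789abcdef
  targetType: ip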

 

To keep the traffic flow easy to follow, I'll scale the Deployment down to a single pod.

bash
(test@myeks:N/A) [root@myeks-bastion ~]# k scale deploy deploy-echo --replicas=1
deployment.apps/deploy-echo scaled

(test@myeks:N/A) [root@myeks-bastion ~]# k get po -owide
NAME                           READY   STATUS    RESTARTS   AGE   IP             NODE                                               NOMINATED NODE   READINESS GATES
deploy-echo-857b6cfb88-xbpw7   1/1     Running   0          14m   192.168.3.17   ip-192-168-3-193.ap-northeast-2.compute.internal   <none>           <none>

 

targetType : IP

Let's send some requests to the NLB.

The requests show up in the pod's access log, and the source IP is the NLB's private IP.

bash
192.168.1.18 - - [02/Nov/2024:08:27:49 +0000] "GET / HTTP/1.1" 200 689 "-" "curl/8.6.0"
192.168.1.18 - - [02/Nov/2024:08:27:51 +0000] "GET / HTTP/1.1" 200 689 "-" "curl/8.6.0"
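
If you want to confirm that 192.168.1.18 really is one of the NLB's addresses, one way is to list the load balancer's ENIs; the description filter below is an assumption based on the usual "ELB net/<nlb-name>/..." naming:

bash
# List the private IPs of the NLB's network interfaces
# (the description filter format is an assumption, not taken from the lab output)
aws ec2 describe-network-interfaces \
  --filters "Name=description,Values=ELB net/k8s-default-svcnlbip-abf77b6f4f*" \
  --query 'NetworkInterfaces[*].PrivateIpAddress' --output text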

In addition, looking at the iptables counters, the packet counts for the Service do not increase.

bash
Every 2.0s: iptables -L KUBE-SVC-DW3DPGWHL3IDAXM7 -n -v -t nat; iptables -L KUBE-SEP-5GECT5YQBD6GJSHN -n -v -t nat        Sat Nov  2 08:24:42 2024

Chain KUBE-SVC-DW3DPGWHL3IDAXM7 (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-5GECT5YQBD6GJSHN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-nlb-ip-type -> 192.168.3.17:8080 */

Chain KUBE-SEP-5GECT5YQBD6GJSHN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.3.17         0.0.0.0/0            /* default/svc-nlb-ip-type */
    0     0 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-nlb-ip-type */ tcp to:192.168.3.17:8080

This confirms that packets are delivered from the NLB directly to the pod.

This is possible because, with the VPC CNI, nodes and pods are assigned addresses from the same VPC IP ranges.

For reference, running tcpdump on the node also confirms that there is no node -> pod traffic, only NLB -> pod.
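
A minimal sketch of that capture, run on the node hosting the pod (the filter expression is just one way to scope it):

bash
# On ip-192-168-3-193: capture traffic headed for the pod on port 8080.
# In IP target mode the sources seen here are the NLB ENI addresses (e.g. 192.168.1.18);
# no packets sourced from the node itself (192.168.3.193) appear.
tcpdump -i any -nn 'dst 192.168.3.17 and tcp port 8080'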

 

targetType : instance

Now let's switch to instance mode and test again.

In instance mode, traffic is forwarded from the node to the pod via the node's kube-proxy (iptables) rules.

You just need to modify the annotation on the Service and apply it.

bash
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: instance
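
As a sketch, the same change can also be made in place with kubectl annotate, assuming the controller reconciles the target-type change (re-applying the full manifest, as described above, works too):

bash
kubectl annotate service svc-nlb-ip-type \
  service.beta.kubernetes.io/aws-load-balancer-nlb-target-type=instance --overwrite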

After the change, the TargetGroup contains all of the nodes instead of the pod IP.

Looking at the registered port, you can see that instance mode uses the Service's NodePort.
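
The NodePort itself can be read straight off the Service; in this environment it is 32131, matching the 80:32131/TCP mapping shown earlier and the dpt:32131 iptables rules below:

bash
kubectl get svc svc-nlb-ip-type -o jsonpath='{.spec.ports[0].nodePort}'
# 32131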

The health checks fail, but when every target's health check is failing, the LB fails open and still sends traffic to the targets, so we can keep tracing the traffic flow.

For reference, the health check packets can only be observed on the node.
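
The failures here most likely come from the healthcheck-port: "8080" annotation set on the Service earlier, which no longer matches the NodePort the targets are registered on. As a hedged sketch, one way to fix it, assuming the controller accepts the documented traffic-port value:

bash
# Point the NLB health check at the registered target port instead of 8080
kubectl annotate service svc-nlb-ip-type \
  service.beta.kubernetes.io/aws-load-balancer-healthcheck-port=traffic-port --overwrite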

When you curl the NLB, you can see the packet counters on the Service's chains increase.

bash
Every 2.0s: iptables -L KUBE-SVC-DW3DPGWHL3IDAXM7 -n -v -t nat; iptables -L KUBE-SEP-5GECT5YQBD6GJSHN -n -v -t nat        Sat Nov  2 08:12:31 2024

Chain KUBE-SVC-DW3DPGWHL3IDAXM7 (2 references)
 pkts bytes target     prot opt in     out     source               destination
   10   640 KUBE-SEP-5GECT5YQBD6GJSHN  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-nlb-ip-type -> 192.168.3.17:8080 */

Chain KUBE-SEP-5GECT5YQBD6GJSHN (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  *      *       192.168.3.17         0.0.0.0/0            /* default/svc-nlb-ip-type */
   10   640 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            /* default/svc-nlb-ip-type */ tcp to:192.168.3.17:8080

Also, since every node is registered as an NLB target, you can see the pod being reached from different nodes.

bash
192.168.3.193 - - [02/Nov/2024:10:46:48 +0000] "GET / HTTP/1.1" 200 690 "-" "curl/8.6.0"
192.168.3.193 - - [02/Nov/2024:10:47:07 +0000] "GET / HTTP/1.1" 200 690 "-" "curl/8.6.0"
192.168.1.99 - - [02/Nov/2024:10:47:26 +0000] "GET / HTTP/1.1" 200 689 "-" "curl/8.6.0"
192.168.3.193 - - [02/Nov/2024:10:47:31 +0000] "GET / HTTP/1.1" 200 690 "-" "curl/8.6.0"
192.168.1.99 - - [02/Nov/2024:10:47:44 +0000] "GET / HTTP/1.1" 200 689 "-" "curl/8.6.0"
192.168.2.10 - - [02/Nov/2024:10:47:55 +0000] "GET / HTTP/1.1" 200 689 "-" "curl/8.6.0"

Pod tcpdump (node -> pod:8080)

bash
deploy-echo-857b6cfb88-xbpw7  ~  tcpdump -i any -nn port 8080
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
10:46:48.382313 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [S], seq 168423937, win 65535, options [mss 1360,sackOK,eol], length 0
10:46:48.382327 eth0  Out IP 192.168.3.17.8080 > 192.168.3.193.25473: Flags [S.], seq 1523813177, ack 168423938, win 62727, options [mss 8961,nop,nop,sackOK], length 0
10:46:48.393947 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 1, win 65535, length 0
10:46:48.397509 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [P.], seq 1:145, ack 1, win 65535, length 144: HTTP: GET / HTTP/1.1
10:46:48.397536 eth0  Out IP 192.168.3.17.8080 > 192.168.3.193.25473: Flags [.], ack 145, win 62583, length 0
10:46:48.397728 eth0  Out IP 192.168.3.17.8080 > 192.168.3.193.25473: Flags [P.], seq 1:752, ack 145, win 62583, length 751: HTTP: HTTP/1.1 200 OK
10:46:48.397769 eth0  Out IP 192.168.3.17.8080 > 192.168.3.193.25473: Flags [P.], seq 752:845, ack 145, win 62583, length 93: HTTP
10:46:48.416346 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 752, win 65535, length 0
10:46:48.421652 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 845, win 65535, length 0
10:46:48.421655 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [F.], seq 145, ack 845, win 65535, length 0
10:46:48.421783 eth0  Out IP 192.168.3.17.8080 > 192.168.3.193.25473: Flags [F.], seq 845, ack 146, win 62582, length 0
10:46:48.437630 eth0  In  IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 846, win 65535, length 0
10:47:07.372618 eth0  In  IP 192.168.3.193.18337 > 192.168.3.17.8080: Flags [S], seq 842677125, win 65535, options [mss 1360,nop,wscale 6,nop,nop,TS val 962215318 ecr 0,sackOK,eol], length 0

Node tcpdump (node -> pod:8080)

bash
[root@ip-192-168-3-193 ~]# tcpdump -i any -nn dst 192.168.3.17
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
10:46:48.382162 ARP, Request who-has 192.168.3.17 tell 192.168.3.193, length 28
10:46:48.382311 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [S], seq 168423937, win 65535, options [mss 1360,sackOK,eol], length 0
10:46:48.393939 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 1523813178, win 65535, length 0
10:46:48.397500 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [P.], seq 0:144, ack 1, win 65535, length 144: HTTP: GET / HTTP/1.1
10:46:48.416338 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 752, win 65535, length 0
10:46:48.421644 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 845, win 65535, length 0
10:46:48.421654 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [F.], seq 144, ack 845, win 65535, length 0
10:46:48.437620 IP 192.168.3.193.25473 > 192.168.3.17.8080: Flags [.], ack 846, win 65535, length 0

 

This confirms that traffic first reaches a node and is then forwarded to the pod by iptables.

 

Ingress

Deploy new resources for the Ingress test.

yaml
apiVersion: v1
kind: Namespace
metadata:
  name: game-2048
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: game-2048
  name: deployment-2048
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: app-2048
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: app-2048
    spec:
      containers:
        - image: public.ecr.aws/l6m2t8p7/docker-2048:latest
          imagePullPolicy: Always
          name: app-2048
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: app-2048
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80

Checking the resources

bash
(test@myeks:N/A) [root@myeks-bastion ~]# kubectl get ingress,svc,ep,pod -n game-2048
NAME                                      CLASS   HOSTS   ADDRESS                                                                         PORTS   AGE
ingress.networking.k8s.io/ingress-2048   alb     *       k8s-game2048-ingress2-70d50ce3fd-1825485374.ap-northeast-2.elb.amazonaws.com   80      4m9s

NAME                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/service-2048   NodePort   10.100.191.24   <none>        80:30824/TCP   4m9s

NAME                     ENDPOINTS         AGE
endpoints/service-2048   192.168.1.40:80   4m9s

NAME                                  READY   STATUS    RESTARTS   AGE
pod/deployment-2048-85f8c7d69-f88zh   1/1     Running   0          4m9s

(test@myeks:N/A) [root@myeks-bastion ~]# k get po,no -n game-2048 -owide
NAME                                  READY   STATUS    RESTARTS   AGE     IP             NODE                                              NOMINATED NODE   READINESS GATES
pod/deployment-2048-85f8c7d69-f88zh   1/1     Running   0          4m32s   192.168.1.40   ip-192-168-1-99.ap-northeast-2.compute.internal   <none>           <none>

NAME                                                    STATUS   ROLES    AGE     VERSION               INTERNAL-IP     EXTERNAL-IP     OS-IMAGE         KERNEL-VERSION                  CONTAINER-RUNTIME
node/ip-192-168-1-99.ap-northeast-2.compute.internal    Ready    <none>   4h30m   v1.30.4-eks-a737599   192.168.1.99    3.38.250.89     Amazon Linux 2   5.10.226-214.880.amzn2.x86_64   containerd://1.7.22
node/ip-192-168-2-10.ap-northeast-2.compute.internal    Ready    <none>   4h30m   v1.30.4-eks-a737599   192.168.2.10    3.35.6.103      Amazon Linux 2   5.10.226-214.880.amzn2.x86_64   containerd://1.7.22
node/ip-192-168-3-193.ap-northeast-2.compute.internal   Ready    <none>   4h30m   v1.30.4-eks-a737599   192.168.3.193   13.209.49.122   Amazon Linux 2   5.10.226-214.880.amzn2.x86_64   containerd://1.7.22
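
Once the ALB is provisioned, a quick sanity check without copying the hostname by hand (a small sketch using standard kubectl/curl):

bash
# Read the ALB hostname from the Ingress status and request the app
ALB=$(kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "%{http_code}\n" "http://${ALB}"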

Service Type : NodePort

[targetType : ip]

Looking at the TargetGroup, even though the Service type is NodePort, the target is registered with the pod's IP and port.

The packet counters in the node's iptables also do not increase.

bash
Every 1.0s: iptables -L KUBE-NODEPORTS -v -t nat; iptables -L KUBE-SVC-V7WHPSTR7G6YHTBY -v -t nat; iptables -L KUBE-SEP-CUH33...  Sat Nov  2 11:29:22 2024

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-EXT-DW3DPGWHL3IDAXM7  tcp  --  any    any     anywhere             anywhere             /* default/svc-nlb-ip-type */ tcp dpt:32131
    0     0 KUBE-EXT-V7WHPSTR7G6YHTBY  tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp dpt:30824
Chain KUBE-SVC-V7WHPSTR7G6YHTBY (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-CUH33EVLPJIBNJ2U  all  --  any    any     anywhere             anywhere             /* game-2048/service-2048 -> 192.168.1.40:80 */
Chain KUBE-SEP-CUH33EVLPJIBNJ2U (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  any    any     ip-192-168-1-40.ap-northeast-2.compute.internal  anywhere             /* game-2048/service-2048 */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp to:192.168.1.40:80

[targetType : instance]

If you change targetType to instance, all of the nodes get registered in the TargetGroup (one way to make the change is sketched below).
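
A sketch of making that change in place (re-applying the manifest with the modified annotation works just as well):

bash
kubectl annotate ingress ingress-2048 -n game-2048 \
  alb.ingress.kubernetes.io/target-type=instance --overwrite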

Then, when you send requests to the ALB, you can see the packet counters in the node's iptables increase.

bash
Every 1.0s: iptables -L KUBE-NODEPORTS -v -t nat; iptables -L KUBE-SVC-V7WHPSTR7G6YHTBY -v -t nat; iptables -L KUBE-SEP-CUH33...  Sat Nov  2 11:39:18 2024

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-EXT-DW3DPGWHL3IDAXM7  tcp  --  any    any     anywhere             anywhere             /* default/svc-nlb-ip-type */ tcp dpt:32131
   12   720 KUBE-EXT-V7WHPSTR7G6YHTBY  tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp dpt:30824
Chain KUBE-SVC-V7WHPSTR7G6YHTBY (2 references)
 pkts bytes target     prot opt in     out     source               destination
   12   720 KUBE-SEP-CUH33EVLPJIBNJ2U  all  --  any    any     anywhere             anywhere             /* game-2048/service-2048 -> 192.168.1.40:80 */
Chain KUBE-SEP-CUH33EVLPJIBNJ2U (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  any    any     ip-192-168-1-40.ap-northeast-2.compute.internal  anywhere             /* game-2048/service-2048 */
   12   720 DNAT       tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp to:192.168.1.40:80

So when a request arrives at the Ingress, just as with a Service, the targetType determines whether traffic passes through a node before reaching the pod or is delivered to the pod directly.

Service Type : ClusterIP

Now let's change the Service type to ClusterIP.
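
A sketch of the change, assuming your cluster allows switching the type in place and releases the allocated NodePort (editing the manifest and re-applying it is the more conservative route):

bash
kubectl patch svc service-2048 -n game-2048 -p '{"spec":{"type":"ClusterIP"}}'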

bash
(test@myeks:N/A) [root@myeks-bastion ~]# k get svc -n game-2048
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service-2048   ClusterIP   10.100.191.24   <none>        80/TCP    36m

[targetType : ip]

bash
Every 1.0s: iptables -L KUBE-NODEPORTS -v -t nat; iptables -L KUBE-SVC-V7WHPSTR7G6YHTBY -v -t nat; iptables -L KUBE-SEP-CUH33...  Sat Nov  2 11:44:49 2024

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-EXT-DW3DPGWHL3IDAXM7  tcp  --  any    any     anywhere             anywhere             /* default/svc-nlb-ip-type */ tcp dpt:32131
Chain KUBE-SVC-V7WHPSTR7G6YHTBY (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-CUH33EVLPJIBNJ2U  all  --  any    any     anywhere             anywhere             /* game-2048/service-2048 -> 192.168.1.40:80 */
Chain KUBE-SEP-CUH33EVLPJIBNJ2U (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  any    any     ip-192-168-1-40.ap-northeast-2.compute.internal  anywhere             /* game-2048/service-2048 */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp to:192.168.1.40:80

 

[targetType : instance]

The instance target type only supports Services of type NodePort or LoadBalancer, so it cannot be used with a ClusterIP Service.

Service Type : LoadBalancer

If you don't configure the Service with any annotations, a Classic Load Balancer gets created, so add the following annotations.

yaml
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

bash
(test@myeks:N/A) [root@myeks-bastion ~]# k get svc -n game-2048
NAME           TYPE           CLUSTER-IP      EXTERNAL-IP                                                                           PORT(S)        AGE
service-2048   LoadBalancer   10.100.36.246   k8s-game2048-service2-8f49cdbe8f-deb08062e68c7bf2.elb.ap-northeast-2.amazonaws.com   80:32119/TCP   8m37s

 

 

[targetType : ip]

bash
Every 1.0s: iptables -L KUBE-NODEPORTS -v -t nat; iptables -L KUBE-SVC-V7WHPSTR7G6YHTBY -v -t nat; iptables -L KUBE-SEP-CUH33...  Sat Nov  2 12:48:44 2024

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-EXT-DW3DPGWHL3IDAXM7  tcp  --  any    any     anywhere             anywhere             /* default/svc-nlb-ip-type */ tcp dpt:32131
    0     0 KUBE-EXT-V7WHPSTR7G6YHTBY  tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp dpt:32119
Chain KUBE-SVC-V7WHPSTR7G6YHTBY (2 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-SEP-CUH33EVLPJIBNJ2U  all  --  any    any     anywhere             anywhere             /* game-2048/service-2048 -> 192.168.1.40:80 */
Chain KUBE-SEP-CUH33EVLPJIBNJ2U (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  any    any     ip-192-168-1-40.ap-northeast-2.compute.internal  anywhere             /* game-2048/service-2048 */
    0     0 DNAT       tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp to:192.168.1.40:80

You can see that the packet counters on the Service-related iptables chains do not increase.

Requests to the Ingress are not delivered to the pod through the Service, and the Service (NLB) also provides a separate path to the same pod.

If you ever need to expose the same pods through both an ALB and an NLB, this setup can be used.
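
As a quick check, the same single pod can be reached through both endpoints (a sketch using the hostnames from the outputs above):

bash
ALB=$(kubectl get ingress ingress-2048 -n game-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
NLB=$(kubectl get svc service-2048 -n game-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "ALB -> %{http_code}\n" "http://${ALB}"
curl -s -o /dev/null -w "NLB -> %{http_code}\n" "http://${NLB}"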

 

[targetType : instance]

bash
Every 1.0s: iptables -L KUBE-NODEPORTS -v -t nat; iptables -L KUBE-SVC-V7WHPSTR7G6YHTBY -v -t nat; iptables -L KUBE-SEP-CUH33...  Sat Nov  2 12:53:21 2024

Chain KUBE-NODEPORTS (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-EXT-DW3DPGWHL3IDAXM7  tcp  --  any    any     anywhere             anywhere             /* default/svc-nlb-ip-type */ tcp dpt:32131
   10   600 KUBE-EXT-V7WHPSTR7G6YHTBY  tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp dpt:32119
Chain KUBE-SVC-V7WHPSTR7G6YHTBY (2 references)
 pkts bytes target     prot opt in     out     source               destination
   10   600 KUBE-SEP-CUH33EVLPJIBNJ2U  all  --  any    any     anywhere             anywhere             /* game-2048/service-2048 -> 192.168.1.40:80 */
Chain KUBE-SEP-CUH33EVLPJIBNJ2U (1 references)
 pkts bytes target     prot opt in     out     source               destination
    0     0 KUBE-MARK-MASQ  all  --  any    any     ip-192-168-1-40.ap-northeast-2.compute.internal  anywhere             /* game-2048/service-2048 */
   10   600 DNAT       tcp  --  any    any     anywhere             anywhere             /* game-2048/service-2048 */ tcp to:192.168.1.40:80

Just as with the other Service types, traffic is delivered from the node to the pod.

Conclusion

When using the VPC CNI together with the AWS LB Controller, being able to deliver traffic to pods directly, without passing through the node's iptables, is a major advantage.

So when creating a Service or Ingress, setting targetType: ip is the more efficient option, and it also allows the load balancer to health-check the pods directly.

The Service type you attach to an Ingress has to match the targetType, but when using targetType: ip, ClusterIP is the best choice unless you have a specific requirement otherwise.
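
Putting that together, here is a minimal sketch of the recommended combination, essentially the game-2048 manifests above with the Service switched to ClusterIP:

yaml
apiVersion: v1
kind: Service
metadata:
  namespace: game-2048
  name: service-2048
spec:
  type: ClusterIP              # no NodePort needed when the ALB targets pod IPs
  selector:
    app.kubernetes.io/name: app-2048
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: game-2048
  name: ingress-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # register pod IPs directly in the target group
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80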
