Problem Description
Sorry for the newbie question; I am new to the k8s world. The current way of deploying is to run the app on EC2. The new way I am trying is to deploy the containerized app into the VPC.
In the old way, AWS would route traffic for aaa.bbb.com to vpc-ip:443 (an ELB), which would route it further to an ASG on private subnet:443, and the app would work fine.
With k8s in the picture, what does the traffic flow look like?
I'm trying to figure out whether I could use multiple ports on the ELB, each with its own DNS name, and route traffic to a specific port on the worker nodes, i.e.
xxx.yyy.com -> vpc-ip:443/ -> ec2:443/
aaa.bbb.com -> vpc-ip:9000/ -> ec2:9000/
Is it doable with k8s on the same VPC? Any guidance and links to documentation would be of great help.
Recommended Answer
In general, you would have an AWS load balancer instance with multiple K8s worker nodes as backend servers on a specific port. Once traffic enters the worker nodes, the networking inside K8s takes over.
Suppose you have set up two K8s Services of type LoadBalancer, with node ports 38473 and 38474 for your two domains, respectively:
xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> K8s service1 -> K8s Pod1
-> Node2:38473 -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer2 -> Node1:38474 -> K8s service2 -> K8s Pod3
-> Node2:38474 -> K8s service2 -> K8s Pod4
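The per-domain setup above could be sketched as two Services of type LoadBalancer. This is a minimal illustration, not a definitive config: the Service names, Pod selectors, and container ports are assumptions, and the nodePort values from the diagram (38473/38474) fall outside the default NodePort range of 30000-32767, so they would require widening the API server's --service-node-port-range:

```yaml
# Hypothetical Service manifests; each type: LoadBalancer Service
# provisions its own AWS load balancer instance.
apiVersion: v1
kind: Service
metadata:
  name: service1            # assumed name, matching "K8s service1" above
spec:
  type: LoadBalancer
  selector:
    app: app1               # assumed Pod label
  ports:
    - port: 443
      targetPort: 8443      # assumed container port
      nodePort: 38473       # outside default range; needs --service-node-port-range
---
apiVersion: v1
kind: Service
metadata:
  name: service2            # assumed name, matching "K8s service2" above
spec:
  type: LoadBalancer
  selector:
    app: app2               # assumed Pod label
  ports:
    - port: 9000
      targetPort: 9000      # assumed container port
      nodePort: 38474       # outside default range; needs --service-node-port-range
```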
The simple solution above requires you to create a separate LoadBalancer Service per domain, which increases your cost because each one is an actual AWS load balancer instance. To reduce cost, you could run an ingress-controller instance in your cluster and write ingress config. This requires only one actual AWS load balancer for your networking:
xxx.yyy.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service1 -> K8s Pod1
-> Node2:38473 -> Ingress-service -> K8s service1 -> K8s Pod2
aaa.bbb.com -> AWS LoadBalancer1 -> Node1:38473 -> Ingress-service -> K8s service2 -> K8s Pod3
-> Node2:38473 -> Ingress-service -> K8s service2 -> K8s Pod4
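The host-based routing above could be expressed as a single Ingress resource. Again a hedged sketch: the resource name, ingress class, and backend Service names and ports are assumptions that would have to match your actual Services:

```yaml
# Hypothetical Ingress: one AWS load balancer fronts the ingress
# controller, which then routes requests by Host header.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: two-domain-ingress   # assumed name
spec:
  ingressClassName: nginx    # assumed controller class
  rules:
    - host: xxx.yyy.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service1   # assumed backend Service
                port:
                  number: 443
    - host: aaa.bbb.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service2   # assumed backend Service
                port:
                  number: 9000
```

With this approach, the backend Services would typically be plain ClusterIP (or NodePort) Services rather than LoadBalancer, since only the ingress controller's own Service needs an AWS load balancer.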
For more information, you can refer to:
- Basic Networking and K8s Services: https://kubernetes.io/docs/concepts/services-networking/service/
- Ingress &amp; ingress controllers (NGINX implementation): https://www.nginx.com/products/nginx/kubernetes-ingress-controller