We have a few EKS clusters, and we want to access the cluster IPs directly from other EC2 instances in the same VPC,
but we cannot find any robust solution for this scenario.
If we could make the cluster IPs reachable, it would be very simple to configure an ALB or to access services from an EC2 instance,
and all of our EKS clusters could also communicate with each other by cluster IP (or service name: xxx.xxx.svc.cluster.local).
The simplest way we found to route cluster-IP traffic into EKS:
First, change the kube-proxy mode to IPVS and enable masqueradeAll.
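A minimal sketch of that change, assuming a recent EKS cluster where the kube-proxy settings live in the kube-proxy-config ConfigMap (the exact ConfigMap layout can vary by EKS version):

```bash
# Edit the embedded KubeProxyConfiguration and set:
#   mode: "ipvs"
#   iptables:
#     masqueradeAll: true    # SNAT all service traffic
kubectl -n kube-system edit configmap kube-proxy-config

# Restart kube-proxy so the new mode takes effect:
kubectl -n kube-system rollout restart daemonset kube-proxy
```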
Second, configure the VPC route table with a static route whose next hop is one of the EKS nodes, like this: 10.35.0.0/16 -> EKS_NODE_1's IP.
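A minimal sketch with hypothetical IDs (the rtb-/eni-/i- values are placeholders), including the source/dest check change the node needs before it can forward traffic that is not addressed to it:

```bash
# Route the cluster-IP CIDR to one node's primary ENI:
aws ec2 create-route \
  --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 10.35.0.0/16 \
  --network-interface-id eni-0123456789abcdef0

# Allow the node to receive traffic not addressed to it:
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --no-source-dest-check
```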
Then we can access cluster IPs from outside of EKS.
But there are some problems:
The EC2 Auto Scaling group dynamically scales the EKS node group up and down, so using a single EKS node as the next hop in the VPC route table is a big problem.
The VPC route table does not support ECMP, so we cannot configure multiple next hops for the cluster-IP route the way a traditional hardware router can.
Hence, we consider this an unreliable solution.
We are trying to find a more reliable and robust solution for this scenario.
Are there any suggestions? Thanks!
Here is the network configuration of one of our EKS clusters:
VPC CIDR: 10.34.0.0/16
POD CIDR: 10.34.0.0/18
Cluster IP CIDR: 10.35.0.0/16
No, it is not possible to route cluster-IP traffic outside of the cluster (even within the same VPC). Cluster IPs are meant for services within the cluster. Mechanisms like the proxying you describe can work temporarily, but they are not valid solutions or workarounds. Inter-cluster communication is best set up with a load balancer, e.g. via the AWS Load Balancer Controller.
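For example, a Service of type LoadBalancer can be exposed through an internal NLB that other clusters and EC2 instances in the VPC can reach. A minimal sketch, assuming the AWS Load Balancer Controller is installed (the Service name, selector, and ports are hypothetical):

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: my-svc
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internal
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```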
This issue is now closed. Comments on closed issues are hard for our team to see.
If you need more assistance, please either tag a team member or open a new issue that references this one.
I finally found a solution: I used a GWLB + Linux GENEVE tunnel, and it works.
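The comment gives no details; as a rough sketch only, the node side of a GWLB-based setup typically terminates the GENEVE encapsulation (UDP 6081) that the Gateway Load Balancer applies, so decapsulated packets can reach kube-proxy's IPVS rules on the node. The address and VNI below are assumptions:

```bash
# Hypothetical GENEVE termination on an EKS node behind a GWLB
# (10.34.200.10 stands in for the GWLB address; VNI 0 assumed):
sudo ip link add name gwlb0 type geneve id 0 remote 10.34.200.10 dstport 6081
sudo ip link set gwlb0 up
```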