
[Feature]: Ability to restrict security groups around the workload cluster API LB #418

Open
pli01 opened this issue Jan 19, 2025 · 2 comments
Labels
enhancement New feature or request

Comments

@pli01
Contributor

pli01 commented Jan 19, 2025

Explain problem to solve

Hello

The security groups for the workload cluster API LB are wide open to the internet by default (0.0.0.0/0 to port 6443), since the LB is internet-facing.

They should be configurable so that access can be restricted.

But when trying to restrict the security groups on the workload cluster, there is a limitation:

Explanation:
As the VPCs of the management cluster and the workload cluster are distinct and not linked, the following must be allowed on the workload cluster API LB:

  • the NAT gateway of the management cluster
  • the NAT gateway of the workload cluster, because the workload nodes reach the public IP of the LB through their NAT gateway.
    But the NAT gateway of the workload cluster is created dynamically by the outscale provider and is not known beforehand.

We cannot automatically add the NAT gateway IP to the OscCluster security group at deploy time.

Describe the solution you would like

1- The NAT gateway of the workload cluster is created dynamically by the outscale provider and is not known beforehand.

It should be added to the LB security group automatically (a minimal sketch of this idea follows after option 2).

For example, you can see how this is implemented in the cluster-api-provider-aws provider:

https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/pkg/cloud/services/securitygroup/securitygroups.go#L926

2- Alternative solution: allow enrolling externally created resources (VPC, NAT gateway, etc.) in the workload cluster, so that pre-created resources can be declared explicitly.

As described in cluster-api-provider-aws:

https://cluster-api-aws.sigs.k8s.io/topics/bring-your-own-aws-infrastructure
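
To illustrate option 1, here is a minimal Go sketch of the idea. The `Cloud` interface and its method names (`ListNatGatewayPublicIPs`, `AddLoadBalancerIngressRule`) are hypothetical stand-ins for the OUTSCALE API calls the provider already performs, not the actual cluster-api-provider-outscale (or cluster-api-provider-aws) API:

```go
package securitygroup

import "fmt"

// Cloud is a hypothetical abstraction over the OUTSCALE API; the method
// names are illustrative only, not the real provider interface.
type Cloud interface {
	// ListNatGatewayPublicIPs returns the public IPs of the NAT gateways
	// created for the given cluster (only known after creation).
	ListNatGatewayPublicIPs(clusterName string) ([]string, error)
	// AddLoadBalancerIngressRule allows a CIDR on a TCP port of the LB security group.
	AddLoadBalancerIngressRule(securityGroupID, cidr string, port int) error
}

// ReconcileNatGatewayIngress mirrors the idea used by cluster-api-provider-aws:
// once the workload cluster's NAT gateways exist, their public IPs are added
// to the API server LB security group, so the 0.0.0.0/0 rule can be dropped.
func ReconcileNatGatewayIngress(c Cloud, clusterName, lbSecurityGroupID string, apiPort int) error {
	ips, err := c.ListNatGatewayPublicIPs(clusterName)
	if err != nil {
		return fmt.Errorf("listing NAT gateway public IPs: %w", err)
	}
	for _, ip := range ips {
		cidr := ip + "/32"
		if err := c.AddLoadBalancerIngressRule(lbSecurityGroupID, cidr, apiPort); err != nil {
			return fmt.Errorf("allowing %s on port %d: %w", cidr, apiPort, err)
		}
	}
	return nil
}
```

The controller would call something like this after the NAT gateways are reconciled, and would also need to remove the rules when the gateways are deleted (not shown here).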

Additional context

Ensure Kubernetes API servers are not publicly accessible

Environment

- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
- Kernel (e.g. `uname -a`):
- cluster-api-provider-outscale version:
- cluster-api version:
- Install tools:
- Kubernetes Distribution:
- Kubernetes Distribution version:
@pli01 pli01 added the enhancement New feature or request label Jan 19, 2025
@pierreozoux
Contributor

@pli01
Contributor Author

pli01 commented Jan 20, 2025

Yes, it's not a problem of editing the SG.

It's a problem of dynamically allowing the public IP of the NAT gateway of the current workload cluster, so that the workload nodes can communicate with their own LB.

In our Helm chart for cluster-api, I have added a value "allow_cidr_api": a list of CIDRs to allow.
https://github.com/cloud-gouv/k8s-cluster-api-helm-charts/blob/5366c81c3af97825ae16a9c00357c9a97fff0280/helm-charts/capi-cluster/charts/outscale/templates/OscCluster.yaml#L167

I can add a list of CIDRs to allow:

  • the IP of the NAT gateway of the mgmt cluster
  • external allowed CIDRs
  • but the IP of the NAT gateway of the workload cluster is only known after the cluster is created (a rough sketch of merging the two lists follows below)
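
As a rough sketch (the function below is purely illustrative, not part of the provider or the Helm chart), the statically configured allow list could be merged with the NAT gateway IPs the provider discovers after creating the network:

```go
package securitygroup

// mergeAllowedCIDRs combines the statically configured allow list (e.g. the
// allow_cidr_api Helm value) with the NAT gateway public IPs that are only
// known once the workload cluster's network has been created.
func mergeAllowedCIDRs(staticCIDRs, natGatewayIPs []string) []string {
	seen := make(map[string]struct{}, len(staticCIDRs)+len(natGatewayIPs))
	var out []string
	add := func(cidr string) {
		if _, ok := seen[cidr]; !ok {
			seen[cidr] = struct{}{}
			out = append(out, cidr)
		}
	}
	for _, cidr := range staticCIDRs {
		add(cidr)
	}
	for _, ip := range natGatewayIPs {
		add(ip + "/32") // discovered dynamically by the provider after creation
	}
	return out
}
```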

I have opened an issue on outscale to add this feature.

As you can see with cluster-api-provider-aws, they automatically add the IP of the NAT gateway to the SG by default:

https://github.com/kubernetes-sigs/cluster-api-provider-aws/blob/main/pkg/cloud/services/securitygroup/securitygroups.go#L926
