enable network policy by default #1271

Merged · 3 commits merged on Sep 30, 2020

Conversation

@minrk (Member) commented May 10, 2019

rather than making these opt-in, let's make them opt-out

@betatim (Member) commented May 10, 2019

Do we know of things that could trip up users? I don't have a good overview of what would need to happen if you upgrade an existing cluster, or what new things could go wrong during setup. If anyone can remember some, adding them to the docs will save us time on support later.

@consideRatio (Member)

Thinking out loud:

  1. Network policies are not enforced unless something like Project Calico is installed in the cluster to act on them; by themselves they are just objects at the k8s api-server, and there is no default k8s network-policy enforcer (see the sketch below).
  2. Hmm... what did our network policies restrict, again?
  3. Anyhow, only those with a network-policy enforcer like Project Calico installed could be surprised by the behavior, I think.
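
To illustrate point 1, here is a minimal, hypothetical NetworkPolicy (not part of the chart; the name and namespace are made up). The api-server will accept and store it on any cluster, but traffic is only actually restricted if a network-policy controller such as Calico is installed and enforcing policies:

# Accepted by the k8s api-server on any cluster, but only enforced if a
# network-policy controller (e.g. Calico) is running.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress        # hypothetical name
  namespace: my-jupyterhub     # hypothetical namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress: []                   # no egress rules: all egress denied, when enforced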

@consideRatio (Member)

Two test failures remain, both relating to a reference to minikube-netpol.yaml, and the removal of DISABLE_TEST_NETPOL made the network policy test run in all environments, where it fails because a domain that should have been blocked was allowed.

I think it relates to us no longer using the configuration additions from minikube-netpol.yaml, which contained the following section:

singleuser:
  networkPolicy:
    # Block all egress apart from DNS and jupyter.org
    # CIDR must match the allowed URL in test_singleuser_netpol
    egress:
      - ports:
        - port: 53
          protocol: UDP
      - to:
        - ipBlock:
            cidr: 104.28.9.110/32
        - ipBlock:
            cidr: 104.28.8.110/32

debug:
  enabled: true

@manics (Member) commented May 13, 2019

> Do we know of things that could trip up users? I don't have a good overview of what would need to happen if you upgrade an existing cluster, or what new things could go wrong during setup. If anyone can remember some, adding them to the docs will save us time on support later.

The current policy allows all egress for all pods apart from singleuser→169.254.169.254/32 (the internal cloud metadata server), so most things should just work: https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/facf1c31972b32ffa1418b1431efb5c7392a9175/jupyterhub/values.yaml

hub:
  networkPolicy:
    enabled: false
    egress:
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0

proxy:
  networkPolicy:
    enabled: false
    egress:
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0

singleuser:
  networkPolicy:
    enabled: false
    egress:
    # Required egress is handled by other rules so it's safe to modify this
      - to:
          - ipBlock:
              cidr: 0.0.0.0/0
              except:
                - 169.254.169.254/32

The only thing that might not work is if someone is running an additional network service, or wants to connect directly to one of the pods from some other non-Jupyter service, e.g. jupyterhub/kubespawner#299.
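
For admins who want to keep the old behavior once the default flips, the enabled flags shown above can be set back explicitly in their own config. A minimal sketch, assuming the same hub/proxy/singleuser networkPolicy keys as in the values.yaml quoted above:

# config.yaml: explicitly opt out of network policies once they default to enabled
hub:
  networkPolicy:
    enabled: false
proxy:
  networkPolicy:
    enabled: false
singleuser:
  networkPolicy:
    enabled: false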

@consideRatio (Member) commented Jun 17, 2020

I'm conflicted about making these opt-out instead of opt-in.

  • It is only functional if you have a controller that can enforce the network policies. Such a controller is typically not installed by default, and even when it is, as in k3s, it may fail to enforce them reliably. My general fear is that this can lead to broken expectations and faulty assumptions.
  • It should be considered a breaking change, as it will influence deployments where a network policy controller is available.

I think it is okay to enable them by default anyhow, but I'd like the breaking-change documentation to be written along with this PR so it doesn't become additional required work before the next release.

Since this PR was opened

I've updated the network policies in #1670 since this PR was opened. There I allowed egress on port 53 in general, which is questionable since that port can be used for anything. If we want to target DNS more narrowly, we need to make some assumptions about the k8s cluster, for example that the Kubernetes DNS server runs in the kube-system namespace and that its pods carry a label like k8s-app=kube-dns (see the sketch below).
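
A sketch of what such a narrower DNS rule could look like: it selects the DNS pods by label in any namespace, and the k8s-app=kube-dns label is an assumption about the cluster, not something the chart can verify:

singleuser:
  networkPolicy:
    egress:
      # Allow DNS only to pods labeled as the cluster DNS server
      # (assumes the k8s-app=kube-dns label is present on those pods)
      - to:
          - namespaceSelector: {}
            podSelector:
              matchLabels:
                k8s-app: kube-dns
        ports:
          - port: 53
            protocol: UDP
          - port: 53
            protocol: TCP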

My take on the action points for this PR

  • Start fresh from master and update values.yaml to have the network policies enabled (see the sketch below)
  • Update dev-values.yaml to not explicitly enable them
  • Document the breaking change in the changelog
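
Concretely, the values.yaml change in the first point would amount to flipping the enabled flags, something like this (same keys as quoted earlier in the thread):

hub:
  networkPolicy:
    enabled: true    # was false
proxy:
  networkPolicy:
    enabled: true    # was false
singleuser:
  networkPolicy:
    enabled: true    # was false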

Commit: expand network policy documentation, noting that the policies are on by default, how to grant ingress via label selectors, and how to limit access via egress rules.

@minrk force-pushed the netpol-by-default branch from 717eafe to e982c45 on August 26, 2020, 10:38
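
As an illustration of the "grant ingress via label selectors" part: the chart's network policies accept ingress from pods carrying an access label. A sketch, assuming the hub.jupyter.org/network-access-hub label that the chart's policies key on (check the chart's documentation for the exact label names):

# Hypothetical pod in the same namespace that is allowed to reach the hub
apiVersion: v1
kind: Pod
metadata:
  name: my-hub-client                            # hypothetical name
  labels:
    hub.jupyter.org/network-access-hub: "true"   # grants ingress to the hub pod
spec:
  containers:
    - name: client
      image: python:3.8-slim                     # hypothetical image
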
Commit: I wanted to help a novice reader better understand this breaking change, so I reduced some assumptions about prerequisite knowledge by adding some detail and explanations.
@consideRatio (Member)

Thanks for your updates @minrk, and sorry for the slow turnaround. I'm pushing towards 0.10.0-beta.1 now and want this to be part of it, so I pushed two commits: documentation in the changelog, and an expanded inline comment in values.yaml that I found said too little.

If this build succeeds, I'll go ahead and merge.
