
add ci-pr deployment #75

Merged
merged 2 commits into from
Nov 22, 2019

Conversation

cirocosta
Member

@cirocosta cirocosta commented Nov 22, 2019

Hey,

This PR is intended to allow the prs pipeline to co-exist with
nci in the hush-house gke cluster.

it does so by:

  • adding a node pool to the cluster (ci-workers-pr) whose size is the
    same as we had previously in our BOSH deployment
  • adding a deployment (ci-pr) that puts workers on that node pool.

As this deployment is supposed to run untrusted workloads, we needed a
way of restricting the network access that it has, in order to avoid
lateral movement in the internal network (in the case of our BOSH
environment, we had a totally different network - in k8s, we'd need to
be in a separate cluster if we wanted to go w/ the same approach of
different networks).

This led us to enable the enforcement of network policies in the
cluster and to create a policy for ci-pr that targets the pods
deployed by it, effectively blocking internal connectivity to anything
we didn't want (i.e., anything that's not ci's TSA).

ps.: these changes have already been applied.

related: concourse/prod#36
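
As a rough illustration of the node-pool pinning described above (a sketch, not the actual hush-house files - the chart keys and the pool label are assumptions), scheduling the ci-pr workers onto the dedicated pool boils down to a nodeSelector in the deployment's values:

```yaml
# hypothetical values for the `ci-pr` Helm release -- a sketch, not the
# real hush-house configuration
worker:
  enabled: true
  # GKE labels every node with the name of its pool, so the workers can
  # be scheduled exclusively onto the `ci-workers-pr` pool
  nodeSelector:
    cloud.google.com/gke-nodepool: ci-workers-pr
```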

@cirocosta cirocosta changed the title add ci-pr deployments add ci-pr deployment Nov 22, 2019
given that we have both `web` and `worker` activated in this deployment,
the worker will have the TSA host automatically set, meaning that any
explicitly configured value will be completely ignored.

see https://github.com/concourse/concourse-chart/blob/9844b8d089af162167659440959d79f73bde10f0/templates/worker-statefulset.yaml#L181-L187

Signed-off-by: Denise Yu <[email protected]>
Signed-off-by: Ciro S. Costa <[email protected]>
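
Concretely (a sketch under the assumption of the chart's `web.enabled`/`worker.enabled` toggles - the TSA key shown is hypothetical, the real chart key may differ), this is the shape of configuration where the override gets ignored:

```yaml
web:
  enabled: true    # web and worker activated in the same release...
worker:
  enabled: true
concourse:
  worker:
    tsa:
      # hypothetical key/value: per the worker-statefulset template lines
      # linked above, this is ignored whenever `web` is enabled, as the
      # worker gets pointed at the release's own web service instead
      hosts: ["ci.example.com:2222"]
```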
this commit is intended to allow the [`prs` pipeline] to co-exist with
`nci` in the `hush-house` gke cluster.

it does so by:

- adding a node pool to the cluster (`ci-workers-pr`) whose size is the
  same as we had previously in our BOSH deployment
- adding a deployment (`ci-pr`) that puts workers on that node pool.

As this deployment is supposed to run untrusted workloads, we needed a
way of restricting the network access that it has, in order to avoid
lateral movement in the internal network (in the case of our BOSH
environment, we had a totally different network - in k8s, we'd need to
be in a separate cluster if we wanted to go w/ the same approach of
different networks).

This led us to enable the enforcement of network policies in the
cluster and to create a policy for `ci-pr` that targets the pods
deployed by it, effectively blocking internal connectivity to anything
we didn't want (i.e., anything that's not ci's TSA).

ps.: these changes have already been applied.

[`prs` pipeline]: https://nci.concourse-ci.org/teams/main/pipelines/prs

Signed-off-by: Denise Yu <[email protected]>
Signed-off-by: Ciro S. Costa <[email protected]>
@cirocosta
Member Author

merging it directly to reflect the current state

@cirocosta cirocosta merged commit 417e462 into master Nov 22, 2019
@cirocosta cirocosta deleted the ci-workers-pr branch November 22, 2019 16:24
@cirocosta
Member Author

w/ regards to "why go w/ the internal connection for connecting to the TSA": it turns out that calico (I guess?) puts an extra constraint on communications even to external services:

```
# matches tcp conns to our external load-balancer - once matched, gets us to the
# `KUBE-FW-blabla` rule.
#
-A KUBE-SERVICES \
        -d 34.68.37.70/32 \
        -p tcp \
        -m comment --comment "ci/ci-web:tsa loadbalancer IP" \
        -m tcp --dport 2222 \
        -j KUBE-FW-3WZFA3OBZWICIHEP

# in this "catch anyone that landed here" rule, we jump to the "mark to drop"
#
-A KUBE-FW-3WZFA3OBZWICIHEP \
        -m comment --comment "ci/ci-web:tsa loadbalancer IP" \
        -j KUBE-MARK-DROP

# mark-drop just puts a mark so that a "catch all with this mark" rule can then
# act on it.
#
-A KUBE-MARK-DROP \
        -j MARK --set-xmark 0x8000/0x8000

# acting on those marked as `0x8000`: drop!
#
-A KUBE-FIREWALL \
        -m comment --comment "kubernetes firewall for dropping marked packets" \
        -m mark --mark 0x8000/0x8000 \
        -j DROP
```

^ (from the host)
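
Fwiw, `KUBE-FW-*` chains like the one above are what kube-proxy installs for a LoadBalancer Service that restricts its sources, so one guess (an assumption, in the same spirit as the "calico, I guess?" above) is that the drop comes from something like `loadBalancerSourceRanges` on the ci-web Service:

```yaml
# hypothetical ci-web Service -- a sketch of a configuration that makes
# kube-proxy emit the mark-and-drop KUBE-FW-* chain shown above for any
# traffic to the load-balancer IP outside the whitelisted ranges
apiVersion: v1
kind: Service
metadata:
  name: ci-web
  namespace: ci
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
    - 203.0.113.0/24   # placeholder allowed range
  ports:
    - name: tsa
      protocol: TCP
      port: 2222
```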

@cirocosta
Member Author

Thus, the final condition is to (sketched below):

  • allow any "dns resolution" (actually, it's more of "any traffic to TCP/UDP:53")
    • (perhaps we should restrict this to only external DNSes? if so, we'd need to be able to provide custom DNS settings to the Concourse chart)
  • allow egress to the ci-web pod in the ci namespace
  • allow any egress except to 10.0.0.0/8 (internal nets)
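
A minimal sketch of a NetworkPolicy matching those conditions (the policy name, namespace, and labels are assumptions, not the actual hush-house manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: ci-pr-workers-egress    # hypothetical name
  namespace: ci-pr
spec:
  podSelector: {}               # all pods deployed by `ci-pr`
  policyTypes: [Egress]
  egress:
    # 1. allow "dns resolution" -- i.e., any traffic to TCP/UDP:53
    - ports:
        - { protocol: UDP, port: 53 }
        - { protocol: TCP, port: 53 }
    # 2. allow egress to the ci-web pod in the `ci` namespace
    - to:
        - namespaceSelector:
            matchLabels: { name: ci }      # assumed namespace label
          podSelector:
            matchLabels: { app: ci-web }   # assumed pod label
    # 3. allow anything out, except the internal nets
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except: [10.0.0.0/8]
```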
