
freshen docker images on aptdocker.azureedge.net #1914

Closed
jackfrancis opened this issue Dec 11, 2017 · 30 comments

Comments

@jackfrancis
Member

No description provided.

@MorrisLaw

I would like to start contributing. Can I take this issue?

@jackfrancis
Member Author

Hi @MorrisLaw! We currently use apt to install `docker-engine` on the cluster hosts, but it appears that our mirror of Docker's apt repo doesn't have the latest CE images. You could verify that Docker has stopped maintaining that repo, and investigate alternative delivery mechanisms for getting docker-engine onto Ubuntu.

@jackfrancis
Member Author

These are the available docker-engine images on a cluster host:

$ apt-cache madison docker-engine
docker-engine | 17.05.0~ce-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 17.04.0~ce-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 17.03.1~ce-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 17.03.0~ce-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.13.1-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.13.0-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.6-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.5-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.4-0~ubuntu-xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.3-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.2-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.1-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.12.0-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.11.2-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.11.1-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages
docker-engine | 1.11.0-0~xenial | https://aptdocker.azureedge.net/repo ubuntu-xenial/main amd64 Packages

Compare CE images to published versions here:

https://docs.docker.com/release-notes/docker-ce/

@MorrisLaw

MorrisLaw commented Dec 13, 2017

Thank you for the response, @jackfrancis. I'll start working on this.

@jackfrancis
Member Author

Thanks @MorrisLaw !

@dtzar
Contributor

dtzar commented Jun 19, 2018

Per #2876 it sounds like we may be moving over to containerd, but it would still be good to see the mirror we use here, and the versions we validate against, updated to the latest apt repository for those who want to use (or try to use) the latest version of Docker as the engine.

Our default 1.13.x Docker runtime is ancient (released 2017-02-08) and is missing countless security and bug fixes :(

@grenzr

grenzr commented Jun 20, 2018

I agree with @dtzar; I think we should still be able to use a newer version of docker if it's supported by kubernetes. We could use a different mirror right now, but the validation breaks our ability to use that version.

You'll have to excuse my ignorance a little; I haven't read enough about containerd yet to know whether it is a sufficient drop-in replacement for docker for our needs.
For example, I run a custom vsts-agent container in kubernetes, and I need it to be able to build multi-stage Dockerfiles (only introduced in 17.06) inside the vsts-agent container (i.e. docker in docker)... will /var/run/docker.sock still be present on the host for us to continue leveraging that inside the vsts-agent container?

@jackfrancis
Member Author

@khenidak @jessfraz Can you answer @grenzr's question?

@grenzr

grenzr commented Jun 20, 2018

https://stackoverflow.com/questions/46649592/dockerd-vs-docker-containerd-vs-docker-runc-vs-docker-containerd-ctr-vs-docker-c seems to suggest containerd does listen on a unix socket, but can the docker client inside a container use a volume-mounted containerd socket from the host? I'm guessing the interfaces are a bit different? I might be able to work around it by using a dind docker image to run make tasks inside the vsts-agent container, but that's a bit nesty.

Edit - actually no, that won't work, as the build tools I need won't be in the dind image... duh :(

This leads me to ask if you would consider leaving the docker option in for future updates; otherwise I fear it may box us into a corner.

@jessfraz
Contributor

You'll have to excuse my ignorance a little; I haven't read enough about containerd yet to know whether it is a sufficient drop-in replacement for docker for our needs.
For example, I run a custom vsts-agent container in kubernetes, and I need it to be able to build multi-stage Dockerfiles (only introduced in 17.06) inside the vsts-agent container (i.e. docker in docker)... will /var/run/docker.sock still be present on the host for us to continue leveraging that inside the vsts-agent container?

No, since docker is not there it will not be, but you can use the docker:dind containers on Docker Hub, which is actually a better idea than mounting the host socket anyway.

@jessfraz
Contributor

why not do a FROM docker:dind.... then add your build tools and run a custom image...

@grenzr

grenzr commented Jun 20, 2018

@jessfraz thanks, yeah I was thinking the same just now. I will give that a shot.

@jessfraz
Contributor

there are lots of solutions here... you can run docker:dind and share the network namespace with a different container that has your build tools, and just use the docker daemon in the other container... you do not need a monolithic container with your tools either

i see many possible solutions:

  1. a custom FROM docker:dind image which installs your build tools
  2. a pod with docker:dind in one container, sharing the socket file from that container with another, or using the docker tcp daemon endpoint and calling it from other containers (rough sketch below)
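If it helps, here is a rough pod-spec sketch of option 2. Treat it as an illustration only: the container names and the agent image are placeholders, and newer docker:dind images default to TLS on port 2376 unless DOCKER_TLS_CERTDIR is unset, so the plain-tcp endpoint shown here assumes an older dind image.

apiVersion: v1
kind: Pod
metadata:
  name: vsts-agent-with-dind
spec:
  containers:
  - name: dind
    image: docker:dind            # runs its own dockerd inside the pod
    securityContext:
      privileged: true            # dind needs a privileged container
  - name: vsts-agent
    image: my-vsts-agent:latest   # placeholder image containing your build tools
    env:
    - name: DOCKER_HOST           # containers in a pod share a network namespace,
      value: tcp://localhost:2375 # so the agent reaches the sidecar daemon over localhost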

@grenzr

grenzr commented Jun 20, 2018

@jessfraz will the docker client in the vsts-agent container work with a containerd socket to be able to start the docker:dind image? Or would I base the vsts-agent on docker:dind?

@jessfraz
Contributor

what I would do is, in that pod with the vsts-agent, have another container that is running docker:dind; then in the vsts-agent container, if you are shelling out to docker you need the docker CLI in there, and if you are using an API client library you do not... here is an example: https://github.com/genuinetools/contained.af/blob/master/Makefile#L149

@jessfraz
Contributor

don't worry about the containerd socket, you do not need it; you just need a dind container...

@jessfraz
Contributor

the dind container just runs the docker daemon... so you can use any client in another container to communicate with the daemon (containerd is not involved at all).

@grenzr

grenzr commented Jun 20, 2018

@jessfraz ah... I only ask as we're helm-deploying the vsts-agent with /var/run/docker.sock mounted into the vsts-agent (so it's really more like docker on docker than in it): https://github.com/Azure/helm-vsts-agent/blob/master/templates/vsts-agent.yaml#L53-L56
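For reference, the relevant bit of that template boils down to roughly the following (a paraphrased sketch, not the exact chart; the volume name and image are placeholders):

containers:
- name: vsts-agent
  image: my-vsts-agent:latest       # placeholder image
  volumeMounts:
  - name: docker-socket
    mountPath: /var/run/docker.sock # agent talks to the host's daemon through this socket
volumes:
- name: docker-socket
  hostPath:
    path: /var/run/docker.sock      # only exists if the host is actually running dockerd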

@jessfraz
Contributor

well it's docker running a nested docker, that's where the name comes from; mounting the socket is actually known as a "bind mount", not docker-in-docker

@jessfraz
Contributor

and yeah... that really should not have ever been mounting the host socket......

@grenzr

grenzr commented Jun 20, 2018

thanks for the feedback :) I'll fix the vsts-agent image and try again with dind.

@grenzr

grenzr commented Jun 21, 2018

@jessfraz this almost works now. I removed the socket bind mount from the helm vsts-agent deployment, and added a dind container in the same pod as the vsts-agent.

The remaining failure is a TLS handshake against a nuget.org URL when doing a 'dotnet restore' in a dind-based build container. I have a feeling it might be MTU related.

curl -v -I https://api.nuget.org/v3/index.json
*   Trying 152.199.19.160...
* TCP_NODELAY set
* Connected to api.nuget.org (152.199.19.160) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.2 (OUT), TLS header, Certificate Status (22):
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* Unknown SSL protocol error in connection to api.nuget.org:443
* Curl_http_done: called premature == 1
* stopped the pause stream!
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to api.nuget.org:443

A different problem now, I know, but I'd be interested to see if anyone else has a similar problem using dind in this way.

Here's one:
https://stackoverflow.com/questions/38882113/docker-in-docker-ssl-problems?rq=1

@jessfraz
Contributor

jessfraz commented Jun 21, 2018 via email

@jessfraz
Contributor

jessfraz commented Jun 21, 2018 via email

@jessfraz
Contributor

jessfraz commented Jun 21, 2018 via email

@grenzr

grenzr commented Jun 21, 2018

It might not be a docker problem, but I can use the same image to build locally and I don't hit the nuget problem when doing a dotnet restore, so it doesn't feel like anything in the build image needs to change?

I only mention it here as this may be dind related, and others who follow this ticket and are trying the dind approach out might see similar problems:

pypi/warehouse#4069
docker-library/docker#112 (comment)

I've tried the libressl suggestion in the ticket, but it didn't seem to make any difference.

@grenzr

grenzr commented Jun 24, 2018

For what it's worth, I fixed my issue. It was MTU related: I had to change dockerd's --mtu to 1450 in the dind image, which adjusted it for me inside my build container.

@grenzr

grenzr commented Oct 11, 2018

If anyone is interested, I have a PR open against the helm-vsts-agent repo to handle additional containers, volumes and env vars, so that I can neatly add a DIND sidecar to the vsts-agent deployment: Azure/helm-vsts-agent#7

@blsaws

blsaws commented Apr 5, 2019

@grenzr, I am running into a similar issue when trying to use docker-dind (running under k8s) to create ML model images for the Acumos AI project (an LF project). Our Java components that use the docker-dind service to build images are logging build event sequences that are quite unusual, leading up to the error below as reported by docker-dind:

Notify com.github.dockerjava.api.exception.DockerClientException: Could not build image: The command '/bin/sh -c curl -OL https://github.com/google/protobuf/releases/download/v3.4.0/protoc-3.4.0-linux-x86_64.zip' returned a non-zero code: 35

"35" is an SSL error.

This error does not occur when our platform is deployed using a host-based docker-engine, or when docker-dind is used in a bare-metal k8s cluster. Just in Azure.

I need to find a workaround for this, so I am interested in trying out your MTU approach. Can you explain how you "change dockerd's --mtu to 1450 in the dind image"?

@grenzr

grenzr commented Apr 6, 2019

@blsaws yes, it's an annoying problem that took me a little while to work out!
Did you check my example in the PR? Azure/helm-vsts-agent#7

args: ["--mtu=1450"]
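In the pod spec that ends up looking roughly like the snippet below for the dind sidecar (a sketch only; the container name and image are illustrative):

- name: dind
  image: docker:dind
  securityContext:
    privileged: true
  args: ["--mtu=1450"]   # extra args are passed through to dockerd; 1450 matched my pod network MTU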
