How (and Why) to Run Docker Inside Docker

James Walker is a CloudSavvy IT contributor. He is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. He has experience managing complete end-to-end web development workflows with DevOps, CI/CD, Docker, and Kubernetes.

Running Docker inside Docker lets you build images and start containers within an already containerized environment. There are two possible approaches to achieve this depending on whether you want to start child or sibling containers.
Access to Docker from inside a Docker container is most often desirable in the context of CI and CD systems. It’s common to host the agents that run your pipeline inside a Docker container. You’ll end up using a Docker-in-Docker strategy if one of your pipeline stages then builds an image or interacts with containers.
Docker is provided as a self-contained image via the docker:dind tag on Docker Hub. Starting this image gives you a functioning Docker daemon inside your new container. It operates independently of the host daemon that's running the dind container, so docker ps inside the container will give different results from docker ps on your host.
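As a sketch, starting an independent inner daemon looks like this (the container name is illustrative; the image tag is the official one from Docker Hub):

```shell
# Start a nested Docker daemon using the official dind image.
# --privileged is required for the inner daemon to work.
docker run --privileged --name inner-docker -d docker:dind

# Query the inner daemon - its container list is separate
# from the host's.
docker exec -it inner-docker docker ps
```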
Using Docker-in-Docker in this way comes with one big caveat: you must run the dind container in privileged mode, by passing the --privileged flag to docker run. This constraint applies even if you're using rootless containers.
Using privileged mode gives the container complete access to your host system. This is necessary in a Docker-in-Docker scenario so your inner Docker is able to create new containers. It may be an unacceptable security risk in some environments though.
There are other issues with dind too. Certain systems may experience conflicts with Linux Security Modules (LSMs) such as AppArmor and SELinux. This occurs when the inner Docker applies LSM policies that the outer daemon can't anticipate.
Another challenge concerns container filesystems. The outer daemon will run atop your host’s regular filesystem such as ext4. All its containers, including the inner Docker daemon, will sit on a copy-on-write (CoW) filesystem though. This can create incompatibilities if the inner daemon is configured to use a storage driver which can’t be used on top of an existing CoW filesystem.
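If you do hit this, one common workaround (seen in CI setups; the dind image passes extra arguments through to the inner dockerd) is to pin the inner daemon to a storage driver that tolerates running on top of another filesystem, such as the slow but universally compatible vfs driver:

```shell
# Arguments after the image name are forwarded to the inner dockerd.
# vfs works atop any backing filesystem, trading speed and disk
# usage for compatibility.
docker run --privileged -d docker:dind --storage-driver=vfs
```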
The challenges associated with dind are best addressed by avoiding its use altogether. In many scenarios, you can achieve the intended effect by mounting your host's Docker socket into a regular container that includes the docker CLI.
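A minimal sketch of the socket-based approach, using the official docker image:

```shell
# Bind mount the host's Docker socket into a container that has
# the docker CLI. Commands inside talk to the *host* daemon.
docker run -it --rm \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker:latest sh

# Inside the container, this now lists the host's containers:
docker ps
```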
The Docker CLI inside the docker image interacts with the Docker daemon socket it finds at /var/run/docker.sock. Mounting your host’s socket to this path means docker commands run inside the container will execute against your existing Docker daemon.
This means containers created by the inner Docker will reside on your host system, alongside the Docker container itself. All containers will exist as siblings, even if it feels like the nested Docker is a child of the parent. Running docker ps will produce the same results, whether it’s run on the host or inside your container.
This technique mitigates the implementation challenges of dind. It also removes the need to use privileged mode, although mounting the Docker socket is itself a potential security concern. Anything with access to the socket can send instructions to the Docker daemon, providing the ability to start containers on your host, pull images, or delete data.
Docker-in-Docker via dind has historically been widely used in CI environments. It means the “inner” containers have a layer of isolation from the host. A single CI runner container supports every pipeline container without polluting the host’s Docker daemon.
While it often works, this is fraught with side effects and not the intended use case for dind. It was added to ease the development of Docker itself, not provide end user support for nested Docker installations.
According to Jérôme Petazzoni, the creator of the dind implementation, adopting the socket-based approach should be your preferred solution. Bind mounting your host’s daemon socket is safer, more flexible, and just as feature-complete as starting a dind container.
If your use case means you absolutely require dind, there is a safer way to deploy it. The Sysbox project is a dedicated container runtime that can nest other runtimes without using privileged mode. Sysbox containers behave like lightweight VMs, so they can run software that would normally need a bare-metal or virtual machine, including Docker and Kubernetes, without any special configuration.
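With Sysbox installed on the host, running dind without privileged mode is a matter of selecting the runtime (the sysbox-runc runtime name is per the Sysbox documentation):

```shell
# No --privileged flag needed; sysbox-runc provides the isolation
# the inner daemon requires.
docker run --runtime=sysbox-runc --name sysbox-dind -d docker:dind
```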
Running Docker within Docker is a relatively common requirement. You’re most likely to see it while setting up CI servers which need to support container image builds from within user-created pipelines.
Using docker:dind gives you an independent Docker daemon running inside its own container. It effectively creates child containers that aren't directly visible from the host. While it seems to offer strong isolation, dind actually harbors many edge-case issues and security concerns, stemming from the way Docker interacts with the host operating system.
Mounting your host’s Docker socket into a container which includes the docker binary is a simpler and more predictable alternative. This lets the nested Docker process start containers that become its own siblings. No further settings are needed when you use the socket-based approach.