Docker
Docker is a utility to pack, ship and run any application as a lightweight container.
Contents
- 1 Installation
- 2 Usage
- 3 Configuration
- 4 Images
- 5 Remove Docker and images
- 6 Run GPU accelerated Docker containers with NVIDIA GPUs
- 7 Useful tips
- 8 Troubleshooting
- 8.1 docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
- 8.2 Default number of allowed processes/threads too low
- 8.3 Error initializing graphdriver: devmapper
- 8.4 Failed to create some/path/to/file: No space left on device
- 8.5 Invalid cross-device link in kernel 4.19.1
- 8.6 CPUACCT missing in docker with Linux-ck
- 8.7 Docker-machine fails to create virtual machines using the virtualbox driver
- 8.8 Starting Docker breaks KVM bridged networking
- 9 See also
Installation
Install the docker package or, for the development version, the docker-gitAUR package. Next start and enable docker.service and verify operation:
# docker info
Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks.
Next, verify that you can run containers. The following command downloads the latest Arch Linux image and uses it to run a Hello World program within a container:
# docker run -it --rm archlinux bash -c "echo hello world"
If you want to be able to run the docker CLI command as a non-root user, add your user to the docker user group. Note that members of the docker group are effectively root, since they can use the docker run --privileged command to start containers with root privileges. More information here and here.
Usage
Docker consists of multiple parts:
- The Docker daemon (sometimes also called the Docker Engine), which is a process that runs as docker.service. It serves the Docker API and manages Docker containers.
- The docker CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
- Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.
Typically, users use Docker by running docker CLI commands, which request the Docker daemon to perform actions that result in management of Docker containers. Understanding the relationship between the client (docker), server (docker.service) and containers is important for successfully administering Docker.
Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.
Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the docker CLI command. See the Docker API developer documentation for more information.
See the Docker Getting Started guide for more usage documentation.
Configuration
The Docker daemon can be configured either through a configuration file at /etc/docker/daemon.json or by adding command line flags to the docker.service systemd unit. According to the Docker official documentation, the configuration file approach is preferred. If you wish to use the command line flags instead, use systemd drop-in files to override the ExecStart directive in docker.service.
For more information about options in daemon.json, see the dockerd documentation.
Storage driver
The storage driver controls how images and containers are stored and managed on your Docker host. The default overlay2 driver has good performance and is a good choice for all modern Linux kernels and filesystems. There are a few legacy drivers such as devicemapper and aufs which were intended for compatibility with older Linux kernels, but these have no advantages over overlay2 on Arch Linux.
Users of btrfs or ZFS may use the btrfs or zfs drivers, each of which takes advantage of the unique features of these filesystems. See the btrfs driver and zfs driver documentation for more information and step-by-step instructions.
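If you want to select the storage driver explicitly, it can be set in the daemon configuration file. A minimal sketch (redundant on current systems, since overlay2 is already the default):
/etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}
Restart docker.service to apply changes.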
Daemon socket
By default, the Docker daemon serves the Docker API using a Unix socket at /var/run/docker.sock. This is an appropriate option for most use cases.
It is possible to configure the Daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers.
Note that the default docker.service file sets the -H flag, and Docker will not start if an option is present in both the flags and the /etc/docker/daemon.json file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following, which adds a TCP socket on port 4243:
/etc/systemd/system/docker.service.d/execstart.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:4243
Reload the systemd daemon and restart docker.service to apply changes.
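For example:
# systemctl daemon-reload
# systemctl restart docker.service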
HTTP Proxies
There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.
Docker daemon proxy configuration
See Docker documentation on configuring a systemd drop-in unit to configure HTTP proxies.
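As a sketch of what such a drop-in can look like (proxy.example.com:3128 is a placeholder for your proxy address):
/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
Reload the systemd daemon and restart docker.service to apply changes.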
Docker container proxy configuration
See Docker documentation on configuring proxies for information on how to automatically configure proxies for all containers created using the docker CLI.
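For example, per-user proxy defaults can be placed in ~/.docker/config.json; the following is a sketch with placeholder addresses and applies to containers created afterwards:
~/.docker/config.json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}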
Configuring DNS
See Docker's DNS documentation for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.
Most DNS resolvers hosted on 127.0.0.0/8 are not supported due to conflicts between the container and host network namespaces. Such resolvers are removed from the container's /etc/resolv.conf. If this would result in an empty /etc/resolv.conf, Google DNS is used instead.
Additionally, a special case is handled if 127.0.0.53 is the only configured nameserver. In this case, Docker assumes the resolver is systemd-resolved and uses the upstream DNS resolvers from /run/systemd/resolve/resolv.conf.
If you are using a service such as dnsmasq to provide a local resolver, consider adding a virtual interface with a link-local IP address in the 169.254.0.0/16 block for dnsmasq to bind to instead of 127.0.0.1 to avoid the network namespace conflict.
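If you simply want all containers to use a specific set of DNS servers, these can also be set in the daemon configuration. A minimal sketch (the addresses are placeholders for your preferred resolvers):
/etc/docker/daemon.json
{
  "dns": ["8.8.8.8", "8.8.4.4"]
}
Restart docker.service to apply changes.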
Images location
By default, Docker images are located at /var/lib/docker. They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to /mnt/docker.
First, stop docker.service, which will also stop all currently running containers and unmount any running images. You may then move the images from /var/lib/docker to the target destination, e.g. cp -r /var/lib/docker /mnt/docker.
Configure data-root in /etc/docker/daemon.json:
/etc/docker/daemon.json
{ "data-root": "/mnt/docker" }
Restart docker.service to apply changes.
Insecure registries
If you decide to use a self-signed certificate for your private registries, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at myregistry.example.com:8443, configure insecure-registries in the /etc/docker/daemon.json file:
/etc/docker/daemon.json
{ "insecure-registries": [ "my.registry.example.com:8443" ] }
Restart docker.service to apply changes.
User namespace remapping
By default, containers run within the host user namespace (user_namespaces(7)) and run as the user defined in the USER directive in the Dockerfile used to build the container's image. This allows the process within the container to access configured resources on the host according to Users and groups#Permissions and ownership. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was published and patched in February 2019.)
The impact of such a vulnerability can be reduced by enabling user namespace remapping. This runs each container inside of an isolated user namespace and maps any UID and GID inside that user namespace to a different UID and GID within the host user namespace. The UIDs and GIDs in the host user namespace can be given little or no permissions.
Note that there are some limitations when enabling this feature. Notably, Kubernetes currently does not work with this feature.
Configure userns-remap in /etc/docker/daemon.json. default is a special value that will automatically create a user and group named dockremap for use with remapping.
/etc/docker/daemon.json
{ "userns-remap": "default" }
Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 4096 UIDs and GIDs starting at 165536 to the dockremap user and group.
/etc/subuid
dockremap:165536:4096
/etc/subgid
dockremap:165536:4096
Restart docker.service to apply changes.
After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled on specific containers by passing the --userns=host flag to the docker command. See [1] for details.
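For example, to run a single container in the host user namespace despite the remapping (a sketch reusing the archlinux image from above):
# docker run --userns=host -it --rm archlinux bash -c "id"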
Images
Arch Linux
The following command pulls the archlinux x86_64 image. This is a stripped down version of Arch core without network, etc.
# docker pull archlinux
See also README.md.
For a full Arch base, clone the repo from above and build your own image.
$ git clone https://github.com/archlinux/archlinux-docker.git
Make sure that the devtools package is installed.
Edit the packages file so it only contains 'base'. Then run:
# make docker-image
Alpine Linux
Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:
# docker pull alpine
Alpine Linux uses the musl libc implementation instead of the glibc libc implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented here.
Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [2], [3] and [4] for examples.
CentOS
The following command pulls the latest centos image:
# docker pull centos
See the Docker Hub page for a full list of available tags for each CentOS release.
Debian
The following command pulls the latest debian image:
# docker pull debian
See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.
Distroless
Google maintains distroless images for several popular programming languages such as Java, Python, Go, Node.js, .NET Core and Rust. These images contain only the programming language runtime without any OS related files, resulting in very small images for packaging software.
See the GitHub README for a list of images and instructions on their use.
Remove Docker and images
To remove Docker entirely, follow the steps below:
Check for running containers:
# docker ps
List all containers on the host (including stopped ones) for deletion:
# docker ps -a
Stop a running container:
# docker stop <CONTAINER ID>
Kill still-running containers:
# docker kill <CONTAINER ID>
Delete all containers listed by ID:
# docker rm <CONTAINER ID>
List all Docker images:
# docker images
Delete all images by ID:
# docker rmi <IMAGE ID>
Delete all images, containers, volumes, and networks that are not associated with a container (dangling):
# docker system prune
To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:
# docker system prune -a
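To stop and remove all remaining containers and images in one pass, a sketch using command substitution (only run this if you really want everything gone):
# docker stop $(docker ps -q)
# docker rm $(docker ps -aq)
# docker rmi $(docker images -q)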
Delete all Docker data (purge directory):
# rm -R /var/lib/docker
Run GPU accelerated Docker containers with NVIDIA GPUs
With NVIDIA Container Toolkit (recommended)
Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.
Install the nvidia-container-toolkitAUR package. Next, restart docker. You can now run containers that make use of NVIDIA GPUs using the --gpus option:
# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
Specify how many GPUs are enabled inside a container:
# docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi
Specify which GPUs to use:
# docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi
or
# docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi
Specify a capability (graphics, compute, ...) for the container (though this is rarely if ever used this way):
# docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi
For more information see README.md and Wiki.
With NVIDIA Container Runtime
Install the nvidia-container-runtimeAUR package. Next, register the NVIDIA runtime by editing /etc/docker/daemon.json
/etc/docker/daemon.json
{ "runtimes": { "nvidia": { "path": "/usr/bin/nvidia-container-runtime", "runtimeArgs": [] } } }
and then restart docker.
The runtime can also be registered via a command line option to dockerd:
# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
Afterwards GPU accelerated containers can be started with
# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
or (requires Docker version 19.03 or higher)
# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
See also README.md.
With nvidia-docker (deprecated)
nvidia-docker is a wrapper around NVIDIA Container Runtime which registers the NVIDIA runtime by default and provides the nvidia-docker command.
To use nvidia-docker, install the nvidia-dockerAUR package and then restart docker. Containers with NVIDIA GPU support can then be run using any of the following methods:
# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
# nvidia-docker run nvidia/cuda:9.0-base nvidia-smi
or (requires Docker version 19.03 or higher)
# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
Useful tips
To grab the IP address of a running container:
$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id>
172.17.0.37
For each running container, the name and corresponding IP address can be listed for use in /etc/hosts:
#!/usr/bin/env sh
for ID in $(docker ps -q | awk '{print $1}'); do
    IP=$(docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" "$ID")
    NAME=$(docker ps | grep "$ID" | awk '{print $NF}')
    printf "%s %s\n" "$IP" "$NAME"
done
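Saved as e.g. docker-hosts.sh (a hypothetical file name), the script's output can then be appended to /etc/hosts:
# sh docker-hosts.sh >> /etc/hosts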
Troubleshooting
docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd
Docker attempts to enable IP forwarding globally, but by default systemd-networkd overrides the global sysctl setting for each defined network profile. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
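A minimal sketch of such a profile (the file name and the interface name enp1s0 are placeholders for your own setup):
/etc/systemd/network/20-wired.network
[Match]
Name=enp1s0

[Network]
DHCP=yes
IPForward=yes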
- You may need to restart docker.service each time you restart systemd-networkd.service or iptables.service.
- Also be aware that nftables may block docker connections by default. Use nft list ruleset to check for blocking rules. nft flush chain inet filter forward removes all forwarding rules temporarily. Edit /etc/nftables.conf to make changes permanent. Remember to restart nftables.service to reload rules from the config file.
Default number of allowed processes/threads too low
If you run into error messages like
# e.g. Java
java.lang.OutOfMemoryError: unable to create new native thread

# e.g. C, bash, ...
fork failed: Resource temporarily unavailable
then you might need to adjust the number of processes allowed by systemd. The default is 500 (see system.conf), which is pretty small for running several docker containers. Edit docker.service with the following snippet:
# systemctl edit docker.service
[Service]
TasksMax=infinity
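After reloading systemd and restarting docker.service, you can verify that the new limit is in effect (a sketch; the relevant unit property is TasksMax):
# systemctl show -p TasksMax docker.service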
Error initializing graphdriver: devmapper
If systemctl fails to start docker and provides an error:
Error starting daemon: error initializing graphdriver: devmapper: Device docker-8:2-915035-pool is not a thin pool
Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.
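For example (a sketch; the backup path is a placeholder, and removing the contents of /var/lib/docker deletes all existing images and containers):
# systemctl stop docker.service
# cp -r /var/lib/docker /var/lib/docker.bak
# rm -rf /var/lib/docker/*
# systemctl start docker.service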
Failed to create some/path/to/file: No space left on device
If you are getting an error message like this:
ERROR: Failed to create some/path/to/file: No space left on device
when building or running a Docker image, even though you do have enough disk space available, make sure:
- Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
- If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing the overlay2 Docker storage driver.
- XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quota for the root file system, the mount option must be passed to the initramfs as the kernel parameter rootflags=. Subsequently, it should not be listed among mount options in /etc/fstab for the root (/) filesystem.
Invalid cross-device link in kernel 4.19.1
If commands like dpkg fail to run in docker, e.g.:
dpkg: error: error creating new backup file '/var/lib/dpkg/status-old': Invalid cross-device link
Either add the overlay.metacopy=N kernel parameter or downgrade to kernel 4.18.x until this issue is resolved. More info in the Arch forum.
CPUACCT missing in docker with Linux-ck
In newer versions of Linux-ck (some have experienced this with 4.19, and with 4.20 it seems general), a change to MuQSS disables the CONFIG_CGROUP_CPUACCT kernel option, which causes some docker operations (run or build) to produce the following error:
$ docker run --rm hello-world
docker: Error response from daemon: unable to find "cpuacct" in controller set: unknown.
This error does not seem to affect the docker daemon, just containers. Read more on Linux-ck#CPUACCT missing in docker[broken link: invalid section].
Docker-machine fails to create virtual machines using the virtualbox driver
If docker-machine fails to create virtual machines using the virtualbox driver with the following error:
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
Simply reload the VirtualBox modules via the CLI with vboxreload.
Starting Docker breaks KVM bridged networking
This is a known issue. You can use the following workaround:
/etc/docker/daemon.json
{ "iptables": false }