Docker
Docker is [[Article description::a container virtualization environment]] which can establish development or runtime environments without modifying the environment of the base operating system. It can deploy container instances that provide thin virtualization using the host kernel, which makes them faster and lighter than fully hardware-virtualized machines.
Because the kernel is shared, a container that triggers a kernel panic will also bring down the host operating system.
Installation
Kernel
Kernel version 3.10 or greater is required in order to run Docker.
If the kernel has not been configured properly before merging the app-emulation/docker package, emerge will print a list of missing kernel options. These kernel features must be enabled manually.
Press the / key while in the ncurses-based menuconfig to search for the name of a configuration option.
For the most up-to-date values, check the contents of the CONFIG_CHECK variable in the /usr/portage/app-emulation/docker/docker-9999.ebuild file.
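The relevant part of the ebuild can be inspected directly, for example (the number of context lines here is only a rough guess):
user $
grep -A 40 'CONFIG_CHECK' /usr/portage/app-emulation/docker/docker-9999.ebuild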
A graphical representation would look something like this:
Kernel configuration varies with the kernel version, the Docker version, and the enabled USE flags. It is recommended to read the messages printed for the app-emulation/docker package when emerging Docker, and to recompile the kernel based on which options are reported as not set when they should be.
General setup  --->
    [*] POSIX Message Queues
    -*- Control Group support  --->
        [*] Memory controller
        [*]   Swap controller
        [*]     Swap controller enabled by default
        [*] IO controller
        [ ]   IO controller debugging
        [*] CPU controller  --->
            [*] Group scheduling for SCHED_OTHER
            [*]   CPU bandwidth provisioning for FAIR_GROUP_SCHED
            [*] Group scheduling for SCHED_RR/FIFO
        [*] PIDs controller
        [*] Freezer controller
        [*] HugeTLB controller
        [*] Cpuset controller
        [*]   Include legacy /proc/<pid>/cpuset file
        [*] Device controller
        [*] Simple CPU accounting controller
        [*] Perf controller
        [ ] Example controller
    -*- Namespaces support
        [*] UTS namespace
        -*- IPC namespace
        [*] User namespace
        [*] PID Namespaces
        -*- Network namespace
-*- Enable the block layer  --->
    [*] Block layer bio throttling support
    -*- IO Schedulers  --->
        [*] CFQ IO scheduler
        [*]   CFQ Group Scheduling support
[*] Networking support  --->
    Networking options  --->
        [*] Network packet filtering framework (Netfilter)  --->
            [*] Advanced netfilter configuration
            [*]   Bridged IP/ARP packets filtering
            Core Netfilter Configuration  --->
                <*> Netfilter connection tracking support
                    *** Xtables matches ***
                <*> "addrtype" address type match support
                <*> "conntrack" connection tracking match support
                <M> "ipvs" match support
            <M> IP virtual server support  --->
                    *** IPVS transport protocol load balancing support ***
                [*] TCP load balancing support
                [*] UDP load balancing support
                    *** IPVS scheduler ***
                <M> round-robin scheduling
                [*] Netfilter connection tracking
            IP: Netfilter Configuration  --->
                <*> IPv4 connection tracking support (required for NAT)
                <*> IP tables support (required for filtering/masq/NAT)
                <*>   Packet filtering
                <*>   IPv4 NAT
                <*>     MASQUERADE target support
                <*>   iptables NAT support
                <*>   MASQUERADE target support
                <*>   NETMAP target support
                <*>   REDIRECT target support
        <*> 802.1d Ethernet Bridging
        [*] QoS and/or fair queueing  --->
            <*> Control Group Classifier
        [*] L3 Master device support
        [*] Network priority cgroup
        -*- Network classid cgroup
Device Drivers  --->
    [*] Multiple devices driver support (RAID and LVM)  --->
        <*> Device mapper support
        <*>   Thin provisioning target
    [*] Network device support  --->
        [*] Network core driver support
        <M> Dummy net driver support
        <M> MAC-VLAN support
        <M> IP-VLAN support
        <M> Virtual eXtensible Local Area Network (VXLAN)
        <*> Virtual ethernet pair device
    Character devices  --->
        -*- Enable TTY
        -*- Unix98 PTY support
        [*]   Support multiple instances of devpts (option appears if you are using systemd)
File systems  --->
    <*> Overlay filesystem support
    Pseudo filesystems  --->
        [*] HugeTLB file system support
Security options  --->
    [*] Enable access key retention support
    [*]   Enable register of persistent per-UID keyrings
    <M>   ENCRYPTED KEYS
    [*] Diffie-Hellman operations on retained keys
After exiting the kernel configuration, rebuild the kernel. If the kernel rebuild was also a kernel upgrade, be sure to update the bootloader's menu configuration, then reboot the system into the newly compiled kernel.
Compatibility check
Docker ships a script that checks the kernel configuration for compatibility:
user $
/usr/share/docker/contrib/check-config.sh
Emerge
Install app-emulation/docker:
root #
emerge --ask --verbose app-emulation/docker
PaX kernel
When running a PaX kernel (like sys-kernel/hardened-sources), memory protection on containerd needs to be disabled.
Tools in the sys-apps/paxctl package are necessary for this operation. See Hardened/PaX Quickstart for an introduction.
root #
/sbin/paxctl -m /usr/bin/containerd
For the hello-world example, set this flag for containerd-shim and runc as well:
root #
/sbin/paxctl -m /usr/bin/containerd-shim
root #
/sbin/paxctl -m /usr/bin/runc
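The resulting flags can be verified with paxctl's view option:
root #
paxctl -v /usr/bin/containerd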
If an issue with denied chmods in chroots occurs, a more recent version of Docker (>=1.12) is needed. Accept the ~amd64 keyword for Docker and for the dependencies emerge subsequently lists, then run emerge app-emulation/docker again.
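For example, the keywords could be accepted in a dedicated file (the exact list of dependencies emerge asks to keyword may differ):
/etc/portage/package.accept_keywords/docker
<syntaxhighlight lang="bash">app-emulation/docker ~amd64
app-emulation/containerd ~amd64
app-emulation/runc ~amd64</syntaxhighlight>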
Configuration
Service
OpenRC
After Docker has been successfully installed, add it to the system's default runlevel, then tell OpenRC to start the daemon:
root #
rc-update add docker default
root #
rc-service docker start
If any additional options need to be passed to the Docker daemon, edit the /etc/conf.d/docker file. See the upstream documentation for the various options that can be passed through the DOCKER_OPTS variable.
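For example, to lower the daemon's log verbosity (the option shown here is only an illustration; any dockerd option can go into DOCKER_OPTS):
/etc/conf.d/docker
<syntaxhighlight lang="bash">DOCKER_OPTS="--log-level warn"</syntaxhighlight>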
systemd
To have Docker start on boot, enable it:
root #
systemctl enable docker.service
To start it now:
root #
systemctl start docker.service
If any additional options need to be passed to the Docker daemon, create the /etc/docker/daemon.json file. See the upstream documentation for the various options that can be placed into this configuration file.
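For example, to lower the daemon's log verbosity (the value shown is only an illustration):
/etc/docker/daemon.json
<syntaxhighlight lang="json">{ "log-level": "warn" }</syntaxhighlight>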
Permissions
Add relevant users to the docker group:
root #
usermod -aG docker <username>
Allowing a user to talk to the Docker daemon is equivalent to giving that user full root access to the host.
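Group changes only apply to new login sessions. After logging in again, the user can verify membership and daemon access:
user $
id -nG | grep -w docker
user $
docker info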
Storage driver
By default on Gentoo, Docker will use the device-mapper storage driver. View Docker's settings in detail with the info subcommand:
user $
docker info
To change the storage driver, first verify that the host machine's kernel has support for the desired filesystem. The btrfs filesystem will be used in this example:
user $
grep btrfs /proc/filesystems
Be aware that the root of the Docker engine (/var/lib/docker/ by default) must reside on a btrfs filesystem for this driver. If the btrfs storage pool is located under /mnt or /srv, be sure to change the root (called the 'graph' in Docker speak) of the engine accordingly.
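One way to keep the default engine root is to mount a dedicated btrfs volume at /var/lib/docker, for example via fstab (the device name below is only a placeholder):
/etc/fstab
<syntaxhighlight lang="bash"># placeholder device; replace /dev/sdb1 with the actual btrfs volume
/dev/sdb1    /var/lib/docker    btrfs    defaults    0 0</syntaxhighlight>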
OpenRC
OpenRC users will need to adjust the DOCKER_OPTS variable in the service configuration file located in /etc/conf.d. The example below changes both the storage driver and the Docker engine root:
/etc/conf.d/docker
<syntaxhighlight lang="bash">DOCKER_OPTS="--storage-driver btrfs --data-root /srv/var/lib/docker"</syntaxhighlight>
Start or restart the docker service for the changes to take effect, then validate the changes:
root #
docker info
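On reasonably recent Docker versions the active storage driver can also be printed on its own with a Go template; it should report btrfs once the change has taken effect:
root #
docker info --format '{{.Driver}}'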
systemd
systemd users will need to create a /etc/docker/daemon.json file in order to change the storage driver for the docker service. For example, to use the btrfs driver:
/etc/docker/daemon.json
<syntaxhighlight lang="json">{ "storage-driver": "btrfs" }</syntaxhighlight>
(Re)start the service for the changes to take effect:
root #
systemctl restart docker
Usage
Testing
In order to test the installation, run the following command:
user $
docker run --rm hello-world
Hello from Docker.
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker Hub account:
 https://hub.docker.com

For more examples and ideas, visit:
 https://docs.docker.com/userguide/
This first downloads the image named hello-world from the Docker Hub (if it has not been downloaded locally yet), then runs it inside new namespaces. Its purpose is simply to display some text through a container.
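The pulled image stays available locally and can be listed afterwards:
user $
docker images hello-world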
Building from a Dockerfile
Create a new Dockerfile in an empty directory with the following content:
Dockerfile
FROM php:5.6-apache
Run:
user $
docker build -t my-php-app .
user $
docker run -it --rm --name my-running-app my-php-app
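To reach the Apache server inside the container from the host, a port can also be published; the host port 8080 below is an arbitrary choice:
user $
docker run -it --rm -p 8080:80 --name my-running-app my-php-app
The site is then reachable at http://localhost:8080.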
Own images
There are two different ideas of how a container should be built.
- The minimal approach: According to the container philosophy, a container should only contain what is needed to serve one process; ideally it consists of a single static binary.
- The VM approach: A container can be treated like a full system virtualization environment. In this case the container includes a whole operating system.
Build environment for the image
The image can be created out of a live system or, preferably, out of a dedicated build environment.
- To create a build environment for the image, follow the Cross_build_environment guide. There is no need to emerge a full @system; the build essentials are enough.
- The toolchain tuple could look like x86_64-docker-linux-gnu.
- The build essentials can be built like this:
root #
x86_64-docker-linux-gnu-emerge -uva1 --keep-going $(egrep '^[a-z]+' /usr/portage/profiles/default/linux/packages.build) portage openrc util-linux netifrc
The minimal approach: Statically linked binaries using Crossdev
There are some caveats with this approach; keep the hints for statically linked binaries in mind.
To build an nginx-image:
- Chroot into the build environment (e.g. chroot-x86_64-docker).
- Build the desired package statically linked:
root #
NGINX_MODULES_HTTP="gzip" CFLAGS="$(emerge --info|grep ^CFLAGS|grep -oP '(?<=").*(?=")') -static" CXXFLAGS=$CFLAGS LDFLAGS="$(emerge --info|grep LDFLAGS|grep -oP '(?<=").*(?=")') -static" PKGDIR=/tmp/ emerge -va1 --buildpkgonly nginx:mainline
- Extract the binary package to a tmp dir (e.g. mkdir /tmp/nginx && cd /tmp/nginx && tar xjvf /tmp/www-servers/nginx-*.tbz2).
- Change the nginx configuration. At least add daemon off; and swap listen 127.0.0.1 for listen 0.0.0.0.
- Add etc/passwd, etc/resolv.conf, etc/nsswitch.conf and an appropriate etc/ssl directory. Make sure the etc/nsswitch.conf has "files" instead of "compat" and the etc/passwd file has an "nginx" user entry.
- Create the docker image out of the current directory:
user $
tar --numeric-owner -cj --to-stdout . |docker import - nginx-image
- Spawn a container and start nginx:
user $
docker run -p 80:80 -p 443:443 --name nginx-test -ti --rm nginx-image nginx
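Assuming the container was started as above and curl is available on the host, the result can be checked with a simple request:
user $
curl -I http://localhost/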
Alternative minimal approach: Dynamically linked binaries using Kubler
Kubler is a Gentoo-based image meta builder. It helps automate the build process for creating Gentoo-based containers and is especially helpful if Crossdev has not been used before. It allows fine-grained configuration of the build process, but also comes with a list of predefined containers that will be built on the system against the current Portage tree. The script extracts the dynamic libraries required by the application and copies them into the container. The containers are linked against a static busybox image that allows basic shell interaction, but the only way to update them is to rebuild them with the kubler script.
The VM-like approach
- Create the image out of the full environment:
user $
cd /usr/x86_64-docker-linux-gnu/ && tar --numeric-owner -cj --to-stdout --exclude=./{proc,sys,tmp/portage} . | docker import - gentoo-image
- Spawn a new gentoo container and start a shell:
user $
docker run -v /usr/portage:/usr/portage --name gentoo-test -ti gentoo-image /bin/bash
- This image can be used as a base image. To build an nginx image, for example, run emerge nginx inside the container and commit it back as a new image afterwards:
user $
docker commit --message "nginx-image" gentoo-test
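docker commit prints the ID of the newly created image; optionally give it a name so it is easier to reference later (gentoo-nginx is just an example name):
user $
docker tag <image ID> gentoo-nginx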
Troubleshooting
Docker service crashes/fails to start (OpenRC)
After adding --storage-driver btrfs to DOCKER_OPTS and restarting the Docker service, Docker may crash; this can be checked with rc-status.
If this is the case, try adding the btrfs USE flag to the Docker package and updating the package.
root #
touch /etc/portage/package.use/docker
root #
nano /etc/portage/package.use/docker
/etc/portage/package.use/docker
<syntaxhighlight lang="bash">app-emulation/docker btrfs device-mapper</syntaxhighlight>
Install Docker with the new USE flags
root #
emerge --update --deep --newuse app-emulation/docker
Restart the Docker service:
root #
rc-service docker restart
Docker service fails to start (systemd)
- Some users have issues starting docker.service because of a device-mapper error. This can be solved by loading a different storage driver, e.g. the "overlay" graph driver instead of the "device-mapper" graph driver.
- The "overlay" graph driver requires "Overlay filesystem support" in the kernel configuration:
File systems  --->
    <*> Overlay filesystem support
- Adding the following to /etc/portage/package.use/docker and then re-emerging Docker will solve this issue:
/etc/portage/package.use/docker
<syntaxhighlight lang="bash">app-emulation/docker overlay -device-mapper</syntaxhighlight>
- If an error is received saying Error starting daemon: Error initializing network controller: list bridge addresses failed: no available network, the docker0 network bridge may be missing. See the following Docker issue, which provides a bash script to create the docker0 network bridge: https://github.com/docker/docker/issues/31546
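A minimal manual workaround, roughly along the lines of that script, is to create the bridge by hand with iproute2 (172.17.0.1/16 is Docker's default bridge subnet; adjust it if a different range is configured):
root #
ip link add name docker0 type bridge
root #
ip addr add 172.17.0.1/16 dev docker0
root #
ip link set docker0 up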
Docker service runs but fails to start container (systemd)
If using systemd-232 or higher and an error related to cgroups is received:
user $
docker run hello-world
docker: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:54: mounting \\\\\\\"cgroup\\\\\\\" to rootfs \\\\\\\"/var/lib/docker/overlay2/523ed887f681de6ea3838aa5b26c57e88547d65bdd883a6d3538729f19a34501/merged\\\\\\\" at \\\\\\\"/sys/fs/cgroup\\\\\\\" caused \\\\\\\"...
Add the following line to the kernel boot parameters:
systemd.legacy_systemd_cgroup_controller=yes
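For systems booting with GRUB, the parameter can be appended to the kernel command line in /etc/default/grub, after which the configuration must be regenerated:
/etc/default/grub
<syntaxhighlight lang="bash"># keep any existing parameters and append the new one
GRUB_CMDLINE_LINUX_DEFAULT="... systemd.legacy_systemd_cgroup_controller=yes"</syntaxhighlight>
root #
grub-mkconfig -o /boot/grub/grub.cfg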
- If you are using systemd-232 or higher and receive this error:
user $
docker run hello-world
applying cgroup configuration for process caused \"open /sys/fs/cgroup/docker/cpuset.cpus.effective: no such file or directory
- You will need to add the following line to your kernel boot parameters:
systemd.unified_cgroup_hierarchy=0
- If you are using systemd and receive this error:
user $
docker run hello-world
cgroup mountpoint does not exist
- You will need to run the following commands as root:
root #
mkdir /sys/fs/cgroup/systemd
root #
mount -t cgroup -o none,name=systemd cgroup /sys/fs/cgroup/systemd
This is not ideal, as these commands will have to be run after each reboot, but it works.
Docker service fails because cgroup device not mounted (systemd)
By default systemd uses a hybrid cgroup hierarchy, combining cgroup (v1) and cgroup2 devices. Docker still needs the cgroup (v1) devices.
Activate the cgroup-hybrid USE flag for systemd:
/etc/portage/package.use/systemd
<syntaxhighlight lang="bash">sys-apps/systemd cgroup-hybrid</syntaxhighlight>
Install systemd with the new USE flags
root #
emerge --ask --oneshot sys-apps/systemd
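After a reboot the hybrid layout can be verified; both cgroup and cgroup2 mounts should appear:
user $
mount | grep cgroup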
systemd-networkd
If systemd-networkd is used for network management, additional options are needed for IP forwarding and/or IP masquerading.
/etc/systemd/network/50-static.network
<syntaxhighlight lang="ini">[Match] Name=enp6s0 [Network] DHCP=yes IPForward=true IPMasquarade=true</syntaxhighlight>
These options are used instead of the sysctl settings for IP forwarding and/or masquerading.
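After editing the .network file, restart systemd-networkd for the change to take effect:
root #
systemctl restart systemd-networkd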
- In case Docker containers are shutting down with errors from systemd-udevd complaining that a persistent MAC address cannot be assigned to the virtual interface(s), see https://github.com/systemd/systemd/issues/3374#issuecomment-339258483 and create the following link file:
/etc/systemd/network/99-default.link
<syntaxhighlight lang="ini">[Link] NamePolicy=kernel database onboard slot path MACAddressPolicy=none</syntaxhighlight>