Linux Containers
Linux Containers (LXC) is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (the LXC host). It does not provide a virtual machine, but rather a virtual environment that has its own CPU, memory, block I/O and network space, together with resource control mechanisms. This is provided by the namespaces and cgroups features of the Linux kernel on the LXC host. It is similar to a chroot, but offers much more isolation.
Alternatives to LXC include systemd-nspawn and Docker.
Setup
Required software
Install the lxc and arch-install-scripts packages.
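For example, with pacman:
# pacman -S lxc arch-install-scripts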
Verify that the running kernel is properly configured to run a container:
$ lxc-checkconfig
Due to security concerns, the default Arch kernel does not ship with the ability to run containers as an unprivileged user; therefore, it is normal to see a missing status for "User namespaces" when running the check. See FS#36969 for this feature request.
Host network configuration
LXC supports different virtual network types and devices (see lxc.container.conf(5)). A bridge device on the host is required for most types of virtual networking. The bridge examples provided below are illustrative, not exhaustive; users may use other programs to achieve the same results. A wired and a wireless example are provided below, but other setups are possible. Users are referred to the Network bridge article for additional options.
Example for a wired network
This example uses netctl: a bridge template can be found in /etc/netctl/examples, which needs to be edited to match the host's network hardware and the IP ranges of the host network. Below are two example bridge configs, one using a DHCP setup and the other a static IP setup.
/etc/netctl/lxcbridge
Description="LXC bridge" Interface=br0 Connection=bridge BindsToInterfaces=('eno1') IP=dhcp SkipForwardingDelay=yes
/etc/netctl/lxcbridge
Description="LXC bridge" Interface=br0 Connection=bridge BindsToInterfaces=('eno1') IP=static Address=192.168.0.2/24 Gateway='192.168.0.1' DNS=('192.168.0.1')
Before attempting to start the bridge, disable the running network interface on the host as the bridge will replace it; this depends on how the host network is configured, see Network configuration.
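For example, an interface that is not otherwise managed can be taken down manually (using the eno1 interface from the bridge configs above):
# ip link set eno1 down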
For users already using netctl to manage an adapter, simply switch-to it:
# netctl switch-to lxcbridge
# netctl enable lxcbridge
Verify network connectivity on the host before continuing. This can be accomplished with a simple ping:
$ ping -c 1 www.google.com
Example for a wireless network
Wireless networks cannot be bridged directly; a different method must be used in this case. First, a bridge must be created similarly to the previous examples, but it should not have any interface bound to it (other than the virtual interface of the container itself, which is added automatically). Assign a static IP address to the bridge, but do not assign a gateway.
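A minimal netctl sketch of such a bridge, modeled on the wired examples above (the address is only an assumption; adapt it to your network, and note the absence of a Gateway line):
/etc/netctl/lxcbridge
Description="LXC bridge"
Interface=br0
Connection=bridge
IP=static
Address=192.168.0.2/24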
The host must be configured to perform NAT using iptables:
# iptables -t nat -A POSTROUTING -o wlp3s0 -j MASQUERADE
where wlp3s0 is the name of the wireless interface. Enable packet forwarding, which is disabled by default.
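For example, forwarding can be enabled at runtime with sysctl, and made persistent with a drop-in file:
# sysctl net.ipv4.ip_forward=1
/etc/sysctl.d/40-ip-forward.conf
net.ipv4.ip_forward = 1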
The remaining steps are similar, except for one thing: for the container, the gateway must be configured to be the IP address of the host (in this example, 192.168.0.2). This is specified in /var/lib/lxc/container_name/config (see the following sections).
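Continuing this example, the gateway line in the container's config would point at the host's bridge address rather than a LAN router:
lxc.network.ipv4.gateway = 192.168.0.2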
Container creation
Select a template from /usr/share/lxc/templates that matches the target distro to containerize. Users wishing to containerize non-Arch distros will need additional packages on the host, depending on the target distro (an example follows the list below):
- Debian-based: debootstrap
- Fedora-based: yum (AUR)
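For instance, creating a Debian container (named "debiantime" here purely for illustration) uses the corresponding template, which in turn calls debootstrap on the host:
# lxc-create -n debiantime -t /usr/share/lxc/templates/lxc-debian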
Run lxc-create to create the container, which installs the root filesystem of the LXC to /var/lib/lxc/CONTAINER_NAME/rootfs by default. Example creating an Arch Linux LXC named "playtime":
# lxc-create -n playtime -t /usr/share/lxc/templates/lxc-archlinux
Container configuration
Basic config with networking
System resources to be virtualized/isolated when a process is using the container are defined in /var/lib/lxc/CONTAINER_NAME/config. By default, the creation process makes a minimal setup without networking support. Below is an example config with networking:
/var/lib/lxc/playtime/config
# Template used to create this container: /usr/share/lxc/templates/lxc-archlinux
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

## default values
lxc.rootfs = /var/lib/lxc/playtime/rootfs
lxc.utsname = playtime
lxc.arch = x86_64
lxc.include = /usr/share/lxc/config/archlinux.common.conf

## network
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = 192.168.0.3/24
lxc.network.ipv4.gateway = 192.168.0.1
lxc.network.name = eth0

## mounts
## specify shared filesystem paths in the format below
## make sure that the mount point exists on the lxc
#lxc.mount.entry = /mnt/data/share mnt/data none bind 0 0
#
# if running the same Arch linux on the same architecture it may be
# advantageous to share the package cache directory
#lxc.mount.entry = /var/cache/pacman/pkg var/cache/pacman/pkg none bind 0 0
Xorg program considerations (optional)
In order to run programs on the host's display, some bind mounts need to be defined so that the containerized programs can access the host's resources. Add the following section to /var/lib/lxc/playtime/config:
## for xorg
## fix overmounting see: https://github.com/lxc/lxc/issues/434
lxc.mount.entry = tmpfs tmp tmpfs defaults
lxc.mount.entry = /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry = /dev/snd dev/snd none bind,optional,create=dir
lxc.mount.entry = /tmp/.X11-unix tmp/.X11-unix none bind,optional,create=dir
lxc.mount.entry = /dev/video0 dev/video0 none bind,optional,create=file
If you still get a permission denied error in your LXC guest, you may need to call xhost + on the host to allow the guest to connect to the host's display server. Take note of the security implications of opening up your display server this way.
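Note that xhost + disables access control entirely; a narrower grant limited to local connections is usually safer:
$ xhost +local: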
OpenVPN considerations
Users wishing to run OpenVPN within the container should read the OpenVPN in Linux containers article.
Managing containers
To list all installed LXC containers:
# lxc-ls -f
Systemd can be used to start and stop LXCs via lxc@CONTAINER_NAME.service. Enable lxc@CONTAINER_NAME.service to have it start when the host system boots.
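For the "playtime" container created above:
# systemctl start lxc@playtime.service
# systemctl enable lxc@playtime.service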
Users can also start/stop LXCs without systemd. Start a container:
# lxc-start -n CONTAINER_NAME
Stop a container:
# lxc-stop -n CONTAINER_NAME
To log in to a container:
# lxc-console -n CONTAINER_NAME
Once logged in, treat the container like any other Linux system: set the root password, create users, install packages, etc.
To attach to a container:
# lxc-attach -n CONTAINER_NAME
It works nearly the same as lxc-console, but it drops you directly into a root prompt inside the container, bypassing login.
Running Xorg programs
Either attach to or SSH into the target container and run the program with the DISPLAY environment variable set to the display of the host's X session. For most simple setups, the display is always 0.
An example of running Firefox from the container in the host's display:
$ DISPLAY=:0 firefox
Alternatively, to avoid directly attaching to or connecting to the container, the following can be used on the host to automate the process:
# lxc-attach -n playtime --clear-env -- sudo -u YOURUSER env DISPLAY=:0 firefox
Troubleshooting
root login fails
If you get the following error when you try to log in using lxc-console:
login: root
Login incorrect
And the container's journalctl shows:
pam_securetty(login:auth): access denied: tty 'pts/0' is not secure !
Add pts/0 to the list of terminal names in /etc/securetty on the container filesystem, see [1]. You can also opt to delete /etc/securetty on the container to always allow root to log in, see [2].
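For example, the entry can be appended from the host, assuming the default rootfs location used throughout this article:
# echo 'pts/0' >> /var/lib/lxc/playtime/rootfs/etc/securetty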
Alternatively, use lxc-attach to create a new user, log in as that user, then switch to root:
# lxc-attach -n playtime
[root@playtime]# useradd -m -G wheel newuser
[root@playtime]# passwd newuser
[root@playtime]# passwd root
[root@playtime]# exit
# lxc-console -n playtime
[newuser@playtime]$ su
No network connection with veth in container config
If you cannot access your LAN or WAN with a networking interface configured as veth and set up through /var/lib/lxc/container_name/config, first verify that the virtual interface was assigned its IP address:
$ ip addr show veth0
inet 192.168.1.111/24
If the virtual interface has its IP assigned and should be connected to the network correctly, yet the network is still unreachable, you may disable all the relevant static IP settings in the container config and instead assign the IP through the booted container OS, as you normally would.
Example container/config
...
lxc.network.type = veth
lxc.network.name = veth0
lxc.network.flags = up
lxc.network.link = bridge
...
And then assign your IP through your preferred method inside the container, see also Network configuration#Configure the IP address.
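A minimal systemd-networkd sketch inside the container, reusing the veth0 name from the config above (the addresses are assumptions; substitute your own):
/etc/systemd/network/veth0.network
[Match]
Name=veth0

[Network]
Address=192.168.1.111/24
Gateway=192.168.1.1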