#!/usr/bin/python3
# SPDX-License-Identifier: AGPL-3.0-or-later
"""Script to manage a container or a VM for FreedomBox development.

This script creates either a simple container using systemd-nspawn or a
virtual machine using libvirt for developing FreedomBox.

Containers have many advantages over running a VM. RAM is allocated to
processes in the container as needed, without any fixed limit, and does not
have to be statically allocated, so a container is typically much lighter
than a VM. There is no hardware emulation when running a container of the
same architecture, so processes run as fast as they would on the host
machine.

On the other hand, VMs have the advantage of full machine emulation. They
allow full permissions as required for mounting filesystems, USB passthrough
of Wi-Fi devices, emulation of multiple disks, etc. as required for testing
some of the features of FreedomBox.

Environment:

The script will only run on hosts that have systemd-nspawn, virsh and
network-manager installed, which is typical of GNU/Linux distributions. It
has been primarily developed and tested on Debian Buster but should work on
most modern GNU/Linux distributions.

Disk image:

For a container, systemd-nspawn accepts not only a directory for starting a
container but also a disk image. This disk image is loop-back mounted and
the container is started from that mounted directory. The partition to use
is determined by looking at the boot flag in the partition table. This
happens to work well with all existing FreedomBox images. In the future, we
may be able to run different architectures in this manner.

For a VM, a disk drive is created that is backed by the image file. The
image is a bootable image using GRUB as built by freedom-maker.

After downloading, the disk image is expanded along with the partition and
file system inside, so that development can be done without running into
disk space issues. Expanding the disk does not immediately consume disk
space because the image is a sparse file. As data is written to the disk,
it will occupy more and more space, but the upper limit is the size to
which the disk has been expanded.

Downloading images:

Images are downloaded from the FreedomBox download server using fixed URLs
for each distribution. Signatures are verified for the downloaded images.
The fingerprint of the allowed signing key is hard-coded in this script.
Downloaded images are kept even after destroying the extracted raw image
along with the container. This allows for quickly resetting the container
without downloading again.

Booting:

For a container, systemd-nspawn is run in 'boot' mode. This means that the
init process (which happens to be systemd) is started inside the container.
It then spawns all the other necessary daemons including openssh-server,
firewalld and network-manager. A login terminal can be opened using
'machinectl login' because the container is running systemd. SSH into the
container is possible because the network is up, configured by
network-manager, and the openssh server is running.

For a VM, when the virtual machine is started, the firmware of the machine
boots the machine from the attached disk. The boot process is similar to
that of a physical machine.

Shared folder:

For a container, using systemd-nspawn, the project directory is mounted as
/freedombox inside the container. The project directory is determined as
the directory in which this script resides. From the container's point of
view, the project folder will be read-only.
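For illustration, the read-only mount is conceptually similar to passing a
bind option to systemd-nspawn, roughly as in the following hypothetical
Python sketch (the image file name is made up, and the actual invocation
built by this script may differ, for example by using a drop-in file in
/run/systemd/nspawn as described below):

    import pathlib
    import subprocess

    # Sketch only: bind-mount the directory containing this script
    # read-only at /freedombox inside a booted container.
    project_dir = pathlib.Path(__file__).resolve().parent
    subprocess.run([
        'systemd-nspawn', '--boot', '--image', 'freedombox.img',
        f'--bind-ro={project_dir}:/freedombox'
    ], check=True)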
The container should be able to write various files, such as build files,
into the /freedombox folder. To enable writing, an additional read-write
folder is overlaid onto the /freedombox folder in the container. This
directory can't be created under the project folder and is instead created
in $XDG_DATA_HOME/freedombox-container/overlay/$DISTRIBUTION. If
XDG_DATA_HOME is not set, it is assumed to be $HOME/.local/share/. Whenever
data is written into the /freedombox directory inside the container, this
directory on the host receives the changes. See the documentation for the
overlay filesystem for further details. When the container is destroyed,
this overlay folder is also destroyed to ensure a clean state when the
container is brought up again.

For a VM, the project directory is exposed into the virtual machine with
the mount token 'freedombox' using virtiofs. This is done as part of the
virtual machine configuration. Inside the virtual machine, a systemd .mount
unit will mount the virtiofs filesystem using the 'freedombox' token onto
the folder /freedombox. The folder is read-write.

Users:

In the container, the PrivateUsers configuration flag for systemd-nspawn is
currently off. This means that each user's UID on the host is also the same
UID in the container, as long as there is an entry in the container's
password database. In the future, we may explore using private users inside
the container.

'fbx' is the development user and its UID is changed during the setup phase
to 10000, in the hope that it will not match any UID on the host system.
The 'fbx' user has full sudo access inside the container without needing a
password. A password for this user is not set by default but can be set if
needed. If there is no other way to access the container, one can run 'sudo
machinectl shell' and then run 'passwd fbx' to set the password for the
'fbx' user.

The 'plinth' user's UID in the container is also changed, to the UID of
whichever user owns the project directory. This allows files to be written
to the project directory by the container's 'plinth' user, because the UID
of the owner of the directory is the same as the 'plinth' user's UID in the
container.

Network:

For a container, a private network is created inside the container using a
systemd-nspawn feature. Network interfaces from the host are not available
inside the container. A new network interface called 'host0' is created
inside the container and is automatically configured by network-manager.

On the host, a new network interface is created. This script creates a
configuration for a 'shared' network using network-manager. When bringing
up the container, this network connection is also brought up. A DHCP server
and a DNS server are started by network-manager on the host side so that
DHCP and DNS client functions work inside the container. Traffic from the
container is also masqueraded so that Internet connectivity inside the
container works if the host has it. If necessary, the network interface on
the host side can be configured differently. For example, it can be bridged
with another interface to expose the container on a network that the host
machine participates in.

The network IP address inside the container can be queried using
machinectl. This script queries that IP address and presents the address in
its console messages. All ports in the container can be reached from the
host using this IP address as long as the firewall inside the container
allows it. There is no need to perform port forwarding or mapping.
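For illustration, querying the address is conceptually similar to the
following hypothetical Python sketch (the machine name is made up, and the
actual parsing done by this script may differ):

    import subprocess

    # Sketch only: read the status output for the machine and pick out the
    # first reported address line.
    output = subprocess.run(['machinectl', 'status', 'freedombox-unstable'],
                            capture_output=True, text=True,
                            check=True).stdout
    for line in output.splitlines():
        line = line.strip()
        if line.startswith('Address:'):
            print('Container address:', line.split(':', 1)[1].strip())
            break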
For a VM, the network device is fully emulated. On the host, it is exposed
as a network interface that is bridged with the default libvirt bridge. The
bridge interface is configured by libvirt; it listens for DHCP requests
from the guests and also runs a DNS server. All traffic from the guest is
NATed and, as a result, the guest has full network access. The guest is
accessible from the host using the guest's IP address, which can be
retrieved by asking libvirt.

SSH:

It is assumed that openssh-server is installed inside the container. SSH
server keys in the container are created if missing. Client-side keys are
created in the .container/ssh directory and the public key is installed in
the authorized keys file of the 'fbx' user. The 'ssh' sub-command of this
script is simply a convenience mechanism for quickly launching ssh with the
right IP address, user name and identity file.

Role of machinectl:

For a container, most of the work is done by systemd-nspawn. machinectl is
useful for running systemd-nspawn in the background and querying its
current state. It also helps with providing the IP address of the
container. machinectl is made to recognize the container by creating a link
in /var/lib/machines/ to the image file. systemd-nspawn options are added
by creating a temporary file in /run/systemd/nspawn. All machinectl
commands should work.
"""

import argparse
import datetime
import ipaddress
import itertools
import json
import logging
import os
import pathlib
import platform
import re
import shlex
import shutil
import subprocess
import sys
import tempfile
import time
import urllib.parse
from typing import Callable
from urllib.request import urlopen

URLS_AMD64 = {
    'oldstable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
                 'amd64/bookworm/freedombox-bookworm_all-amd64.img.xz',
    'stable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
              'amd64/trixie/freedombox-trixie_all-amd64.img.xz',
    'testing': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
               'amd64/testing/freedombox-testing_dev_all-amd64.img.xz',
    'unstable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
                'amd64/nightly/freedombox-unstable_dev_all-amd64.img.xz',
}

URLS_ARM64 = {
    'oldstable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
                 'arm64/bookworm/freedombox-bookworm_all-arm64.img.xz',
    'stable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
              'arm64/trixie/freedombox-trixie_all-arm64.img.xz',
    'testing': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
               'arm64/testing/freedombox-testing_dev_all-arm64.img.xz',
    'unstable': 'https://ftp.freedombox.org/pub/freedombox/hardware/'
                'arm64/nightly/freedombox-unstable_dev_all-arm64.img.xz',
}

URLS = URLS_AMD64

TRUSTED_KEYS = ['D4B069124FCF43AA1FCD7FBC2ACFC1E15AF82D8C']

KEY_SERVER = 'keyserver.ubuntu.com'
KEY_SERVER_HTTPS_API = \
    'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x{key_id}'

PROVISION_SCRIPT = '''
set -xe -o pipefail

cd /freedombox/
sudo apt-get -y install make
sudo make provision-dev

# Make some pytest related files and directories writable to the fbx user
sudo touch geckodriver.log
sudo chmod a+rw geckodriver.log
sudo mkdir -p .pytest_cache/
sudo chmod --recursive a+rw .pytest_cache/
sudo chmod a+w /freedombox
sudo chmod --recursive --silent a+w htmlcov || true
sudo chmod --silent a+w .coverage || true

exit 0
'''  # noqa

SETUP_AND_RUN_TESTS_SCRIPT = '''
set -x

BACKPORTS_SOURCES_LIST=/etc/apt/sources.list.d/freedombox2.list
LDAPSCRIPTS_CONF=/etc/ldapscripts/freedombox-ldapscripts.conf

# Remount /freedombox to be up to date
echo "> In machine: Remounting /freedombox"
mount -o remount /freedombox

# Activate backports if Debian stable
if [[ "{distribution}" == "stable" && ! -e $BACKPORTS_SOURCES_LIST ]]
then
    echo "> In machine: Enable backports"
    /freedombox/bin/freedombox-cmd upgrades activate_backports --no-args
fi

echo "> In machine: Upgrade packages"
apt-get update
DEBIAN_FRONTEND=noninteractive apt-get -yq --with-new-pkgs upgrade

# Install requirements for tests if not already installed as root
if ! [[ -e /usr/local/bin/geckodriver ]]
then
    /freedombox/plinth/tests/functional/install.sh
fi

# Run the plinth server if functional tests are requested
if [[ "{pytest_command}" =~ "--include-functional" ]]
then
    make -C /freedombox wait-while-first-setup
    if [[ "{pytest_command}" != *"--splinter-headless"* ]]
    then
        # Use the X11 authority file from the fbx user to run GUI programs
        xauth merge /home/fbx/.Xauthority
    fi
fi

# Run pytest
cd /freedombox
export FREEDOMBOX_URL=https://localhost
export FREEDOMBOX_SSH_PORT=22
export FREEDOMBOX_SAMBA_PORT=445
{pytest_command}

# Make pytest cache files writable to the fbx user
chmod --recursive --silent a+rw .pytest_cache/
chmod --recursive --silent a+w htmlcov
chmod --silent a+w .coverage

exit 0
'''

LIBVIRT_DOMAIN_XML_TEMPLATE = '''
<domain type='kvm'>
  <name>{domain_name}</name>
  <memory unit='MiB'>{memory_mib}</memory>
  <currentMemory unit='MiB'>{memory_mib}</currentMemory>
  <vcpu>{cpus}</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>