Switching from Docker to Podman

For some time Podman had been on my list of things to look at, mainly because it focuses on rootless container setups. With Docker and its root daemon I always felt that this was a bit too much. My overall reasons to switch:

  • Rootless containers without a daemon running in the background
  • The pod setup in Podman reduces complexity. You start a webserver and database in one pod and they can talk to each other via localhost; no extra networking needed
  • Auto-updating of containers and restarting of services
  • Systemd integration (although I did not know about the last two in advance)

Reading a lot of other howtos, it sounded like a piece of cake. It is easy; however, the switch in thinking from a daemon running as root to a rootless user-namespace setup is hard. Solutions you used before might not work, and new settings are needed to get things running.

Also, all of this is within a Debian home server setup, so nothing big/professional.

The hardest part is Podman's release and development style. A lot of features get partly developed, released and advertised as the "new thing" to do, but they are not ready and you need to fall back to older features. Running 4.9.x on Debian, I had this with features like:

  • Quadlets would not produce pod systemd files (needs v5, or use the old systemd generator)
  • Auto-update would not work with containers in pods (needs v5)
  • Pasta networking is far from stable (IPv6 connections take time to come up; starting/stopping containers in pods kills networking at some point) and it makes DNS setups in pods hard to understand
  • runc and crun have different feature sets

All of this can be worked around; however, if you just read documentation, blog posts, etc., you start configuring and then figure out it is not feature complete and you need to redo the whole config. So be warned and always check which version the documentation refers to.

So how is the move done?

First of all: you need to stop all Docker containers while working on Podman. I would even recommend uninstalling Docker or setting alias docker=podman right away. It is easy to get confused between the two and - believe me - the docker command is sort of muscle memory for containers. I worked on Podman while having Docker installed, and it does not pose a problem, other than me staring at docker ps and being baffled that my Podman container had not started. ;-)

Podman is very compatible with the Docker commands, so you won't feel much of a difference. You can mostly start your Docker configurations right away. However, I would recommend working through it in the following way. The next steps are executed as root / sudo.

Fundamental things on the host to get podman running

First you need to install Podman. Currently it is very hard to get recent versions in the non-RedHat world. My server is on Debian Bookworm, which ships quite old Podman versions. However, there is a reasonably up-to-date version (4.9.x at the time of writing) in testing, so you can get it with a bit of pinning.

First create a new source file /etc/apt/sources.list.d/testing.list for access to testing:

deb http://deb.debian.org/debian testing main contrib

This now allows testing packages to be used on your system.

Next we need to pin single packages, so that not all packages get upgraded to testing, but only the needed ones. Pinning basically sorts priorities: the highest number wins, and the default is 500. For that create /etc/apt/preferences.d/testing:

Package: *
Pin: release a=testing
Pin-Priority: -1

Package: podman
Pin: release a=testing
Pin-Priority: 1001

Package: podman-compose
Pin: release a=testing
Pin-Priority: 1001

Package: podman-docker
Pin: release a=testing
Pin-Priority: 1001

Package: passt
Pin: release a=testing
Pin-Priority: 1001

Package: libgpgme11t64
Pin: release a=testing
Pin-Priority: 1001

This reduces the priority of all testing packages to -1, making sure your system stays on Bookworm. Otherwise the testing packages would override the Bookworm packages, since they have the same priority but higher version numbers. We then add exemptions for the Podman packages.

  • podman and podman-compose are the fundamental packages needed
  • podman-docker is needed if you want to use Podman as a full drop-in replacement for Docker, e.g. to use something like the Docker socket in Traefik
  • passt is the package for the pasta networking (which really looks promising)
  • libgpgme11t64 is a required dependency

After this, an apt-get update will reload the apt cache, and with apt-cache policy podman you can check which version will be installed.

IMPORTANT: Check first with apt-get upgrade -s what effect this will have. If, on a fully upgraded system, more packages would be upgraded than the pinned ones above, you made a mistake. Do not upgrade!
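
A minimal check sequence could look like this:

apt-get update
apt-cache policy podman   # the testing version should show up as the candidate
apt-get upgrade -s        # dry run: only the pinned packages should be listed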

Now that we have podman installed, there is some configuration to do. I used a new user for running my rootless containers. You can simply create a regular user with a home folder.
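
For example (the user name poduser is just a placeholder, pick whatever fits your setup):

useradd --create-home --shell /bin/bash poduser
passwd poduser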

Important read: here you hit the first difference from rootful Docker: your containers run as that user, hence they only have access to what that user has access to. If you need devices in the container, it is not enough to add them in the config; the user must also have access to the device. The same holds true for file access.

To make it more complicated, there are different modes for the rootless setup. Two fundamental ideas are important to understand:

  • Running with root inside the container: even though the processes inside the container run as UID 0 = root, they are mapped to your user on the outside. It is playing root without being one. This is - I feel - the easiest to understand and administer. Just give your host user access to what you need in the container (direct ownership or groups)
  • Running as a different user inside the container: now it becomes tricky, as the internal users get an offset relative to the external ones, defined in the subuid configuration.

Both types are explained here.

But let's first make sure the system is set up before we get to container-specific configuration.

As a regular user, we have some trouble handling system processes. Two settings I found essential:

  1. Starting processes on privileged ports (<1024), e.g. an HTTP/HTTPS webserver.
  2. Pinging from containers to check connectivity.

To allow this, create /etc/sysctl.d/podman-settings.conf with

net.ipv4.ip_unprivileged_port_start=0
net.ipv4.ping_group_range=0 2000000

This configuration is not very specific, but I do not know a better way. Reload it with sysctl --load=/etc/sysctl.d/podman-settings.conf.

After that we need to make sure that rootless containers continue to run after the user logs out. A simple loginctl enable-linger <your unprivileged username> will suffice.

Next we need to check whether subuid and subgid ranges are set. Run cat /etc/subuid and see if there is a line for your user. Normally it is created when the user is added. If it is missing, run: usermod --add-subuids 100000-165535 --add-subgids 100000-165535 <your unprivileged username>.
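
The entry should look like this (again with the placeholder user poduser), in the format <user>:<first subordinate ID>:<number of IDs>:

poduser:100000:65536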

This is the offset I mentioned earlier. If a process inside the container runs as, e.g., UID 999, it appears on the host as UID 100998 (container UID 1 maps to 100000, so 999 lands at 100000 + 999 - 1). The same goes for GIDs. This gives even more user separation, but it is also complicated to track across containers, especially if you need to work with container files on the host or share them between containers.
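
You can inspect the active mapping from inside the user namespace. Run this as the rootless user; the sample output assumes a host UID of 1000:

podman unshare cat /proc/self/uid_map
#         0       1000          1
#         1     100000      65536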

Finally we need to add container registries to Podman. You can always put the registry into the name of the image you are pulling, but I felt it is a good idea to have the standard Docker Hub included, to simplify searching. To explain: if you want to get traefik:latest, you could pull docker.io/library/traefik:latest. With a search registry configured you can just use traefik:latest again.

For that create /etc/containers/registries.conf and add

unqualified-search-registries = ["docker.io"]

You can add more here, but this should be enough for now.

All this was fundamental configuration done as root. Now you can start building containers. Most things you had with Docker will work right away or need minimal changes to work with the rootless user. It is possible to partially use sudo -u <your unprivileged username> podman ...; however, this has limits. I would advise logging in as that user and continuing from there.
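
Note that a plain sudo -u shell is not a full login session (systemd user services and XDG_RUNTIME_DIR may be missing), so a real login works better, e.g. via machinectl or ssh with the placeholder user poduser:

machinectl shell poduser@
# or: ssh poduser@localhost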

Some additional helpful things

I set the following little helpers in /etc/containers/containers.conf, as they apply to all containers and reduce per-container definition work. For the full set of options, you find a well-commented version (on Debian) in /usr/share/containers/containers.conf.

[containers]
# For bookworm with journald logging
log_driver = "journald"
log_tag = "podman/{{.Name}}"
tz = "Europe/Berlin"

[network]
# Make default network fit to general setup
default_network = "podman"
default_subnet = "192.168.14.1/24"
default_subnet_pools = [
  {"base" = "192.168.15.1/24", "size" = 24},
  {"base" = "192.168.16.1/24", "size" = 24},
  {"base" = "192.168.17.1/24", "size" = 24},
  {"base" = "192.168.18.1/24", "size" = 24},
  {"base" = "192.168.19.1/24", "size" = 24},
]

Really stop autoplay in Firefox / Librefox

I am not sure who thought that autoplay for music and videos is a good idea in a browser. Using my tablet while traveling, it is just a joy to click on a link and have some video entertain everyone but me. Firefox offers some toggles in the regular settings to deactivate autoplay; however, I found that they do not stop videos from playing in certain situations.

So what is the solution

Open about:config and set the following settings:

Setting                                           Value
media.autoplay.allow-extension-background-pages   false
media.autoplay.block-event.enabled                true
media.autoplay.blocking_policy                    2
media.autoplay.default                            5

This works for me in Firefox / Librefox and Firefox Mobile (Nightly).


Add Google traffic overlay to OSMAnd

OSMAnd is great with its detailed maps; however, when traveling I often miss traffic information. Luckily you can add an online overlay that shows the Google traffic data.

I used guides like these here or here; however, everything I could find would add the traffic information PLUS a whole Google map, street names and more. I wanted just the traffic information.

So what is the solution

  1. First of all, to use online overlays you need to activate the plugin “Online maps” in OSMAnd

  2. After activating the plugin you can add a new map under “Configure map” -> “Map source…”:

New map settings

The URL string: https://mt1.google.com/vt/lyrs=h,traffic&x={1}&y={2}&z={0}&apistyle=s.t%3A0|s.e%3Al|p.v%3Aoff

The important parts of the URL for removing labels and the base map are:

  • lyrs=h,traffic - selects the street layer with traffic information. If you use s or m at the beginning, you get the satellite or maps view with the traffic information
  • x, y and z - are the tile coordinates and zoom level, filled in by OSMAnd
  • apistyle=s.t%3A0|s.e%3Al|p.v%3Aoff - selects all feature types (s.t:0) with their label elements (s.e:l) and turns their visibility off (p.v:off). Otherwise you would have street and location names in the overlay.

Especially the apistyle part was hard to figure out. Finally I found an explanation on Stack Overflow for the different possible settings.

After saving this map, you can configure your map and activate it as an overlay map. If you need to change the traffic map settings, you find it among the local maps for editing. Be advised that this is an online function, so it requires a working internet connection, and the overlay is not taken into account for route finding.


What I learnt using docker as a beginner

Running my own server setup with Docker, I ran into some issues over time. To not forget what I did, and maybe to help others, here is what I found.

It is a collection of simple things I now try to include in all my Docker setups, as they make life with Docker easier. For some, these might be obvious things. If you have additional thoughts, I am happy to hear them.

Do not mix your hosts file system with your docker mounts

It might be a tempting idea to store the config files of your containers within your regular /etc, or logs in /var/log. However, you easily lose track of which file is used by whom. Log files may get merged with others, and so on. Hence, have a dedicated directory in which you store your Docker stuff. It also makes backups, git use, etc. easier.

Use git - a lot

You should use git to version everything you do with Docker, starting with your Dockerfiles up to the configuration files for services. Especially building a new container needs a lot of testing. Things go back and forth, hence a history and versioning are extremely helpful.

Keep ENTRYPOINT and CMD separate

Use the ENTRYPOINT for things that need to be done before each start of the service, like correcting permissions, updating small things, etc. Keep the command separate from it (hence do not put the service start command into the entrypoint) so you can override it easily.

Use dumb-init in your own Dockerfiles

It is a small package you can easily install and add to your image. It puts a tiny init system into the container that properly starts and stops your service. You can read more on the dumb-init GitHub page. A lot of images out there ignore this. It is not harmful per se, but a lot of services get killed instead of being properly shut down.

Prosody container example:

ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["bash", "-c", "/usr/local/sbin/entrypoint.sh && exec prosodyctl start"]

Learn about IPv6, iptables and internal docker networks

Docker takes over iptables when running a container; be sure to understand what is happening. Especially take a look at internal networks, as they can easily secure database containers and other backend services (see the sketch below).
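
To illustrate, a minimal docker-compose sketch (service and network names are placeholders): the backend network is marked internal, so the app can reach the database, but the database has no route to the outside.

services:
  app:
    image: myapp
    networks:
      - frontend
      - backend
  db:
    image: mariadb
    networks:
      - backend

networks:
  frontend:
  backend:
    internal: true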

For IPv6, better still use NAT. I did not like that in the beginning either; however, you run into so many troubles (like dynamic prefixes, limited IPv6 configuration options, etc.) that it is just way easier to use NAT again. Sadly, Docker does not support NAT for IPv6; to fill that gap, use https://github.com/robbertkl/docker-ipv6nat. It is a simple binary that runs in the background.

Make sure your container runs with the correct user:group

Especially when using bind mounts, make sure you use proper UID:GID combinations. I create the users that will be used inside the container on my host machine and pass the UID:GID into the container. This ensures that files stored on the host will not accidentally be accessible by other processes due to UID overlap.

Again a Prosody example on Debian for your Dockerfile (see the build command after the snippet):

ARG uid=1001
ARG gid=1002

# Add user and group before install, for correct file permission
RUN addgroup --gid $gid prosody \
    && adduser --system --disabled-login --disabled-password --home /var/lib/prosody --uid $uid --gid $gid prosody
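
The IDs can then be overridden at build time with --build-arg (the image tag is a placeholder):

docker build --build-arg uid=1001 --build-arg gid=1002 -t prosody .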

Use docker logging

It is tempting to just mount a folder and store logfiles there. However, when using images, e.g. from Docker Hub, it can be quite difficult to map this correctly, and it often leads to some sort of chaos.

I actually like the webserver approach of just linking the logfiles to /dev/stdout and /dev/stderr:

RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

With that you can use the regular docker logs command, and use configuration options to divert the output to whatever log facility you are using.
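
One way to set this daemon-wide is the log driver option in /etc/docker/daemon.json, e.g. to send everything to journald (the Docker daemon needs a restart afterwards):

{
  "log-driver": "journald"
}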

Make sure your containers are in the right timezone

A lot of images nowadays use TZ=<timezone> as an environment variable in the container to control the timezone. You should always set it, and use it in your own Dockerfiles:

ENV TZ=Europe/Berlin
RUN ln -sf /usr/share/zoneinfo/$TZ /etc/localtime
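
At runtime the same is done with -e for images that honor the variable (the image name is a placeholder):

docker run -d -e TZ=Europe/Berlin myimage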

Nothing is more confusing than log files with mismatching timestamps.

If you need to build from source, use multi-stage builds

Instead of bloating your production image with build-tools and sources, use multi-stage builds.

With that you can create a build image, run it and use the build results in a regular image:

Shortened example for coturn

### 1. stage: create build image
FROM debian:stable-slim AS coturn-build

# Install build dependencies
[...]

# Clone Coturn
WORKDIR /usr/local/src
RUN git clone https://github.com/coturn/coturn.git

# Build Coturn
WORKDIR /usr/local/src/coturn
RUN ./configure
RUN make


### 2. stage: create production image
FROM debian:stable-slim AS coturn
[...]

# Copy out of build environment
COPY --from=coturn-build /usr/local/src/coturn/bin/ /usr/local/bin/
COPY --from=coturn-build /usr/local/src/coturn/man/ /usr/local/man/
COPY --from=coturn-build /usr/local/src/coturn/sqlite/turndb /usr/local/var/lib/coturn/turndb
COPY --from=coturn-build /usr/local/src/coturn/turndb /usr/local/turndb

# Install missing software
[...]

Make sure your services start in order

A lot of errors come up when services that depend on each other start in random order. Sometimes you are lucky and it works; sometimes you get an error. Within docker-compose you can use the depends_on directive (a minimal sketch follows after the snippet below). If you have separate setups for some reason, you can use the following bit within your entrypoint:

/bin/bash -c " \
  while ! nc -z mariadb 3306; \
  do \
    echo 'Waiting for MariaDB'; \
    sleep 5; \
  done; \
  echo 'Connected!';"
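
For completeness, a minimal docker-compose sketch of the depends_on variant (service names are placeholders):

services:
  app:
    image: myapp
    depends_on:
      - mariadb
  mariadb:
    image: mariadb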

Parsing config files in bash using awk

Every so often I arrive at some sort of complex bash script and find that a configuration or ini file would make sense to control the script and make it more flexible. However, bash does not bring nice libraries for parsing config files, unlike e.g. Python.

In essence, two things are needed to write a config file parser:

  1. A good trim of the config option, as you can have
    • <option>=<value>
    • <option> = <value>
    • <option> =<value>
    • and so on…
  2. A structured way to move the config into usable variables within bash.

So what is the solution

First let's take a look at how to analyze a config file line. It is normally set up like: <option><delimiter><value>

To parse this, we need to separate the option and value at the delimiter. awk is perfect for this, as it can split a line at a field separator into parts and store each part in a variable.

Additionally, we can eliminate spaces around the delimiter to make the config parser more robust. However, we need to do that separately for the left and the right of the delimiter, to trim only the leading and trailing spaces of each side. Just eliminating spaces in the whole string would delete spaces inside configuration values as well. Using the delimiter =, this looks like the following for the left side (for the right side change $1 to $2):

awk -F'[=]' '{gsub("^\\s+|\\s+$", "", $1); print $1}' <<< "$line"

So what does this do?

  • -F'[=]' sets the field separator (our delimiter) to =
  • gsub("^\\s+|\\s+$", "", $1); - replaces one or more whitespace characters at the beginning (^\\s+) or at the end (\\s+$) of the field with an empty string (""). The $1 makes gsub work on the left part of the delimiter
  • print $1 - as awk automatically splits at the delimiter, this prints the first field (the left side of the config line)
  • <<< "$line" feeds the line of our config into awk as a here-string. Within bash you can easily loop through a file line by line with the read command.

Now that we have the basics, we can write a parser function with it.

An example within a script:

#!/bin/bash

declare -A settings_map

function read_config () {
    while IFS= read -r line; do
        case "$line" in
            ""|\#*)
                # do nothing on comments and empty lines
                ;;
            *)
                # parse config name and value in array
                settings_map[$(awk -F'[=]' '{gsub("^\\s+|\\s+$", "", $1); print $1}' <<< "$line")]=$(awk -F'[=]' '{gsub("^\\s+|\\s+$", "", $2); print $2}' <<< "$line")
                ;;
        esac
    done < "$1"
    echo "${settings_map[*]}"
}

# call the function with the config file to parse
read_config parser-test.conf

# show that it is working and picking "option" from the array
echo "${settings_map[option]}"

This would read a config file like this (or with some spaces around the =):

# parser-test.conf
# Use option=value
#
type=value_type
option = value_option

test =value_test
spaces    = value space

It reads the file line by line, removes leading and trailing spaces on the left and right side of each config line, and stores the result in the settings_map array. The array can then be used in the rest of the bash script.
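
Running the script against this file prints something like the following, assuming it is saved as read_config.sh (the order of a bash associative array is not guaranteed); the last line is the lookup of "option":

$ bash read_config.sh
value_type value_option value_test value space
value_option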

Important: this function uses a globally declared array. Handing an array back from a function is not trivial. You could create and return a string, then initialize an array from it afterwards. However, I feel that is a bit cumbersome and does not add to the functionality.
