Dovecot learn ham/spam with rspamd via inet protocol for docker

I started rebuilding my mail server following Thomas Leister's howto. However, I decided to dockerize the whole setup. With that I needed to get rid of any socket communication and move to TCP-based communication between the different docker containers.

This was surprisingly easy, as most components already communicate via TCP. However, the spam and ham learning mechanism still uses a socket. So here are some details on my setup:

  • I used a user-defined network via docker-compose to connect the different containers. By that I have full control over the containers' IPs (see the compose sketch below)
  • Each process is running in one container, so I have unbound, redis, rspamd, dovecot, postfix
  • The host system is Debian Stretch
  • Docker containers are based on debian:stable-slim

EDIT 14.04.20: I switched my setup to Debian-based Docker images. In particular, the postfix container must NOT be Alpine-based right now: the resolver implementation in musl libc cripples Postfix's DNSSEC calls, making outgoing DANE unusable.
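For illustration, here is a stripped-down docker-compose sketch of such a user-defined network. The service names, subnet and addresses are made-up examples you need to adapt:

version: "3"
services:
  rspamd:
    build: ./rspamd
    networks:
      mailnet:
        ipv4_address: 172.22.0.10
  dovecot:
    build: ./dovecot
    networks:
      mailnet:
        ipv4_address: 172.22.0.11
networks:
  mailnet:
    ipam:
      config:
        - subnet: 172.22.0.0/24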

So what is the solution

BEWARE: I am basing my guide on Thomas' config linked above.

First you need to change a few details in the ham/spam piping. Within dovecot.conf, down at the plugin settings, you need to set the sieve_pipe_bin_dir option to the location where the pipe scripts (see the following steps) will be stored. Beware: set the path as it will be inside your docker image. My setting: sieve_pipe_bin_dir = /usr/local/sbin
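For orientation, the relevant plugin block could look like the following. This is a sketch along the lines of Thomas' guide and the rspamd documentation; the folder name Spam and the sieve paths are assumptions you may have to adapt:

plugin {
    sieve_plugins = sieve_imapsieve sieve_extprograms
    # copied INTO the Spam folder: learn as spam
    imapsieve_mailbox1_name = Spam
    imapsieve_mailbox1_causes = COPY
    imapsieve_mailbox1_before = file:/etc/dovecot/sieve/learn-spam.sieve
    # copied OUT of the Spam folder: learn as ham
    imapsieve_mailbox2_name = *
    imapsieve_mailbox2_from = Spam
    imapsieve_mailbox2_causes = COPY
    imapsieve_mailbox2_before = file:/etc/dovecot/sieve/learn-ham.sieve
    sieve_pipe_bin_dir = /usr/local/sbin
    sieve_global_extensions = +vnd.dovecot.pipe
}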

Next, adapt the sieve scripts. These scripts trigger the learning, as you can see in dovecot.conf: ham on copying out of the Spam folder, spam on copying into the Spam folder. Do not forget to call sievec after placing them in the sieve folder (an example follows after the two scripts).

learn-spam.sieve

require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "rspamd-pipe-spam";

learn-ham.sieve

require ["vnd.dovecot.pipe", "copy", "imapsieve"];
pipe :copy "rspamd-pipe-ham";
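Compiling the scripts could look like this, assuming they are placed in /etc/dovecot/sieve:

sievec /etc/dovecot/sieve/learn-spam.sieve
sievec /etc/dovecot/sieve/learn-ham.sieve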

Now adapt the pipe scripts themselves. These scripts actually connect to rspamd to deliver the mail for learning. During docker image creation you need to copy the rspamd-pipe-spam and rspamd-pipe-ham scripts into the sieve_pipe_bin_dir location (see first step) and make them executable. The scripts connect via the container name rspamd; if yours is named differently, change the name or use the IP.

rspamd-pipe-spam

#!/bin/bash
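# Note: $1 is deliberately left unquoted. When sieve pipes a mail, no
# argument is passed and cat reads the message from stdin; in the addendum
# below a file name is passed and cat reads that file instead.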
cat $1 | /usr/bin/curl -s --data-binary @- http://rspamd:11334/learnspam
exit 0

rspamd-pipe-ham

#!/bin/bash
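# As above: without an argument cat forwards stdin (sieve pipe), with an
# argument it reads the given file (manual training, see addendum).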
cat $1 | /usr/bin/curl -s --data-binary @- http://rspamd:11334/learnham
exit 0

To allow these scripts to call rspamd, you need to allow the IP of dovecot for the worker controller.

worker-controller.inc

bind_socket = "<rspamd container>:11334";
password = "<your pwd as described in the guide>";
secure_ip = "<dovecot container ip>";

This should enable ham/spam learning via sieve within a docker setup.
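To check that dovecot can reach the controller at all, a quick test from inside the dovecot container could look like this (assuming the container name rspamd; the /ping endpoint should answer with pong):

curl -s http://rspamd:11334/ping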

Addendum

To train existing mails, e.g. from an old server, execute the following commands inside the dovecot container. Please make sure you adapt the paths if you changed them.

Learn HAM:

find /var/vmail/mailboxes/*/*/mail/cur -type f -exec /usr/local/sbin/rspamd-pipe-ham {} \;

Learn SPAM:

find /var/vmail/mailboxes/*/*/mail/Spam/cur -type f -exec /usr/local/sbin/rspamd-pipe-spam {} \;


Regular update of Docker images/containers

After converting my servers to docker setups, I needed to update the images/containers regularly for security reasons. I was baffled to find that there is no standard update method to make sure that everything is up to date. The ephemeral setup allows you to throw away your containers and images and recreate them with the latest versions. As easy as this sounds, there are some pitfalls in the process.

First we need to understand that there are three types of images we need to keep up to date:

  • Images from Docker Hub that just get pulled and are used as they are, with some configs
  • Images from Docker Hub that get pulled and then are only used as base images for own dockerfiles
  • The images built from own dockerfiles

Ridiculously, all three of them need to be updated to make sure everything is up to date (most importantly, a new local build will not automatically pull the latest base image update), and additionally we have to take care of the cleanup.

I found some solutions on the net for automatically updating docker, the best so far being the one by binfalse.de. But this leaves out my own dockerfiles, which need a build plus some minor steps, pruning, etc. So I am only using the dupdate script out of the Handy Docker Tools and incorporate it into a little script (shown in full at the end of this article).

So what is the solution

WARNING: This just updates images. If your setup needs additional update steps, you need to plan these in. Otherwise you risk breaking your setup.

Multiple steps are needed for a complete update. First, I use /usr/local/sbin/dupdate -v to update all docker images coming from a hub, covering both the ones I use directly and the ones serving as base images for builds.

This will give an error for the images created from your own dockerfiles, but it updates all the ones pulled from Docker Hub. Second, I update the images from my own dockerfiles by rebuilding them all via /usr/local/bin/docker-compose -f docker-compose.yml build --no-cache. If you use docker without docker-compose, you have to do something similar for each dockerfile.

This uses the newly pulled base images in the build, hence creating the latest version of your own images. IMPORTANT: The --no-cache is needed to force the update of self-built images. Docker detects changes in your own builds only by the commands in the Dockerfile (i.e. whether they changed). It CANNOT see a version change of a package installed with e.g. apt-get install. But that version change is exactly what you want, so you have to force a rebuild.

Now the images are all updated and we only need to restart the containers.

Addition for cleaning up

However, you end up with a lot of images tagged or named <none>. These are your old images, which are now cluttering the hard drive. The ones only tagged <none> are the ones you updated from Docker Hub; the ones with a name and the tag <none> are the ones you built. You will need to run /usr/bin/docker image prune -a --force to get rid of them and free up space. Warning: This will erase all older images. If you need them as a safety precaution, skip this step.
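Putting the pieces together, my little update script looks roughly like this. It is only a sketch: the paths for dupdate and the compose file are assumptions from my setup:

#!/bin/bash
# update all pulled images; errors for self-built images can be ignored (see above)
/usr/local/sbin/dupdate -v || true
# rebuild the own dockerfiles on top of the new base images
/usr/local/bin/docker-compose -f /srv/docker/docker-compose.yml build --no-cache
# recreate the containers from the updated images
/usr/local/bin/docker-compose -f /srv/docker/docker-compose.yml up -d
# throw away the old, now unused images
/usr/bin/docker image prune -a --force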


Reinstall GRUB after BIOS update on LUKS encrypted system

Due to reasons unknown, I upgraded my Lenovo's BIOS via a USB stick. :-) Everything went well, however after the reboot all my boot options for Linux Mint were gone. Turns out, somehow my boot setup had been erased as well. Using UEFI without CSM and without secure boot on a LUKS encrypted Linux Mint, this was already an issue during the first installation. Getting everything right seems to be more a matter of luck.

This answer is mainly for straightforward installations of Ubuntu/Mint with LVM on LUKS and an unencrypted /boot. If you have a different setup, make sure to adapt the different mounts.

Advice on using the boot-repair utility: Don't. The tool messes with a lot of configs unnecessarily. If you are not absolutely sure what it does, don't use it. An example from my own case: When using it to restore GRUB, it edited my fstab, uncommenting my root mapper entry and setting it to noauto. Result: After GRUB you end up in an initramfs prompt, as the volume has not been unlocked, and you possibly waste time checking why LVM times out, why cryptsetup is not working, etc.

So what is the solution

To get back the boot menu I tried several things, e.g. boot-repair. However, since my system is LUKS encrypted, I guess the tools all had some problems. To get my system back, I accessed the old installation via chroot from a Linux live CD, in this case a Linux Mint live CD.

First boot from the Linux live CD and get the keyboard layout and network set up. Then unlock your LUKS device. Make sure that the name at the end (sda3_crypt) is as specified in your original /etc/crypttab (yes, if you do not know it, you need to open the crypt device somewhere, take a look, close and reopen it; a sketch for this follows below). Otherwise you might get a warning later on.

If you have a different setup, make sure to take the correct device.

cryptsetup luksOpen /dev/sda3 sda3_crypt

For general LUKS information and commands, take a look here: https://wiki.ubuntuusers.de/LUKS/
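If you did not know the mapper name beforehand, the lookup mentioned above could work like this. This is only a sketch and assumes the LVM layout used below:

sudo cryptsetup luksOpen /dev/sda3 check
sudo vgchange -ay                  # activate the LVM volume group
sudo mount /dev/mint-vg/root /mnt
grep -v '^#' /mnt/etc/crypttab     # the first column is the mapper name
sudo umount /mnt
sudo vgchange -an mint-vg
sudo cryptsetup luksClose check    # now reopen with the correct name as above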

Next, mount all necessary partitions of the old system. To find the logical volumes inside the opened LUKS container, use sudo lvscan.

For me, the root filesystem turns out to be /dev/mint-vg/root.

If you have separate partitions, e.g. for home, or other devices, make sure to adapt the mounts below.

sudo mount /dev/mint-vg/root /mnt        # the root filesystem
sudo mount /dev/sda2 /mnt/boot           # the unencrypted boot partition
for i in /dev /dev/pts /proc /sys /run ; do sudo mount -B $i /mnt$i ; done
sudo mount -o bind /etc/resolv.conf /mnt/etc/resolv.conf
sudo mount /dev/sda1 /mnt/boot/efi/      # the EFI system partition

Make sure you mount /mnt/boot/efi, otherwise grub-install will complain: "cannot find EFI directory". The EFI directory is not included in your boot partition, but lives on a separate partition.

After that, enter the chroot environment with sudo chroot /mnt /bin/bash

To install GRUB, run grub-install /dev/sda (inside the chroot you are already root, so no sudo is needed).

Now just a reboot is needed and the system should work as before.

Addendum

If you are here because something on boot is not working and you messed things up even further, as I did, running update-initramfs -c -k all inside the chroot could additionally help.


OPENPGPKEY and DANE

As a long-time PGP user I wanted to improve the key landscape and offer my public key via DANE. This is quite simple if you have a DNS host that supports DNSSEC and the different record types needed for DANE. (Note: I use core-networks.de)

The principle behind this: You publish your PGP key in your signed DNS zone, and by that a correctly configured mail system can opt into sending you encrypted e-mails even without having exchanged keys before.

I tried to use the gpg2 option --print-dane-records (available from 2.1.9 upwards), however I could not generate a usable data part for the DNS record this way.

So what is the solution

As a prerequisite you need to have created your own PGP keys with the e-mail address you want to use as a user ID. Note: I am only doing a per-e-mail setup. Use the following website: https://www.huque.com/bin/openpgpkey

Generate an OPENPGPKEY record there and the output pretty much does the trick. The generated output maps to the DNS record as follows:

  • The Owner Name goes into the record name. Be careful if your DNS provider automatically adds the domain. If you used --print-dane-records, you need to concatenate the ID of the key (before TYPE61) and the string after $ORIGIN. The syntax is <SHA256 hash of your e-mail name before the @>._openpgpkey.<your domain> (see the sketch after this list).

  • From the Generated DNS OPENPGPKEY Record you take the part within the parentheses as the data part of the record. Here you need to transform the key into a one-line string. As for --print-dane-records: I discarded its data part, as I could not get it to work. Simply use your key data exported with ASCII armor (-a parameter) without the last line, leaving the header and footer lines out as well.

  • The type is OPENPGPKEY

  • Class is IN

  • The TTL you can set to a decent value. For testing I used an hour.
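If you want to compute the Owner Name yourself instead of using the website: per RFC 7929 it is the SHA2-256 hash of the local-part (the name before the @), truncated to 28 octets, i.e. 56 hex characters. A sketch for the hypothetical address alice@example.org:

# hash the local-part and keep the first 56 hex characters
echo -n "alice" | sha256sum | cut -c1-56
# append ._openpgpkey.example.org to form the full owner name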

To test the setup use

dig OPENPGPKEY <owner name goes here>

This simply sends back the data block and tests the DNS setup. To test the DANE lookup, do the following:

gpg2 --auto-key-locate clear,dane,local -v --locate-key <your e-mail goes here>

This gives quite a good feedback on the setup and tells you if the key was fetched via DANE or not.


SendXMPP mail forward on Debian Jessie

To have a more comfortable way of receiving messages from my servers, I wanted all my root e-mails to be forwarded to my mobile via XMPP. I only have a limited exim4 running on my machine, configured for local mail delivery only.

So what is the solution

First install sendxmpp with apt-get install sendxmpp, then create the config file /etc/sendxmpp.conf and insert your XMPP credentials:

<sender>@<sender_server>:<port> <password>

Set the right permissions with chmod 600 /etc/sendxmpp.conf and the owner with chown Debian-exim:Debian-exim /etc/sendxmpp.conf.

Then create a script /usr/sbin/mail2xmpp that calls sendxmpp. It might be possible to put this completely into the alias, however I decided to use a script. Exchange the receiving ID for your own; the -t switch enables the TLS connection for sending the message.

#!/bin/bash
# sendxmpp reads the message from stdin, so the mail piped in by exim is
# forwarded as-is; -t enables TLS
sendxmpp -t -f /etc/sendxmpp.conf <receiver>@<receiving_server>

Make the script executable with chmod 755 /usr/sbin/mail2xmpp and create the alias for the user whose e-mails you want to forward in /etc/aliases:

# /etc/aliases
root: |/usr/sbin/mail2xmpp

To activate pipe forwarding, we have to create /etc/exim4/exim4.conf.localmacros with the following content:

SYSTEM_ALIASES_PIPE_TRANSPORT = address_pipe

After that, run newaliases and service exim4 restart for the config to take effect. Now you can test whether it works by simply sending a local test e-mail to the user root.
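Such a test mail can be sent via exim's sendmail interface, for example (subject and body are arbitrary):

printf "Subject: xmpp test\n\nHello via XMPP\n" | /usr/sbin/sendmail root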
