Regular update of Docker images/containers

After converting my servers to Docker setups, I needed to update the images/containers regularly for security reasons. I was baffled to find that there is no standard update method to make sure that everything is up to date. The ephemeral setup lets you throw away your containers and images and recreate them with the latest versions. As easy as this sounds, there are some pitfalls in the process.

First we need to understand that there are three types of images to keep up to date:

  • Images from Docker Hub that are simply pulled and used as they are, with some configuration
  • Images from Docker Hub that are pulled and then only used as a base for your own Dockerfiles
  • The images built from your own Dockerfiles

Ridiculously, all three of them need to be updated to make sure everything is current (most importantly, a new local build will not automatically pick up the latest base image update), and additionally we have to take care of cleanup.

I found some solutions on the net to automatically update Docker, the best so far by binfalse.de. But it leaves out my own Dockerfiles, which need a rebuild and some minor steps, pruning, etc. So I only use the dupdate script from the Handy Docker Tools and incorporate it into a little script.

So what is the solution

WARNING: This just updates images. If your setup needs additional update steps, you need to plan these in. Otherwise you risk breaking your setup.

Multiple steps are needed for a complete update. First, I use /usr/local/sbin/dupdate -v to update all Docker images coming from a hub, covering both the ones I use directly and the ones used as a base for builds.

This will report an error for the images you created from your own Dockerfiles, but it updates all images pulled from Docker Hub. Second, I update the images from my own Dockerfiles by rebuilding them all via /usr/local/bin/docker-compose -f docker-compose.yml build --no-cache. If you use Docker without docker-compose, you just have to do something similar for each Dockerfile.

This will use the newly pulled base images in the build and thus create the latest version of each of your own images. IMPORTANT: The --no-cache is needed to force the update of self-built Dockerfiles. Docker decides whether your own build needs updating only by the commands in the Dockerfile (i.e. whether they changed). It CANNOT see a version change of a package installed with e.g. apt-get install. But exactly that version change is what you want, so you have to force a rebuild.

Now the images are all updated and we only need to restart the containers.
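A minimal sketch of how these steps could be tied together in one small update script (the compose file location /srv/docker and the use of docker-compose up -d for the restart are my assumptions; adapt the paths and the restart step to your setup):

#!/bin/bash
# Hypothetical update script combining the steps above -- adapt paths to your setup.
set -e

# 1. Update all images pulled from a registry (used directly or as build bases).
/usr/local/sbin/dupdate -v

# 2. Rebuild own Dockerfiles without cache so new base images and packages are picked up.
cd /srv/docker    # assumed location of docker-compose.yml
/usr/local/bin/docker-compose -f docker-compose.yml build --no-cache

# 3. Recreate the containers from the updated images.
/usr/local/bin/docker-compose -f docker-compose.yml up -d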

Addition for cleaning up

However, you end up with a lot of images tagged or named <none>. These are your old images, which are now cluttering the hard drive. The ones only tagged with <none> are the ones you updated from Docker Hub, the ones with a name and the <none> tag are the ones you built. Run /usr/bin/docker image prune -a --force to get rid of them and free up space. Warning: This will erase all older images. If you need them as a safety precaution, skip this step.


Reinstall GRUB after BIOS update on LUKS encrypted system

For reasons now unknown, I upgraded my Lenovo's BIOS via a USB stick. :-) Everything went well; however, after the reboot all my boot options for Linux Mint were gone. It turns out my boot setup had somehow been erased as well. Using UEFI without CSM and without Secure Boot on a LUKS-encrypted Linux Mint, this was already an issue when first installing. Getting everything right seems to be more a matter of good luck.

This answer is mainly for straightforward installations of Ubuntu/Mint with LVM on LUKS and an unencrypted /boot. If you have a different setup, make sure to adapt the mounts below.

Advice on using the boot-repair utility: Don't. The tool messes with a lot of configs unnecessarily. If you are not absolutely sure what it does, don't use it. An example from my own case: when using it to restore GRUB, it edited my fstab, uncommented my root mapper and set it to noauto. Result: after GRUB you end up in an initramfs prompt, as the volume has not been unlocked, possibly wasting time checking why LVM times out, why cryptsetup is not working, etc.
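To illustrate what that kind of change looks like (the mapper and volume group names are assumptions matching the mint-vg example later in this post), a root line in /etc/fstab switched to noauto will not be mounted at boot:

# broken by boot-repair: root will not be mounted automatically at boot
/dev/mapper/mint--vg-root  /  ext4  noauto,errors=remount-ro  0  1

# working entry for an LVM-on-LUKS root
/dev/mapper/mint--vg-root  /  ext4  errors=remount-ro  0  1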

So what is the solution

To get my boot menu back, I tried several things, e.g. boot-repair. However, since my system is LUKS encrypted, I guess the tools all had some problems. To get my system back, I accessed the old installation via chroot from a Linux live CD, in this case the Linux Mint live CD.

First boot from the Linux live CD and get keyboard locale and network set up. Then unlock your LUKS device. Make sure that the name at the end (sda3_crypt) is the one specified in your original /etc/crypttab (yes, if you do not know it, you need to open the crypt device under some name, take a look, then close and reopen it). Otherwise you might get a warning later on.

If you have a different setup, make sure to take the correct device.

cryptsetup luksOpen /dev/sda3 sda3_crypt

For overall LUKS information and commands, take a look here: https://wiki.ubuntuusers.de/LUKS/
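If you do not know the mapper name your installed system expects, a rough sketch of the peek-and-reopen step mentioned above could look like this (the temporary name tmp_crypt and the mint-vg volume group are assumptions; adapt them to your setup):

# open under a temporary name and mount the root LV to read crypttab
sudo cryptsetup luksOpen /dev/sda3 tmp_crypt
sudo vgchange -ay
sudo mount /dev/mint-vg/root /mnt
grep -v '^#' /mnt/etc/crypttab        # note the name in the first column
sudo umount /mnt
sudo vgchange -an mint-vg
sudo cryptsetup luksClose tmp_crypt   # then reopen with the correct name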

Next, mount all necessary partitions of the old system. To find the LVM volumes inside the LUKS container, use sudo lvscan

For me, the root filesystem turns out to be /dev/mint-vg/root

If you have separate partitions, e.g. for home, or other devices, make sure to adapt the mounts below.

sudo mount /dev/mint-vg/root /mnt
sudo mount /dev/sda2 /mnt/boot
for i in /dev /dev/pts /proc /sys /run ; do sudo mount -B $i /mnt$i ; done
sudo mount -o bind /etc/resolv.conf /mnt/etc/resolv.conf
sudo mount /dev/sda1 /mnt/boot/efi/

Make sure you add the /mnt/boot/efi mount, otherwise grub-install will complain: "cannot find EFI directory". The EFI system partition is not included in your boot partition; it is a separate partition.

After that, enter the chroot environment with sudo chroot /mnt /bin/bash

To install GRUB run sudo grub-install /dev/sda

Now just a reboot is needed and the system should work as before.
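A short sketch of how leaving the chroot and cleaning up before the reboot could look (this is my assumption of the tidy way out; simply rebooting from the live session works as well):

exit                                 # leave the chroot
for i in /run /sys /proc /dev/pts /dev ; do sudo umount /mnt$i ; done
sudo umount /mnt/etc/resolv.conf /mnt/boot/efi /mnt/boot /mnt
sudo reboot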

Addendum

If you are here because something on boot is not working and you messed things up even further, as I did, running update-initramfs -c -k all inside the chroot could additionally help.


OPENPGPKEY and DANE

As a long-time PGP user I wanted to improve the key landscape and offer my public key via DANE. This is quite simple if you have a DNS host that supports DNSSEC and the required DANE record types. (Note: I use core-networks.de)

The principle behind this: you publish your PGP key in your DNSSEC-signed zone, and a correctly configured mail system can then opt into sending you encrypted e-mails even without having exchanged keys before.

I tried the gpg2 option --print-dane-records (available from 2.1.9 upwards), however I could not generate a usable data part for the DNS record with it.

So what is the solution

As a prerequisite you need to have created your own PGP key with the e-mail address you want to use as a user ID. Note: I am only doing a per-e-mail setup. Use the following website: https://www.huque.com/bin/openpgpkey

Generate an OPENPGPKEY record there; the output pretty much does the trick. The generated output maps to the DNS record as follows:

  • The Owner Name goes into the record name. Be careful if your DNS provider automatically appends the domain. If you used --print-dane-records, you need to concatenate the ID of the key (before TYPE61) and the string after $ORIGIN. The syntax is <SHA256 hash of your e-mail name before the @>._openpgpkey.<your domain> (see the sketch after this list).

  • From the Generated DNS OPENPGPKEY Record you take the part within the parentheses () as the data part of the record. Here you need to transform the key into a one-line string. As for --print-dane-records: I discarded its data part, as I could not get it to work. Simply use your key data exported with ASCII armor (-a parameter) without the last line, and of course leaving the header and footer lines out as well.

  • The type is OPENPGPKEY

  • Class is IN

  • The TTL can be set to any decent value. For testing I used an hour.
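A small sketch of how the owner name could be computed by hand, assuming the RFC 7929 convention of a SHA-256 hash of the lowercased local part truncated to 28 octets (56 hex characters); alice and example.org are placeholder values:

# hash the part before the @ (lowercased) and keep the first 56 hex characters
printf '%s' 'alice' | sha256sum | cut -c1-56
# then build the owner name by hand:
#   <hash>._openpgpkey.example.org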

To test the setup use

dig OPENPGPKEY <owner name goes here>

This simply returns the data block and tests the DNS setup. To test the DANE lookup, do the following

gpg2 --auto-key-locate clear,dane,local -v --locate-key <your e-mail goes here>

This gives quite a good feedback on the setup and tells you if the key was fetched via DANE or not.


SendXMPP mail forward on Debian Jessie

To have a more comfortable way of receiving messages from my servers, I wanted all root e-mails to be forwarded to my mobile via XMPP. I only have a limited exim4 running on my machine, configured for local mail delivery only.

So what is the solution

First install sendxmpp with apt-get install sendxmpp, then create the config file /etc/sendxmpp.conf and insert your XMPP credentials:

<sender>@<sender_server>:<port> <password>
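For example, with made-up credentials this could look like:

alerts@example.org:5222 SuperSecretPassword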

Set the right permissions with chmod 600 /etc/sendxmpp.conf and the owner with chown Debian-exim:Debian-exim /etc/sendxmpp.conf

Then create a script /usr/sbin/mail2xmpp that calls sendxmpp. It might be possible to put this completely into the alias, however I decided to use a script. Exchange the receiving ID for your own; -t enables the TLS connection for sending the message.

#!/bin/bash
echo "$(cat)" | sendxmpp -t -f /etc/sendxmpp.conf <receiver>@<receiving_server>

Make the script executable with chmod 755 /usr/sbin/mail2xmpp and create the alias for the user whose e-mails you want to forward in /etc/aliases:

# /etc/aliases
root: |/usr/sbin/mail2xmpp

To activate pipe forwarding we have to create /etc/exim4/exim4.conf.localmacros with the following content:

SYSTEM_ALIASES_PIPE_TRANSPORT = address_pipe

After that run newaliases and service exim4 restart for the config to take effect. Now you can test whether it works by simply sending a local test e-mail to the user root.
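A quick way to send such a test mail from the server itself (assuming the mail command from bsd-mailx or mailutils is installed; any local MUA will do):

echo "XMPP forward test" | mail -s "test" root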


Unlock LUKS via SSH in Debian

As already described in my previous post Headless Debian install via SSH, I am dealing with a headless system. As I am encrypting my system and drives with LUKS, I need a way to enter the password in case of a reboot.

So what is the solution

First install Dropbear on the server with apt-get install dropbear. Then configure initramfs network usage by editing /etc/initramfs-tools/initramfs.conf. You probably have to add the lines for Dropbear and update the device settings. The configuration below uses DHCP to obtain an IP; if you have a static configuration, use: IP=<SERVER-IP>::<STANDARD-GATEWAY>:<SUBNETMASK>:<HOSTNAME>:eth0:off

#
# DROPBEAR: [ y | n ]
#
# Use dropbear if available.
#
DROPBEAR=y

DEVICE=eth0
IP=:::::eth0:dhcp

Next, delete the standard private and public keys on the server

rm /etc/initramfs-tools/root/.ssh/id_rsa
rm /etc/initramfs-tools/root/.ssh/id_rsa.pub

Then create your own key pair (we assume you use id_rsa as a name) on your client machine and upload it to the server.

ssh-keygen
scp ~/.ssh/id_rsa.pub myuser@debian_headless:id_rsa.pub

After that, log in to the server, add the key to the authorized_keys file and remove the public key on the server.

ssh myuser@debian_headless
sudo sh -c "cat id_rsa.pub >> /etc/initramfs-tools/root/.ssh/authorized_keys"
rm id_rsa.pub

Now we need to update the initramfs and GRUB with update-initramfs -u -k all and update-grub2

On some configurations the network won't get reconfigured from the values used in the initramfs, hence we need to trigger an update. Edit /etc/network/interfaces and add pre-up ip addr flush dev eth0 as the first line of the primary interface stanza.
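For illustration, a minimal sketch of such an interface stanza (assuming eth0 with DHCP; adapt the interface name and method to your setup):

auto eth0
iface eth0 inet dhcp
    pre-up ip addr flush dev eth0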

Restart the server and log in from your client with ssh -i ~/.ssh/id_rsa root@<server-ip> to provide the password to unlock

echo -n "<LUKS encryption password>" > /lib/cryptsetup/passfifo
exit

EDIT: On newer systems a simple cryptroot-unlock will suffice.

The server should now boot normally and regular SSH should come up.

Optional

You can also create a little script for the passphrase in /etc/initramfs-tools/hooks/unlock

#!/bin/bash
PREREQ=""
prereqs() {
  echo "$PREREQ"
}

case $1 in
prereqs)
prereqs
exit 0
;;
esac

. /usr/share/initramfs-tools/hook-functions

cat > "${DESTDIR}/root/unlock" << EOF
#!/bin/sh
/lib/cryptsetup/askpass 'passphrase: ' > /lib/cryptsetup/passfifo
EOF

chmod u+x "${DESTDIR}/root/unlock"

exit 0

Do not forget to make it executable with chmod +x /etc/initramfs-tools/hooks/unlock and to update the initramfs with update-initramfs -u -k all and update-grub2
