Arch Planet

Planet Arch Linux is a window into the world, work and lives of Arch Linux developers, trusted users and support staff.

RSS Feed

hplip 3.20.3-2 update requires manual intervention


The hplip package prior to version 3.20.3-2 was missing the compiled Python modules. This has been fixed in 3.20.3-2, so the upgrade will need to overwrite the untracked pyc files that were created. If you get errors such as these when updating:

    hplip: /usr/share/hplip/base/__pycache__/__init__.cpython-38.pyc exists in filesystem
    hplip: /usr/share/hplip/base/__pycache__/avahi.cpython-38.pyc exists in filesystem
    hplip: /usr/share/hplip/base/__pycache__/codes.cpython-38.pyc exists in filesystem
    ...many more...

use

    pacman -Suy --overwrite /usr/share/hplip/\*

to perform the upgrade.



If you have had a closer look at my domain, you might have checked my MX records:

    ❯ resolvectl query -t mx
    IN MX 10
    IN MX 10
    IN MX 20

Yes, I have to admit I don’t host my own mail infrastructure. I think that is too toilsome, and I have better things to do, like writing this blog article. In this article I want to explain how I’ve configured the SPF, DKIM and DMARC settings for my domain.

More ways to handle dotfiles


I’ve received plenty of feedback on my last blog article about how I handle dotfiles, so I’ve decided to give a glimpse of how others manage theirs. Another way of handling dotfiles is using GNU stow, as explained here: with GNU stow it’s possible to store your dotfiles in a separate directory and then symlink the files from this directory into place by invoking stow <directory name>.
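As a rough sketch of that layout (the program names and file contents are my own assumptions, not from any particular setup), each program gets its own sub-directory whose contents mirror your home directory:

```shell
# Create a dotfiles directory with one sub-directory per program.
dotfiles=$(mktemp -d)   # stand-in for e.g. ~/dotfiles
mkdir -p "$dotfiles/vim" "$dotfiles/zsh"
echo 'set number' > "$dotfiles/vim/.vimrc"
echo 'export EDITOR=vim' > "$dotfiles/zsh/.zshrc"

# From inside the dotfiles directory, stow would then symlink each file one
# level up (e.g. ~/.vimrc -> ~/dotfiles/vim/.vimrc):
#   cd "$dotfiles" && stow --target="$HOME" vim zsh
ls "$dotfiles"
```

Running stow -D <directory name> removes the symlinks again, which makes trying out a different dotfiles setup fairly low-risk.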

firewalld>=0.8.1-2 update requires manual intervention


The firewalld package prior to version 0.8.1-2 was missing the compiled Python modules. This has been fixed in 0.8.1-2, so the upgrade will need to overwrite the untracked pyc files created. If you get errors like these when updating:

    firewalld: /usr/lib/python3.8/site-packages/firewall/__pycache__/__init__.cpython-38.pyc exists in filesystem
    firewalld: /usr/lib/python3.8/site-packages/firewall/__pycache__/client.cpython-38.pyc exists in filesystem
    firewalld: /usr/lib/python3.8/site-packages/firewall/__pycache__/dbus_utils.cpython-38.pyc exists in filesystem
    ...many more...

use

    pacman -Suy --overwrite /usr/lib/python3.8/site-packages/firewall/\*

to perform the upgrade.

How to handle dotfiles


In this article I want to show how I handle my dotfiles and why I think it’s the best way to handle them. I tried different approaches in the past: puppet, ansible, home-made shell script magic, and maybe a few more I don’t remember, because I didn’t use them much. So what’s wrong with puppet or ansible? Don’t get me wrong, I love config management, and I love using both for bigger infrastructures.

New Caddyfile and more


I made a few significant changes to my blog. First, I have a new Caddyfile for Caddy:

    {
        experimental_http3
    }

    , {
        redir * https://{}.{}{path}
    }

    {
        redir * {path}
    }

    , {
        file_server
        root * /srv/www/{host}/public
        header {
            Strict-Transport-Security "max-age=31536000; includeSubDomains; preload; always"
            Public-Key-Pins "pin-sha256=\"sRHdihwgkaib1P1gxX8HFszlD+7/gTfNvuAybgLPNis=\"; pin-sha256=\"YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=\"; pin-sha256=\"C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M=\"; includeSubdomains; max-age=2629746;"
            X-Frame-Options "SAMEORIGIN"
            X-Content-Type-Options "nosniff"
            X-XSS-Protection "1; mode=block"
            Content-Security-Policy "default-src 'none'; base-uri 'self'; form-action 'none'; img-src 'self'; script-src 'self'; style-src 'self'; font-src 'self'; worker-src 'self'; object-src 'self'; media-src 'self'; frame-ancestors 'none'; manifest-src 'self'; connect-src 'self'"
            Referrer-Policy "strict-origin"
            Feature-Policy "geolocation 'none'; midi 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; fullscreen 'self'; payment 'none';"
            Expect-CT "max-age=604800"
        }
        header /.well-known/openpgpkey/* {
            Content-Type application/octet-stream
            Access-Control-Allow-Origin *
        }
        encode {
            zstd
            gzip
        }
    }

The new Caddyfile enables experimental HTTP/3 support. I’ve also added a few redirects to my new domain.

The Future of the Arch Linux Project Leader


Hello everyone,

Some of you may know me from the days when I was much more involved in Arch, but most of you probably just know me as a name on the website. I’ve been with Arch for some time, taking the leadership of this beast over from Judd back in 2007. But, as these things often go, my involvement has slid down to minimal levels over time. It’s high time that changes.

Arch Linux needs involved leadership to make hard decisions and direct the project where it needs to go. And I am not in a position to do this.

In a team effort, the Arch Linux staff devised a new process for determining future leaders. From now on, leaders will be elected by the staff for a term length of two years. Details of this new process can be found here.

In the first official vote, with Levente Polyak (anthraxx), Gaetan Bisson (vesath), Giancarlo Razzolini (grazzolini), and Sven-Hendrik Haase (svenstaro) as candidates, and through 58 verified votes, a winner was chosen: Levente Polyak (anthraxx) will be taking over the reins of this ship. Congratulations!

Thanks for everything over all these years,
Aaron Griffin (phrakture)

Planet Arch Linux migration


The software behind Planet Arch Linux was implemented in Python 2 and is no longer maintained upstream. Its functionality has now been reimplemented in the archweb backend, which is actively maintained but offers a slightly different experience. The most notable changes are the offered feeds and the feed location: archweb only offers an Atom feed, which is located here.

Terraforming my blog


I’ve just pushed a first step towards managing my infrastructure via HashiCorp’s Terraform. In this article I want to talk about this first step and give a glimpse into its future. My infrastructure is hosted on Hetzner Cloud (luckily, there is a Terraform provider for it). DNS will be covered in a later blog article. I usually store my passwords in a gopass password store, hence I wanted to let Terraform retrieve the Hetzner Cloud API key magically.
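A minimal sketch of that retrieval, assuming the Terraform hcloud provider reads its token from the HCLOUD_TOKEN environment variable and assuming a made-up gopass entry name; a stub function stands in for gopass so the snippet runs anywhere:

```shell
# gopass prints a secret to stdout, so a command substitution can feed it
# directly into the environment variable the hcloud provider reads.
secret_cmd() { echo "dummy-token"; }   # stand-in for: gopass show -o misc/hcloud-api-key
export HCLOUD_TOKEN="$(secret_cmd)"

# terraform plan   # would now authenticate without the key ever touching disk
test -n "$HCLOUD_TOKEN" && echo "token loaded"
```

The nice property of this pattern is that the secret only ever lives in the environment of the Terraform process, never in a state-adjacent file.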

sshd needs restarting after upgrading to openssh-8.2p1


After upgrading to openssh-8.2p1, the existing SSH daemon will be unable to accept new connections. (See FS#65517.) When upgrading remote hosts, please make sure to restart the SSH daemon using systemctl restart sshd right after running pacman -Syu. If you are upgrading to openssh-8.2p1-3 or higher, this restart will happen automatically.

Automate (offline) backups with restic and systemd


This blog post builds on the content of the Fedora Magazine article Automate backups with restic and systemd. Two important features were missing from that article for my use case:
  • Don't reveal restic passwords in plain-text files
  • Back up to offline storage (a USB flash drive)
Fortunately, modern Linux distributions offer all the mechanisms needed to implement these two requirements: Udisks(2) allows non-privileged users to mount external USB disks automatically; systemd.
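A minimal sketch of how this can fit together as a systemd service; the mount unit name, repository path, and password command below are assumptions, not the actual values from this setup:

```ini
# /etc/systemd/system/restic-backup.service
[Unit]
Description=Restic backup to an external USB drive
# Only run while the offline disk is actually mounted:
Requires=run-media-backup.mount
After=run-media-backup.mount

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=/run/media/backup/restic-repo
# restic runs this command and reads the password from its stdout,
# so no plain-text password file is needed:
Environment="RESTIC_PASSWORD_COMMAND=gpg --quiet --decrypt /etc/restic/password.gpg"
ExecStart=/usr/bin/restic backup --tag systemd /home
```

A matching timer unit, or a udev rule that starts the service when the drive appears, then automates the schedule.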

How to setup your own WKD server


You may have heard about the problems with recent PGP key server implementations. I don’t want to reiterate the technical challenges of those implementations; I think there are enough explanations for this on the web. So let us focus on preventing the problems. One possible way around them is self-hosting your own WKD server. WKD stands for Web Key Directory. It’s a new standard for hosting PGP keys using existing infrastructure (web servers and HTTPS).
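As a rough sketch of the directory layout a WKD client expects (the web root here is a temporary stand-in and the mail address is a placeholder):

```shell
# WKD clients fetch keys from a well-known path under the mail domain.
webroot=$(mktemp -d)   # stand-in for e.g. /srv/www/example.org
mkdir -p "$webroot/.well-known/openpgpkey/hu"

# The filename is a hash of the local part of the mail address; gpg can
# print it for an existing key:
#   gpg --list-keys --with-wkd-hash user@example.org
# The binary (not ASCII-armored) key is then exported under that name:
#   gpg --export user@example.org > "$webroot/.well-known/openpgpkey/hu/<hash>"
ls "$webroot/.well-known"
```

The files should be served with Content-Type application/octet-stream over HTTPS; no special server software is needed beyond a plain web server.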

A new domain:


Hello everybody, I’ve just moved my blog to a new domain. Why a new domain? Well, I think this domain suits me better. It reflects my nickname on platforms like IRC and various others. Also, all .dev domains have HSTS enabled by default for even more security, and I’ve switched my domain registrar from Hetzner to (finally, DNSSEC, yeah). Because some websites still link to nullday.

Tests for the Arch Linux infrastructure


The Arch Linux DevOps team uses a combination of Ansible and Terraform to manage their hosts. If you want to have a look at their infrastructure repository, you can do so via this link: The combination of Ansible and Terraform works quite well for Arch Linux; the only thing we are missing is proper testing. I want to present a small proof of concept of how we could do tests in the future.

Disable routing for Wireguard


Consider the following scenario: you have a client at home, and you have a server. The server permits SSH connections only from the WireGuard network. You have WireGuard configured and running on your client, but you don’t want to route all traffic through WireGuard. You actually just want to access the server via WireGuard and route all other traffic normally through your local gateway. The solution is to disable routing for the WireGuard client.
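A minimal wg-quick client sketch of that idea; the keys, addresses, and endpoint below are placeholders. `Table = off` tells wg-quick not to install the routes it would normally derive from AllowedIPs, so the default route stays untouched:

```ini
[Interface]
PrivateKey = <client-private-key>
# The /24 prefix gives the interface a connected route to the VPN subnet only:
Address = 10.0.0.2/24
# Don't install any routes; all other traffic keeps using the local gateway.
Table = off

[Peer]
PublicKey = <server-public-key>
Endpoint = vpn.example.org:51820
# Only traffic for the WireGuard subnet is valid inside the tunnel:
AllowedIPs = 10.0.0.0/24
```

With this, SSH to the server's 10.0.0.x address goes through the tunnel while everything else is routed normally.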

Bandwidth tests with iperf3


If you ever need a simple bandwidth test for your server or client, you can set one up with iperf3. To start an iperf3 server, just use iperf3 -s:

    ❯ iperf3 -s
    -----------------------------------------------------------
    Server listening on 5201
    -----------------------------------------------------------
    Accepted connection from , port 33133
    [  5] local  port 5201 connected to  port 21516
    [ ID] Interval           Transfer     Bitrate         Retr  Cwnd
    [  5]   0.00-1.00   sec  12.

Isolated clients with Wireguard


The WireGuard VPN doesn’t isolate clients by default. If you want to enable client isolation, you can do so via the following iptables rules:

    iptables -I FORWARD -i wg0 -o wg0 -j REJECT --reject-with icmp-admin-prohibited
    ip6tables -I FORWARD -i wg0 -o wg0 -j REJECT --reject-with icmp6-adm-prohibited

If you want to relax the rules for certain clients, you can do so as follows (where one address refers to the client and the other to the WireGuard VPN network):

Routing applications through a VPN


You may know this problem: you use one laptop for work and for private stuff, and you don’t want your private traffic to leak when you activate your company or university VPN. I solved this problem by using systemd-nspawn containers to route certain applications (like web browsers) through a specific VPN. First you need a systemd-nspawn container. On Arch Linux you can achieve this via one of the following steps:

A forum for Flathub


Flathub is primarily built around GitHub, where application manifests and infrastructure code live. Unsurprisingly, it turns out that a code hosting platform isn’t exactly a go-to place for the community to connect, even if one slaps a discussion label on an issue. Timezones and personal commitments mean that IRC is also not an ideal platform for discussion, and Flathub does not have a mailing list for discussion and announcements. To remedy this, Flathub now has a forum! Right now, there are three categories:
  • Requests for requesting new applications. (In the past, GitHub issues requesting new applications would tend to just be closed unless somebody is actively working on the request.)
  • Announcements for news posts from the Flathub team. (In the past, news posts have been scattered around the personal blogs of the Flathub team.)
  • General for other discussion about Flathub.
Please drop by the forum and say hello, even if you don’t have any specific questions yet! You can sign in with your GitHub account, if you already have one; and you can configure Discourse to send you emails if that’s what you prefer. (If you have an issue with a particular application, please open an issue on GitHub as before – this is the best way to get the attention of the maintainers of that application.) See you there!

How I moved from Nginx to Caddy


Nginx has been my webserver of choice for several years now, but I have always had some issues with it that bothered me for quite a while:
  • Weak defaults (no TLS by default, weak ciphers, no OCSP stapling by default, …)
  • The configuration is very verbose (which doesn't have to be a bad thing)
  • New technologies (like QUIC or zstd compression) take ages until they are available downstream
  • Dealing with Let’s Encrypt certificates has always been an error-prone process (I never got it working for a longer period of time without issues)

rsync compatibility


Our rsync package was shipped with bundled zlib to provide compatibility with the old-style --compress option up to version 3.1.0. Version 3.1.1 was released on 2014-06-22 and is shipped by all major distributions now, so we decided to finally drop the bundled library and ship a package with system zlib. This also fixes security issues, both current and future ones. Go and blame those running old versions if you encounter errors with rsync 3.1.3-3.

Now using Zstandard instead of xz for package compression


As announced on the mailing list, on Friday, Dec 27 2019, our package compression scheme has changed from xz (.pkg.tar.xz) to zstd (.pkg.tar.zst). zstd and xz trade blows in their compression ratio. Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup. We already have more than 545 zstd-compressed packages in our repositories, and as packages get updated more will keep rolling in. We have not found any user-facing issues as of yet, so things appear to be working. As a packager, you will automatically start building .pkg.tar.zst packages if you are using the latest version of devtools (>= 20191227). As an end-user no manual intervention is required, assuming that you have read and followed the news post from late last year. If you nevertheless haven't updated libarchive since 2018, all hope is not lost! Binary builds of pacman-static are available from Eli Schwartz' personal repository (or direct link to binary), signed with their Trusted User keys, with which you can perform the update.

My pacman.conf file


Many users don’t modify their pacman.conf file, either because they think there is not much to configure or because they are afraid of breaking something. In this short article I want to highlight some nice options that make my daily use of Arch Linux a lot easier. First of all, here is my pacman.conf without comments:

    [options]
    HoldPkg = pacman glibc
    Architecture = auto
    IgnorePkg =
    Color
    TotalDownload
    CheckSpace
    VerbosePkgLists
    ILoveCandy
    SigLevel = Required DatabaseOptional
    LocalFileSigLevel = Optional

    [testing]
    Include = /etc/pacman.

Flathub 2019 roundup


One could say that the Flathub team works silently behind the scenes most of the time, and it wouldn’t be far from the truth. Unless changes are substantial, they are rarely announced anywhere other than under a pull request or issue on GitHub. Let’s change that a bit and try to summarize what has been going on with Flathub over the last year.

Beta branch and test builds

2019 started off strong. In February, several improvements landed, both to the general workflow and to how things work under the hood. Maintainers gained the ability to sign in to buildbot to manage the builds and …

Xorg cleanup requires manual intervention


In the process of the Xorg cleanup, the update requires manual intervention when you hit this message:

    :: installing xorgproto (2019.2-2) breaks dependency 'inputproto' required by lib32-libxi
    :: installing xorgproto (2019.2-2) breaks dependency 'dmxproto' required by libdmx
    :: installing xorgproto (2019.2-2) breaks dependency 'xf86dgaproto' required by libxxf86dga
    :: installing xorgproto (2019.2-2) breaks dependency 'xf86miscproto' required by libxxf86misc

When updating, use:

    pacman -Rdd libdmx libxxf86dga libxxf86misc && pacman -Syu

to perform the upgrade.