Arch Planet

Planet Arch Linux is a window into the world, work and lives of Arch Linux developers, trusted users and support staff.

RSS Feed

New Caddyfile and more

2020-02-26

I made a few significant changes on my blog. First, I have a new Caddyfile for Caddy:

```
{
    experimental_http3
}

www.nullday.de, www.nspawn.org, www.shibumi.dev {
    redir * https://{http.request.host.labels.1}.{http.request.host.labels.0}{path}
}

nullday.de {
    redir * https://shibumi.dev{path}
}

nspawn.org, shibumi.dev {
    file_server
    root * /srv/www/{host}/public
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload; always"
        Public-Key-Pins "pin-sha256=\"sRHdihwgkaib1P1gxX8HFszlD+7/gTfNvuAybgLPNis=\"; pin-sha256=\"YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=\"; pin-sha256=\"C5+lpZ7tcVwmwQIMcRtPbsQtWLABXhQzejna0wHFr8M=\"; includeSubdomains; max-age=2629746;"
        X-Frame-Options "SAMEORIGIN"
        X-Content-Type-Options "nosniff"
        X-XSS-Protection "1; mode=block"
        Content-Security-Policy "default-src 'none'; base-uri 'self'; form-action 'none'; img-src 'self'; script-src 'self'; style-src 'self'; font-src 'self'; worker-src 'self'; object-src 'self'; media-src 'self'; frame-ancestors 'none'; manifest-src 'self'; connect-src 'self'"
        Referrer-Policy "strict-origin"
        Feature-Policy "geolocation 'none'; midi 'none'; sync-xhr 'none'; microphone 'none'; camera 'none'; magnetometer 'none'; gyroscope 'none'; speaker 'none'; fullscreen 'self'; payment 'none';"
        Expect-CT "max-age=604800"
    }
    header /.well-known/openpgpkey/* {
        Content-Type application/octet-stream
        Access-Control-Allow-Origin *
    }
    encode {
        zstd
        gzip
    }
}
```

The new Caddyfile enables experimental HTTP/3 support. I have also added a few redirects to my new domain.

The Future of the Arch Linux Project Leader

2020-02-24

Hello everyone,

Some of you may know me from the days when I was much more involved in Arch, but most of you probably just know me as a name on the website. I’ve been with Arch for some time, taking the leadership of this beast over from Judd back in 2007. But, as these things often go, my involvement has slid down to minimal levels over time. It’s high time that changes.

Arch Linux needs involved leadership to make hard decisions and direct the project where it needs to go. And I am not in a position to do this.

In a team effort, the Arch Linux staff devised a new process for determining future leaders. From now on, leaders will be elected by the staff for a term length of two years. Details of this new process can be found here.

In the first official vote, with Levente Polyak (anthraxx), Gaetan Bisson (vesath), Giancarlo Razzolini (grazzolini), and Sven-Hendrik Haase (svenstaro) as candidates, and through 58 verified votes, a winner was chosen: Levente Polyak (anthraxx) will be taking over the reins of this ship. Congratulations!

Thanks for everything over all these years,
Aaron Griffin (phrakture)

Planet Arch Linux migration

2020-02-22

The software behind planet.archlinux.org was implemented in Python 2 and is no longer maintained upstream. This functionality has now been implemented in archlinux.org's archweb backend, which is actively maintained but offers a slightly different experience. The most notable changes are the offered feeds and the feed location. Archweb only offers an Atom feed, which is located here.

Terraforming my blog

2020-02-18

I’ve just pushed a first step towards managing my infrastructure with HashiCorp's Terraform. In this article I want to talk about this first step and give a glimpse of where it is heading. My infrastructure is hosted on Hetzner Cloud (luckily, there is a Terraform provider for it). DNS will be covered in a later blog article. I usually store my passwords in a gopass password store, hence I wanted Terraform to retrieve the Hetzner Cloud API key from it automatically.
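A minimal sketch of how this can look with the hcloud Terraform provider (the variable name and the gopass entry path are assumptions, not necessarily the post's actual setup):

```terraform
# Declare the token as an input variable instead of hard-coding it
# in version-controlled files.
variable "hcloud_token" {}

provider "hcloud" {
  token = var.hcloud_token
}
```

The token can then be supplied from gopass via an environment variable, e.g. `export TF_VAR_hcloud_token="$(gopass show -o <path-to-entry>)"` before running `terraform plan`.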

sshd needs restarting after upgrading to openssh-8.2p1

2020-02-17

After upgrading to openssh-8.2p1, the existing SSH daemon will be unable to accept new connections. (See FS#65517.) When upgrading remote hosts, please make sure to restart the SSH daemon using systemctl restart sshd right after running pacman -Syu. If you are upgrading to openssh-8.2p1-3 or higher, this restart will happen automatically.

Automate (offline) backups with restic and systemd

2020-02-17

This blog post builds on the content of the Fedora Magazine article "Automate backups with restic and systemd". Two important features were missing from that article for my use case:
  • Don't reveal restic passwords in plain-text files
  • Back up to offline storage (a USB flash drive)
Fortunately, modern Linux distributions offer all the mechanisms needed to implement these two requirements: Udisks(2) allows non-privileged users to mount external USB disks automatically, systemd.
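One way to keep the restic password out of plain-text files is restic's password command. A hypothetical systemd service sketch (the unit names, paths, and the secret-tool entry are assumptions, not the post's actual setup):

```ini
[Unit]
Description=restic backup to an external USB drive
# Only run while the backup drive is mounted (mount unit name is an assumption):
Requires=media-backup.mount
After=media-backup.mount

[Service]
Type=oneshot
Environment=RESTIC_REPOSITORY=/media/backup/restic
# restic asks this command for the password instead of reading it from a file:
Environment=RESTIC_PASSWORD_COMMAND="secret-tool lookup restic repo"
ExecStart=/usr/bin/restic backup /home
```

A matching systemd timer (or a udev-triggered unit for the offline-storage case) can then schedule the service without any password ever touching the disk in clear text.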

How to setup your own WKD server

2020-02-16

You may have heard about the problems with recent PGP key server implementations. I don’t want to reiterate their technical challenges; there are enough explanations for that on the web. So let us focus on preventing the problems. One possible solution is self-hosting your own WKD server. WKD stands for Web Key Directory. It’s a new standard for hosting PGP keys using existing infrastructure (web servers and HTTPS).
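The lookup path a WKD client requests can be derived by hand: the local part of the mail address is lowercased, SHA-1 hashed, and z-base-32 encoded. A small Python sketch of the direct method (a complete implementation would also append the `l=` query parameter):

```python
import hashlib

# z-base-32 alphabet as used by GnuPG for WKD hashes
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    """Encode bytes as z-base-32 (MSB-first, 5 bits per character)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ZB32[int(bits[i:i + 5].ljust(5, "0"), 2)]
                   for i in range(0, len(bits), 5))

def wkd_url(address: str) -> str:
    """Return the direct-method WKD URL for a mail address."""
    local, _, domain = address.partition("@")
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain.lower()}"
            f"/.well-known/openpgpkey/hu/{zbase32(digest)}")

# Test vector from the Web Key Directory draft:
print(wkd_url("Joe.Doe@Example.ORG"))
# https://example.org/.well-known/openpgpkey/hu/iy9q119eutrkn8s1mk4r39qejnbu3n5q
```

The exported binary (non-armored) public key is then served from exactly that path on your web server.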

A new domain: shibumi.dev

2020-02-09

Hello everybody, I’ve just moved my blog to a new domain: https://shibumi.dev/. Why a new domain? Well, I think this domain suits me better. It reflects my nickname on platforms like https://github.com/, IRC, and various others. Also, all .dev domains have HSTS enabled by default for even more security, and I’ve switched my domain registrar from Hetzner to https://inwx.de (finally, DNSSEC!). Because some websites still link to nullday.

Tests for the Arch Linux infrastructure

2020-02-05

The Arch Linux DevOps team uses a combination of Ansible and Terraform to manage their hosts. If you want to have a look at the infrastructure repository, you can do so via this link: https://git.archlinux.org/infrastructure.git/tree/ The combination of Ansible and Terraform works quite well for Arch Linux; the only thing still missing is proper testing. I want to present a small proof of concept for how we could do tests in the future.

Disable routing for Wireguard

2020-02-04

Think about the following scenario: you have a client at home and you have a server. The server permits SSH connections only from the WireGuard network (e.g. 10.0.0.0/24). You have WireGuard configured and running on your client, but you don’t want to route all traffic through it. You just want to access the server via WireGuard and route all other traffic normally through your local gateway (let’s say 192.168.2.1). The solution is to disable the default routing for the WireGuard client.
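Assuming the client uses wg-quick, this comes down to restricting AllowedIPs to the VPN subnet instead of 0.0.0.0/0, so no default route is installed. A sketch of the client config (keys, addresses and the endpoint are placeholders):

```ini
[Interface]
PrivateKey = <client-private-key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = <server-address>:51820
# Route only the WireGuard subnet through the tunnel; everything else
# keeps using the local gateway:
AllowedIPs = 10.0.0.0/24
```

With this, `wg-quick up` only adds a route for 10.0.0.0/24 via the tunnel interface.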

Bandwidth tests with iperf3

2020-02-03

If you ever need a simple bandwidth test for your server or client, you can set one up with iperf3. To start an iperf3 server, just use iperf3 -s:

```
❯ iperf3 -s
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 139.174.228.245, port 33133
[  5] local 78.46.124.83 port 5201 connected to 139.174.228.245 port 21516
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  12.
```

Isolated clients with Wireguard

2020-02-02

The WireGuard VPN doesn’t isolate clients by default. If you want to enable client isolation, you can do so via the following iptables rules:

```
iptables -I FORWARD -i wg0 -o wg0 -j REJECT --reject-with icmp-admin-prohibited
ip6tables -I FORWARD -i wg0 -o wg0 -j REJECT --reject-with icmp6-adm-prohibited
```

If you want to relax the rules for certain clients, you can do as follows (where 10.10.10.3 refers to the client and 10.10.10.0/24 to the WireGuard VPN network):

Routing applications through a VPN

2020-01-27

You may know this problem: you are using a laptop for work and for private stuff, and you don’t want your private traffic to leak when you activate your company or university VPN. I solved this problem by using systemd-nspawn containers to route certain applications (like web browsers) through a specific VPN. First you need a systemd-nspawn container. On Arch Linux you can create one via one of the following steps:

A forum for Flathub

2020-01-24

Flathub is primarily built around GitHub, where application manifests and infrastructure code live. Unsurprisingly, it turns out that a code hosting platform isn’t exactly a go-to place for the community to connect, even if one slaps a discussion label on an issue. Time zones and personal commitments mean that IRC is also not an ideal platform for discussion, and Flathub does not have a mailing list for discussion and announcements. To remedy this, Flathub now has a forum at discourse.flathub.org! Right now, there are three categories:
  • Requests for requesting new applications. (In the past, GitHub issues requesting new applications would tend to just be closed unless somebody is actively working on the request.)
  • Announcements for news posts from the Flathub team. (In the past, news posts have been scattered around the personal blogs of the Flathub team.)
  • General for other discussion about Flathub.
Please drop by the forum and say hello, even if you don’t have any specific questions yet! You can sign in with your GitHub account, if you already have one; and you can configure Discourse to send you emails if that’s what you prefer. (If you have an issue with a particular application, please open an issue on GitHub as before – this is the best way to get the attention of the maintainers of that application.) See you there!

How I moved from Nginx to Caddy

2020-01-18

Nginx has been my web server of choice for several years now. But I have always had some issues with nginx that bothered me for quite a while:
  • Weak defaults (no TLS by default, weak ciphers, no OCSP stapling by default, …)
  • The configuration is very verbose (which doesn’t have to be a bad thing)
  • New technologies (like QUIC or zstd compression) need ages until they are available downstream
  • Dealing with Let’s Encrypt / certificates has always been an error-prone process (I never got it working for a longer period of time without issues)
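For contrast, a minimal Caddyfile sketch (domain and path are placeholders, not the author's actual config): Caddy obtains and renews Let's Encrypt certificates automatically and serves HTTPS with sane TLS defaults out of the box.

```
example.org {
    root * /srv/www/example.org
    file_server
    encode zstd gzip
}
```

This handful of lines replaces what would be several dozen lines of server, TLS, and compression configuration in nginx.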

rsync compatibility

2020-01-15

Our rsync package was shipped with a bundled zlib to provide compatibility with the old-style --compress option up to version 3.1.0. Version 3.1.1 was released on 2014-06-22 and is shipped by all major distributions now. So we decided to finally drop the bundled library and ship a package with the system zlib. This also fixes security issues, current and future ones. Go and blame those running old versions if you encounter errors with rsync 3.1.3-3.

Now using Zstandard instead of xz for package compression

2020-01-04

As announced on the mailing list, on Friday, Dec 27 2019, our package compression scheme has changed from xz (.pkg.tar.xz) to zstd (.pkg.tar.zst). zstd and xz trade blows in their compression ratio. Recompressing all packages to zstd with our options yields a total ~0.8% increase in package size on all of our packages combined, but the decompression time for all packages saw a ~1300% speedup. We already have more than 545 zstd-compressed packages in our repositories, and as packages get updated more will keep rolling in. We have not found any user-facing issues as of yet, so things appear to be working. As a packager, you will automatically start building .pkg.tar.zst packages if you are using the latest version of devtools (>= 20191227). As an end-user no manual intervention is required, assuming that you have read and followed the news post from late last year. If you nevertheless haven't updated libarchive since 2018, all hope is not lost! Binary builds of pacman-static are available from Eli Schwartz' personal repository (or direct link to binary), signed with their Trusted User keys, with which you can perform the update.

My pacman.conf file

2020-01-01

Many users don’t modify their pacman.conf file, either because they think there is not much to configure or because they are afraid of breaking something. In this short article I want to highlight some nice options that make my daily use of Arch Linux a lot easier. First of all, here is my pacman.conf without comments:

```
[options]
HoldPkg = pacman glibc
Architecture = auto
IgnorePkg =
Color
TotalDownload
CheckSpace
VerbosePkgLists
ILoveCandy
SigLevel = Required DatabaseOptional
LocalFileSigLevel = Optional

[testing]
Include = /etc/pacman.
```

Flathub 2019 roundup

2019-12-31

One could say that the Flathub team works silently behind the scenes most of the time, and it wouldn’t be far from the truth. Unless changes are substantial, they are rarely announced anywhere other than under a pull request or issue on GitHub. Let’s change that a bit and try to summarize what was going on with Flathub over the last year. Beta branch and test builds: 2019 started off strong. In February, several improvements landed, both to the general workflow and to how things work under the hood. Maintainers gained the ability to sign in to buildbot to manage the builds and …

Xorg cleanup requires manual intervention

2019-12-20

In the process of the Xorg cleanup, the update requires manual intervention when you hit this message:

```
:: installing xorgproto (2019.2-2) breaks dependency 'inputproto' required by lib32-libxi
:: installing xorgproto (2019.2-2) breaks dependency 'dmxproto' required by libdmx
:: installing xorgproto (2019.2-2) breaks dependency 'xf86dgaproto' required by libxxf86dga
:: installing xorgproto (2019.2-2) breaks dependency 'xf86miscproto' required by libxxf86misc
```

When updating, use:

```
pacman -Rdd libdmx libxxf86dga libxxf86misc && pacman -Syu
```

to perform the upgrade.

Shell aliases for Flatpak applications

2019-12-16

Although I gave up on tuning my AwesomeWM configuration years ago and switched completely to GNOME, I still spend most of my time in the terminal. Instead of navigating to the photos directory in Nautilus, I instinctively spawn a new terminal window with Super + Enter and type eog ~/pics/filename.jpg. This has become harder as I replaced almost all desktop applications provided by my distribution of choice with packages from Flathub. Modern-day Flatpak generates simple shell wrappers at /var/lib/flatpak/exports/bin. To make it more secure, it uses the application ID as the command name. It would not be fun to have sudo shadowed by a rogue application if this bin directory had a higher priority in $PATH. While it is easy to guess the reverse DNS names used for GNOME and KDE applications, guessing the IDs of other applications may be burdensome. Being lazy, I wrote a script to generate shell aliases based on the command defined by each application:

```bash
#!/bin/bash
declare -A aliases

for bin in /var/lib/flatpak/exports/bin/* ~/.local/share/flatpak/exports/bin/*; do
    appid="$(basename $bin)"
    cmd="$(flatpak info -m $appid | awk -F= '/^command=/ {print $2}')"
    [[ -z $cmd ]] && continue
    aliases[$appid]="$(basename ${cmd##*/})"
done

(
    for appid in "${!aliases[@]}"; do
        echo "alias ${aliases[$appid]}=$appid"
    done
) > ~/.cache/flatpak-aliases
```

This approach is not ideal, as the command can be set arbitrarily. Some applications use helper scripts as the entry point; for example, org.keepassxc.KeePassXC ends up aliased as command-wrapper.sh. I consider it good enough, though.

Traefik BasicAuth

2019-12-05

In this short blog article we revisit Traefik and add password authentication to our reverse proxy example. Password authentication means we use a (user, password) tuple for the login. We don’t want to store our password in clear text, therefore we need to hash it. At this moment, Traefik supports three hash algorithms: MD5, SHA1 and BCrypt. Two of them are considered broken, hence you should use BCrypt:

```
$ htpasswd -nbB myName myPassword
myName:$2y$05$c4WoMPo3SXsafkva.
```

External data checker for Flathub

2019-11-29

To work around restrictions surrounding redistribution and repackaging of proprietary applications, Flatpak supports storing only the URL and the metadata needed for verifying file integrity in the app bundle. During installation, such files are downloaded and extracted completely locally, thus not breaking the license. However, this solution is not ideal. If the vendor decides to use an unversioned URL or removes older releases when new versions come out, the application becomes impossible to install until the Flatpak maintainer updates the metadata. It has become a growing problem for Flathub, and annoying enough that the fine people at Endless wrote a tool which periodically checks Flatpak manifests and submits pull requests with fixed extra-data information, and started to run it on a few chosen apps. The fact that it operated without a Flathub stamp of approval meant that it remained rather unknown. Thanks to Will Thompson’s patience, last week it was transferred to the Flathub organization on GitHub and can be considered officially supported. It is enough to define x-checker-data in an application’s manifest and it will be scanned every hour. If you maintain an extra-data app – or use one and want to contribute! – please take a look at the documentation of flatpak-external-data-checker. It will prove useful for regular applications as well, since it can also detect broken checksums of any external data. While we do not run it on all applications yet, it will happen in the near future.
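As a sketch of what this looks like (URLs, values, and the checker type here are illustrative assumptions; see the flatpak-external-data-checker documentation for the actual schema), an extra-data source in a manifest might carry checker metadata like:

```json
{
  "type": "extra-data",
  "filename": "application.deb",
  "url": "https://example.com/downloads/application.deb",
  "sha256": "<sha256-of-current-file>",
  "size": 12345678,
  "x-checker-data": {
    "type": "rotating-url",
    "url": "https://example.com/downloads/application.deb"
  }
}
```

When the file behind the unversioned URL changes, the checker notices the checksum mismatch and opens a pull request with updated url, sha256, and size fields.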

primus_vk>=1.3-1 update requires manual intervention

2019-11-25

The primus_vk package prior to version 1.3-1 was missing some soname links. This has been fixed in 1.3-1, so the upgrade will need to overwrite the untracked soname links. If you get an error like:

```
primus_vk: /usr/lib/libnv_vulkan_wrapper.so.1 exists in filesystem
primus_vk: /usr/lib/libprimus_vk.so.1 exists in filesystem
```

when updating, use:

```
pacman -Syu --overwrite=/usr/lib/libnv_vulkan_wrapper.so.1,/usr/lib/libprimus_vk.so.1
```

to perform the upgrade.

Arch Conf 2019 Report

2019-11-17

During the 5th and 6th of October, 21 team members attended the very first internal Arch Conf. We spent two days at Native Instruments in Berlin having workshops, discussions and hack sessions together. We even managed to get into, and escape, an escape room! It was a great and productive weekend which we hope to repeat in the coming years. Hopefully we will be able to expand on this in the future and include more community members and users.