This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 04 Aug – 08 Aug 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It's responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.). List of planned/in-progress issues
The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.
Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes.
The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.
What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have an API accessible that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options) but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1] so disabling it means you get a broken touchpad and we now get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.
But plugins are different, they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough" so as above - if there's something else then we can expose that too.
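To make that concrete, here is a minimal sketch of a plugin using this call. It reuses the register/connect scaffolding from the MX Master example further below; that scaffolding may have changed slightly since, so treat the API documentation as authoritative, and note that the VID/PID here is a made-up touchpad:
libinput:register(1) -- plugin API version
libinput:connect("new-evdev-device", function (_, device)
    -- hypothetical VID/PID; match whichever touchpad you care about
    if device:vid() == 0x04F3 and device:pid() == 0x0C4B then
        -- tell libinput to skip its own palm detection for this device
        device:disable_feature("touchpad-palm-detection")
        device:connect("evdev-frame", function (_, frame)
            -- the plugin's own palm detection would inspect and
            -- modify the frame's events here before returning it
            return frame
        end)
    end
end)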
So hooray for fewer features and happy implementing!
[1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues
First of all, what's outlined here should be available in libinput 1.30 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:
Come libinput 1.30, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.
The motivation for this is a few unfixable issues - issues we knew how to fix but could not actually implement and/or ship without breaking other devices. One example of this is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver, the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. Which means our quirks could only apply to the Bolt receiver (and thus any mouse connected to it) - that's a rather bad idea though, we'd break every other mouse using the same receiver. Another example is an issue with worn-out mouse buttons - on that device the behavior was predictable enough but any heuristics would catch a lot of legitimate buttons. That's fine when you know your mouse is slightly broken and at least it works again. But it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing, etc.
libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.
So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They are not full access into libinput, they are closer to a udev-hid-bpf in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons for example) and look at and modify the event stream as it comes from the kernel device. For this libinput changed internally to now process something called an "evdev frame" which is a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway but so far we didn't explicitly carry those around as such. Now we do and we can pass them through to the plugin(s) to be modified.
The aforementioned Logitech MX Master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:
libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then
        device:connect("evdev-frame", function (_, frame)
            for _, event in ipairs(frame.events) do
                if event.type == evdev.EV_REL and
                   (event.code == evdev.REL_HWHEEL or
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)
This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation.
I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.
So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]
Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.
If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)
[1] Benjamin Tissoires actually had a go at WASM plugins (via rust). But ... a lot of effort for rather small gains over Lua
FOSSY ( https://2025.fossy.us/ ) is in its 3rd year now.
I always wanted to attend, but in previous years it was right
around the time that Flock was, or some release event that was
very busy (or both!). This year happily it was not and I was able
to attend.
FOSSY was in Portland, Oregon, which is a 2-ish hour drive from my
house. No planes, no jet lag! Hurray! The first day was somewhat of
a half day: registration started in the morning, but the keynote
wasn't until the afternoon. This worked great for me, as I was able
to do a few things in the morning and then drive up and park at
my hotel and walk over to the venue.
The venue was the 3rd floor of the PSU student union building.
I had a bit of confusion at first, as the first set of doors I got
to were locked, with a sign saying to enter from one street direction
for events. However, the correct door to enter by _also_
has that same sign on it, so I passed it thinking the right door would
have a nice "hey, this is the right door" sign. A nice walk around the
building and I figured it out.
Registration was easy, and then it was into the ballroom/main room,
with keynotes and such, tables for some free software projects, as
well as coffee and snacks. I wished a few times there were some tables
around to help with juggling coffee and a snack, and perhaps to be a
place to gather folks. Somehow a table with 2 people standing at it
and more room seems like a chance to walk up and introduce yourself,
whereas two people talking seems like it would be intruding.
Some talks I attended:
Is There Really an SBOM Mandate?
The main surprise for me is that SBOM has no actual definition and
companies/communities are just winging it. I definitely love the idea
of open source projects making their complete source code and scripts
available as their SBOM.
Panel: Ongoing Things in the Kernel Community
I tossed a 'what do you think of AI contributions' question to get
things started and there was a lot of great discussion.
The Subtle Art of Lying with Statistics
Lots of fun graphs here and definitely something people should keep
in mind when they see some stat that doesn't 'seem' right.
The Future of Fixing Technology
This was noting how we should try and define the tech we get
and make sure it's repairable/configurable. I completely agree,
but I am really not sure how we get to that future.
The evening event was nice, got to catch up with several folks.
Friday we had:
keynote: Assessing and Managing threats to the Nonprofit Infrastructure of FOSS
This was an interesting discussion from the point of view of nonprofit
orgs and managing them. Today's landscape is pretty different than it was
even a few years ago.
After that they presented the Distinguished Service Award in Software Freedom
to Lance Albertson. Well deserved award! Congrats, Lance!
Starting an Open Mentorship Handbook!
This was a fun talk, it was more of a round table with the audience talking
about their experiences and areas. I'm definitely looking forward to
hearing more about the handbook they are going to put together and
I got some ideas about how we can up our game.
Raising the bar on your conference presentation
This was a great talk! Rich Bowen did a great job here. There's tons of
good advice on how to make your conference talk so much better. I found
myself afterwards checking off the things he mentioned when I saw them
in other talks. Everyone speaking should give this one a watch.
I caught next "Making P2P apps with Spritely Goblins".
Seemed pretty cool, but I am not so much into web design, so I was
a bit lost.
How to Hold It Together When It All Falls Apart:
Surviving a Toxic Open Source Project Without Losing it.
This was kind of a cautionary tale and a bit harrowing, but kudos
for sharing so others can learn from it. In particular I was struck by
"being available all the time" (I do this) and "people just expect
you to do things so no one does them if you don't" (this happens to me
as well). Definitely something to think about.
It's all about the ecosystem! was next.
This was a reminder that when you make a thing that an ecosystem
comes up around, it's that part that people find great. If you try and
make it harder for them to keep doing their ecosystem things they will
leave.
DevOps is a Foreign Language (or Why There Are No Junior SREs)
This was a great take on new devops folks learning things like
adults learn new languages. There were a lot of interesting parallels.
More good thoughts about how we could onboard folks better.
Saturday started out with:
Q&A on SFC's lawsuit against Vizio
There were some specifics I had not heard before; it was pretty interesting.
Good luck on the case, Conservancy!
DRM, security, or both? How do we decide?
Matthew Garrett went over a bunch of DRM and security measures.
Good background/info for those that don't follow that space.
Some general thoughts: It was sure nice to see a more diverse audience!
Things seemed to run pretty smoothly and Portland was a lovely space
for it. Next year, they are going to be moving to Vancouver, BC.
I think it might be interesting to see about having a Fedora booth
and/or having some more distro related talks.
So some time back, I wrote a highly performant, network-wide, transparent-proxy service in C. It was incredibly fast because it could read 8192 bytes off of the client's TCP sockets directly and proxy them in one write call over TCP directly to the VPN server, without needing a tunnel interface with a small MTU that bottlenecks reads and writes to <1500 bytes per function call.
I thought about it for a while and came up with a proof of concept to incorporate similar ideas into OpenVPN's source code as well. The summary of improvements is:
Max-MTU which now matches the rest of your standard network clients (1500 bytes)
Bulk-Reads which are properly sized and multiply called from the tun interface (6 reads)
Jumbo-TCP connection protocol operation mode only (single larger write transfers)
The performance improvements made above now allow for (6 reads x 1500 bytes == 9000 bytes per transfer call)
As you can see below, this was a speed test performed on a Linux VM running on my little Mac Mini, which pipes all of my network traffic through it, so the full-sized MTU that the client assumes doesn't have to be fragmented or compressed at all!
Note: the client/server logs also show the multi-batched TUN READ/WRITE calls along with the jumbo-sized TCPv4 READ/WRITE calls.
Note-note: My private VPS link is 1.5G and my internet link is 1G, my upload speed is hard rate-limited by iptables, and this test was done via a WiFi network client and not the actual host VPN client itself, which to me makes it a bit more impressive.
I created a new GitHub repo with a branch+commit which has the changes made to the source code.
As everyone probably knows, rust is considered a great language for secure programming, and hence a lot of people are looking at it for everything from low-level firmware to GPU drivers. In a similar vein, it can already be used to build UEFI applications.
Upstream in U-Boot we've been adding support for UEFI HTTP and HTTPS Boot, and it's now stable enough that I am looking to enable this in Fedora. While I was testing the features on various bits of hardware, I wanted a small UEFI app I could pull across a network quickly and easily from a web server for testing devices.
Of course adding in display testing as well would be nice…. so enter UEFI nyan cat for a bit of fun!
Thankfully the Fedora rust toolchain already has the UEFI targets built (aarch64 and x86_64) so you don’t need to mess with third party toolchains or repos, it works the same for both targets, just substitute the one you want in the example where necessary.
I did most of this in a container:
$ podman pull fedora
$ podman run --name=nyan_cat --rm -it fedora /bin/bash
# dnf install -y git-core cargo rust-std-static-aarch64-unknown-uefi
# git clone https://github.com/diekmann/uefi_nyan_80x25.git
# cd uefi_nyan_80x25/nyan/
# cargo build --release --target aarch64-unknown-uefi
# ls target/aarch64-unknown-uefi/release/nyan.efi
target/aarch64-unknown-uefi/release/nyan.efi
Then, from outside the container, copy the binary out and add it to the EFI boot menu:
$ podman cp nyan_cat:/uefi_nyan_80x25/nyan/target/aarch64-unknown-uefi/release/nyan.efi ~/
$ sudo mkdir /boot/efi/EFI/nyan/
$ sudo cp ~/nyan.efi /boot/efi/EFI/nyan/nyan-a64.efi
$ sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "nyan" --loader \\EFI\\nyan\\nyan-a64.efi
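If you want to give it a one-off try without changing the default boot order, you can point BootNext at the new entry and reboot; the 0005 below is just an example, check the efibootmgr output for the number your entry actually got:
$ sudo efibootmgr | grep -i nyan
$ sudo efibootmgr --bootnext 0005
$ sudo reboot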
Bastille, the lightweight jail (container) management system for FreeBSD, was already covered here. Recently, they released Bastille 1.0 and BastilleBSD, a hardened FreeBSD variant that comes with Bastille pre-installed.
What is Bastille?
Bastille is a lightweight jail management system for FreeBSD. Linux users might know jails as containers. FreeBSD jails arrived many years earlier. Just like on Linux, jails were originally not that easy to use. Bastille provides an easy-to-use shell script to work with FreeBSD jails, slightly similar to Podman (I will compare Bastille to Podman, as unlike Docker, Bastille does not have a continuously running daemon component). I wish it had already been available when I was managing tens of thousands of FreeBSD jails two decades ago :-)
BastilleBSD is a variant of FreeBSD by the creators of Bastille. The installation process looks pretty similar to the original FreeBSD installer, however it comes with security hardening enabled by default:
It also installs Bastille out of the box and configures the firewall, among other things, during installation. You can learn more about it and download it at https://bastillebsd.org/platform/
As the 14.3 FreeBSD release is already pre-loaded, there is no need for the bootstrapping steps from the previous blog. The pf firewall is also configured and ready to use. As a first step, we create the jail. We use the name alcatraz, the 14.3 FreeBSD release, and a random IP address (AFAIR this particular address was in the Bastille documentation, but it could be anything unique and private).
bastille create alcatraz 14.3-RELEASE 10.17.89.50
Next, we bootstrap (that is, check out using Git) the syslog-ng template. Bastille templates are similar to Dockerfiles from a distance. They describe what packages to install and how to configure them. There are many other templates, but here we use the one for syslog-ng.
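In practice this is two commands, roughly as below; the template collection URL and name here are taken from the BastilleBSD templates project and may differ for your setup:
bastille bootstrap https://gitlab.com/bastillebsd-templates/syslog-ng
bastille template alcatraz bastillebsd-templates/syslog-ng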
At the end of the process, syslog-ng is started within the jail. As the BastilleBSD template was created four years ago, it still has a version 3.30 syslog-ng configuration. It prints a warning on the terminal:
[2025-07-30T10:21:08.374200] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode. Please update it to use the syslog-ng 4.8 format at your time of convenience. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file; config-version='3.30'
The configuration in the template has several modifications compared to the regular FreeBSD syslog-ng configuration. It adds a tcp source on port 514 and disables a few lines in the configuration, which would try to print important logs to the console. The default configuration does not use any new features, so if you are OK with a warning message, you can leave it as-is. But it is easy to fix the config. Just open a shell within the jail:
bastille console alcatraz
And edit the syslog-ng configuration:
vi /usr/local/etc/syslog-ng.conf
Replace the version number with the actual syslog-ng version. By the time of this blog, it is 4.8. Restart syslog-ng for the configuration to take effect:
service syslog-ng restart
Exit from the jail, and configure the firewall, so outside hosts can also log to your syslog-ng server.
bastille rdr alcatraz tcp 514 514
Testing
First, do a local check that logging works. Open a shell within the jail, and follow the /var/log/messages file:
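tail -f /var/log/messages
Then, from another shell within the jail, send a test message (the text is arbitrary) and watch it appear:
logger "hello from alcatraz"
To check the forwarded TCP port from an outside host, something along the lines of logger -n <address-of-the-jail-host> -P 514 -T "hello over tcp" should work with the util-linux logger.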
This blog is just scratching the surface. In a production environment, you most likely want to build your own configuration, with encrypted connections and remote logs saved separately from local logs. Data is now stored within the jail, but you will most likely want to store it in a separate directory. And so on.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
July is already behind us – time really flies when you’re learning and growing! I’m happy to share what I’ve been up to this month as a Fedora DEI Outreachy intern.
To be honest, I feel like I’m learning more and more about open source every week. This month was a mix of video editing, working on event checklists, helping with fun Fedora activities, and doing prep work for community outreach.
Shoutouts
First, I want to say a big thank you to my mentor, Jona Azizaj, for her amazing support and guidance.
Also, shoutout to Mat H (the program) – thank you for always helping out and making sure I stay on track.
Being part of the Fedora DEI team has been a really positive experience. It feels awesome to be part of the team.
What I’ve been working on
Event Checklists
I reviewed the existing event checklist and started making updates to improve it. I want to make it easier for future organizers to plan Fedora events. If you haven't checked it out yet, you can find the draft in the Discourse post; be sure to leave your comments if you have something to add.
How I managed swag and funding
This month, I figured out how Fedora handles swag distribution, event funding, and reimbursements through ticket requests and Mindshare support – especially useful for local events like mine in June. I’m documenting the process to make it easier for future contributors to plan events smoothly.
Video editing
I worked on editing the Fedora DEI and Docs workshop video, which will be used to promote our work. I didn't know anything about video editing before, but working on this was worth it to learn a few things.
Fedora Fun Activities
I’ve taken over the Fedora Fun socials! We hosted a social session on July 25 at 15:00 UTC, and another one followed on Friday, August 1. These are light, 1-hour sessions to hang out with our community members.
I've also started working on community outreach; you can check this Discourse post for a draft of what I am looking forward to. It is about helping prepare for future Fedora outreach within our regions, especially around how to reach out to students, what topics would be covered, and more.
Chairing DEI Meetings
I got the chance to chair multiple Fedora DEI team meetings. It felt a bit scary at first, but with each meeting, I’ve become more confident running the meetings and helping keep things on track.
It helped me understand more about being inclusive when working with others – super helpful in open source communities!
What’s next
As we move into August, our main focus will be:
Community Outreach – bringing more people into Open Source and Fedora's space.
Event Checklist Documentation – we’re working on getting the updated checklist into Fedora DEI Docs so it’s more accessible and useful to everyone planning an event soon.
I’ll also continue contributing to DEI docs and helping organize fun and inclusive Fedora activities.
Got anything we can do together? Hit me up on Matrix (username: lochipi).
Final thoughts
This internship continues to teach me how open source works – from collaboration and planning events to working with different Fedora teams.
If you’re curious about Fedora or want to support DEI work, join us in the Matrix room – we’d love to have you around.
The 46th annual TeX Users Group conference (TUG2025) took place in Kerala during July 18–20, 2025. I've attended and presented at a few of the past TUG conferences, but this time was different, as I was to present a paper and help organize the conference. This is a personal, incomplete (and potentially hazy around the edges) reflection of my experience organizing the event, which had participants from many parts of Europe, the US and India.
Preparations
The Indian TeX Users Group, led by CVR, has conducted TUG conferences in 2002 and 2011. We, a group of about 18 volunteers led by him, convened as soon as the conference plan was announced in September 2024, and started creating to-do lists and schedules and assigning responsible persons for each.
The STMDocs campus has excellent conference facilities, including a large conference hall, audio/video systems, high-speed internet with fallback, redundant power supply, etc., making it an ideal choice, as in 2011. Yet we prioritized the convenience of the speakers and delegates, to avoid travel to and from a hotel in the city — prior experience showed it is best to locate the conference facility close to the accommodation. We scouted a few hotels with good conference facilities in Thiruvananthapuram city and finalized the Hyatt Regency, even though we had to take on greater responsibility and coordination, as they had no prior experience organizing a conference with requirements similar to TUG's. Travel and visit advisories were published on the conference web site as soon as details were available.
Projector, UPS, display connectors, microphones, WiFi access points and a lot of related hardware were procured. Conference materials such as t-shirt, mug, notepad, pen, tote bag etc. were arranged. Noted political cartoonist E.P. Unny graciously drew the beloved lion sketches for the conference.
Karl Berry, from the US, orchestrated mailing lists for coordination and communication. CVR, Shan and I assumed the responsibility of answering speaker & delegate emails.
Audio/video and live streaming setup
I traveled to Thiruvananthapuram a week ahead of the conference to be present in person for the final preparations. One of the important tasks for me was to set up the audio/video and live streaming for the workshop and conference. The audio/video team and volunteers in charge did a commendable job of setting up all the hardware and connectivity on the evening of the 16th, and we tested presentations, video playback, the projector, audio in/out, the prompt, the clicker, microphones and live streaming. There was no prompt at the hotel, so we split the screen-out to two monitors placed on both sides of the podium — this was much appreciated by the speakers later. In addition to the A/V team's hardware and (primary) laptop, two laptops (running Fedora 42) were used: a hefty one to run the presentations & a backup OBS setup; another for video conferencing with the remote speakers for their Q&A sessions. The laptop used for presentation had a 4K screen resolution. Thanks to Wayland (specifically, KWin), the connected HDMI out can be independently configured for 1080p resolution; but it failed to drive the monitors split further for the prompt. Changing the laptop's built-in display resolution to 1080p as well fixed the issue (maybe changing from a 120 Hz refresh rate to 60 Hz might also have helped, but we didn't fiddle any further).
Also met with Erik Nijenhuis in front of the hotel, who was hand-rolling a cigarette (which turned out to be quite in demand during and after the conference), to receive a copy of the book ‘The Stroke’ by Gerrit Noordzij he kindly bought for me — many thanks!
Workshop
The ‘Tagging PDF for accessibility’ workshop was conducted on 17th July at the STMDocs campus — the A/V systems & WiFi were set up and tested a couple of days prior. Delegates were picked up at the hotel in the morning and dropped off after the workshop. Registration of workshop attendees was done on the spot, and we collected speaker introductions to share with session chairs. I had interesting discussions with Frank Mittelbach and Boris Veytsman during lunch.
Reception & Registration
There was a reception at the Hyatt on the evening of the 17th, where almost everyone got registered and collected the conference material with the program pre-print, t-shirt, mug, notepad & pen, a handwritten (by N. Bhattathiri) copy of Daiva Daśakam, and a copy of the LaTeX tutorial. All delegates introduced themselves — but I had to step out at that exact moment to get on a video call to prepare for the live Q&A with Norman Gray from the UK, who was presenting remotely on Saturday. There were two more remote speakers — Ross Moore from Australia and Martin J. Osborne from Canada — with whom I conducted the same exercise, despite the inconvenient times for them. Frank Mittelbach needed to use his own laptop for his presentation, so we tested the A/V & streaming setup with that too. Doris Behrendt had a presentation with videos; its setup was also tested & arranged.
An ode to libre software & PipeWire
I tried to use a recent MacBook for the live video conferencing of remote speakers, but it failed miserably to detect the A/V splitter connected via USB to pick up the audio in and out. Resorting to my old laptop running Fedora 42, the devices were detected automagically and PipeWire (plus WirePlumber) made them instantly available for use.
With everything organized and tested for A/V & live streaming, I went back to get some sleep to wake early on the next day.
Day 1 — Friday
Woke up at 05:30, reached the hotel by 07:00, and met with some attendees during breakfast. By 08:45, the live stream for day 1 started. Boris Veytsman, the outgoing vice-president of TUG, opened TUG2025 and handed over to the incoming vice-president and session chair, Erik Nijenhuis, who then introduced Rob Schrauwen to deliver the keynote titled ‘True but Irrelevant’, reflecting on the design of the Elsevier XML DTD for archiving scientific articles. It was quite enlightening, especially when one of the designers of a system looks back at the strengths, shortcomings and impact of their design decisions, approached with humility and openness. Rob and I had a chat later, about the motto of validating documents and its parallel with IETF's robustness principle.
You may see a second stream for day 1; this is entirely my fault, as I accidentally stopped streaming during the tea break and started a new one. The group photo was taken after a few exercises in cat-herding.
All the talks on day 1 were very interesting: many talks about the PDF tagging project (those of Mittelbach, Fischer, & Moore); the state of CTAN by Braun — to which I offered a suggestion that the inactive-package-maintainer process consider some Linux distributions' procedures; Vrajarāja explained their use of XeTeX to typeset in multiple scripts; Hufflen shared his experience in teaching LaTeX to students; Behrendt & Busse talked about the use of LaTeX in CrypTool; and CVR talked about the long-running project of archiving Malayalam literary works in TEI XML format using TeX and friends. The session chairs, speakers and audience were all punctual and kept to their allotted times, with many follow-up discussions happening during the coffee breaks, which had ample time so the sessions did not feel rushed.
Ross Moore's talk was pre-recorded. As the video played out, he joined via a video conference link. The audio in/out & video out (for projecting on screen and for live streaming) were connected to my laptop, and we could hear him through the audio system, while the audience's questions via microphone were relayed to him with no lag — this worked seamlessly (thanks to PipeWire). We had a small problem with pausing a video that locked up the computer running the presentation, but quickly recovered — after the conference, I diagnosed it to be a nouveau driver issue (a GPU hang).
By the end of the day, Rahul & Abhilash were accustomed to driving the presentation and live streams, so I could hand over the reins and enjoy the talks. I decided to stay back at the hotel to avoid travel, and went to bed by 22:00, but sleep descended on this poor soul only by 04:30 or so; thanks to that cup of ristretto at breakfast!
Day 2 — Saturday
Judging by the ensuing laughs and questions, it appears not everyone was asleep during my talk. Frank & Ulrike suggested not colouring the underscore glyph in math, but instead properly colouring LaTeX3 macro names (which can have underscores and colons in addition to letters) in the font.
The sessions on the second day were also varied and interesting, in particular Novotný's talk about static analysis of LaTeX3 macros; Vaishnavi's fifteen-year-long project of researching and encoding the Tulu-Tigalari script in Unicode; the bibliography processing talks separately by Gray and Osborne (both appeared via video conferencing for live Q&A, which worked like a charm), etc.
I had interesting discussions with many participants during lunch and coffee breaks. In the evening, we walked (the monsoon rain had taken a respite) to the music and dance concerts, both of which were fantastic cultural & audio-visual experiences.
Veena music, and fusion dance concerts.
Day 3 — Sunday
The morning session of the final day had a few talks: Rishi lamented the eroding typographic beauty in publishing (which Rob concurred with, and which Vrajarāja had earlier pointed out as the reason for choosing TeX, …); Doris spoke on the LaTeX village at CCC — and about ‘tuwat’ (to take action); followed by the TeX Users Group annual general body meeting, presided over by Boris, as the first session after lunch; then his talk on his approach to solving the editorial review process of documents in TeX; and a couple more talks: Rahul's presentation about PDF tagging, which used our OpenType font for syntax highlighting (yay!); and the lexer developed by the Overleaf team, which was interesting.
Two Hermann Zapf fans listening to one who collaborated with Zapf [published with permission].
Calligraphy
For the final session, Narayana Bhattathiri gave us a calligraphy demonstration in four scripts — Latin, Malayalam, Devanagari and Tamil — which was very well received, judging by the applause. I was deputed to explain what he does, and also to translate for the Q&A session. He obliged the audience's requests to write names: of themselves, or a spouse or children, even a Bär, or, as Hàn Thế Thành wanted, Nhà khủng lồ (the house of dinosaurs, the name for the family group); for the next half hour.
Bhattathiri signing his calligraphy work for TUG2025.
Nijenhuis was also giving away swag from Xerdi, and I made the difficult choice between a pen and a pendrive, opting for the latter.
The banquet followed, where in between enjoying delicious food I found time to meet and speak with even more people and say goodbyes and ‘tot ziens’.
Later, I had some discussions with Frank about generating MathML using TeX.
Many thanks
A number of people during the conference shared their appreciation of how well the conference was organized; this was heartwarming. I would like to express thanks to the many people involved, including the TeX Users Group, the sponsors (who made it fiscally possible to run the event and supported many travels via bursary), the STMDocs volunteers who handled many other responsibilities of organizing, the audio-video team (who were very thoughtful to place the headshot of speakers away from the presentation text), the unobtrusive hotel staff, and all the attendees, especially the speakers.
Thanks particularly to those who stayed at and/or visited the campus, for enjoying the spicy food and delicious fruits from the garden, and for surviving the long techno-socio-eco-political discussions. Boris seems to have taken to heart my request for a copy of The TeXbook signed by Don Knuth — I cannot express the joy & thanks in words!
The TeXbook signed by Don Knuth.
The recorded videos were handed over to Norbert Preining, who graciously agreed to make the individual lectures available after processing. The total file size was ~720 GB; so I connected the external SSD to one of the servers and made it available to a virtual machine via USB-passthrough; then mounted and made it securely available for copying remotely.
Special note of thanks to CVR, and Karl Berry — who I suspect is actually a kubernetes cluster running hundreds of containers each doing a separate task (with apologies to a thousand gnomes), but there are reported sightings of him, so I sent personal thanks via people who have seen him in the flesh — for leading and coordinating the conference organization. Barbara Beeton and Karl copy-edited our article for the TUGboat conference proceedings, which is gratefully acknowledged. I had a lot of fun and a lot less stress participating in the TUG2025 conference!
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.
These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?
We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.
So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.
Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.
The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.
But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.
The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.
We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.
First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.
Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.
Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way away from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.
Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.
So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.
[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM
Joplin is a free and open-source application for note-taking, task management (to-dos), and storing personal information. It is designed as a strong alternative to applications such as Evernote, with an emphasis on user privacy and secure synchronization. Introducing Joplin: Joplin is a tool for writing and managing notes in a structured and secure way. […]
Yet another day, yet another need for testing a device I don't have. That's fine and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices to be present in the system, in a specific order, sending specific events. And juggling this many terminal windows with libinput replay open was annoying. So I decided it's worth the time fixing this once and for all (haha, lolz) and wrote unplug. The target market for this is niche, but if you're in the same situation, it'll be quite useful.
Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug[1]:
This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record, save the output in a directory and then start unplug path/to/directory. The navigation is as expected, hitting enter on the devices plugs them in, hitting enter on the selected sequence sends that event sequence through the previously plugged device.
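For example, assuming a mouse on a hypothetical /dev/input/event5 (the device path and directory name are illustrative, and reading /dev/input typically requires root):
sudo libinput record /dev/input/event5 > my-recordings/mouse.yml
unplug my-recordings/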
Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description. If you have recordings that should be included in the default set, please file a merge request. Happy emulating!
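As a sketch of the expected shape (the name and description here are made up; the rest of the file is whatever libinput record produced):
unplug:
  name: Example mouse
  description: |
    A left button click followed by a small
    horizontal wheel scroll.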
[1] And allowing access to /dev/uinput. Details, schmetails...
At GnuTLS, our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source Program subscription. While we are still hoping that this limitation is temporary, this meant our available CI/CD resources became considerably lower. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.
This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.
CI on every PR: a best practice, but not cheap
While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.
The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.
This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.
Reducing CI running time
Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice, project specific optimization is needed. We focused on three key areas to achieve faster pipelines:
Tiering tests
Layering container images
De-duplicating build artifacts
Tiering tests
Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.
Layering container images
The tiering of tests gives us an idea of which CI images are more commonly used in the pipeline. For those common CI images, we transitioned to using a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim. This reduced the initial download size and the overall footprint of our build environment.
For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.
De-duplicating build artifacts
Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, oftentimes resulting in almost identical results.
We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.
Of course, this approach cuts both ways: while it simplifies the compilation process, it could increase the code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which may lower overall security, should still be possible to switch off at compile time.
Another caveat is that some compilation options are inherently incompatible with each other. One example is that thread sanitizer cannot be enabled with address sanitizer at the same time. In such cases a separate build artifact is still needed.
The impact: tangible results
The efforts put into optimizing our GitLab CI configuration yielded significant benefits:
The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.
By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.
For further information on this, please consult the actual changes.
Next steps
While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:
Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO. This would allow shared access to build artifacts, reducing bandwidth consumption and build times.
Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as separate processes, lets the server bind a unique port to which the client connects, and instructs the client to initiate a certain event through a control channel. This kind of test can fail in many ways regardless of the test itself, e.g., the port might already be in use by another test. Therefore, rewriting tests so they do not require such a complex setup would be a good first step; a sketch of the port part of that idea follows.
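As a purely illustrative Python sketch (not from the GnuTLS test suite), letting the operating system pick a free port removes the most common source of cross-test interference:

import socket

# Minimal sketch: bind to port 0 so the kernel picks an unused port, then hand
# that port to the server and client under test instead of hard-coding one.
# (A fully robust harness would keep the socket open or retry, since another
# process could in principle grab the port after it is released.)
def reserve_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = reserve_port()
# e.g. launch the test server with this port and point the client at it;
# the exact invocation depends on the harness and is not shown here.
print(f"using port {port}")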
The NeuroFedora team has decided to make a couple of changes to the artefacts that we produce and maintain:
The Comp Neuro Lab ISO image has been dropped.
We are moving away from packaging Python software that is easily installable from PyPi as rpms for the Fedora repositories, and towards testing that it installs and works on Fedora instead.
Over the years that the NeuroFedora team has been maintaining neuroscience rpm packages for the Fedora repositories, we have amassed a rather large number, nearly 500 packages.
That is great, and we are extremely pleased with our coverage of the neuroscience software ecosystem.
However, given that the team is composed of a few volunteers who can only dedicate limited amounts of their time to maintaining packages, we were beginning to find that we were unable to keep up with the increasing workload.
Further, we realised that the use case for including all neuroscience software in Fedora was no longer clear---was it really required for us to package all of it for users?
An example case is the Python ecosystem.
Usually, users/researchers/developers tend to install Python software directly from PyPi rather than relying on system packages that we provide.
The suggested use of virtual environments to isolate projects and their dependencies also requires the use of software directly from PyPi, rather than using our system packages.
Therefore, for software that can be installed directly, we argue that it is less important that we package them.
Instead, it is more useful to our user base if we thoroughly test that this set of software can be properly installed and used on Fedora---on all the different versions of Python that a Fedora release supports.
So, a new guideline that we now follow is:
prioritise packaging software that cannot be easily installed from upstream forges (such as PyPi)
Following this, we have made a start on the Python packages that we maintain:
we made a list of software that is easily installable from PyPi
we began dropping them from Fedora, and instead testing their usage from PyPi
The testing involves:
checking that the software and its extras can be successfully installed on Fedora using pip
checking that the modules that the software includes can be successfully imported (a rough sketch of such a check follows)
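As a rough sketch of what such a check can look like (the package name, extra, and module list here are hypothetical; the real lists live in the NeuroFedora documentation):

import importlib
import subprocess
import sys

# Hypothetical example package with an extra, and the module it should provide.
PACKAGE = "example-neuro-tool[plotting]"
MODULES = ["example_neuro_tool"]

# Install from PyPi into the current (ideally virtual) environment.
subprocess.run([sys.executable, "-m", "pip", "install", PACKAGE], check=True)

# Check that the advertised modules can actually be imported.
for name in MODULES:
    importlib.import_module(name)
    print(f"imported {name} OK")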
Our documentation has also been updated to reflect this change.
We now include two tables on each page.
One table provides information about the software that can be installed from PyPi, and so is not included in rpm form in Fedora.
The other provides information about the software that continues to be included in Fedora, because it cannot be easily installed from PyPi directly.
We will continue reporting issues to upstream developers as we have done before.
The difference now is that we work directly with what they publish, rather than our rpm packaged versions of what they publish.
You can follow this discussion and progress here.
Please refer to the lists in the documentation for the up to date list of packages we include/test.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 28th July – 01st August, 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems to be a serious one and was chosen as a replacement.
So starting with Fedora 41 or Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), Redis is no longer available, but Valkey is.
With version 8.0, Redis Labs chose to switch to the AGPLv3 license, so Redis is back as an open-source project, but many users have already switched and want to keep Valkey.
RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
So you now have the choice between Redis and Valkey.
1. Installation
Packages are available in the valkey:remi-8.1 module stream.
These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
Some modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.
Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
So users will have the choice and can even use both.
ℹ️ Notice: Enterprise Linux 10.0 and Fedora have Valkey 8.0 in their repositories. Fedora 43 will have Valkey 8.1. CentOS Stream 9 also has Valkey 8.0, so it should be part of EL-9.7.
Apache provides a pretty standard screen to display directory contents if you do not provide any mods. We post artifacts up to a local server that I later need to download. Here are my hacky notes using command line utilities. I will probably convert this to Python next.
If you download a default page using curl you get something like this:
Since they are sorted by time, we can grab the first one on the list (assuming we want the latest, which I do). First, download it to a file so we don’t have to wait for a download each iteration.
You can use XPath to navigate most of the HTML. At first, I kinda gave up and used awk, but suspected I would get a cleaner solution with just XPath if I stayed on it. And I did:
I don’t think XPath will remove the quotes or the href, but I can deal with that.
Here is the same logic in Python:
#!/usr/bin/python3
import requests
from lxml import etree
url = "https://yourserver.net/file/"
response = requests.get(url)
html_content = response.text
# Parse the HTML content using lxml
tree = etree.HTML(html_content)
# Use XPath to select the href of the first link after "Parent Directory"
# (the newest entry, since the listing is sorted by time)
val = tree.xpath("//html/body/pre/a[text()='Parent Directory']/following::a[1]/@href")[0]
print(val)
# To keep going down the tree...
release_url = url + val
response = requests.get(release_url)
html_content = response.text
print(html_content)
LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.
First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.
What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.
This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.
The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.
First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?
Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.
The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.
So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?
Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.
How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.
If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.
Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.
[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)
[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays
[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.
[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device
[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then
[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim
[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs
The JIT-enabled Firefox 128.13.0 ESR has been built for Fedora/ppc64le in my Talos COPR repository. It took longer than expected because of some COPR infra issues (builds have been timing out), but all F-41, F-42 and Rawhide builds are available now. The corresponding sources are in my fork of the official Fedora Firefox package and the JIT is coming from Cameron's work.
❝PDFs were created as a way to give a document an absolute, invariable design suitable for PRINT. It was never meant to be how we consumed documents on a screen.❞
And I must add:
We, the data professionals, hate PDFs. They might look good and structured to your human eyes, but the data inside them is a mess: unstructured and not suitable for processing by computer programs.
Although we still haven’t reached an agreement on ubiquitous formats, here are some better options:
ePub (which is basically packaged HTML + CSS + images) for long text such as articles, T&Cs or contracts. ePub is usually associated with books, but I hope it can be popularized for other uses, given its versatility.
YAML, JSON, XML including digital signatures as JWS, for structured data such as government issued documents.
SVG (Scalable Vector Graphics, which is an XML application) for high quality graphics, including paged and interactive content, such as exported presentation slides.
MPEG-4 for interactive sequences of images, including dynamic animations and SVG with JavaScript, for content such as slide shows. Although MPEG-4 is usually associated with video, it can do much more than that. Player support is extremely weak for these other possibilities though.
SQLite for pure tabular and relational data. The SQLite engine is now ubiquitous, present in every browser and on every platform you can think of.
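For instance, tabular data that would often end up as a PDF table can be shipped as a small SQLite file that any program can query directly. A minimal sketch (the table and values are made up):

import sqlite3

# Made-up tabular data that might otherwise be published as a PDF table.
con = sqlite3.connect("report.sqlite")
con.execute("CREATE TABLE IF NOT EXISTS expenses (month TEXT, category TEXT, amount REAL)")
con.executemany(
    "INSERT INTO expenses VALUES (?, ?, ?)",
    [("2025-07", "hosting", 42.0), ("2025-07", "domains", 12.5)],
)
con.commit()

# Any consumer can now query the data instead of scraping a PDF.
for row in con.execute("SELECT category, SUM(amount) FROM expenses GROUP BY category"):
    print(row)
con.close()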
Eventually I want to be able to smoothly recall the chords at speed. However, when memorizing them, it helps to have a series of mnemonics, and to chunk them together. Just as you practice a song slow before you play it fast, you memorize the chords slow.
Here is my analysis of “All the Things You Are.”
Notation:
if there is no modifier on the chord, I mean a Maj7.
A minus sign means a minor 7th.
A lowercase o means diminished
A 0 means minor 7 flat 5
G- | C7 | F7 | Bb
Eb | E- A7| D | D
D- | G- | C | F
Bb | B- E7| A | A
B- | E7 | A | A
Ab-| Db7 | Gb | D7b13
G- | C7 | F7 | Bb
Eb | Eb- | D- | Dbo
C- | F7 |Bb | A0 D7b9
Start by noting that the whole thing is in the key of Bb and that is the landing chord, one measure before the end; the last measure is the turnaround that brings you back to the top and would not be played on the last iteration.
This song is all about moving in fourths. Remember that a ii-V-I progression is a series of fourths, as that sequence is used a lot in this tune.
To chunk this, start at the line level: ii-V7 in the key of C, V7-I in Bb, the key of the tune. This is all set up to establish the tonal center. Essentially, it starts on the F, which is the fifth of the Bb base chord. Each chord is a fourth from the one preceding it. In order to remember it, focus on the fact that each chord is designed to lead to the next, and resolve to the Bb (but not stay there). Finger the G, C, F and Bb keys as you would play them.
To transition to the next line, note that the Eb is also a fourth from the chord before it, and it targets the D major at the end of the line. It does this by chromatically stepping up to the E and using a ii-V-I. This is going to give a not-quite-resolved feeling (again) when you land on the D.
The transition from the second to third line is a move to the parallel minor. The root note stays the same, as does the fifth of the chord, but the third and the seventh both change, and these are the important notes for soloing: the difference between the scales you would play over the D major and the D minor is that the F# and C# move to F natural and C natural.
The pattern is now the same as the first two lines…but starting on D minor. All of the relationships between the chords are the same, but down a fourth. The target chord here is now the A.
To transition to the bridge, we stay in the key of A, and, in standard jazz form, run it through a ii-V-I sequence for four bars. The sixth line is a half step down from the A to the Ab.
This leaves only the last bar of the line as a standalone chord. Tonality-wise, we need to move from a G flat to an F, which is a half step down. But since we need a measure to do it, we need to somehow prolong the transition. The jump from Gb to the D is one of the few transitions that is not along the cycle of fourths. The best thing to note is that the Gb can be rewritten as an F#, and that is the third of the D. The flat 13th of the D is the Bb/A#, which is the third of the Gb. So you keep two of the notes of the chord constant while moving the root.
To transition back to the main theme, the D leads to the G- along the cycle of fourths. This is also a return to the beginning of the song, and this and the following four chords are identical to the start of the song.
Where things change is the seventh line, and it is this variation that makes the song different from most “song-form” songs (songs that follow the A-A-B-A pattern), in that it has an extra four bars in the last A section. You can think of the eighth line as that extra four bars: it is a descending line, Eb to Eb minor, then down by half steps through D minor and Db diminished to C minor. All of this targets the C minor as the ii of the final ii-V-I of the song, leading to the final resolution to Bb.
The last two bars are the turnaround, targeting a G minor. The A-7b5 is still in the key of B flat: it is built off the major 7th, and usually you play the Locrian mode over it. The D7b9 can be played with the same set of notes. This is a ii-V in G minor. G minor is the relative (not parallel) minor of Bb.
OK, that is a lot to think about. But again, you can chunk it:
Start on G minor.
5 measures of cycle of fourths
1/2 step up ii-V-I.
Stay on that key, 5 measures cycle of fourths,
1/2 step up ii-V-I. Stay in that key for the bridge 4 bars.
Half step up ii-V I (kinda) for the rest of the bridge.
Weird transition back to G minor.
5 measures of cycle of fourths.
Move to parallel minor.
Chromatic walkdown for 4 bars.
ii V-I.
Turnaround.
That should be enough of a structure to help you recall the chords as you try to run through them mentally.
I have been working on memorizing chords for a bunch of Standards and originals. It helps tremendously. A couple things that have worked for me:
Focus on the roots first. There are usually patterns to the ways the roots move: ii-V and Whole steps up or down are the most common.
Think in terms of Key-centers. Often, the bridges are simpler chord sequences, all in one key. A long series of ii-V-I will chunk these together.
Keep an eye on the smaller ii-Vs that lead to key changes. These will help me tie the chunks together.
The goal is to solo over the tunes while playing saxophone. Finger the roots as I go through the progression. Instead of (or as well as) thinking “D-G” air-finger all 6 fingers down and then just the three left hand fingers…It gives me an additional channel of memorization.
Run through the changes in my head, without looking at the sheet music. Chunk series of changes, 2 to 4 bars at a time.
Whistle the melody line, and air-finger the roots of the changes.
Once I have it down, I listen to my favorite version of the song, and keep the pattern of the changes running through my head. I will restart many times to keep track, especially with Bebop.
Play through the changes, but SLOW… start with playing just roots… then gradually expand out to thirds, fifths and sevenths, but make sure I keep track of the changes. I use iReal Pro or MuseScore for this. Stop the player if I get lost.
In my last post on
Fedora’s signing infrastructure, I ended with some protocol changes I would be
interested in making. Over the last few weeks, I’ve tried them all in a
proof-of-concept project and I’m fairly satisfied with most of them. In this
post I’ll cover the details of the new protocol, as well as what’s next.
Sigul Protocol 2.0
The major change to the protocol, as I mentioned in the last post, is that all
communication between the client and the server happens over the nested TLS
session. Since the bridge cannot see any of the traffic, its role is reduced
significantly and it is now a proxy server that requires client authentication.
I considered an approach where there was no communication from the client or
server to the bridge directly (beyond the TLS handshake). However, adding a
handshake between the bridge and the server/client makes it possible for all
three to share a few pieces of data useful for debugging and error cases. While
it feels a little silly to add this complexity primarily for debugging
purposes, the services are designed to be isolated and there’s no opportunity
for live debugging.
The handshake
After a server or client connects to the bridge, it sends a message in the
outer TLS session to the bridge. The message contains the protocol version the
server/client will use on the connection, as well as its role (client or
server). The bridge listens on two different ports for client and server
connections, so the role is only included to catch mis-configurations where the
server connects to the client port or vice versa.
The bridge responds with a message that includes a status code to indicate it
accepts the connection (or not), and a UUID to identify the connection.
The bridge sends the same UUID to both the client and server so it can be used
to identify the inner TLS session on the client, bridge, and server. This makes
it easy to, for example, collect logs from all three services for a connection.
It also can be used during development with OpenTelemetry to trace a
request across all
three services.
Reasons the bridge might reject the connection include (but is not limited
to): the protocol version is not supported, the role the connection announced
is not the correct role for the port it connected to, or the client certificate
does not include a valid Common Name field (which is used for the username).
After the bridge responds with an “OK” status, servers accept incoming inner
TLS sessions on the socket, and clients connect. All further communication is
opaque to the bridge.
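To make the flow concrete, here is a purely illustrative, bridge-side sketch in Python. The post does not specify the actual wire encoding of these outer-session messages, so the dictionary fields and status strings below are hypothetical:

import uuid

# Hypothetical handshake handling; field names and status strings are made up.
SUPPORTED_VERSIONS = {2}

def handle_hello(hello: dict, expected_role: str, common_name: str | None) -> dict:
    if hello.get("version") not in SUPPORTED_VERSIONS:
        return {"status": "unsupported-version"}
    if hello.get("role") != expected_role:
        return {"status": "wrong-role-for-port"}
    if not common_name:
        return {"status": "missing-common-name"}
    # The same UUID is later shared with the peer so that client, bridge and
    # server logs for one connection can be correlated.
    return {"status": "ok", "connection_id": str(uuid.uuid4())}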
Client/Server
In the last blog post, I discussed requests and responses being JSON
dictionaries, and content that needed to be signed could be base64 encoded, but
I didn’t go into the details of RPM header signatures. While I’d still love to
have everything be JSON, after examining the size of RPM headers, I opted to
use a request/response format closer to what Sigul 1.2 currently uses. The
reason is that base64-encoding data increases its size by around 33%. After
poking at a few RPMs with the
rpm-head-signing tool, I
concluded that headers were still too large (cloud-init’s headers were hundreds
of kilobytes) to pay a 33% tax for a slightly simpler protocol.
So, the way a request or response works is like this: first, a frame is sent.
This is two unsigned 64-bit integers. The first u64 is the size of the JSON,
and the second is the size of the arbitrary binary payload that follows it.
What that binary is depends on the command specified in the JSON. The
alternative was to send a single u64 describing the JSON size alone, so the
added complexity here is minimal. The binary size can also be 0 for commands
that don’t use it, just like the Sigul 1.2 protocol.
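A minimal Python sketch of that framing, assuming network (big-endian) byte order, which is not specified above, and using a made-up command name:

import json
import struct

def encode_frame(command: dict, payload: bytes = b"") -> bytes:
    body = json.dumps(command).encode("utf-8")
    # Two unsigned 64-bit integers: the JSON size, then the binary payload size.
    header = struct.pack(">QQ", len(body), len(payload))
    return header + body + payload

def decode_frame(buf: bytes) -> tuple[dict, bytes]:
    json_len, bin_len = struct.unpack(">QQ", buf[:16])
    command = json.loads(buf[16:16 + json_len].decode("utf-8"))
    payload = buf[16 + json_len:16 + json_len + bin_len]
    return command, payload

# Hypothetical request: the JSON names the operation, the payload carries the
# raw RPM header bytes, so nothing needs to be base64-encoded.
frame = encode_frame({"op": "sign-rpm-header", "key": "example-key"}, b"raw header bytes")
print(decode_frame(frame))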
Unlike Sigul 1.2, none of the messages need HMAC signatures since they all
happen inside the inner TLS session. Additionally, the protocol does not allow
any further communication on the outer TLS session, so the implementation to
parse the incoming requests is delightfully straightforward.
Authentication
Both the client and server authenticate to the bridge using TLS certificates.
Usernames are provided in the Common Name field of the client certificate. For
servers, this is not terribly useful, although the bridge could have an
allowlist of names that can connect to the server socket.
Clients and servers also mutually authenticate via TLS certificates on the
inner TLS session. Technically, the server and client could use a separate set
of TLS certificates for authentication, but in the current proof-of-concept it
uses the same certificates for both authenticating with the bridge and with the
server/client. I’m not sure there’s any benefit to introducing additional sets
of certificates, either.
For clients, commands no longer need to include the “user”: it’s pulled from
the certificate by the server. Additionally, there are no user passwords. Users
that want a password in addition to their client certificate can encrypt their
client key with a password. If this doesn’t sit well with users (mostly Fedora
Infrastructure) we can add passwords back, of course.
Users do still need to exist in the database or their requests will be
rejected, so administrators will need to create the users via the command-line
interface on the server or via a client authenticated as an admin user.
Proof of concept
Everything I’ve described has been implemented in my proof-of-concept
branch. I’m in
the process of cleaning it up to be merge-able, and command-line interfaces
still need to be added for the bridge and client. There’s plenty of TODOs still
sprinkled around, and I expect I’ll reorganize the APIs a bit. I’ve also made
no effort to make things fast. Still, it will happily process hundreds or
thousands of connections concurrently. I plan to add benchmarks to the test
suite prior to merging this so I can be a little less handwave-y about how
capable the service is.
What’s next?
Now that the protocol changes are done (although there’s still time to tweak
things), it’s time to turn our attention to what we actually want to do:
managing keys and signing things.
Management
In the current Sigul implementation, some management is possible remotely using
commands, while some are only possible on the server. One thing I would
like to consider is moving much of the uncommon or destructive management tasks
to be done locally on the server rather than exposing them as remote commands.
These tasks could include:
All user management (adding, removing, altering)
Removing signing keys
Users could still do any non-destructive actions remotely. This includes
creating signing keys, read operations on users and keys, granting and revoking
access to keys for other users, and of course signing content.
Moving this to be local to the server makes the permission model for requests
simpler. I would love feedback on whether this would be inconvenient from
anyone in the Fedora Release Engineering or Infrastructure teams.
Signing
When Sigul was first written, Fedora was not producing so many different kinds
of things that needed signatures. Mostly, it was RPMs. Since then, many other
things have needed signatures. For some types of content (containers, for
example), we still don’t sign them. For other types, they aren’t signed in an
automated fashion.
Even the RPM case needs to be reevaluated. RPM 6.0 is on the horizon with a new
format that supports multiple signatures - something we will want in order to support
post-quantum algorithms. However, we still need to sign the old v4 format for
EPEL 8.0. We probably don’t want to continue to shell out to rpmsign on the
server.
Ideally, the server should not need to be aware of all these details. Instead,
we can push that complexity to the client and offer a few signature types. We
can, for example, produce SecureBoot signatures and container signatures the
same way. I need to
understand the various signature formats and specifications, as well as the
content we sign. The goal is to offer broad support without having to
constantly teach the server about new types of content.
Comments and Feedback
Thoughts, comments, or feedback greatly welcomed on Mastodon
Recently, several people have asked me about the syslog-ng project’s view on AI. In short, there is cautious optimism: we embrace AI, but it does not take over any critical tasks from humans. But what does this mean for syslog-ng?
Well, it means that syslog-ng code is still written by humans. This does not mean that we do not use AI tools at all, but we do not use AI tools to write code for two reasons.
Firstly, this is because of licensing. The syslog-ng source code uses a combination of GPLv2 and LGPLv2.1, and there are no guarantees that the code generated by AI tools would be compliant with the licensing we use. The second reason is code quality. Syslog-ng is built on high-performance C code and is used in highly secure environments. So even if the code generated by AI worked, there would be no guarantee that it was also efficient and secure. And optimizing and securing code later takes a lot more effort than writing it with those principles in mind from scratch.
Writing portable code is also difficult. x86_64 Linux is just one platform out of the many supported by syslog-ng. Additional platforms like ARM, RISC-V, POWER, s390 and many others are also supported, which means that the code also needs to run on big-endian machines. And not just on Linux, but also on macOS, FreeBSD and others.
So, where do we use AI tools, then? Well, we have tons of automated test cases, and humans review each pull request before it is merged. However, various tools also analyze our code regularly, both on GitHub and internally, and said tools give us recommendations on how we could improve code quality and security. Needless to say, they raise many false alarms. Still, it’s good to have them, as sometimes they spot valid problems. However, the decision on whether to apply their recommendations is always made by humans, as AI tools often do not have a full understanding of the code.
The other cornerstone of syslog-ng is its documentation. Some of our most active users decided to use syslog-ng because of the quality of its documentation, written by humans. Because of this, we also plan to use AI to help users find information in the documentation. We already had a proof-of-concept running on a laptop where users could ask questions from an AI tool and find information a lot quicker that way than browsing the documentation.
It is not directly related to syslog-ng development, but I have learned just recently that sequence-rtg, the tool I use to generate syslog-ng PatternDB rules based on a large number of log messages is also considered to be an AI tool by most definitions, even though its documentation never mentions AI. I guess this is probably because it was born before AI became an important buzzword… :-) You can learn more about sequence and how I used it at https://www.syslog-ng.com/community/b/blog/posts/sequence-making-patterndb-creation-for-syslog-ng-easier
So TL;DR: We use AI. Less than AI fanatics would love to see, but more than what AI haters can likely accept. We are on a safe middle ground, where AI does not replace our work, but rather augments it.
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I own a Prusa Core One. Lately I had to print several things for my car (Camper Van) in ASA. If you wonder about the reason, it is ASA is heat resistant and temperatures can go quite high in a car.
ASA is really hard to print successfully. The reason is that it retracts (shrinks) when it cools. If you have an enclosed printer, you normally heat the chamber with the print bed. So the heating comes from below. The more area of the heated bed you cover with your print, the cooler it will get.
Print sheet lifted off the print bed (retraction forces).
As soon as you print especially flat surfaces, the middle of the surface cools down faster than the edges. If you print a box, the wall will also shield the heat from the print bed and you get even more cooling inside. More cooling means more retraction. Sooner or later your print will start to bend and lift off either the print sheet or print bed. The only way to avoid that would be to have 50°C warm air pushing down on your print from above. I haven’t seen a solution for that yet.
If you have a five-hour print job, you want your print to lift off as late as possible. Small jobs you can print successfully, but big jobs like 5×4 H10 Gridfinity boxes or something like that are nearly impossible to print 100% successfully.
However, here are all my tricks for mastering ABS/ASA:
Pre-requirements
Satin Powder-coated Print Sheet
Magigoo Glue for PC (it’s the only one which works at temperatures over 100°C)
10 mm dia × 5 mm N52 neodymium magnets
Masking tape (Krepp)
Slicer Settings
Infill: Gyroid 10% (This works the best and prints fast)
Outer Brim: Width 3 mm
For larger and longer prints which cover quite some area on the print sheet, after slicing add a “Pause” after 5mm of printing. That would be for layer height:
0.25mm = Pause at layer 21
0.20mm = Pause at layer 26
0.15mm = Pause at layer 35
Preparing the print
Clean the print sheet, put Magigoo on it and place it in the printer
Auto home the printer
Move the print bed to 40mm
Select Custom Preheat and preheat to 120°C
You can use a hair dryer to heat the chamber below the print bed. BE CAREFUL! Do not point it at the door or it will melt. Make sure it sucks air from outside.
Close the door
Cover the gaps of the upper half of the printer with tape. The filtration system needs to suck air into the chamber. We want it to do that from below the print bed. See picture below.
Set the filtration fan to 35%! This is important or it will remove too much heat from the chamber.
Prusa Core One with taped gaps
The Print
Start the print; it will take some time to absorb heat but it warms up to ~50°C. I have a thermometer at the top of the chamber inside. It normally reads ~45°C when it starts to print.
In the slicer you can see how long it takes until you reach the Pause you set. Once the printer starts, set a timer to remind you when it will pause.
Adding Magnets
The Magigoo glue is very strong. If the retraction forces get too high, the print sheet will lift off the print bed. We can try to prevent that, or at least postpone the lift-off, by adding magnets around our model once the printer pauses.
Additional magnets to hold down the print sheet
Even with the added magnets from above, after 4 hours of printing the retraction forces got so high that the print sheet lifted off the print bed on both sides.
Let me know in the comments if you have a trick I don’t know yet!
Greaseweazle is a USB device and host tools allowing versatile floppy drive control, which comes in very handy when working with old computers. And as usual I prefer software on my systems to be installed in the form of packages, thus I have spent some time preparing greaseweazle rpms and they are now available from my copr repository. The repo also includes the latest python-bitarray, because version 3 or newer is needed by greaseweazle and it hasn't been updated in Fedora yet. The tools written in Python seem to work OK on my x86_64 laptop, but fail on my ppc64le workstation. Probably some serial port / terminal handling goes wrong in the pyserial library. There have been discussions about terminal handling and its PowerPC specifics on the glibc development mailing list for a month or two now. I need to keep digging into it ...
I recently found, under the rain, next to a book swap box, a pile of 90's “software magazines” which I spent my evening cleaning, drying, and sorting in the days afterwards.
In the 90s, this was used by Diamond Editions[1] (a publisher related to tech shop Pearl, which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.
If you were to visit a French newsagent nowadays, you would be able to find other examples of this: magazines bundled with music CDs, DVDs or Blu-rays, or even toys or collectibles. Some publishers (including the infamous and now shuttered Éditions Atlas) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that “magazine” subscription-only, with each issue being increasingly more expensive (article from a consumer protection association).
Other publishers have followed suit.
I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as “written press”.
To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert, BestSellerGames and StratéJ. I remember me or my dad buying a few of those; an older but legit and complete version of ClarisWorks, CorelDraw or a talkie version of a LucasArts point'n'click was certainly a more interesting proposition than a cut-down warez version full of viruses when budget was tight.
3 of the magazines I managed to rescue from the rain
This brings us back to today: while the magazines are still waiting for scanning, I tried to get a wee bit organised and started digitising the CDs.
Some of them have printing that covers the whole of the CD, but a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source keeps moving with the sensor, and what you'll be scanning is the sensor's reflection on the CD.
My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.
I tried for a while to use GIMP perspective tools, and “Multimedia” Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing, I just had to have those reference points.
So I came up with a simple "deskew" template, which you can print yourself, although you could probably achieve similar results with grid paper.
My janky setup
The resulting picture
After opening your photo with Darktable, and selecting the “darkroom” tab, go to the “rotate and perspective” tool, select the “manually defined rectangle” structure, and adjust the rectangle to match the centers of the 4 deskewing targets. Then click on “horizontal/vertical fit”. This will give you a squished CD; don't worry, select the “specific” lens model and voilà.
Tools at the ready
Targets acquired
Straightened but squished
You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.
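If you ever want to script that last step instead of doing it by hand in GIMP, here is a rough alternative using Pillow rather than GIMP (it assumes the scan is already deskewed and roughly centred, and the file names and radii below are placeholders to adjust):

from PIL import Image, ImageDraw

# Rough Pillow equivalent of the GIMP ellipse-selection step.
img = Image.open("cd-deskewed.png").convert("RGBA")
cx, cy = img.width // 2, img.height // 2
r_outer = min(img.size) // 2         # outer edge of the disc
r_hole = int(min(img.size) * 0.075)  # approximate centre hole, adjust to taste

# Build a mask that keeps the disc and punches out the centre hole.
mask = Image.new("L", img.size, 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((cx - r_outer, cy - r_outer, cx + r_outer, cy + r_outer), fill=255)
draw.ellipse((cx - r_hole, cy - r_hole, cx + r_hole, cy + r_hole), fill=0)

img.putalpha(mask)
img.crop((cx - r_outer, cy - r_outer, cx + r_outer, cy + r_outer)).save("cd-clean.png")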
And we're done!
The result of this example is available on Archive.org, with the rest of my uploads being made available on Archive.org and Abandonware-Magazines for those 90s magazines and their accompanying CDs.
[1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.
One of the datacenters we have machines in is reorganizing things and
is moving us to a new vlan. This is just fine, it will allow more separation,
consolidating IPs and makes sense.
So, they added tagging for the new vlan to our switch there, and this week
I set up things so we could have a separate bridge on that vlan. There
was a bit of a hiccup with ipv6 routing, but that was quickly fixed.
Now we should be able to move VMs as we like to the new vlan by just changing
their IP address and moving them to use the new bridge over the old.
Once everything is moved, we can drop the old bridge and be all on
the new one.
IPv6 now live in main datacenter
Speaking of ipv6, we added ipv6 AAAA records to our new datacenter machines/applications
on thursday. Finally ipv6 users should be able to reach some services
that were not available before. Please let us know if you see any problems
with it, or notice any services that are not yet enabled.
Fun with bonding / LACP 802.3ad
Machines in our new datacenter use two bonded interfaces. This allows us to
reboot/upgrade switches and/or continue to operate normally if a switch dies.
This is great, but when initially provisioning a machine you want to be able
to do so on one interface until it's installed and the bonding setup is
configured. So, one of the two interfaces is set to allow this. On our x86_64
machines, it's always the lower numbered/sorting first interface. However,
this week I found that when I couldn't get any aarch64 machines to pxe boot
that on those machines the initial live interface is THE HIGHER numbered one.
Frustrating, but easy to work with once you figure it out.
Mass rebuild
The f43 mass rebuild was started Wednesday per the schedule. There was a bit
of a hiccup Thursday morning as the koji db restarted and stopped the
mass rebuild script from submitting, but things resumed after that.
Everything pretty much finished Friday night. This is a good deal faster than
in the past where it usually took until sunday afternoon.
I expect we will be merging it into rawhide very soon, so look for...
everything to update.
Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems to be a serious one and was chosen as a replacement.
So starting with Fedora 41 or Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), Redis is no longer available, but Valkey is.
With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so Redis is back as an open-source project.
RPMs of Redis version 8.0.3 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
1. Installation
Packages are available in the redis:remi-8.0 module stream.
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes already proposed for the Fedora official repository.
Redis may be proposed for unretirement and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, module packages are very far from the Packaging Guidelines, so obviously not ready for a review.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 21st – 25th July 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
One of my pet peeves around running local LLMs and inferencing is the sheer mountain of shit^W^W^W complexity of compute stacks needed to run any of this stuff in a mostly optimal way on a piece of hardware.
CUDA, ROCm, and Intel oneAPI all to my mind scream over-engineering on a massive scale at least for a single task like inferencing. The combination of closed source, over the wall open source, and open source that is insurmountable for anyone to support or fix outside the vendor, screams that there has to be a simpler way. Combine that with the pytorch ecosystem and insanity of deploying python and I get a bit unstuck.
What can be done about it?
llama.cpp to me seems like the best answer to the problem at present, (a rust version would be a personal preference, but can't have everything). I like how ramalama wraps llama.cpp to provide a sane container interface, but I'd like to eventually get to the point where container complexity for a GPU compute stack isn't really needed except for exceptional cases.
On the compute stack side, Vulkan exposes most features of GPU hardware in a possibly suboptimal way, but with extensions all can be forgiven. Jeff Bolz from NVIDIA's talk at Vulkanised 2025 started to give me hope that maybe the dream was possible.
The main issue I have is Jeff is writing driver code for the NVIDIA proprietary vulkan driver which reduces complexity but doesn't solve my open source problem.
Enter NVK, the open source driver for NVIDIA GPUs. Karol Herbst and myself are taking a look at closing the feature gap with the proprietary one. For mesa 25.2 the initial support for VK_KHR_cooperative_matrix was landed, along with some optimisations, but there is a bunch of work to get VK_NV_cooperative_matrix2 and a truckload of compiler optimisations to catch up with NVIDIA.
But since mesa 25.2 was coming soon I wanted to try and get some baseline figures out.
I benchmarked on two systems (because my AMD 7900XT wouldn't fit in the case), both with Ryzen CPUs. In the first system I put an RTX 5080, then an RTX 6000 Ada, and then the Intel A770. The second I used for the RX 7900XT. The Intel SYCL stack unfortunately failed to launch inside ramalama, and I hacked llama.cpp to use the A770 MMA accelerators.
I picked this model at random, and I've no idea if it was a good idea.
Some analysis:
The token generation workload is a lot less matmul heavy than prompt processing; it also does a lot more synchronising. Jeff has stated CUDA wins here mostly due to CUDA graphs, and most of the work needed is operation fusion on the llama.cpp side. Prompt processing is a lot more matmul heavy, and extensions like NV_coopmat2 will help with that (NVIDIA Vulkan already uses it in the above), but there may be further work to help close the CUDA gap. On AMD, radv (open source) Vulkan is already better at TG than ROCm, but behind in prompt processing. Again, coopmat2-like extensions should help close the gap there.
NVK is starting from a fair way behind, we just pushed support for the most basic coopmat extension and we know there is a long way to go, but I think most of it is achievable as we move forward and I hope to update with new scores on a semi regular basis. We also know we can definitely close the gap on the NVIDIA proprietary Vulkan driver if we apply enough elbow grease and register allocation :-)
I think it might also be worth putting some effort into radv coopmat2 support; if radv could overtake ROCm for both of these it would remove a large piece of complexity from the basic user's stack.
As for Intel I've no real idea, I hope to get their SYCL implementation up and running, and maybe I should try and get my hands on a B580 card as a better baseline. When I had SYCL running once before I kinda remember it being 2-4x the vulkan driver, but there's been development on both sides.
A recent question led me down a rabbit hole: how can we list the people that report up to George Slate? While we should be able to query this from LDAP, it seems to be shut off. However, using FreeIPA’s HTTP API, we can, if you know what you are doing. I do…
The classpath.org domain expired a couple of days ago and none of the subdomains, like planet, devel, icedtea, resolved. Oops. It has now been renewed for at least 5 years.
Here are the release notes from Cockpit 343 and cockpit-machines 336:
Machines: Graphical VNC and serial consoles have been improved
It is now easier to switch between these two console types, and also easier to launch a remote viewer. Cockpit can now add VNC and serial consoles to machines that are missing them, and can change the port and password of the VNC servers of individual machines.
Machines: Control VNC console resizing and scaling
You can now explicitly control the graphical console’s resizing and scaling behaviour in expanded mode:
“No scaling or resizing” gets you a 1-to-1 pixel-perfect view of a remote desktop, but possibly with scrollbars, depending on the browser window size.
“Local scaling” scales the console in the browser and thus is compatible with small browser windows and large remote resolutions, but is harder to read.
“Remote resizing” asks the remote machine to set a native resolution that matches Cockpit’s console view. This keeps crisp graphics, but may not be compatible with all operating systems.
Try it out
Cockpit 343 and cockpit-machines 336 are available now:
Whenever people use a non-x86 system, sooner or later someone asks: “But can it run
[name of x86-64 only binary]?”. So, let’s check how to make it possible.
It is a good thing to read to understand the stack. But if your Arm system runs
a 4K page size kernel, then most of that documentation can be skipped. You would
not need muvm nor binfmt-dispatcher packages.
What you need is FEX-emu, and nothing else, as it recommends all required components.
To make it easier, I recommend removing some of the QEMU packages:
dnf remove qemu-static-* so only FEX-emu will be used for running foreign
architecture binaries.
Checking emulation
Installing FEX-emu should also install the Fedora/x86-64 rootfs. So, let’s check
if emulation is working:
$ uname -m
aarch64
$ FEXBash "uname -m"
erofsfuse 1.8.9
<W> erofs: /usr/share/fex-emu/RootFS/default.erofs mounted on /run/user/1000/.FEXMount2262322-Fjd32T with offset 0
x86_64
If you have errors at this stage, then check what you have in the
/usr/lib/binfmt.d/ directory. There should be only two files:
That’s the level of an Intel Atom CPU from 2021. My AMD Ryzen 5 3600 has much
better results.
Can it be better?
I asked the FEX-Emu developers this kind of question a few days ago. There are
several Arm CPU features that can be used to make emulation faster:
Section: Features
Crypto: AES, CRC, SHA, PMULL
TSO emulation: Apple Silicon TSO bit, LRCPC, LRCPC2, LSE2
Flags stuff: AFP, FlagM, FlagM2
Atomic operations: LSE
Misc: FCMA, FRINTTS, RPRES, SVE, SVE_bitperm
Future: RCPC3, CSSC, RAND
The Ampere Altra is an old Armv8.2 CPU. Its Neoverse-N1 cores are from 2019 and support only a small subset of the entries from the table above: all the crypto ones, plus LSE and LRCPC.
Tips and tricks
After publishing this post, I got some additional hints from the FEX-emu folks on how to make emulation a bit faster.
One is reducing the precision of the x86 FPU; the second is disabling TSO emulation. So my configuration file (“~/.fex-emu/Config.json”) looks like this now:
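The post's original configuration is not reproduced here; a sketch of what such a file could contain follows. The key names are my assumption based on FEX-Emu's documented options (reduced x87 precision, TSO emulation off), so check them against the FEX-Emu documentation before copying:

{
  "Config": {
    "TSOEnabled": "0",
    "X87ReducedPrecision": "1"
  }
}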
This gave Factorio a visible speed-up. I cannot compare Geekbench results because it crashes randomly after configuration changes.
Steam
Now that we have sorted out the software side, let’s move on to games (because what else would you do with x86(-64) emulation?), which usually means Steam.
First of all, we need the Steam package. I took it from the RPM Fusion non-free page. The next step is installing it. This is done without dependencies (those will be provided by the FEX-emu rootfs), and we need to tell RPM that the architecture of the package does not matter here:
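A hedged sketch of that step; the exact file name depends on what you downloaded from RPM Fusion:

# --nodeps: dependencies will come from the FEX-emu x86-64 rootfs
# --ignorearch: the package architecture does not match the aarch64 host
$ sudo rpm -ivh --nodeps --ignorearch steam-*.rpm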
Then comes Steam’s own setup, which we run under emulation:
FEXBash steam
It can take some time, but at the end you should get the Steam window:
[Screenshot: the Steam window]
So now we are ready to play some games, right?
Factorio
As you may know from one of my previous posts, I am addicted to the Factorio game. So how does it go on an Ampere Altra-based system?
[Screenshot: Factorio]
Let me be honest: without tweaking the FEX-Emu config it was unplayable. At least not with my saved games, at 3440x1440 resolution. What was 60/60 (FPS/UPS) on a Ryzen 5 3600 went down to 8-11/59.5 ;(
After tweaking the FEX-Emu configuration, Factorio became more playable. In areas with many robots it was running at 16-25 FPS, and there were moments close to 60/60, but only when zoomed in on a deserted area. I may consider attempting to finish the game within 100 hours.
Building firmware
At first, I built the ALTRA8UD-1L2T firmware without using Ninja. It took 30 seconds (11 minutes of total CPU time). Then I did a run with Ninja. It took 2 minutes (72 minutes of total CPU time).
Hmm… it should have been much faster. I retried and watched the htop window, and I saw a lot of “FEXInterpreter” lines… so Ninja was running under emulation. I then replaced the x86-64 binary of Ninja with a symlink to the aarch64 one, and the improvement was clearly visible.
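To illustrate the check and the fix (the path of the offending ninja binary is hypothetical, and the file output is abbreviated):

$ file ~/edk2-build/bin/ninja
ninja: ELF 64-bit LSB executable, x86-64, ...

$ file /usr/bin/ninja
/usr/bin/ninja: ELF 64-bit LSB executable, ARM aarch64, ...

# replace the emulated binary with a symlink to the native one
$ ln -sf /usr/bin/ninja ~/edk2-build/bin/ninja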
I commented on the pull request with a NAK.
Is it worth using?
So, the final question is: “Is it worth using x86(-64) emulation at all?”…
I do not plan to use it much. I may try to play some older games like Torchlight
II, which I managed to run once and then it stopped working.
There are multiple syslog protocols with multiple variants. The new transport(auto) option of the syslog() source allows you to support all TCP-based variants with a single source driver.
Why?
When it comes to syslog, there are many transport options. RFC3164 describes the “legacy” or “BSD” syslog protocol, while RFC5424 refers to the “new” syslog protocol (which is also more than a decade old now… :-) ). RFC5424-formatted messages normally come with framing or octet counting (as per RFC6587), where messages are prefixed with the length of the message. And just to increase confusion even more, some software uses RFC5424 message formatting, but without octet counting.
Up until now, the amount of these variants meant that if you wanted to receive logs from RFC3164 and RFC5424 with or without octet counting, then you had to configure three different ports on syslog-ng to parse all of them correctly.
But not anymore! The new transport(auto) option of syslog-ng allows you to collect all these variants using a single port. And not just those, but even a variant that I do not recall seeing before: RFC3164 formatting with octet counting… :-)
Before you begin
Make sure that you have at least syslog-ng 4.9.0 installed. If it is not (yet) available in the operating system of your choice, then check if there are any third-party packages available. Of course, you can also build syslog-ng yourself, but using pre-built packages is a lot more convenient.
Configuring syslog-ng
Depending on your syslog-ng configuration, append the following configuration snippet to syslog-ng.conf, or create a new .conf file for it under the /etc/syslog-ng/conf.d/ directory.
The source driver opens port 514 and sets the transport mode to auto, which means that any TCP-based syslog protocol will work.
The destination drivers will simply write incoming log messages to a file.
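The snippet itself is not reproduced in this excerpt, but based on the description above a minimal sketch could look like this (file name and driver names are my own choice; transport(auto) needs syslog-ng 4.9.0 or later):

# /etc/syslog-ng/conf.d/transport-auto.conf -- illustrative sketch
source s_net_auto {
    syslog(
        ip("0.0.0.0")
        port(514)
        transport("auto")   # accepts RFC3164 and RFC5424, with or without octet counting
    );
};

destination d_fromnet {
    file("/var/log/fromnet.log");
};

log {
    source(s_net_auto);
    destination(d_fromnet);
};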
Testing
Once the syslog-ng configuration is live, you are ready for some testing. I used logger with these options on openSUSE, but the available options on your distribution or OS might be different. The third variant is only there for fun, as I do not recall ever seeing it in the wild… :-)
logger -T -n 127.0.0.1 -P 514 --rfc5424 bla bla rfc5414
logger -T -n 127.0.0.1 -P 514 --rfc3164 bla bla rfc3164
logger -T -n 127.0.0.1 -P 514 --rfc3164 --octet-count bla bla rfc3164 octet count
logger -T -n 127.0.0.1 -P 514 --rfc5424 --octet-count bla bla rfc5424 octet count
And the resulting log messages should look something like these:
Jul 14 13:12:39 localhost root: bla bla rfc5414
Jul 14 13:12:58 localhost root: bla bla rfc3164
Jul 14 13:14:09 localhost root: bla bla rfc3164 octet count
Jul 14 13:14:29 localhost root: bla bla rfc5424 octet count
What is next?
You can now simplify your syslog-ng configuration. On a larger network, this might also mean that you can simplify your firewall configuration.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I am writing this blog post as Claude Code is working on
upsint, a tool we worked on many years
back. I haven’t touched upsint’s codebase for some time. It worked just fine all those years, but recently I started getting 401 and 403 errors while creating pull requests, probably due to my API token expiring. I have never implemented any serious error handling in the tool, so it was hard to diagnose the issue quickly:
requests.exceptions.RetryError: HTTPSConnectionPool(host='api.github.com',
port=443): Max retries exceeded with url: /repos/packit/ai-workflows/pulls
(Caused by ResponseError('too many 403 error responses'))
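For context, here is a minimal sketch of the kind of error handling that would have surfaced the problem immediately. This is not upsint's actual code; the function and names are made up for illustration:

import requests

def create_pull_request(session: requests.Session, url: str, payload: dict) -> dict:
    """Create a PR and fail loudly with the HTTP status and GitHub's error message."""
    response = session.post(url, json=payload)
    if response.status_code in (401, 403):
        # Expired or under-scoped tokens end up here; say so explicitly instead of
        # letting urllib3 retry until it raises a vague RetryError.
        raise RuntimeError(
            f"GitHub rejected the request ({response.status_code}): "
            f"{response.json().get('message', 'no error message')} - "
            "check whether the API token has expired"
        )
    response.raise_for_status()
    return response.json()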
Opportunity is upon us! For the past few years, the desktop Linux user base has been growing at a historically high rate. StatCounter currently has us at 4.14% desktop OS market share for Q2 2025. For comparison, when Fedora Workstation was first released in Q4 2014, desktop Linux was at 1.38%. Now, StatCounter measures HTTP requests, not computers, but it’s safe to say the trend is highly encouraging. Don’t trust StatCounter? Cloudflare reports 2.9% for Q2 2025. One of the world’s most popular websites reports 5.1%. And although I was unable to figure out how to make a permanent link to the results, analytics.usa.gov is currently reporting a robust 6.2% for the past 90 days, and increasing. The Linux user base is already much larger than I previously suspected would ever be possible, and it seems to be increasing quickly. I wonder if we are perhaps nearing an inflection point where our user base may soon increase even more considerably. The End of 10 and enthusiastic YouTubers are certainly not hurting.
Compared to its peers, Fedora is doing particularly well. It’s pretty safe to say that Fedora is now one of the 2 or 3 most popular and successful desktop Linux operating systems, a far cry from its status 10 years ago, when Fedora suffered from an unfortunate longstanding reputation that it was an unstable “test bed” OS only suitable for experienced technical users. Those days are long gone; nowadays, Fedora has an army of social media users eager to promote it as a reliable, newcomer-friendly choice.
But we cannot stop here. If we become complacent and content ourselves with the status quo, then we will fail to take maximum advantage of the current opportunity.
Although Fedora Workstation works well for most users, and although quality and reliability have improved considerably over the past decade, it is still far too easy for inexperienced users to break the operating system. Today’s Fedora Workstation is fundamentally just a nicer version of the same thing we already had 10 years ago. The original plan called for major changes that we have thus far failed to deliver, like “Robust Upgrades,” “Better upgrade/rollback control,” and “Container based application install.” These critical goals are notably all already achieved by Fedora Silverblue, the experimental image-based alternative to Fedora Workstation, but few Fedora users benefit because only the most experienced and adventurous users are willing to install Silverblue. I had long assumed that Silverblue would eventually become the next Fedora Workstation, and that the Silverblue code name would eventually be retired. This is now an explicit project goal of Fedora’s Strategy 2028, and it is critical for Fedora’s success. The Fedora Workstation of the future must be:
Safe and image-based by default: an atomic operating system composed of RPMs built on bootc. Most users should stick with image-based mode because it’s much harder to break the OS, and easier to troubleshoot when something does go wrong.
Flexible if you so choose: converting the image-based OS into the traditional package-based OS managed by RPM and dnf must be allowed, for users who prefer or require it. Or alternatively, if converting is not possible, then installing a traditional non-atomic Fedora must remain possible. Either way, we must not force users to use image-based desktops if they do not want to, so no need to panic. But image-based must eventually become the new default.
Silverblue is not ready yet, but Fedora has a large community of developers and should be able to eventually resolve the remaining problems.
But wait, wasn’t this supposed to be a blog post about Flathub? Well, consider that with an image-based OS, you cannot easily install traditional RPM packages. Instead, in Fedora Silverblue, desktop applications are installed only via Flatpak. (This is also true of Fedora Kinoite and Fedora’s other atomic desktop variants.) So Fedora must have a source of Flatpaks, and that source must be enabled by default, or there won’t be any apps available.
(Don’t like Flatpak? This blog post is long enough already, so I’ll ask you to just make a leap of faith and accept that Flatpak is cool. Notably, Flatpak applications that keep their bundled dependencies updated and do not subvert the sandbox are much safer to use than traditional distro-packaged applications.)
In practice, there are currently only two interesting sources of Flatpaks to choose from: Fedora Flatpaks and Flathub. Flathub is the much better choice, and enabling it by default should be our end goal. Fedora is already discussing whether to do this. But Flathub also has several disadvantages, some of which ought to be blockers.
Why Flathub?
There are important technical differences between Fedora’s Flatpaks, built from Fedora RPMs, vs. Flathub’s Flatpaks, which are usually built on top of freedesktop-sdk. But I will not discuss those, because the social differences are more important than the technical differences.
Users Like Flathub
Feedback from Fedora’s user base has been clear: among users who like Flatpaks, Flathub is extremely popular. When installing a Flatpak application, users generally expect it to come from Flathub. In contrast, many users of Fedora Flatpaks do not install them intentionally, but rather by accident, only because they are the preferred software source in GNOME Software. Users are often frustrated to discover that Fedora Flatpaks are not supported by upstream software developers and have a different set of bugs than upstream Flatpaks do. It is also common for users and even Fedora developers to entirely remove the Fedora Flatpak application source.
Not so many users prefer to use Fedora Flatpaks. Generally, these users cite some of Flathub’s questionable packaging practices as justification for avoiding use of Flathub. These concerns are valid; Flathub has some serious problems, which I will discuss in more detail below. But improving Flathub and fixing these problems would surely be much easier than creating thousands of Fedora Flatpak packages and attempting to compete with Flathub, a competition that Fedora would be quite unlikely to win.
This is the most important point. Flathub has already won.
Cut Out the Middleman
In general, upstream software developers understand their software much better than downstream packagers. Bugs reported to downstream issue trackers are much less likely to be satisfactorily resolved. There are a variety of ways that downstream packagers could accidentally mess up a package, whether by failing to enable a feature flag, or upgrading a dependency before the application is compatible with the new version. Downstream support is almost never as good as upstream support.
Adding a middleman between upstream and users really only makes sense if the middleman is adding significant value. Traditional distro-packaged applications used to provide considerable value by making it easy to install the software. Nowadays, since upstreams can distribute software directly to users via Flathub, that value is far more limited.
Bus Factor is Critical
Most Flatpak application developers prefer to contribute to Flathub. Accordingly, there are very few developers working on Fedora Flatpaks. Almost all of the Fedora Flatpaks are actually owned by one single developer who has packaged many hundreds of applications. This is surely not a healthy situation.
I suspect this situation is permanent, reflecting a general lack of interest in Fedora Flatpak development, not just a temporary shortfall of contributors. Quality is naturally going to be higher where there are more contributors. The quality of Fedora Flatpak applications is often lower than Flathub applications, sometimes significantly so. Fedora Flatpaks also receive significantly less testing than Flathub Flatpaks. Upstream developers do not test the Fedora Flatpaks, and downstream developers are spread too thin to have plausible hope of testing them adequately.
Focus on What Really Matters
Fedora’s main competency and primary value is the core operating system, not miscellaneous applications that ship on top of it for historical reasons.
When people complain that “distros are obsolete,” they don’t mean that Linux operating systems are not needed anymore. Of course you need an OS on which to run applications. The anti-distro people notably all use distros.
But it’s no longer necessary for a Linux distribution to attempt to package every open source desktop application. That used to be a requirement for a Linux operating system to be successful, but nowadays it is an optional activity that we perform primarily for historical reasons, because it is what we have always done rather than because it is still truly strategic or essential. It is a time-consuming, resource-intensive side quest that no longer makes sense and does not add meaningful value.
The Status Quo
Let’s review how things work currently:
By default, Fedora Workstation allows users to install open source software from the following sources: Fedora Flatpaks, Fedora RPMs, and Cisco’s OpenH264 RPM.
The post-install initial setup workflow, gnome-initial-setup, suggests enabling third-party repositories. If the user does not click the Enable button, then GNOME Software will make the same suggestion the first time it is run. Clicking this button enables all of Flathub, plus a few other RPM repositories.
Fedora will probably never enable software sources that contain proprietary software by default, but it’s easy to enable searching for proprietary software if desired.
(Technically, Fedora actually has a filter in place to allow hiding any Flathub applications we don’t want users to see. But since Fedora 38, this filter is empty, so no apps are hidden in practice. The downstream filter was quite unpopular with users, and the mechanism still exists only as a safety hatch in case there is some unanticipated future emergency.)
The Future
Here are my proposed requirements for Fedora Workstation to become a successful image-based OS.
This proposal applies only to Fedora Workstation (Fedora’s GNOME edition). These proposals could just as well apply to other Fedora editions and spins, like Fedora KDE Plasma Desktop, but different Fedora variants have different needs, so each should be handled separately.
Flathub is Enabled by Default
Since Flathub includes proprietary software, we cannot include all of Flathub by default. But Flathub already supports subsets. Fedora can safely enable the floss subset by default, and replace the “Enable Third-Party Repositories” button with an “Enable Proprietary Software Sources” button that would allow users to switch from the floss subset to the full Flathub if they so choose.
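For reference, the floss subset can already be selected when adding the remote today; a hedged example, assuming a flatpak version new enough to support the --subset option:

# add (or re-add) Flathub restricted to its free-software subset
$ flatpak remote-add --if-not-exists --subset=floss flathub https://dl.flathub.org/repo/flathub.flatpakrepo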
This goal can be implemented today, but we should wait because Flathub has some problems that we ought to fix first. More on that below.
All Default Applications are Fedora Flatpak Applications
All applications installed by default in Fedora Workstation should be Fedora Flatpaks. (Or almost all. Certain exceptions, like gnome-control-center, would make more sense as part of the OS image rather than as a Flatpak.)
Notice that I said Fedora Flatpaks, not Flathub. Fedora surely does need to control the handful of applications that are shipped by default. We don’t want to be at the mercy of Flathub to provide the core user experience.
All Other Applications are Flathub Flatpaks
With the exception of the default Fedora Flatpak applications, Flathub should be the only source of applications in GNOME Software.
It will soon be time to turn off GNOME Software’s support for installing RPM applications, making it a Flatpak-only software center by default. (Because GNOME Software uses a plugin architecture, users of traditional package-based Fedora who want to use GNOME Software to install RPM applications would still be able to do so by installing a subpackage providing a plugin, if desired.)
This requirement is an end goal. It can be done today, but it doesn’t necessarily need to be an immediate next step.
Flathub Must Improve
Flathub has a few serious problems, and needs to make some policy changes before Fedora enables it by default. I’ll discuss this in more detail next.
Fedora Must Help
We should not make demands of Flathub without helping to implement them. Fedora has a large developer community and significant resources. We must not barge in and attempt to take over the Flathub project; instead, let’s increase our activity in the Flathub community somewhat, and lend a hand where requested.
The Case for Fedora Flatpaks
Earlier this year, Yaakov presented The Case for Fedora Flatpaks. This is the strongest argument I’ve seen in favor of Fedora Flatpaks. It complains about five problems with Flathub:
Lack of source and build system provenance: on this point, Yaakov is completely right. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
Lack of separation between FOSS, legally encumbered, and proprietary software: this is not a real problem. Flathub already has a floss subset to separate open source vs. proprietary software; it may not be a separate repository, but that hardly matters because subsets allow us to achieve an equivalent user experience. Then there is indeed no separate subset for legally-encumbered software, but this also does not matter. Desktop users invariably wish to install encumbered software; I have yet to meet a user who does not want multimedia playback to work, after all. Fedora cannot offer encumbered multimedia codecs, but Flathub can, and that’s a major advantage for Flathub. Users and operating systems can block the multimedia extensions if truly desired. Lastly, some of the plainly-unlicensed proprietary software currently available on Flathub does admittedly seem pretty clearly outrageous, but if this is a concern for you, simply stick to the floss subset.
Lack of systemic upgrading of applications to the latest runtime: again, Yaakov is correct. This is a serious problem, and it would be unacceptable for Fedora to embrace Flathub before it is fixed. More on this below.
Lack of coordination of changes to non-runtime dependencies: this is a difference from Fedora, but it’s not necessarily a problem. In fact, allowing applications to have different versions of dependencies can be quite convenient, since upgrading dependencies can sometimes break applications. It does become a problem when bundled dependencies become significantly outdated, though, as this creates security risk. More on this below.
Lack of systemic community engagement: it’s silly to claim that Flathub has no community. Unresponsive Flathub maintainers are a real problem, but Fedora has an unresponsive maintainer problem too, so this can hardly count as a point against Flathub. That said, yes, Flathub needs a better way to flag unresponsive maintainers.
So now we have some good reasons to create Fedora Flatpaks. But maintaining Flatpaks is a tremendous effort. Is it really worth doing if we can improve Flathub instead?
Flathub Must Improve
I propose the following improvements:
Open source software must be built from source on trusted infrastructure.
Applications must not depend on end-of-life runtimes.
Applications must use flatpak-external-data-checker to monitor bundled dependencies wherever possible.
Sandbox holes must be phased out, except where this is fundamentally technically infeasible.
Let’s discuss each point in more detail.
Build Open Source from Source
Open source software can contain all manner of vulnerabilities. Although unlikely, it might even contain malicious backdoors. Building from source does nothing to guarantee that the software is in any way safe to use (and if it’s written in C or C++, then it’s definitely not safe). But it sets an essential baseline: you can at least be confident that the binary you install on your computer actually corresponds to the provided source code, assuming the build infrastructure is trusted and not compromised. And if the package supports reproducible builds, then you can reliably detect malicious infrastructure, too!
In contrast, when shipping a prebuilt binary, whoever built the binary can easily insert an undetectable backdoor; there is no need to resort to stealthy obfuscation tactics. With proprietary software, this risk is inherent and unavoidable: users just have to accept the risk and trust that whoever built the software is not malicious. Fine. But users generally do not expect this risk to extend to open source software, because all Linux operating systems fortunately require open source software to be built from source. Open source software not built from source is unusual and is invariably treated as a serious bug.
Flathub is different. On Flathub, shipping prebuilt binaries of open source software is, sadly, a common accepted practice. Here are several examples. Flathub itself admits that around 6% of its software is not built from source, so this problem is pervasive, not an isolated issue. (Although that percentage unfortunately considers proprietary software in addition to open source software, overstating the badness of the problem, because building proprietary software from source is impossible and not doing so is not a problem.) Update: I’ve been advised that I misunderstood the purpose of extra-data. Most apps that ship prebuilt binaries do not use extra-data. I’m not sure how many apps are shipping prebuilt binaries, but the problem is pervasive.
Security is not the only problem. In practice, Flathub applications that do not build from source sometimes package binaries only for x86_64, leaving aarch64 users entirely out of luck, even though Flathub normally supports aarch64, an architecture that is important for Fedora. This is frequently cited by Flathub’s opponents as a major disadvantage relative to Fedora Flatpaks.
A plan to fix this should exist before Fedora enables Flathub by default. I can think of a few possible solutions:
Create a new subset for open source software not built from source, so Fedora can filter out this subset. Users can enable the subset at their own risk. This is hardly ideal, but it would allow Fedora to enable Flathub without exposing users to prebuilt open source software.
Declare that any software not built from source should be treated equivalent to proprietary software, and moved out of the floss subset. This is not quite right, because it is open source, but it has the same security and trust characteristics of proprietary software, so it’s not unreasonable either.
Set a flag date by which any open source software not built from source must be delisted from Flathub. I’ll arbitrarily propose July 1, 2027, which should be a generous amount of time to fix apps. This is my preferred solution. It can also be combined with either of the above.
Some of the apps not currently built from source are Electron packages. Electron takes a long time to build, and I wonder if building every Electron app from source might overwhelm Flathub’s existing build infrastructure. We will need some sort of solution to this. I wonder if it would be possible to build Electron runtimes to provide a few common versions of Electron. Alternatively, Flathub might just need more infrastructure funding.
Tangent time: a few applications on Flathub are built on non-Flathub infrastructure, notably Firefox and OBS Studio. It would be better to build everything on Flathub’s infrastructure to reduce risk of infrastructure compromise, but as long as this practice is limited to only a few well-known applications using trusted infrastructure, then the risk is lower and it’s not necessarily a serious problem. The third-party infrastructure should be designed thoughtfully, and only the infrastructure should be able to upload binaries; it should not be possible for a human to manually upload a build. It’s unfortunately not always easy to assess whether an application complies with these guidelines or not. Let’s consider OBS Studio. I appreciate that it almost follows my guidelines, because the binaries are normally built by GitHub Actions and will therefore correspond with the project’s source code, but I think a malicious maintainer could bypass that by uploading a malicious GitHub binary release? This is not ideal, but fortunately custom infrastructure is an unusual edge case, rather than a pervasive problem.
Penalize End-of-life Runtimes
When a Flatpak runtime reaches end-of-life (EOL), it stops receiving all updates, including security updates. How pervasive are EOL runtimes on Flathub? Using the Runtime Distribution section of Flathub Statistics and some knowledge of which runtimes are still supported, I determined that 994 out of 3,438 apps are currently using an EOL runtime. Ouch. (Note that the statistics page says there are 3,063 total desktop apps, but for whatever reason, the number of apps presented in the Runtime Distribution graph is higher. Could there really be 375 command line apps on Flathub?)
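If you want to check which runtimes the apps on your own system use, flatpak can show this directly; for example:

# list installed applications together with the runtime each one uses
$ flatpak list --app --columns=application,runtime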
Using an EOL runtime is dangerous and irresponsible, and developers who claim otherwise are not good at risk assessment. Some developers will say that security does not matter because their app is not security-critical. It’s true that most security vulnerabilities are not actually terribly important or worth panicking over, but this does not mean it’s acceptable to stop fixing vulnerabilities altogether. In fact, security matters for most apps. A few exceptions would be apps that do not open files and also do not use the network, but that’s really probably not many apps.
I recently saw a developer use the example of a music player application to argue that EOL runtimes are not actually a serious problem. This developer picked a terrible example. Our hypothetical music player application can notably open audio files. Applications that parse files are inherently high risk because users love to open untrusted files. If you give me a file, the first thing I’m going to do is open it to see what it is. Who wouldn’t? Curiosity is human nature. And a music player probably uses GStreamer, which puts it at the very highest tier of security risk (alongside your PDF reader, email client, and web browser). I know of exactly one case of a GNOME user being exploited in the wild: it happened when the user opened a booby-trapped video using Totem, GNOME’s GStreamer-based video player. At least your web browser is guaranteed to be heavily sandboxed; your music player might very well not be.
The Flatpak sandbox certainly helps to mitigate the impact of vulnerabilities, but sandboxes are intended to be a defense in depth measure. They should not be treated as a primary security mechanism or as an excuse to not fix security bugs. Also, too many Flatpak apps subvert the sandbox entirely.
Of course, each app has a different risk level. The risk of you being attacked via GNOME Calculator is pretty low. It does not open files, and the only untrusted input it parses is currency conversion data provided by the International Monetary Fund. Life goes on if your calculator is unmaintained. Any number of other applications are probably generally safe. But it would be entirely impractical to assess 3000 different apps individually to determine whether they are a significant security risk or not. And independent of security considerations, use of an EOL runtime is a good baseline to determine whether the application is adequately maintained, so that abandoned apps can be eventually delisted. It would not be useful to make exceptions.
The solution here is simple enough:
It should not be possible to build an application that depends on an EOL runtime, to motivate active maintainers to update to a newer runtime. Flathub already implemented this rule in the past, but it got dropped at some point.
An application that depends on an EOL runtime for too long should eventually be delisted. Perhaps 6 months or 1 year would be good deadlines.
A monitoring dashboard would make it easier to see which apps are using maintained runtimes and which need to be fixed.
Monitor Bundled Dependencies
Flatpak apps have to bundle any dependencies not present in their runtime. This creates considerable security risk if the maintainer of the Flathub packaging does not regularly update the dependencies. The negative consequences are identical to using an EOL runtime.
Fortunately, Flathub already has a tool to deal with this problem: flatpak-external-data-checker. This tool automatically opens pull requests to update bundled dependencies when a new version is available. However, not all applications use flatpak-external-data-checker, and not all applications that do use it do so for all dependencies, and none of this matters if the app’s packaging is no longer maintained.
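For illustration, this is roughly what wiring a bundled dependency up to flatpak-external-data-checker looks like in a manifest source entry; the project name, URL, and checksum are placeholders, not taken from any real app:

{
    "type": "archive",
    "url": "https://example.org/releases/libfoo-1.2.3.tar.xz",
    "sha256": "0000000000000000000000000000000000000000000000000000000000000000",
    "x-checker-data": {
        "type": "anitya",
        "project-id": 12345,
        "url-template": "https://example.org/releases/libfoo-$version.tar.xz"
    }
}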
I don’t know of any easy ways to monitor Flathub for outdated bundled dependencies, but given the number of apps using EOL runtimes, I assume the status quo is pretty bad. The next step here is to build better monitoring tools so we can better understand the scope of this problem.
Phase Out Most Sandbox Holes (Eventually)
Applications that parse data are full of security vulnerabilities, like buffer overflows and use-after-frees. Skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted malicious data to gain total control of your user account on your computer. They can then install malware, read all the files in your home directory, use your computer in a botnet, and do whatever else they want with it. But if the application is sandboxed, then a second type of exploit, called a sandbox escape, is needed before the app can harm your host operating system and access your personal data, so the attacker now has to exploit two vulnerabilities instead of just one. And while app vulnerabilities are extremely common, sandbox escapes are, in theory, rare.
In theory, Flatpak apps are drastically safer than distro-packaged apps because Flatpak provides a strong sandbox by default. The security benefit of the sandbox cannot be overstated: it is amazing technology and greatly improves security relative to distro-packaged apps. But in practice, Flathub applications routinely subvert the sandbox by using expansive static permissions to open sandbox holes. Flathub claims that it carefully reviews apps’ use of static permissions and allows only the most narrow permissions that are possible for the app to function properly. This claim is dubious because, in practice, the permissions of actual apps on Flathub are extremely broad, as often as not making a total mockery of the sandbox.
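For anyone who wants to see this for themselves, an installed app's static permissions can be inspected directly (the application ID is a placeholder):

$ flatpak info --show-permissions org.example.App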
While some applications use sandbox holes out of laziness, in many cases it’s currently outright impossible to sandbox the application without breaking key functionality. For example, Sophie has documented many problems that necessitate sandbox holes in GNOME’s image viewer, Loupe. These problems are fixable, but they require significant development work that has not happened yet. Should we punish the application by requiring it to break itself to conform to the requirements of the sandbox? The Flathub community has decided that the answer is no: application developers can, in practice, use whatever permissions they need to make the app work, even if this entirely subverts the sandbox.
This was originally a good idea. By allowing flexibility with sandbox permissions, Flathub made it very easy to package apps, became extremely popular, and allowed Flatpak itself to become successful. But the original understanding of the Flatpak community was that this laxity would be temporary: eventually, the rules would be tightened and apps would be held to progressively higher standards, until sandbox holes would eventually become rare. Unfortunately, this is taking too long. Flatpak has been around for a decade now, but this goal is not within reach.
Tightening sandbox holes does not need to be a blocker for adopting Flathub in Fedora because it’s not a problem relative to the status quo in Fedora. Fedora Flatpaks have the exact same problem, and Fedora’s distro-packaged apps are not sandboxed at all (with only a few exceptions, like your web browser). But it’s long past time to at least make a plan for how to eventually phase out sandbox holes wherever possible. (In some cases, it won’t ever be possible; e.g. sandboxing a file manager or disk usage analyzer does not make any sense.) It’s currently too soon to use sticks to punish applications for having too many sandbox holes, but sticks will be necessary eventually, hopefully within the next 5 years. In the meantime, we can immediately begin to use carrots to reward app developers for eliminating holes. We will need to discuss specifics.
We also need more developers to help improve xdg-desktop-portal, the component that allows sandboxed apps to safely access resources on the host system without using sandbox holes. This is too much work for any individual; it will require many developers working together.
Software Source Prioritization
So, let’s say we successfully engage with the Flathub project and make some good progress on solving the above problems. What should happen next?
Fedora is a community of doers. We cannot tell Fedora contributors to stop doing work they wish to do. Accordingly, it’s unlikely that anybody will propose to shut down the Fedora Flatpak project so long as developers are still working on it. Don’t expect that to happen.
However, this doesn’t mean Fedora contributors have a divine right for their packaged applications to be presented to users by default. Each Fedora edition (or spin) should be allowed to decide for itself what should be presented to the user in its software center. It’s time for the Fedora Engineering Steering Committee (FESCo) to allow Fedora editions to prefer third-party content over content from Fedora itself.
We have a few options as to how exactly this should work:
We could choose to unconditionally prioritize all Flathub Flatpaks over Fedora Flatpaks, as I proposed earlier this year (Workstation ticket, LWN coverage). The precedence in GNOME Software would be Flathub > Fedora Flatpaks.
Alternatively, we could leave Fedora Flatpaks with highest priority, and instead apply a filter such that only Fedora Flatpaks that are installed by default are visible in GNOME Software. This is my preferred solution; there is already an active change proposal for Fedora 43 (proposal, discussion), and it has received considerable support from the Fedora community. Although the proposal only targets atomic editions like Silverblue and Kinoite for now, it makes sense to extend it to Fedora Workstation as well. The precedence would be Filtered Fedora Flatpaks > Flathub.
When considering our desired end state, we can stop there; those are the only two options because of my “All Other Applications are Flathub Flatpaks” requirement: in an atomic OS, it’s no longer possible to install RPM-packaged applications, after all. But in the meantime, as a transitional measure, we still need to consider where RPMs fit in until such time that Fedora Workstation is ready to remove RPM applications from GNOME Software.
We have several possible precedence options. The most obvious option, consistent with my proposals above, is: Flathub > Fedora RPMs > Fedora Flatpaks. And that would be fine, certainly a huge improvement over the status quo, which is Fedora Flatpaks > Fedora RPMs > Flathub.
But we could also conditionally prioritize Flathub Flatpaks over Fedora Flatpaks or Fedora RPMs, such that the Flathub Flatpak is preferred only if it meets certain criteria. This makes sense if we want to nudge Flathub maintainers towards adopting certain best practices we might wish to encourage. Several Fedora users have proposed that we prefer Flathub only if the app has Verified status, indicating that the Flathub maintainer is the same as the upstream maintainer. But I do not care very much whether the app is verified or not; it’s perfectly acceptable for a third-party developer to maintain the Flathub packaging if the upstream developers do not wish to do so, and I don’t see any need to discourage this. Instead, I would rather consider whether the app receives a Probably Safe safety rating in GNOME Software. This would be a nice carrot to encourage app developers to tighten sandbox permissions. (Of course, this would be a transitional measure only, because eventually the goal is for Flathub to be the only software source.)
There are many possible outcomes here, but here are my three favorites, in order:
My favorite option: Filtered Fedora Flatpaks > Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub. Fedora Flatpaks take priority, but this won’t hurt anything because only applications shipped by default will be available, and those will be the ones that receive the most testing. This is not a desirable end state because it is complicated and it will be confusing to explain to users why a certain software source was preferred. But in the long run, when Fedora RPMs are eventually removed, it will simplify to Filtered Fedora Flatpaks > Flathub, which is elegant.
A simple option, the same thing but without the conditional prioritization: Filtered Fedora Flatpaks > Flathub > Fedora RPMs.
Alternative option: Probably Safe Flathub > Fedora RPMs > Potentially Unsafe Flathub > Unfiltered Fedora Flatpaks. When Fedora RPMs are eventually removed, this will simplify to Flathub > Unfiltered Fedora Flatpaks. This alternative option behaves almost the same as the above, except allows users to manually select the Fedora Flatpak if they wish to do so, rather than filtering them out. But there is a significant disadvantage: if you uninstall an application that is installed by default, then reinstall the application, it would come from Flathub rather than Fedora Flatpaks, which is unexpected. So we’ll probably want to hardcode exceptions for default apps to prefer Fedora Flatpaks.
The corresponding simple option without conditional prioritization: Flathub > Fedora RPMs > Unfiltered Fedora Flatpaks.
Any of these options would be fine.
Conclusion
Flathub is, frankly, not safe enough to be enabled by default in Fedora Workstation today. But these problems are fixable. Helping Flathub become more trustworthy will be far easier than competing against it by maintaining thousands of Fedora Flatpaks. Enabling Flathub by default should be a strategic priority for Fedora Workstation.
I anticipate a lively debate on social media, on Matrix, and in the comments. And I am especially eager to see whether the Fedora and Flathub communities accept my arguments as persuasive. FESCo will be considering the Filter Fedora Flatpaks for Atomic Desktops proposal imminently, so the first test is soon.
Freelens is a graphical, open-source integrated development environment (IDE) for managing and monitoring Kubernetes clusters. Considered an open-source version of Lens (from before it became a paid product), it is available for Windows, macOS, and Linux, and aims to simplify Kubernetes operations with a user-friendly and powerful interface. Notable Freelens features: 1. Dashboard […]
Here's another Saturday blog on interesting things that happened in Fedora infrastructure in the last week.
Datacenter Move remnants
Cleanup and fixes have slowed down. There's a few outstanding items,
but things are mostly all back to normal.
The new hardware is nicely fast: we are getting rawhide composes in
about 1.5 hours (was a bit over 2 hours in the old datacenter).
We hope to enable IPv6 on a lot of services soon. There's one last routing issue to clear up next week and then hopefully everything will be ready. See the infrastructure mailing list if you have IPv6 and would like to help us test things.
I moved our ppc64le builders from local NVMe storage to an iSCSI LUN. All of them are on one Power10 machine, and NVMe is fast, but all 32 builders hammering just 4 NVMe drives still sometimes ran into I/O limitations. With the storage moved, they should be building a good deal faster now.
I'm trying to bring the machines we shipped from the old datacenter
online now. They will be mostly adding builder capacity and
openqa capacity. I'm not sure how many I will get before the
mass rebuild starts next wed, but I think we are in reasonable
shape in any case.
Yellow Glue of death
Twice this past week, the UPS powering my main machine, wireless AP, and router started yelling about an overload and stopped working. I am pretty sure it's the 'yellow glue of death' problem: basically, the vendor used a cheap yellow glue, and over time and with heat (it's been hot lately) it breaks down and starts shorting things out.
So, I ordered some new UPSes from a different vendor. I was supposed to get them both yesterday, but only one showed up for some reason. One is enough for now though, so I set it up and moved all my stuff over to it. It seems to work fine, nut monitors it fine, and it was easy to add to Home Assistant. Even more loaded than I would like, it's reporting a 43-minute runtime, which is fine. I basically just need it for short outages and to give me enough time to fire up the generator if it's a longer one.
Fossy
Looks like I will be at https://2025.fossy.us/ week after next in Portland. At least Thursday, Friday, and Saturday (I have to head home Saturday). Lots of interesting talks, looking forward to it.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 14 July – 18 July 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
I don’t think I ever posted about it, but nine months ago (exactly, which I just realized as I’m writing these words), I joined CIQ as a Senior Systems Engineer. One of my early tasks was to help one of our customers put together Rocky Linux images that their customers could use, and one of the requirements from their HPC customers was that the latest Intel irdma kernel module be available.
While packaging up the kernel module as an external kmod was easy enough, the question was asked, “What if the kernel ABI changes?” Their HPC customers wanted to use the upstream Rocky kernel, which, as a rebuild of RHEL, has the same kABI guarantees that Red Hat provides. There is a list of symbols that are (mostly) guaranteed not to change during a point release, but the Intel irdma driver requires symbols that aren’t in that list.
I did some investigation, and, in the lifespan of Rocky 8.10 (roughly 15 months), there have been just under 60 kernel releases, with only 3 or 4 breaking the symbols required by the Intel irdma driver. This meant that we could build the kmod when 8.10 came out, and, using weak-updates, the kernel module would automatically be available for newer kernels as they were released, until a release came out that broke one of the symbols the kmod depended on. At that point, we would need to bump the release and rebuild the kmod. The new kmod build would be compatible with the new kernel, and with any other new kernels until the kABI broke again.
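For readers unfamiliar with the mechanism: weak-updates works by symlinking an already-built module into newer kernels' module trees for as long as the kABI still matches, so nothing needs rebuilding. A rough illustration; the kernel versions and module path are made up:

# the kmod RPM installs its module under the kernel it was built against
/usr/lib/modules/4.18.0-553.el8_10.x86_64/extra/irdma/irdma.ko

# installing a newer, kABI-compatible kernel adds a weak-updates symlink,
# so the same module keeps working without a rebuild
/usr/lib/modules/4.18.0-553.58.1.el8_10.x86_64/weak-updates/irdma/irdma.ko
    -> ../../4.18.0-553.el8_10.x86_64/extra/irdma/irdma.ko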
When doing the original packaging for the kernel, Red Hat had the wisdom to add in a custom dependency generator that automatically generates a “Provides:” in the RPM for each symbol exported by the kernel, along with a hashed signature of its structure. This means that the kmod RPMs can be built to “Require:” each symbol they need, ensuring that the kmod can’t be installed on a system without also having a matching kernel installed.
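To see what that looks like in practice, you can query the RPM metadata; the package names, symbols, and hashes below are illustrative, not taken from the post:

# symbols a kernel package provides, each with a hash of its signature
$ rpm -q --provides kernel-core | grep '^kernel(' | head -n 3
kernel(__alloc_pages) = 0x345abcde
kernel(ib_register_device) = 0x9f01c2aa
kernel(pci_enable_device) = 0x1b2c3d4e

# symbols a kmod package requires; it will only install alongside a kernel
# that provides every one of these exact symbol/hash pairs
$ rpm -q --requires kmod-irdma | grep '^kernel('
kernel(ib_register_device) = 0x9f01c2aa
kernel(pci_enable_device) = 0x1b2c3d4e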
This last item would seem to solve the whole “make sure kmods and kernels match” problem, except for one minor detail: You can have more than one kernel installed on your system.
Picture this. You have a system, and you install a kernel on it, and then install the Intel irdma/idpf driver, which makes your fancy network card work. A little while later, you update to the latest kernel and reboot… only to find your network card won’t work anymore!
What’s happened is that the kernel update changed one of the symbols required by the Intel irdma kmod, breaking the kABI. The kmod RPM has a dependency on the symbols it needs, but, because the kernel is special (that’s for you, Maple!), it’s one of the few packages that can have multiple versions installed at the same time, and those symbols are provided by the previous kernel, which is still installed, even if it’s not the currently booted kernel. The fix is as easy as booting back into the previous kernel, and waiting for an updated Intel kmod, but this is most definitely not a good customer experience.
What we really need is a safety net, a way to temporarily block the kernel from being updated until a matching kmod is available in the repositories. This is where dnf-plugin-protected-kmods comes in. When configured to protect a kmod, this DNF plugin will exclude any kernel RPMs if that kernel doesn’t have all the symbols required by the kmod RPM.
This means that, in the example above, the updated kernel would not have appeared as an available update until the Intel irdma/idpf kmod was also available (a warning would appear, indicating that this kernel was being blocked).
NVIDIA originally came up with the idea when they created yum-plugin-nvidia-driver, but it was very specifically designed with the NVIDIA kmods and their requirements in mind, so I forked it and made it more generic, updating it to filter based on the kernel’s “Provides:” and the kmod’s “Requires:”.
Our customer has been using this plugin for over six months, and it has functioned as expected. The DNF kmods we’re building for CIQ SIG/Cloud Next (a story for another day) are also built to support it and there’s a “Recommends:” dependency on it when the kmods are installed.
Since this plugin is useful not just to CIQ, but also to the wider Enterprise Linux community, I started working on packaging it up at this year’s Flock to Fedora conference (thanks for sending me, CIQ!), and, thanks to a review from Jonathan Wright (from AlmaLinux) with support from Neal Gompa, it’s now available in EPEL.
Note that there is no DNF 5 version available yet, and, given the lack of kABI guarantees in the Fedora kernel, there isn’t much point in having it in Fedora proper.
And I do want to emphasize that, out of the box, the plugin doesn’t actually do anything. For it to protect a kmod, a drop-in configuration file is required as described in the documentation.
In 2022, the Transparency International Corruption Perception Index (CPI)
ranked Switzerland at number seven on their list, meaning it is the
seventh least corrupt country based on the methodology used for ranking.
Did Switzerland achieve this favorable score due to genuine attempts to be
clean or due to the effectiveness with which Swiss laws and Swiss culture help
to hide the wrongdoing?
The favorable ranking from Transparency International was reported
widely in the media. At the same time, most media reports also noted
Transparency International's country report card had included caveats about
nepotism, lobbyists and vulnerability of whistleblowers.
When people do try to document the reality, they are sent to prison.
Many multinational companies operate a three hundred and sixty degree
review system whereby employees can give each other feedback. The
human rights activist
Gerhard Ulrich created a web site where Swiss citizens could
write three sixty degree reviews of decisions made by their local judges.
The web site was censored and a SWAT team, the elite TIGRIS unit, was sent to arrest Gerhard Ulrich and take him to prison.
Trevor Kitchen is another well known spokesperson for investors'
rights. In the 1990s Kitchen discovered Swiss people taking credit for his
work and not properly attributing his share. Some time later he
discovered the FX scandal. During Mr Kitchen's retirement in Portugal,
Swiss persecutors used the European Arrest Warrant (EAW) to punish
him from afar. Amnesty International published a report noting he was subject to physical and sexual abuse by Swiss authorities in 1993, and then, using the EAW, they tricked the police in Portugal into repeating the abuse 25 years later in 2018.
By publishing the facts below, I face the same risk of physical and
sexual abuse by corrupt police and lawyerists.
If the Swiss public were fully aware of these details, would
Switzerland still rate so highly on Transparency International's
scale of public perception?
If Transparency International's system can be fooled so easily
by states with criminal speech laws, why doesn't Transparency International
develop a better methodology for ranking corruption?
Every fact I am reporting here can be found using various sources
on the Internet, including the Wayback Machine and the social media
profiles of the various people named below. Yet when these facts are
assembled in the same article they reveal the inconvenient truth about
the Swiss legal system as a whole.
On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.
“We’re proposing to vote judge Yves Donzallaz out of office,” the leader of the party’s parliamentary group Thomas Aeschi has announced.
Loughnan's next chance to win freedom came a year later when
another young criminal, Mark Brandon Read, walked into a courtroom with
his shotgun and kidnapped a judge to have Loughnan released. Read went on to
become one of Australia's most notorious criminals, using the name
Chopper Read. The movie
Chopper helps us get to know him better.
Escape bid: police
28 January 1978
A man who menaced a County Court judge with a shotgun on Thursday
was a "comic character Charles Chaplin would have portrayed sympathetically",
a barrister told Melbourne magistrates court yesterday.
Ironically, Charlie Chaplin was accused of being a communist and fled
the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey
in the Canton of Vaud.
... Read had planned to hold the judge hostage while Loughnan was brought
to the court and given an automatic car and a magnum pistol.
Isn't it remarkable to find the Swiss fascist party (SVP / UDC) and Chopper Read both using the same tactics, kidnapping and blackmailing judges, to get their way?
Suter had anticipated that moment five years prior in the introduction
of his paper:
The author explains how,
in Switzerland, openly political and other considerations are weighed in
the course of electing judges and how the appointment of lay judges
is balanced with an active role of law clerks (Greffier). In contrast,
New Zealand has a proud tradition of apolitical judicial appointments that
are made solely based on merit. The author criticises that Swiss judges are
elected for a term of office, whereas New Zealand judges
enjoy the security of tenure and thus, a greater judicial independence.
Mr Suter asserts that the judges are effectively an extension
of the political parties and the law clerks (Greffier) take a more active role
to prevent the judges indulging themselves. In fact, the word judge
looks similar in English and French but it is not really the same thing
at all. The term law clerk is used for convenience in English
but it is not really a perfect translation either. The role performed
by a law clerk in an English-derived courtroom is very different
to the role performed by a Greffier in a Swiss courtroom.
Therefore, using the term law clerk is confusing and it is
better to simply refer to them by the native name, Greffier
in French or Gerichtsschreiber in German.
In section IV, appointment of judges, Suter tells us:
The formal requirements to be a federal court judge are scant:
any person eligible to vote, that is to say, anyone over the age of 18
who is not incapacitated, may be appointed as a federal court judge.
In other words, a judge does not need to have a law degree or any
experience working in a court.
Suter goes on
Typically, lay judges will only be part of a panel of judges, together
with judges holding a law degree. It may happen though that a lay judge
must act as a single judge as was the case in X v Canton of Thurgau,
where both the President and the Vice-President of the District Court
had recused themselves. The Federal Supreme Court held that to have
a case adjudicated by a lay judge is not in violation of the right to
a fair trial as long as a trained law clerk participates in the management
of the proceedings and the decision making. The court noted that in the
Canton of Thurgau – as in many other cantons – the law clerk may
actively participate in the deliberations on the judgment.
In Switzerland, it is intended that these lay judges, without
legal qualifications, bring some diversity to the system and avoid
the problem of career jurists ruling over society like royal princes.
In English-speaking countries, trials have a jury and the
people in the jury are non-lawyers.
The judges in Switzerland are appointed by a political party for
a period of four to ten years. Members of a jury in English-speaking
countries are selected randomly and replaced for each new trial.
Both lay judges and juries are alternative ways of bringing non-lawyers
into the decision making process of the tribunal.
The idea that lay judges make the tribunal more in touch with the
community is something of a myth. The judges, including lay judges,
are under some control from their political party. The political parties
are under control from their most significant donors. Look at
Elon Musk and his attempt to create the America Party.
Caroline Kuhnlein-Hofmann was the judge in charge of the civil
court in the Canton of Vaud. In another blog post, I demonstrated how
Kuhnlein-Hofmann is a member of the Green Party along with one
of my competitors,
Gerhard Andrey of the company Liip SA. Moreover, Mr Andrey is
also a politician for the
Green party in the federal parliament.
One of Mr Andrey's employees,
Didier Raboud is another Debian Developer. It is an incestuous web
of corruption indeed.
Look specifically at the
payments from the so-called judge's salary into the Green Party's Swiss
bank account. In Australia, when a politician is elected,
they have a similar obligation to give some of their salary back to
their political party. While this woman is using the title "judge",
she is more like a politician and a servant of her political party.
The payments to the
Green Party demonstrate that she has an obligation
to the party: she has to give them money and judgments. This is not
speculation; the
SVP / UDC party said the same thing very loudly in 2020.
Suter has reminded us again of the importance of the Greffier
in complementing the work of the unqualified lay judges. But what if the
judges are not real lawyers and the Greffiers are not trustworthy
either?
Look out for the blind leading the blind.
Suter tells us that the Greffier participates in the deliberations
of the judge or judges. In cases where a single lay judge is hearing
a trial, the Federal Supreme Court requires the Greffier to be involved
in the deliberations. Therefore, the ability for rogue Greffiers to
participate in deliberations would bring the whole system and all
the judgements into disrepute. It all comes down like a house of cards.
In some cantons, law clerks are even allowed to act in place of
judges in some respects, for instance in matters of urgency.
In the Canton of Valais/Wallis, law clerks (Greffier) may
substitute for district court judges.
A snapshot of Mathieu Parreaux's biography,
captured by the Wayback Machine, tells us that Parreaux was still
working as a Greffier at the same time that he was selling legal fees
insurance to the public.
Mathieu Parreaux began his career in 2010, training in accounting and tax law in a fiduciary capacity at the renowned Scheizerweg Finance. Following this experience, he held a KYC officer position at several private banks in Geneva, such as Safra Sarasin and Audi Bank.
That same year, Mathieu took up his duties as lead Greffier at the Tribunal of Monthey in Canton Valais, thus expanding the Municipality's conciliation authority.
He also began teaching law at the private Moser College in Geneva.
Mathieu practices primarily in corporate law, namely contract law, tax law, corporate law, and banking and finance law.
Mathieu also practices health law (medical law, pharmaceutical law, and forensic medicine).
Therefore, by paying Mr Parreaux for legal fees protection
insurance, people would feel they were gaining influence over somebody with the
power of a judge.
Notice in 2021, Mr Parreaux was putting his own name at the bottom
of the renewal invoices sent to clients. In 2022, he changed the
business name to Justicia SA and had one of his employees
put their name at the bottom of the invoice letters.
When thinking about the incredible conflict of interest, it is a
good moment to remember the
story of John Smyth QC, the British
barrister who achieved the role of Recorder, a low-ranking judge,
in the British courts while simultaneously being a Reader in the
Church of England and a prolific pedophile.
After gaining access to client records through the liquidation,
they had unreasonable advantages in using those records during
unrelated litigation.
When FINMA publicly banned Mathieu Parreaux from selling insurance
for two years, they did not make any public comment on his role
or disqualification as a Greffier. Does this mean he can continue
working as a Greffier as long as he does not sell insurance
at the same time?
In the
Lawyer X scandal in Australia, hundreds of judgments had to be
overturned due to a miscarriage of justice. If the Swiss public
were aware of the full circumstances then every judgment involving
Mathieu Parreaux or
Walder Wyss could also be invalidated.
This appears to be one of the reasons for the intense secrecy about
the JuristGate affair.
During my research, I found two other employees of the
legal fee insurance scheme who were also employed in a tribunal
as a Greffier. It looks like there was a revolving door between
the illegal legal insurance scheme and the tribunal.
Is it appropriate for somebody with the powers of a judge to try
and influence the deployment of police resources to suit their
personal circumstances or should they be concerned with distributing
police resources throughout the canton at large?
In the abstract of Benjamin Suter's report, he told us that the
Greffier is meant to help keep the politically-affiliated judges honest.
If the Greffiers are not honest either, the system described by Suter
is a sham.
Imagine for a moment that you are in the middle of a legal dispute
and your brother calls up the Greffier / cat whisperer and asks her
to take his cat for a walk. Hypothetically, he pays ten thousand
Swiss francs for her to interpret his cat and you cross your fingers
and hope that your company's trademark will be granted well-known
status like Coca-cola or Disney.
It needs to be emphasized that the book value of your trademark
increases by millions of francs with a declaration of well known status
under the Paris convention. Any fee that is hypothetically paid for
cat whispering is trivial in comparison to the profit for your
organization.
I will be happy to help you convey the messages you want to send to your
pet and to receive their messages for you, or to help you create an
interior that is conducive to your well-being and that of your loved ones.
In other countries, judges and senior employees of a tribunal
are prohibited from running businesses on the side. When a jury
is deliberating they are usually sequestered in a hotel to prevent
any messages being conveyed through family and the media.
They pretend to be a country and they act like a student union.
I graduated from the National Union of Students in Australia
and traveled halfway around the world to Switzerland. I thought
student politics was behind me, yet I found a bona fide kangaroo court
system at work in the Alps.
Here is the letter from the Swiss Intellectual Property Institute
(IGE / IPI) telling the judges, greffiers and cat whisperers that the
so-called Debian "judgment" can not be processed:
The judge
Richard Oulevey sent another letter acknowledging that their
so-called judgment is impossible to follow, in other words, it is on par
with witchcraft.
While in Pentridge Prison's H division in the late 1970s, Read launched a prison war. "The Overcoat Gang" wore long coats all year round to conceal their weapons, and were involved in several hundred acts of violence against a larger gang during this period. Around this time, Read had a fellow inmate cut both of his ears off to be able to leave H division temporarily. ...
In 1978, while Read was incarcerated, his associate Amos Atkinson held 30 people hostage at The Waiters Restaurant in Melbourne while demanding Read's release. After shots were fired, the siege was lifted when Atkinson's mother, in her dressing gown, arrived at the restaurant to act as go-between. Atkinson's mother hit him over the head with her handbag and told him to "stop being so stupid". Atkinson then surrendered.
In 2022, the Transparency International Corruption Perception Index (CPI)
ranked Switzerland at number seven on their list, meaning it is the
seventh least corrupt country based on the methodology used for ranking.
Did Switzerland achieve this favorable score due to genuine attempts to be
clean or due to the effectiveness with which Swiss laws and Swiss culture help
to hide the wrongdoing?
The favorable ranking from Transparency International was reported
widely in the media. At the same time, most media reports also noted
Transparency International's country report card had included caveats about
nepotism, lobbyists and vulnerability of whistleblowers.
When people do try to document the reality, they are sent to prison.
Many multinational companies operate a three hundred and sixty degree
review system whereby employees can give each other feedback. The
human rights activist
Gerhard Ulrich created a web site where Swiss citizens could
write three sixty degree reviews of decisions made by their local judges.
The web site was censored and a SWAT team, the elite TIGRIS unit,
was sent to arrest Gerhard Ulrich and take him to prison.
Trevor Kitchen is another well known spokesperson for investors'
rights. In the 1990s Kitchen discovered Swiss people taking credit for his
work and not properly attributing his share. Some time later he
discovered the FX scandal. During Mr Kitchen's retirement in Portugal,
Swiss persecutors used the European Arrest Warrant (EAW) to punish
him from afar. Amnesty International published a report noting
that he was subjected to physical and sexual abuse by Swiss authorities in 1993,
and that, using the EAW, they tricked the police in Portugal into repeating the
abuse 25 years later in 2018.
By publishing the facts below, I face the same risk of physical and
sexual abuse by corrupt police and lawyerists.
If the Swiss public were fully aware of these details, would
Switzerland still rate so highly on Transparency International's
scale of public perception?
If Transparency International's system can be fooled so easily
by states with criminal speech laws, why doesn't Transparency International
develop a better methodology for ranking corruption?
Every fact I am reporting here can be found using various sources
on the Internet, including the Wayback Machine and the social media
profiles of the various people named below. Yet when these facts are
assembled in the same article they reveal the inconvenient truth about
the Swiss legal system as a whole.
On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.
“We’re proposing to vote judge Yves Donzallaz out of office,” the leader of the party’s parliamentary group Thomas Aeschi has announced.
Loughnan's next chance to win freedom came a year later when
another young criminal, Mark Brandon Read, walked into a courtroom with
his shotgun and kidnapped a judge to have Loughnan released. Read went on to
become one of Australia's most notorious criminals, using the name
Chopper Read. The movie
Chopper helps us get to know him better.
Escape bid: police
28 January 1978
A man who menaced a County Court judge with a shotgun on Thursday
was a "comic character Charles Chaplin would have portrayed sympathetically",
a barrister told Melbourne magistrates court yesterday.
Ironically, Charlie Chaplin was accused of being a communist and fled
the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey
in the Canton of Vaud.
... Read had planned to hold the judge hostage while Loughnan was brought
to the court and given an automatic car and a magnum pistol.
Isn't it remarkable to find the Swiss fascist party (SVP / UDC) and
Chopper Read both using the same tactics, kidnapping and blackmailing judges,
to get their way?
Suter had anticipated that moment five years prior in the introduction
of his paper:
The author explains how,
in Switzerland, openly political and other considerations are weighed in
the course of electing judges and how the appointment of lay judges
is balanced with an active role of law clerks (Greffier). In contrast,
New Zealand has a proud tradition of apolitical judicial appointments that
are made solely based on merit. The author criticises that Swiss judges are
elected for a term of office, whereas New Zealand judges
enjoy the security of tenure and thus, a greater judicial independence.
My workstations and notebooks normally run in English, which is not my native language. For currency etc. I have set the locale to de_DE. However, I want dates in ISO 8601 24-hour format, and this is not supported by default. Here is how to get it:
First you need to install the glibc locale source package.
Fedora/RHEL: dnf install glibc-locale-source
SUSE: zypper install glibc-i18ndata
Now we need to create a completely new locale, even though we will only use the LC_TIME part. You do this the following way:
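A minimal sketch of one common way to do this, assuming the en_DK locale definition (whose LC_TIME uses ISO 8601 dates and a 24-hour clock) is an acceptable basis:

# Build the en_DK UTF-8 locale from the glibc locale sources installed above
localedef -c -i en_DK -f UTF-8 en_DK.UTF-8

# Override only LC_TIME; LANG and LC_MONETARY stay as configured before
localectl set-locale LC_TIME=en_DK.UTF-8

After logging in again, applications that honour LC_TIME should show dates as YYYY-MM-DD with a 24-hour clock, while currency formatting stays on de_DE.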
As part of the 20th anniversary of Fedora-fr (and of the Fedora Project itself), Charles-Antoine Couret (Renault) and Nicolas Berrehouc (Nicosss) wanted to put some questions to French-speaking contributors to the Fedora Project and to Fedora-fr.
Thanks to the diversity of profiles, this shows how the Fedora Project works from different angles, looking at the project beyond the distribution itself and also at how it is organised and designed. Note that on certain points, some remarks also apply to other distributions.
Let's not forget that the Fedora Project remains a worldwide project and a team effort, which these interviews cannot necessarily reflect. But the French-speaking community is lucky to have enough quality contributors to give an overview of many of the distribution's sub-projects.
Today's interview is with Aurélien Bompard (nickname abompard), a developer within the Fedora Project and a Red Hat employee assigned to the Fedora Project, in particular to the infrastructure team.
Interview
Hello Aurélien, can you briefly introduce your background?
My name is Aurélien, I'm a computer scientist, and it was during engineering school that I discovered free software, through a student association. I was quickly hooked and decided to work in that field as much as possible once I graduated (in 2003).
I started with Mandrake Linux at the time, when KDE 2.0 had just come out (and kernel 2.4). Despite all of Mandrakesoft's efforts, running Linux back then was still not easy. It took me two weeks to get my sound card working (a SoundBlaster, no less!), you had to set the screen refresh rates yourself in the XFree86 config file, and I only had one computer! (and no smartphone, lol). Dual boot and its partitioning gave me a few cold sweats, and all the time spent on Linux was time cut off from the student network. You had to be a bit motivated.
But I held on, and I installed free applications on Windows too: Phoenix (now Firefox), StarOffice (now LibreOffice), etc. Because what attracted me was the philosophy of free software, not just the technical side of Linux.
In 2003 I did my end-of-studies internship at Mandrakesoft, but the company was in receivership at the time, and it did not offer attractive hiring prospects.
After a few applications I was hired at the end of summer 2003 by a free-software IT services company as a system administrator, installing Linux servers for small and medium-sized businesses.
The contract that had led to my hiring ended prematurely, and since the company also did development work around Zope/CPS, I was offered training in Python (version 2.2 at the time, if I remember correctly). I accepted and became a developer. It was around then that I left Mandrake Linux for Red Hat 9, telling myself it was more relevant to build up skills on it for work. At the time there was a small community of packagers publishing additional RPMs for Red Hat 9, around the fedora.us domain.
In early 2004, Red Hat decided there was too much confusion between their commercial offerings for businesses and their free Linux distribution, and decided to split their distribution into Red Hat Enterprise Linux on one side and a community Linux distribution on the other. They hired the founder of fedora.us, gathered the contributors and launched the Fedora Linux distribution.
The community took quite a while to form, and it was fascinating to watch it happen live. We had:
- contributors saying, "look, we've never been this free to do what we want, it's much better than before with RH9, which was pushed out of Red Hat's walls without us being able to do anything about it";
- users of the "other distros" (cough cough Debian cough cough) saying, "you're being exploited by a company, it will never be truly community-driven, Red Hat is turning into Microsoft";
- Red Hat salespeople saying, "Fedora is a beta, it's not stable, definitely don't use it in your company, buy RHEL instead";
- Red Hat communications saying, "no really, it's community-driven, I promise".
If you haven't read it and you read English, this fake IRC conversation gave a lot of people a wry laugh at the time.
Anyway, the community eventually grew significantly, Fedora Core and Fedora Extras merged, and so on. My availability to contribute to the project has varied quite a bit over the years, but I have always used Fedora.
In 2012, a position opened at Red Hat in the team that takes care of the Fedora infrastructure; I applied and was hired.
Can you briefly describe your contributions to the Fedora Project?
At first I packaged the software I had had available in Mandrake Linux but that didn't exist in Fedora Extras, along with any nice software I saw come along. I also did a lot of spec file reviews for inclusion in the distribution.
After being hired by Red Hat, I worked on HyperKitty, the archiving/viewing software for Mailman 3. Then I worked on lots of other things within the Fedora Infra team, the most recent being Fedora Messaging, Noggin/FASJSON and FMN. Today I'm the technical lead on the Fedora side of the team (as opposed to the CentOS side) and I mostly work on the applications as a developer, much less on the sysadmin side.
What made you come to Fedora, and what made you stay?
I came to it because building up skills on a Red Hat distribution seemed relevant for my job as a Linux sysadmin.
I stayed because Fedora is, for me, the perfect balance between novelty and stability, while being firmly rooted in the defence of free software, even at the cost of a few complications (mp3, the Nvidia drivers, etc.).
Why contribute to Fedora in particular?
Because I use it. I think it's a constant in my life: I rarely remain just a user/consumer, and I often end up contributing to what I use or to the associations I belong to.
Do you contribute to other free software projects? If so, which ones and how?
In my free time I have ended up developing a few pieces of software for associations I'm involved in, and it's always free software. The most recent one is Speaking List (AGPL licence).
Are your contributions to Fedora made entirely as part of your job? If not, why?
Before, no, because I packaged tools I used personally (Grisbi, Amarok, etc.). Now, yes.
Does being a Red Hat employee give you other rights or opportunities within the Fedora Project?
Yes. I'd like that not to be the case, but I'm certainly closer to the decision-making, with more direct access to influential people and to upcoming changes. It's not a "right" in the strict sense, which would be granted on the basis of my employer rather than my contributions, fortunately. But let's say I'm immersed in it all day long, and I think that opens up more opportunities than when I was an "external" contributor.
You have been a member of Fedora Infrastructure. Can you explain the importance of this team for the distribution? Which services did you maintain?
This team is absolutely indispensable. There are always problems cropping up in the distribution, things breaking down, new services to integrate, old applications that no longer work on new distributions or new services and that need to be ported, and so on.
I started on Mailman / HyperKitty but have diversified since. I'd say that today I focus on the application side: maintaining our apps, porting, adapting and evolving them, etc. The latest apps I've worked on are Fedora Messaging, Noggin/IPA (authentication), Datanommer/Datagrepper, FMN (notifications), MirrorManager, and more recently Badges.
You have notably contributed a lot to Mailman and HyperKitty for the project's mailing lists. What did you do? Was the migration difficult? How important are mailing lists today within the Fedora Project?
That was my first job when I was hired by Red Hat, yes. I did the development of HyperKitty, following the interface design work done by Mo Duffy. I also worked on Mailman 3 itself when its development was blocking me for HyperKitty or for deploying the whole thing. I wrote a migration script that worked pretty well, I think, considering the long history of the project's mailing lists. It is part of HyperKitty and will now be used for migrating the CentOS lists.
The topic of mailing lists has almost always been fairly contentious in Fedora. About fifteen years ago, before I was hired to work on HyperKitty, our community was already split between the more regular contributors who used the lists, and the occasional users and contributors who were more on web forums. Indeed, using a mailing list demands more commitment than a forum: you have to subscribe, set up filters in your mail client, manage your mail account's storage accordingly, it's impossible to reply to a message sent before you subscribed, you can't edit your messages, etc. When you just want to ask a question quickly or reply quickly to something, forums can be more practical and more intuitive. Using mailing lists can be intimidating and counter-intuitive: how many people have sent "unsubscribe" to a list while trying to unsubscribe?
The promise of HyperKitty was to offer a forum-like interface to the mailing lists, to bridge the two communities, and to make it easier to convert users into contributors while also making it easier for contributors to be exposed to the problems users run into. It didn't work out well, but it's a topic that is still relevant today with the integration of Discourse (discussion.fedoraproject.org). I believe the project is trying to migrate more and more processes from the mailing lists to Discourse, so that they reach the maximum number of users and contributors.
And also regarding the single account system within the Fedora Project (named FAS): how important is this project, and what did you do on it?
It's a project we kept on the shelf for a long time, maybe even too long. The idea was to replace FAS (Fedora Account System), a user database with a home-grown API, with FreeIPA, an integration of LDAP and Kerberos for managing user accounts in a corporate setting. We were actually already using IPA for the Kerberos part of the infrastructure, but the reference account database was FAS. However, FAS was no longer maintained, and its rewrite by a community member (a Frenchman! a little nod to Xavier in passing) was taking a bit too long. FAS was running on EL6 and its end of life was approaching.
Migrating the account database to IPA was fairly complex because many applications integrated with it, so everything had to be converted to the new system. Since IPA was originally designed for companies, there is no self-registration system or advanced self-service account management. So we had to develop that interface, which is called Noggin. We also wrote a REST API for IPA, called FASJSON. Finally, we had to customise IPA so it would store the data we needed in the LDAP directory.
I was a developer and the technical lead on this project, so I mainly focused on the design and on the tricky implementation points.
You are one of the major contributors to the Bodhi component and to the package build infrastructure in general. Again, what was your role there, and why are these components important for the project?
Bodhi is really at the heart of the life cycle of an RPM package in Fedora. It's also one of the only infrastructure applications that is significantly maintained by a community member who is not a Red Hat employee (Mattia). It lets you propose a package update, integrates with the package testing components, and lets you comment on an update.
I've only worked on it in a limited way, since the departure of Randy (bowlofeggs), who maintained it before. I converted the authentication system to OIDC, wrote the integration tests, and worked a little on continuous integration, but that's all.
Can you quickly explain the architecture behind this machinery?
Well, in short, when a packager wants to propose an update, they update the spec file in their repository, launch a package build with fedpkg in Koji, and then have to declare the update and give its details in Bodhi. That's when the package integration tests kick in, and after a certain amount of time (or a certain number of positive comments) the update lands on the mirrors.
You also worked a lot on Fedora-Hubs. Can you go back over the ambitions of that project? Why was it ultimately not adopted and realised as planned?
The goal of Fedora Hubs was to centralise information coming from different Fedora applications on a single page, with an interface that clearly explains what it means and what the next steps are. A sort of contributor dashboard, not just for packagers, somewhat in the spirit of what https://packager-dashboard.fedoraproject.org/ does today.
Unfortunately the proposal came at a time when there were other, more urgent priorities, and since it was still quite a lot of work, we dropped it to take care of the rest.
Is there any collaboration on infrastructure between the RHEL, CentOS and Fedora projects, or even with other external entities?
Yes, we try to share as much as possible! The authentication system is shared between CentOS and Fedora, for example. We try to exchange on our Ansible roles, on infrastructure monitoring, etc.
If you could change something in the Fedora distribution or in the way it works, what would it be?
I would love for there to be more contributors who also take part in the infrastructure, and in particular in our applications. To be honest, I'm currently looking for ways to motivate people to come and get their hands dirty. It's really interesting, and you can directly affect the lives of the thousands of contributors to the project! I'm even willing to put energy into it if needed, in the form of presentations, workshops, Q&A sessions, etc. So I'll put the question to everyone: if Fedora interests you, if development interests you, what is holding you back from contributing to the infrastructure applications?
Conversely, is there something you would want to keep at all costs in the distribution or in the project itself?
I think it's our ability to innovate, to offer the latest developments in free software.
What do you think of the Fedora-fr community, in terms of both its evolution and its current situation? What would you improve if you could?
To be honest, I haven't followed the evolution of the French community closely. My job leads me to communicate almost exclusively in English, so I interact more with the English-speaking community.
Anything to add?
No, nothing special, except to come back to my question: if you have wanted to improve the Fedora infrastructure and/or its applications, what held you back? What is holding you back today? Don't hesitate to contact me on Matrix (abompard@fedora.im) and on Discourse.
Thank you for your contribution!
Thank you, and happy birthday to Fedora-Fr!
Conclusion
We hope this interview has let you discover a little more about Fedora-fr.
If you have questions, or if you would like to take part in the Fedora Project or Fedora-fr, or simply use it and install it on your machine, don't hesitate to discuss it with us in the comments or on the Fedora-fr forum.
And so our interview series comes to an end. We hope you enjoyed it, and perhaps we'll see you again in a few years to find out what has changed.
Last year, I wrote a small configuration snippet for syslog-ng: FreeBSD audit source. I published it in a previous blog, and based on feedback, it is already used in production. Soon, it will also be available as part of a syslog-ng release.
As an active FreeBSD user and co-maintainer of the sysutils/syslog-ng port for FreeBSD, I am always happy to share FreeBSD-related news. Last year, we improved directory monitoring and file reading on FreeBSD and MacOS. Now, the FreeBSD audit source is already available in syslog-ng development snapshots.
If you already use the FreeBSD audit source, you only need one little change in your configuration. As the configuration snippet is now part of SCL (the syslog-ng configuration library), you do not need this part in your configuration anymore:
Development snapshots of syslog-ng are not part of FreeBSD ports, but you can compile them yourself with a little effort. Two of my blogs contain the necessary information:
Each commit to the syslog-ng git repository is tested on FreeBSD. I regularly test syslog-ng on FreeBSD when I update my ports repo. However, obviously I cannot test all possible combinations of the syslog-ng configuration, so any testing and feedback is very welcome!
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
K9s is a powerful, user-friendly terminal (CLI-based) tool that lets you manage and monitor Kubernetes clusters through a simple, fast, interactive interface. It is designed for developers, DevOps engineers and system administrators who want to view, filter and control Kubernetes resources easily, without needing complex kubectl commands. […]
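A quick sketch of typical usage; the namespace and context names here are only placeholders, and the flags are those of recent k9s releases:

# Launch against the current kubeconfig context
k9s

# Start focused on a namespace, or point it at another context
k9s -n kube-system
k9s --context staging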
Another week to recap, much of it fixing up things after the datacenter move.
Datacenter Move
The move is done, but there was a lot of fixing things up and sorting out
issues this last week. Mostly to be expected I guess, and luckily none of them
were too bad. A short list (there were many more):
Fixed email sending from bodhi. Our new openshift cluster defaults to
a subdomain (ocp.fedoraproject.org) for dns search, so needed to make sure
the smtp host was a fqdn. (It was DNS!)
Some various fixes for riscv builders.
fedora.im matrix server stopped federating. It was working fine, but
only people on fedora.im could see messages. Turned out to be an endpoint
that was still pointing to the old datacenter (the .well-known/matrix/server
uri). (It was DNS!)
Various small firewall changes to allow things.
Updates compose failures (still not completely solved, but worked around).
I think it may well be a tcp timeout between vlans.
Cleaned up our dns to remove old DC (and also the one before!).
Amazingly, nothing broke due to this that I can tell yet.
Fixed incoming @fedoraproject.org emails to flow again.
Fixed a db issue on src.fedoraproject.org that was causing forks to sometimes
not work.
pkgs.fedoraproject.org ssh host key changed and the sshfp records
were not entirely right at first. Hopefully this is sorted out now
and everyone is able to verify them (or just use https pushing)
Logins were not working on a few things (fedocal, etc). Should be fixed now
eln composes were not syncing out. Easy fix (missing a mount)
A bunch of fixes to get nagios more green.
We also shut down all the hardware in the old datacenter; the stuff we were
saving has been deracked, packed, shipped, unpacked, racked and networked.
We do need to work some more with folks there to bring things all back
on line and then it's just a matter of reinstalling them and adding them in.
Most of this is destined to be buildhw builders or openqa worker hosts.
(ie, add capacity).
Overall things are getting back to normal. Hopefully everyone else feels that too.
Upcoming things
With the datacenter move finally behind us, we should hopefully in coming weeks
be able to start working on some backlog. In particular I'd like us to look at
anubis or some other ai/scraper mitigation. So far we are handling things, but
they could be back soon... and in greater numbers.
Some other things I want to work on in the coming months (in no particular order):
revamp our backups. We are currently using rdiff-backup, which is fine, but
moving to restic or borg might give us some nice advantages.
replace our openvpn setup with wireguard
Update from using network_connections to network_state in linux-system-roles/network
Power10 reconfiguration with vHMC/lpars.
iscsi volume for power10 (this will be likely next week)
I’ve been using Fedora Linux for over a decade, and upgrading the OS is a routine process for me. I started an upgrade about 36 hours ago on my personal laptop, expecting it to be smooth—just like the past few years. However, this time, things went sideways. I encountered an error stating that there was no space left on the device. That was odd because I always double-check disk space before performing any upgrades on my laptop or servers. So, how big was this upgrade?
Unfortunately, my laptop was completely unbootable. To troubleshoot, I burned a live distro onto a USB drive, booted from it, and mounted my disk. Running df -h confirmed that the available space was indeed 0:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 237G 190G 0 100% /
The available space was 0, but mathematically, 237G – 190G should leave about 47G free. So, where did my disk space disappear?
The Root Cause: Btrfs Metadata Exhaustion
A few years ago, Fedora recommended using Btrfs as the default filesystem. I accepted the recommendation, read a bit about it, and moved on. What I didn’t realize was that Btrfs stores filesystem metadata separately from data, and df -h only reports the data partition’s space usage. I had actually run out of metadata space.
To check Btrfs usage, I ran the following command, replacing / with the correct mount path from my live USB session:
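A sketch of the usual way to get that breakdown (the exact invocation in the original post may differ; replace / with your mount point):

# Show data, metadata and system chunk allocation and usage separately
sudo btrfs filesystem usage /

# Older, more compact variant of the same report
sudo btrfs filesystem df /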
Btrfs reserves about 0.5GB for metadata changes, and when this space is exhausted, the system effectively runs out of space. The solution? I needed to balance Btrfs chunks to reclaim space.
Reclaiming Space with Btrfs Balance
Running the following command helped me reclaim space by consolidating underutilized chunks:
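A sketch of that kind of invocation, using the 5% threshold described just below and / as the example mount point:

# Rewrite data chunks that are less than 5% used into fresh, fuller chunks
sudo btrfs balance start -dusage=5 /

# Then repeat with a higher threshold, e.g. -dusage=20, then -dusage=50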
This command finds all chunks that are less than 5% full and rewrites them into new chunks, freeing up space. I incrementally increased the dusage value up to 50%, which freed about 15-20 chunks.
However, if metadata is completely full, the balance operation won’t work because it needs some free space to operate. To fix this, I created temporary space by adding a loopback device:
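A sketch of the loopback trick, with hypothetical paths; the backing file has to live on a filesystem that still has space (for example the live USB stick), not on the full Btrfs volume:

# Create a backing file elsewhere and attach it as a loop device
truncate -s 2G /run/media/liveuser/usbstick/btrfs-temp.img
LOOPDEV=$(sudo losetup -f --show /run/media/liveuser/usbstick/btrfs-temp.img)

# Temporarily add it to the full filesystem so balance has room to work
sudo btrfs device add "$LOOPDEV" /

# After the balance succeeds, remove the temporary device again
sudo btrfs device remove "$LOOPDEV" /
sudo losetup -d "$LOOPDEV"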
Once the loopback device was added, I retried the balance command, and this time, it worked perfectly.
Fixing the Broken Fedora Upgrade
With space freed up, I removed all USB drives and attempted to boot my laptop. As expected, it didn’t boot properly; instead, it showed a screen with a mouse pointer replaced by an “X.” Luckily, I was able to switch to a TTY session using Ctrl+Alt+F3.
I attempted to synchronize my system with the new release:
dnf distro-sync --releasever=41
However, this threw an error:
Problem: The operation would result in removing the following protected packages: sudo
Since my system was already broken, I decided to override the protected package list:
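A sketch of one way to do that, assuming dnf's --setopt is used to clear the protected_packages option for this single transaction:

# Clear the protected package list for just this run and retry the sync
dnf distro-sync --releasever=41 --setopt=protected_packages=

# (The persistent list lives under /etc/dnf/protected.d/ if you prefer to edit it instead.)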
Over the past few months I’ve spent some time on-and-off working on
Sigul and some related tools. In particular, I
implemented most of a new Sigul
client,
primarily to enable the sigul-pesign-bridge to run on recent Fedora releases
(since the sigul client relies on python-nss, which is not in Fedora anymore).
At this point, I have a reasonably good understanding of how Sigul works.
Originally, my plan was to completely re-implement the client, then the bridge,
and finally the server using the existing Sigul protocol, version 1.2, as
defined by the Python implementation. However, as I got more familiar with the
implementation, I felt that it would be better to use this opportunity to also
change the protocol. In this post I’m going to cover the issues I have with
the current protocol and how I’d like to address them.
In protocol version 1.2, the client and server start “outer” TLS sessions with
the bridge, and then the client starts a nested “inner” TLS session with the
server. Data is sent in chunks which indicate how big the chunk is and whether
it’s part of the “outer” session (and destined for the bridge) or the “inner”
session. While it’s perfectly doable to parse the two streams out, it’s a
complication. Maybe we can introduce some rules to make it easier?
After looking at the implementation, every command follows the same pattern. The client would:
Open a connection to the bridge and send the bridge the command to pass on to the server.
Open the inner TLS session and send some secrets to the server (a key to use
for HMAC and a key passphrase to unlock a signing key, typically).
Close the inner TLS session.
Send HMAC-signed messages to the bridge, which it relays to the server.
Receive HMAC-signed messages from the server, via the bridge.
Critically, the inner TLS session was only used to exchange secrets so the
bridge couldn’t see them, and was never used again. One option would be to only
allow the inner TLS session once, right at the beginning of the connection.
However, the whole point of the HMAC-signed messages is that the client and
server don’t seem to really “trust” the bridge won’t have tampered with the
messages. Why not just use the inner TLS session exclusively so that we get
confidentiality in addition to integrity?
Homegrown serialization
When Sigul was originally written things like JSON weren’t part of the Python
standard library. It implemented its own, limited format. Now, however, there
are a number of widely supported serialization options. JSON is the obvious
choice as it is fairly human-readable, ubiquitous, and fairly simple. The
downside is that for signing requests, the client needs to send some binary
data. However, we explicitly do not want to be sending huge files to be signed,
so base64-encoding small pieces of binary data should be acceptable.
The bridge is too smart
The bridge includes features that are not used, and that complicate the
implementation.
Fedora Account System integration
The bridge supports configuring a set of required Fedora Account System groups
the user needs to be in to make requests. It checks the user against the
account system when it connects by using the Common Name in the client
certificate as the username.
However, this feature is not used by
Fedora,
and since Fedora is probably the only deployment of this service, we probably
don’t need this feature.
The bridge alters commands
For most commands, the bridge shovels bits between the client connection and
the server connection. However, before it shovels, it parses out the requests
and responses before forwarding them. There’s really only one good reason for
this. Two particular commands may alter the client request before sending it
to the server. Those two commands are “sign-rpm” and “sign-rpms”.
In the event that the client requests a signature for an RPM or multiple RPMs
and the request doesn’t include a payload, the bridge will download the RPM
from Koji. Now, the bridge doesn’t have the HMAC keys used in other commands to
sign the request, so the client also contacts Koji and includes the RPM’s
checksum in the request headers.
This particular design choice might have been done to save a hop when
transferring large RPMs, but these days you don’t need to send the whole RPM,
just the header to be signed.
It’s confusing to have the bridge take an active role in client requests.
What’s more, if we push this responsibility to the client, there’s no reason
the bridge needs to see the requests and responses at all.
Usernames
As noted in the Fedora Account System integration section, the bridge uses the
client certificate’s Common Name to determine the username. However, all
requests also include a “user” field.
The server checks that the request username matches the client
certificate's Common Name, unless the lenient_username_check configuration
option is set (which disables that check), or unless the Common Name is listed
in the proxy_usernames configuration option, in which case that Common Name
can use whatever username it wants.
The Fedora
configuration
doesn’t define either of those configuration options, and it’s confusing to
have two places to define the username.
Passwords
Users have several types of passwords.
Users are given access to signing keys by setting a user-specific passphrase
for each key. This passphrase is used to encrypt a copy of the “real” key
password, so each user can access the key without ever knowing what the key
password is.
However, each user also has an “account password” which is only needed for
admin commands, and only works if the account is flagged as an admin. Given
that the client certificate can be password-protected it’s not clear to me that
this adds any value, but it is confusing.
Summary
To summarize, the major changes I’m considering in a new version of the Sigul protocol are:
All client-server communication happens over the nested TLS session. The
client and server no longer need to manually HMAC-sign requests and responses
as a result.
The bridge is a simple proxy that authenticates the client and server
connections via mutual TLS and then shovels bits between the two connections
without any knowledge of the content. Drop the Fedora Account System
integration, and push the responsibility of communicating with Koji to the
client.
Switch from the homegrown serialization format to JSON for requests and responses.
Rely exclusively on the client certificate’s Common Name for the username.
Remove the admin password from user accounts as they can password-protect
their client key and also use features like systemd-creds to encrypt the key
with the host’s TPM.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114921101058883523