Version 4.9.0 of syslog-ng has been available for some time. However, it is not available yet in FreeBSD ports, as there were compilation problems on FreeBSD 15-CURRENT. You can still install it using my own updated ports Makefile.
I maintain my own version of the syslog-ng port for FreeBSD: https://www.syslog-ng.com/community/b/blog/posts/installing-a-syslog-ng-4-development-snapshot-on-freebsd This is what I use for testing syslog-ng development snapshots on FreeBSD. Version 4.9.0 was tested on FreeBSD 13, 14 and 15. Syslog-ng 4.9.0 introduced support for inotify on Linux. FreeBSD 15 introduced kernel level inotify support as well. I could not reproduce the problem myself, but under some circumstances, compiling syslog-ng 4.9.0 on FreeBSD 15 fails at an inotify-related part.
Before you begin
To compile syslog-ng 4.9.0 on FreeBSD, you need:
an up-to-date ports tree
syslog-ng dependencies installed (you can speed it up if you install them from packages instead of compiling them from ports)
git
my git repository
Compiling syslog-ng
Change to a directory that does not have a subdirectory called “freebsd”, as this is where git will download the ports I created.
This repo is regularly updated, so when you check it out, the latest revision points to a syslog-ng development snapshot beyond version 4.9.0. You can check the git history once you have changed to the “freebsd” directory:
git log
To make your life easier, I looked up the right commit for you. You can change to the state where the repository contained FreeBSD ports for the syslog-ng 4.9.0 release with the following command:
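(The commit hash below is only a placeholder for illustration; look up the real commit for the 4.9.0 ports with git log.)
git checkout <commit-of-the-4.9.0-ports>    # placeholder hash, not the actual commit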
Now change to the “syslog-ng4-devel” directory, where I maintain the ports Makefiles for syslog-ng development snapshots. Once you have checked out the above commit, this directory contains the Makefile for syslog-ng 4.9.0. You can now follow the regular ports workflow:
make config
make install clean
Note that if syslog-ng is already installed on the host, remove it before compiling the new version, otherwise strange things might happen.
Testing
You can now (re)start syslog-ng:
service syslog-ng restart
Then check /var/log/messages for new log messages, where you should see a message similar to this one among the other logs:
Aug 26 10:25:31 fb132 syslog-ng[75761]: syslog-ng starting up; version='4.9.0'
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Hello testers,
this page explains what the Enterprise subscription is and how it brings more
value beyond the Kiwi TCMS application itself.
Please read about the details below.
What is Kiwi TCMS Enterprise
This is a distribution of the Kiwi TCMS application with multiple add-ons
suitable for enterprise customers. Popular examples include various login
backends (e.g. LDAP) and a multi-tenancy architecture. See the
features page for more details.
Who is this subscription for
Kiwi TCMS Enterprise is suitable for medium to large organizations which would
like to have full control over their Kiwi TCMS instance and most likely integrate
it with existing 3rd party systems.
These are typically organizations with hundreds or thousands of testers, with a strong
internal DevOps team and sizable existing infrastructure.
What do you get
Everything from lower tier subscription plans!
Most notably you get access to all versions of community & enterprise containers
as well as a SaaS namespace plus extended support.
The SaaS namespace can be used as a sandbox for in-house development and
backwards compatibility testing against the latest version of Kiwi TCMS before
upgrading your internal production instance!
What do all of the individual items mean
On-premises inside your VPN: your DevOps team is responsible for provisioning Kiwi TCMS
and keeping it up and running. Typically this would happen inside your company's VPN and would
be accessible only to employees.
It is an on-premises installation which you have full control over.
NOTICE: This subscription price is per instance! For multiple installations of Kiwi TCMS
please adjust the quantity of items on your subscription!
Tagged releases and multi-arch builds: as part of the Kiwi TCMS Enterprise subscription
you get access to private container repositories with
version tagged builds for aarch64 and x86_64 CPU architectures. This gives you more hosting options and makes
it easier to upgrade to new versions when you decide to do so.
Unlimited tenants: a Kiwi TCMS Enterprise container is multi-tenant, meaning that
you can organize testing work into multiple namespaces. For example product-1.tcms.example.com,
team-2.tcms.example.com, abcd.tcms.example.com and so on. There is no limit on the number
of tenants you can have, nor the number of users who can have access to each tenant. That is
entirely up to your teams to decide.
Full Admin panel access: means that as part of a Kiwi TCMS Enterprise installation you would
designate one or more user accounts as your Kiwi TCMS administrators. They will have full access
to the built-in admin pages of the application and be able to perform actions such as creating or disabling
other accounts, assigning permissions, creating tenants, etc.
Kiwi TCMS refers to such accounts as super-user and the first created account receives super-user
status by default.
NOTICE: super-user / admin accounts are meant for day-to-day operations via the web interface while
a DevOps team will have terminal access to the Kiwi TCMS application for routine maintenance or
one-off tasks. In many cases these roles are performed by the same engineer but don't need to be.
Based on Red Hat Enterprise Linux: with a Kiwi TCMS Enterprise subscription you receive
an explicit guarantee that the underlying container image is built on top of Red Hat Enterprise Linux.
There is no such guarantee for other container images produced by the Kiwi TCMS team!
Multiple add-ons: a Kiwi TCMS Enterprise container image ships more functionality than the
community edition of Kiwi TCMS. Our focus is to provide better operational experience for seasoned
IT teams and make it easier to integrate Kiwi TCMS with existing infrastructure and 3rd party systems.
Some examples include:
OAuth (e.g. GitHub, GitLab) login, LDAP or Kerberos
Metrics and error logging
Let's Encrypt SSL certificates
Flexible NGINX configuration
Amazon SES integration
Various storage backends
Additional integration with less popular bug-tracking systems
This list changes over time; however, we are fully committed to everything being open source!
Extended support: as part of Kiwi TCMS Enterprise subscription you get more support coverage,
08-20 UTC/Mon-Fri with a guaranteed response within 24 hrs. This is 6 hours extra coverage
compared to lower tier subscription plans.
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
Whenever I present syslog-ng at a conference or stand at a booth, people often ask me why they should use syslog-ng instead of one of its competitors. So let me summarize what the users and developers of syslog-ng typically consider to be its most important values.
Documentation
Yes, I know, this is not syslog-ng itself. However, talking to some of our most active and loyal users, one common piece of feedback was that they had chosen syslog-ng because of the quality of its documentation. Syslog-ng has always had very detailed and (usually) up-to-date documentation. Unfortunately, there was a period when our documentation fell victim to resource shortages. However, as soon as those shortages were taken care of, bringing our documentation back up to date went to the top of our list.
Enterprise software
“Enterprise” is quite an overused word and I know plenty of people who stop reading anything when this word appears. So, I am open to suggestions on what to use instead of it… :-) However, right now, “enterprise” is what best describes our approach in a single word. And what does “enterprise” software mean for us? Well, continuous development while maintaining stability and compatibility as much as possible in all aspects of syslog-ng. Namely, in its configuration, platform support and features.
Configuration
When it comes to syslog-ng configuration, let me use yet another overused expression: “evolution, instead of revolution”. The syslog-ng project started in 1998, so it is over 25 years old now. But no matter when you started using syslog-ng, you can still use your knowledge of it from many years ago. Of course, this does not mean that syslog-ng stayed completely the same, as its configuration has been extended in many ways over the years. However, its basic structure has not changed. If there were any important and / or incompatible changes, the version string at the beginning of the configuration allows syslog-ng to provide you with a meaningful list of changes and recommendations affecting your config.
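For illustration (this is my own minimal sketch, not a configuration from any particular release), the version string is the first line of the configuration:
@version: 4.9
# collect local system and syslog-ng internal messages
source s_local { system(); internal(); };
destination d_messages { file("/var/log/messages"); };
log { source(s_local); destination(d_messages); };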
Platform support
One of the original goals of syslog-ng was to support a wide variety of platforms. Because of this, all major BSD, Linux and UNIX platforms of their times were supported, except probably for SGI. Of course, the list of supported platforms has changed throughout the years: some of the platforms (like HP/UX or DEC) have disappeared, while others (like Solaris) are on life support only, due to a lack of interest. At the same time, new platforms have also appeared, like MacOS or AIX. Syslog-ng works on these operating systems running on x86, ARM, POWER, MIPS, s390, or RISC-V architectures (and also on architectures I have never even heard about before…).
Testing all variants before each git commit would, of course, be quite an overhead. Because of this, the code is tested “only” on a smaller subset of platforms: namely, on Linux, FreeBSD and MacOS on x86 and ARM architectures. Besides, I also regularly run tests on some 32-bit systems and POWER as well.
Features
As you could guess by now, we take every precaution to make sure that new features or bug fixes do not have any unfavorable side effects on the rest of syslog-ng. Thousands of automatic test cases help us to make sure of that. And while most of us use one of the latest desktop Linux distributions for development, we are also aware that most of our users run syslog-ng on old enterprise Linux distributions. While RHEL 10 is already available, many banks or HPC clusters are still running on ancient distros, like RHEL 8, or just introducing RHEL 9. From private discussions, I am aware that some of these still have many machines on RHEL 7 and run the latest syslog-ng 4.x version compiled in-house… :-)
I already mentioned platform support. Supporting old (but not end-of-life) operating systems is also important for our users. It is a major PITA sometimes, but users expect most syslog-ng features to be available on all platforms and operating system versions. That said, when a feature is based on a relatively new Linux kernel feature, then ancient distros or other operating systems are unfortunately out of luck.
What is next?
In short, syslog-ng is used in a variety of environments: large sites with a mature infrastructure, smaller sites, or startups as well. And no matter what, we keep developing in an “enterprise” style to ensure that:
There are no half-baked new features.
We support a wide variety of platforms.
We support the majority of syslog-ng features also on older operating system versions.
This ensures that even when your organization matures, you can still keep using syslog-ng.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Docker is a popular tool for managing containerized software that provides portability, scalability and efficiency across different environments. In this post we are going to install Docker on Rocky Linux 10. Step 1 – Adding the official Docker repository: to install the latest version of Docker, add its official repository to the system […]
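The excerpt above is truncated; as a hedged sketch of that first step (the CentOS repo file is commonly used for Rocky Linux, and the exact dnf syntax may differ slightly on newer dnf versions), it usually looks like this:
sudo dnf -y install dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install docker-ce docker-ce-cli containerd.io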
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 18 Aug – 22 Aug 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Hello testers, this page explains what the Private Tenant subscription is and
how it brings more value beyond the Kiwi TCMS application itself.
Please read about the details below.
What is Private Tenant by Kiwi TCMS
This is our most popular subscription tier which combines SaaS hosting and
additional support services. It is more about ease of use, allowing your
QA team to focus on testing rather than on specific software features.
Who is this subscription for
Private Tenant is suitable for medium-sized teams that consider
their Kiwi TCMS instance to be an important piece of infrastructure but
not necessarily mission critical.
These are organizations with fewer than 100 testers
which would like to focus their limited resources towards their own
products and are happy to consume 3rd party software as a service.
Most typically these are start-up companies and medium sized businesses.
What do you get
Everything from lower tier subscription plans!
Most notably you get access to all versions of community releases and a
dedicated SaaS namespace as your main Kiwi TCMS instance plus extra
product features and technical support.
What do all of the individual items mean
1x SaaS hosting: you will receive the opportunity to create your own namespace
under the tenant.kiwitcms.org domain name, for example https://acme-inc.tenant.kiwitcms.org.
IMPORTANT: A Private Tenant subscription entitles you to a single namespace and the email associated
with the purchase is referred to as tenant owner.
If you need more tenants you should purchase additional subscriptions under different email addresses!
Unlimited users: means exactly that - you may assign an arbitrary number of users to have access
to the information kept under your tenant. With Private Tenant subscription there are no per-seat
fees - it is a flat rate price!
Control of authorized users: means that you can control who has access to your data and adjust
their permissions level when necessary. You can have your QA team and your partners collaborate in a
single Kiwi TCMS tenant if you decide to do so.
With a Private Tenant subscription user accounts are actually kept under public.tenant.kiwitcms.org,
which is also referred to as the main tenant or the public tenant while access control is granted on
a per-tenant basis via the web interface by the tenant owner.
NOTICE: the name "public tenant" is a misnomer, which comes from one of the underlying programming
libraries used inside Kiwi TCMS. In reality no data is publicly visible, including user accounts.
No User account admin: a stand-alone installation of Kiwi TCMS allows the first registered account,
named super-user, to have access to all others. This isn't the case with a Private Tenant subscription.
You may invite/authorize users and groups for your own tenant and manage access permissions however
you cannot create, remove or disable user accounts. You cannot view the list of all other accounts
which exist in the database.
Extra functionality: a Private Tenant subscription is actually built on top of a
Kiwi TCMS Enterprise container image which means some extra features and add-ons are available
to you as part of this subscription. Whenever possible add-on features are also accessible to
Private Tenant subscribers. A notable exception are different login methods which cannot be
enabled nor configured on a per-tenant basis!
Always the latest version: a SaaS tenant always runs the latest version and is automatically
upgraded by the Kiwi TCMS team. While we try our best not to introduce bugs and backwards-incompatible
changes, it is possible that test automation scripts and/or 3rd party integration code may break
sometimes.
IMPORTANT: With a Private Tenant subscription the responsibility of testing all of your integrations against
the latest version of Kiwi TCMS falls onto the customer.
Backup & disaster recovery: the Kiwi TCMS team takes care to back up our SaaS cluster regularly
in case a catastrophic failure occurs - this is designed for disaster recovery purposes.
While the Kiwi TCMS application itself keeps track of changes and removals of certain objects, it may
not always be possible to restore a deleted entry. You may be able to create new records with
the same information; however, if a record cannot be restored via the Kiwi TCMS web interface, we cannot help you.
Due to technical limitations we cannot restore individual records from a Private Tenant.
Data access via web & API: keeping in mind that a Private Tenant namespace is allocated
on a cluster with hundreds of others your only access to data is via web and API.
Due to technical and security limitations we cannot give you access to the underlying database nor
the raw backup files. You are free to export your data as frequently as you wish though!
Dedicated technical support: as a Private Tenant customer you receive full technical support
from the Kiwi TCMS team spanning all components related to the Kiwi TCMS application. Working hours
and response times are listed on our main page!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
Hello testers,
this page explains what the Self Support subscription is
and how it brings a bit more value beyond the Kiwi TCMS application itself.
You can read about the details below.
What is Self Support by Kiwi TCMS
This is our lowest tier of support services, where the majority of the work
falls onto the customer to host and run the Kiwi TCMS application.
It is an entry-level subscription which provides the basis for
all other subscriptions.
Who is this subscription for
Self Support is suitable for very small teams and individual engineers
where their Kiwi TCMS instance is not a mission critical piece of infrastructure -
perhaps an instance which is rarely used and isn't available 24/7, a development target
or an initial proof of concept installation.
These are typically organizations with fewer than 10 testers
which have no particular requirements other than being able to record and inspect
test execution results.
What do you get
This subscription entitles you to a few more items on top of the regular
community edition version of Kiwi TCMS. Please read the details below.
What do all of the individual items mean
Self-hosted on your server: you will be responsible for installing, configuring
and managing your Kiwi TCMS instance. Backups, upgrades, etc. are entirely a
responsibility of your own.
NOTICE: We kindly ask you to adjust the quantity of items on the subscription in case you are running
multiple instances of Kiwi TCMS!
Tagged releases and multi-arch builds: the regular community edition of Kiwi TCMS provides
only the latest version for the x86_64 processor architecture.
As part of the Self Support subscription you get access to a private container repository
called kiwitcms/version with version tagged builds
for aarch64 and x86_64 CPU architectures. This gives you more hosting options and makes
it easier to upgrade to new versions when you decide to do so.
Limited support: Self Support customers do not receive technical support. Help from the
Kiwi TCMS team is limited to account and billing issues and response times are not guaranteed!
Price freeze: subscription price is usually updated once a year to account for inflation
and any additional services which may be provided under a particular subscription tier.
Customers with an active subscription are protected from these changes - price stays fixed
as long as your subscription is active! You will get full access to any new services as soon as
they are implemented.
No ads: the regular community edition of Kiwi TCMS comes with built-in advertisements
from EthicalAds, the rewards from which are paid out to
opencollective.com/kiwitcms for transparency.
Container images from the Self Support subscription remove those ads and make the web
interface cleaner.
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
As an addendum to yesterday's note: I've got
the Actalis issuer to work with cert-manager.
cert-manager expects some credentials to be in so-called base64url encoding, which
is stated in a note in the
documentation. The fix was easy: I had to remove the = padding from the provided HMAC keys. The docs have a
sed invocation to use.
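For illustration only (not necessarily the exact invocation from the cert-manager docs), stripping the trailing padding looks like this:
# remove the trailing '=' padding from the HMAC key before creating the secret
echo "$HMAC_KEY" | sed 's/=*$//'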
If you have spent any time around HID devices under Linux (for example if you
are an avid mouse, touchpad or keyboard user) then you may have noticed that
your single physical device actually shows up as multiple device nodes (for
free! and nothing happens for free these days!).
If you haven't noticed this, run libinput record and you may be
part of the lucky roughly 50% who get free extra event nodes.
The pattern is always the same. Assuming you have a device named
FooBar ExceptionalDog 2000 AI[1] what you will see are multiple devices
/dev/input/event0: FooBar ExceptionalDog 2000 AI Mouse
/dev/input/event1: FooBar ExceptionalDog 2000 AI Keyboard
/dev/input/event2: FooBar ExceptionalDog 2000 AI Consumer Control
The Mouse/Keyboard/Consumer Control/... suffixes are a quirk of the kernel's
HID implementation which splits out a device based on the Application Collection. [2]
A HID report descriptor
may use collections to group things together. A "Physical Collection" indicates
"these things are (on) the same physical thingy". A "Logical Collection" indicates
"these things belong together". And you can of course nest these things
near-indefinitely so e.g. a logical collection inside a physical collection is
a common thing.
An "Application Collection" is a high-level abstractions to group something together
so it can be detected by software. The "something" is defined by the HID usage for this
collection. For example, you'll never guess what this device might be based on the
hid-recorder output:
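(The original post embeds the actual hid-recorder output here; as an illustrative stand-in, the relevant lines of such a recording typically look something like the following.)
# 0x05, 0x01,    // Usage Page (Generic Desktop)
# 0x09, 0x06,    // Usage (Keyboard)
# 0xa1, 0x01,    // Collection (Application)
#  ...
# 0xc0,          // End Collection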
Yep, it's a keyboard. Pop the champagne[3] and hooray, you deserve it.
The kernel, ever eager to help, takes top-level application collections (i.e.
those not inside another collection) and applies a usage-specific suffix to the
device. For the above Generic Desktop/Keyboard usage you get "Keyboard", the other
ones currently supported are "Keypad" and "Mouse" as well as the slightly more
niche "System Control", "Consumer Control" and "Wireless Radio Control" and
"System Multi Axis". In the Digitizer usage page we have "Stylus", "Pen",
"Touchscreen" and "Touchpad". Any other Application Collection is
currently unsuffixed (though see [2] again, e.g. the hid-uclogic driver uses
"Touch Strip" and other suffixes).
This suffix is necessary because the kernel also splits out the data sent
within each collection as a separate evdev event node. Since HID is (mostly)
hidden from userspace this makes it much easier for userspace to identify
different devices because you can look at an event node and say "well, it has
buttons and x/y, so must be a mouse" (this is exactly what udev does when applying
the various ID_INPUT properties, with varying
levels of success).
The side effect of this however is that your device may show up as multiple
devices and most of those extra devices will never send events. Sometimes
that is due to the device supporting multiple modes (e.g. a touchpad may by default emulate
a mouse for backwards compatibility but once the kernel toggles it to touchpad
mode the mouse feature is mute). Sometimes it's just laziness when vendors re-use
the same firmware and leave unused bits in place.
It's largely a cosmetic problem only, e.g. libinput treats every event
node as an individual device, and if there is a device that never sends events it
won't affect the other event nodes. It can cause user confusion though: "why
does my laptop say there's a mouse?" and in some cases it can cause functional
degradation - the two I can immediately recall are udev detecting the mouse
node of a touchpad as pointing stick (because i2c mice aren't a thing), hence
the pointing stick configuration may show up in unexpected places. And fake
mouse devices prevent features like "disable touchpad if a mouse is plugged in"
from working correctly. At the moment we don't have a good solution for detecting
these fake devices - short of shipping giant databases with product-specific entries
we cannot easily detect which device is fake. After all, a Keyboard node on a
gaming mouse may only send events if the user configured the firmware to send
keyboard events, and the same is true for a Mouse node on a gaming keyboard.
So for now, the only solution to those is a per-user
udev rule to ignore a device. If we ever
figure out a better fix, expect to find a gloating blog post in this very space.
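For reference, such a rule is a sketch along these lines (using the post's fictional device name and libinput's LIBINPUT_IGNORE_DEVICE udev property):
# /etc/udev/rules.d/99-ignore-fake-mouse.rules
ACTION!="remove", KERNEL=="event*", ATTRS{name}=="FooBar ExceptionalDog 2000 AI Mouse", ENV{LIBINPUT_IGNORE_DEVICE}="1"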
[1] input device naming is typically bonkers, so I'm just sticking with precedent here
[2] if there's a custom kernel driver this may not apply and there are quirks to change this so this isn't true for all devices
[3] or sparkling wine, let's not be regionist here
There are quite a few ACME providers. Some even look
like they could replace BuyPass, which had two strong traits: it is based in Europe and provided certificates
valid for half a year. It looked like Actalis would be a good replacement.
They're from Italy and have 1-year certificates, but only in paid plans.
After some tinkering with cert-manager I was unable to make it work. Some cryptic, discouraging messages
like "ACME server URL host and ACME private key registration host differ. Re-checking ACME account registration"
and "failed to verify ACME account" err="failed to decode external account binding key data: illegal base64 data at input byte 43" made me look further.
Next shot, ZeroSSL worked straight away. Worth noting –
official cert-manager documentation has a tutorial on using ZeroSSL. There are some limitations, but that's
fine by me. There's nothing more to write, it just works.
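For illustration, a cert-manager ClusterIssuer for ZeroSSL with external account binding looks roughly like this (a sketch with placeholder names and solver; the official tutorial is authoritative):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: zerossl
spec:
  acme:
    server: https://acme.zerossl.com/v2/DV90
    email: you@example.com              # placeholder
    privateKeySecretRef:
      name: zerossl-account-key
    externalAccountBinding:
      keyID: your-eab-kid               # placeholder, from the ZeroSSL dashboard
      keySecretRef:
        name: zerossl-eab               # secret holding the EAB HMAC key
        key: secret
    solvers:
      - http01:
          ingress:
            class: nginx                # placeholder, depends on your setup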
For private services (meant to be accessed only from my devices), I'm using
FreeIPA as an ACME provider, of course.
Why not Let's Encrypt? Only because it is not hipster enough.
This is post 001/100 of the 100DaysToOffload challenge. I intend to write
short posts about nothing in particular, just collected thoughts. Language will vary: Polish, English,
maybe Arabic if I get back to learning it.
I am reviewing the options for a logic analyzer with good Linux support, and the U3Pro32 from DreamSourceLab is one candidate. Its management software, DSView, is open source, based on PulseView from the Sigrok project, and supports Linux. So I went ahead and made it available for Fedora in a Copr repository.
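Installation then follows the usual Copr workflow (the owner/project name below is a placeholder; check Copr for the actual repository):
sudo dnf copr enable <owner>/dsview    # placeholder repository name
sudo dnf install dsview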
The JIT-enabled Firefox 128.14.0 ESR has been built for Fedora/ppc64le in my Talos COPR repository. The corresponding sources are in my fork of the official Fedora Firefox package and the JIT is coming from Cameron's work.
PS: Fedora 41 and Rawhide builds are still in progress, more in copr
A new Debian crisis emerged on Sunday of the Debian Day weekend
with an email from
Phil Wyett (kathenas). He chose the subject line
Ending Debian Contribution. The snobby people insulted him and
he quit.
The email only received one reply. Nobody thanked Phil for his
enormous work over an extended period of years. Once again, they chewed
somebody up, spat him out and abandoned him at the side of the road.
For nm.debian.org, at 2023-07-24:
I support Philip Wyett's request to become a Debian Developer, uploading.
I have worked with Philip Wyett on X and Y
for 2 years, and I consider them as having sufficient technical competence.
I don't usually advocate when I "just" sponsor less than 20 packages for a person.
I decided to grant him an exception, because:
1) he has strong motivation in keeping filezilla/libfilezilla in a good shape in Debian.
2) he is fast in acting and answering questions/emails
3) he knows how ABI/API works, and he has "transitioned" the two packages successfully since
he started working on them (it's a 2-package transition, but he knows his stuff!)
4) libfilezilla bumps ABI on each release, so being DM for him is probably painful.
5) he did work on other packages such as pingus, rednotebook.
6) I think having them as a DD is of great value for the Debian project.
I have personally worked with Philip Wyett
(key 70A0AC45AC779EFE84F63AED724AA9B52F024C8B) for 2 years, and I know Philip Wyett
can be trusted to be a full member of Debian, and have unsupervised, unrestricted upload rights, right now.
Addendum:
For nm.debian.org, at 2024-01-22:
After the closing of the previous application, I asked Philip to do *many* Debian related things.
It turned out that the application was a little bit premature, Debian is not just
about having technical skills (and he strongly has them), but
also about helping newcomers, understanding how the community is built and how to interact with each
other without losing the objective, which has to be to bring an OS to end users.
For this reason, after sponsoring a ton of packages reviewed by him on mentors.debian.org, and giving
them DM rights for the packages he maintains, I'm more confident about his ability to become
a full unsupervised DD.
His Debian wiki profile, now deleted, included the comments:
Most of the money you see in and around Free / Open Source Software, Hardware and other projects is in the hands of large groups and self-styled organisations, with little or none reaching the actual contributors/maintainers/developers of your favourite projects who actually do the core work.
Consider donating to projects and persons directly.
That must have annoyed the people who spend money without doing
any work.
Quoting Phil's email from 17 August 2025:
In my opinion I have been messed about for over a year, discriminated against by
one or more of the Debian Application Managers who overlooked my DD application
for over 7 months whilst moving forward with other applications that were made
after my own, one done in 10 days.
The Debian Project Leader (DPL),
Andreas Tille, obtained a PhD in Physics
and works at the Robert Koch-Institute in Wernigerode, Germany.
Surely he understands that in any publication, whether it is a scientific
research paper or a piece of software, the names of all the authors/developers
have to be listed with an equal status. Not including somebody's name
is a close cousin of plagiarism. I've raised concerns about
plagiarism in Debian before.
My next communication was from the DAMs closing my application and stating I
could never apply again. The DPL has agreed with this decision.
If the Debian Account Managers were never going to respect his copyright
status anyway, why did they encourage him to keep doing all this work over
the years?
Please write a list of 5 Debian Developers you would like to kick out of the project.
Second, in 2018,
Zini went to
DebConf18 and gave a talk called
"Multiple people" where he talks about having relationships with other
men.
During the last year, I have been thinking passionately about things such as diversity, gender identity, sexual orientation, neurodiversity, and preserving identity in a group.
The last phrase, "preserving identity in a group", reveals a lot.
Zini and other members of the group are screening Debian collaborators
based on a very distorted worldview that only seems to tolerate the people
they would be willing to sleep with.
Phil Wyett, being a former soldier and engineer, may have
given the impression that he is not going to be an easy target
for the
social engineering culture and
Code of Conduct gaslighting that has infected Debian in
recent times.
Earlier this year there was significant publicity about the
Zizian group. It is interesting to note that one of the Zizian
victims was a border guard while Phil Wyett is a former soldier from
the British Army's Royal Engineers.
US Border Patrol Agent David Maland, a US Air Force veteran,
is pictured with a service dog. Phil Wyett, a British Army vet, is pictured
with a regular house cat. Apart from the cat, they served their
respective communities in a multitude of different ways.
Another
Zizian victim was their landlord, Curtis Lind. The trial
is ongoing. Mr Lind, pictured with a horse, lost one eye:
People who spend time developing software have rights under
copyright law just as landlords have rights under real estate law.
The pattern of vicious attacks on authors, developers and landlords
all suggest a significant disregard for the law and a sense of entitlement
from people who operate in a pack, like dogs, barking at the rest of us.
Hi, I am Mayank Singh, welcome back to this blog series on the progress of the new package submission prototype, if you aren’t familiar with the project, feel free to check out the previous blogpost here.
Extending Forgejo support
As Forgejo is expected to become the git forge for Fedora in the future, and based on community discussions, I thought it would be great to support Forgejo in this project.
packit-service supports both GitLab and GitHub thanks to the OGR library, which provides a unified interface for interacting with git forges. The library has good support for GitHub and GitLab features, but the Forgejo integration is still not quite on par with them: a few important functions, like adding comments and reactions, are still missing. I worked on adding support for the necessary parts so that packit-service can work with Forgejo. There are a few bugs related to serialization in the code, which I plan to fix this week.
New Package Builds from New Pull Requests
I’ve added a simple handler to read from a new Pull Request and its description to parse packit commands. This allows users to trigger a build for a new package without needing a packit configuration file in the repository.
For example, adding this line to the PR description will trigger a new build for this particular package name.
/packit copr-build package_name: <libexample>
The system dynamically constructs a minimal package config and feeds it into Packit’s internal build logic. Test jobs are expected to work as well, although this is yet to be fully verified.
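As a rough sketch of what such a dynamically constructed config could contain (package name and targets are placeholders, not the prototype's actual output):
# hypothetical minimal package config built on the fly for the PR
specfile_path: libexample.spec
upstream_package_name: libexample
downstream_package_name: libexample
jobs:
  - job: copr_build
    trigger: pull_request
    targets:
      - fedora-rawhide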
The current implementation is a quick prototype that reuses as much of the existing packit-service logic as possible. There’s still more work to be done around building the full package context. My mentor has also shared feedback suggesting a cleaner approach, which I’ll be iterating on in the next phase.
What’s Next?
Fix remaining serialization bugs.
Build and test the Testing Farm integration for running test jobs on the built packages.
Improve the source handling logic based on recent feedback.
Thanks for reading and stay tuned for more next week.
This week in the project involved bug fixes, integration cleanup and small improvements.
Bug Fixes and Workarounds
A key function required for the Forgejo integration, used to fetch files from repositories, was broken in the current Fedora 42 release due to an upstream bug in Forgejo’s OpenAPI spec. While the bug is already fixed in newer versions, it couldn’t be backported cleanly due to dependency constraints.
To move forward, I upgraded the service to run on Fedora Rawhide. However, Rawhide had its own quirks: the celery package was broken. To work around this, I installed celery directly from PyPI instead, which resolved the issue for now.
There were also several issues related to how data was being passed between the celery tasks. In particular, raw comment objects from Forgejo events were being passed around, which are not JSON-serializable and caused failures. With help from my mentor, I added fixes to the logic so that the problematic object is no longer included in the payload.
Improvements and features
Enabled the `fedora-review` tool for all COPR builds in the codebase by monkey-patching the API call that enables fedora-review, which is not otherwise available directly from the packit API. This allows us to get a list of tasks and requirements for Fedora packaging compliance for the corresponding build of the package.
As of now, COPR builds and testing of the builds work. I started working on status reporting; it’s still rough. I also attempted to get comments working, but I’ll need to add more support in the OGR library to implement commit statuses properly.
As part of my Outreachy internship with the Fedora Project, I’m building an API to modernize how Fedora plans its release cycles.
With the help of my mentor Tomáš Hrčka, the goal is to replace the XML-heavy system currently on pagure.io with something flexible, easy to use, and well-structured.
These changes already make onboarding contributors easier and improve testability.
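To give a flavour of the direction (this is my own minimal sketch, not code from the project; names and data are hypothetical), a FastAPI endpoint serving schedule data could look like this:
from datetime import date
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Milestone(BaseModel):
    name: str
    due: date

@app.get("/releases/{release}/milestones", response_model=list[Milestone])
def milestones(release: str) -> list[Milestone]:
    # placeholder data; the real service would query a database
    return [Milestone(name="Mass branching", due=date(2025, 8, 26))]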
What’s Next
I’m excited about these upcoming milestones:
Refining and aligning tests with the FastAPI structure.
Integrating with Fedora infrastructure for live data.
Strengthening the deployment pipeline for production.
Challenges
My biggest challenge and opportunity is simultaneously learning new backend technologies like FastAPI and OpenID Connect for authorization, along with techniques to improve developer onboarding. Though the learning curve is steep, my mentor’s continuous guidance on the Fedora infrastructure, career development and general advice makes it manageable.
Reflections
This internship has been an incredible learning experience. I’m gaining hands-on exposure to backend architecture, continuous integration practices, and open-source collaboration. More importantly, the chance to build something lasting for Fedora makes the work genuinely rewarding.
There’s a lot left to tackle, and I’m looking forward to pushing it further.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and as base packages.
RPMs of PHP version 8.4.12RC1 are available
as base packages in the remi-modular-test repository for Fedora 41-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.3.25RC1 are available
as base packages in the remi-modular-test repository for Fedora 41-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.
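As a rough sketch (not the official instructions from this announcement), testing the 8.4 RC on a system already set up for the remi repositories might look like this:
# base packages, with the test repository enabled for this transaction
dnf --enablerepo=remi-modular-test upgrade 'php*'
# or the parallel-installable Software Collection from the test repository
dnf --enablerepo=remi-test install php84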
Note to future self about something that I’ve been doing completely inefficiently in the past: getting the destination IP address of a redirected packet from iptables/Linux in an official manner!
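The official manner is presumably the SO_ORIGINAL_DST socket option that conntrack fills in for iptables REDIRECT/DNAT rules; here is a minimal Python sketch (my illustration, not the author's code):
import socket
import struct

SO_ORIGINAL_DST = 80  # from <linux/netfilter_ipv4.h>

def original_destination(conn: socket.socket) -> tuple[str, int]:
    # getsockopt returns a struct sockaddr_in: family, port (network order), IPv4 address
    raw = conn.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
    port = struct.unpack("!H", raw[2:4])[0]
    addr = socket.inet_ntoa(raw[4:8])
    return addr, port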
Donald Trump a peut-être mis un terme à la diversité, mais
la fête de Marie demeure sacrée. Les Lyonnais ont bravé la canicule pour
porter la statue de Marie sur la colline de Fourvière.
Donald Trump may have stopped diversity in its tracks but Mary's day
remains sacred. The citizens of Lyon, France, braved a heat wave (canicule)
to carry Mary's statue up the hill to the Fourvière.
This is a continuation of this original post in exploring the modifications that can be made to the OpenVPN source code to increase its overall performance: [post]
I’m still exploring how I can make this perform better and optimize the code more, but I was finally able to build on top of the bulk-mode changes I had made in the last post and make a multi-threaded server and client model work together. It was tough to do because of the complexity of the code, as well as the need to avoid interfering with the TLS connection state variables and memory along the way.
I was able to make the code spin off 4 threads which all share a common TUN interface for bulk-reads and then create 4 separate TCP connections that each perform a large bulk-transfer. The server will load balance the connections from the client across the threads based on the connecting IP address. I am also running 4 VPN processes with 4 TUN devices and using IP routing next-hop weights to load-balance the traffic between them.
Update: I just implemented an extra management thread that is dedicated to reading from the shared TUN device and bulk filling the context buffers so that they can all run and process the data in parallel to each other now in a non-locking fashion (6 x 1500 x 4 == 36,000 bytes)!
Config Tips (a combined example follows the list):
Ensure that your VPS WAN interface has a 1500 MTU (my provider was setting it to 9000)
Perform some basic sysctl network/socket/packet memory/buffer/queue size tuning (16777216)
Set the TUN MTU == 1750 && TX QUEUE == 9750 (oversized middle pipe link)
Push && pull the snd ++ rcv buffer sizes from the server config to the client options (16777216)
Use elliptic curve keys and stream cipher crypto (more efficient algos for the CPU)
Write a firewall script to reject-reset forwarded TCP connections when the VPN tunnel is down (help reduce any resuming connection retransmission performance issues)
No more need for compression, fragmentation, or MSS clamping (--mssfix 0)
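A combined example of the tips above might look like this (a sketch using the values quoted in the list, not a complete configuration):
# sysctl tuning
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216

# OpenVPN server directives
tun-mtu 1750
txqueuelen 9750
sndbuf 16777216
rcvbuf 16777216
push "sndbuf 16777216"
push "rcvbuf 16777216"
mssfix 0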
This is the 133rd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
NEWS
Deprecating Java-based drivers from syslog-ng: Is HDFS next?
While most Java-based drivers were deprecated in syslog-ng years ago, we have recently removed all of the deprecated ones in preparation for syslog-ng 4.9.0. Right now, the only Java-based driver remaining is HDFS, so we want to ask the syslog-ng community whether the HDFS destination is still needed.
Some of our most active users chose syslog-ng because of its detailed and accurate documentation (https://syslog-ng.github.io/). Later I received complaints that it is too detailed, and we need a tutorial: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/. This time, I was asked for something even shorter. Here you are!
Last year, we published a Prometheus exporter for syslog-ng, implemented in Python. However, syslog-ng 4.9.0 will include one that runs as part of syslog-ng. Needless to say, testing and feedback are very welcome!
This is a heads-up that if you install xkeyboard-config 2.45 (the package that provides the XKB data files), some manual interaction may be needed. Version 2.45 has changed the install location after over 20 years to be a) more correct and b) more flexible.
When you select a keyboard layout like "fr" or "de" (or any other ones really), what typically happens in the background is that an XKB parser (xkbcomp if you're on X, libxkbcommon if you're on Wayland) goes off and parses the data files provided by xkeyboard-config to populate the layouts. For historical reasons these data files have resided in /usr/share/X11/xkb and that directory is hardcoded in more places than it should be (i.e. more than zero).
As of xkeyboard-config 2.45 however, the data files are now installed in the much more sensible directory /usr/share/xkeyboard-config-2 with a matching xkeyboard-config-2.pc for anyone who relies on the data files. The old location is symlinked to the new location so everything keeps working, people are happy, no hatemail needs to be written, etc. Good times.
The reason for this change is two-fold: moving it to a package-specific directory opens up the (admittedly mostly theoretical) use-case of some other package providing XKB data files. But even more so, it finally allows us to start versioning the data files and introduce new formats that may be backwards-incompatible for current parsers. This is not yet the case however, the current format in the new location is guaranteed to be the same as the format we've always had, it's really just a location change in preparation for future changes.
Now, from an upstream perspective this is not just hunky, it's also dory. Distributions however struggle a bit more with this change because of packaging format restrictions. RPM for example is quite unhappy with a directory being replaced by a symlink which means that Fedora and OpenSuSE have to resort to the .rpmmoved hack. If you have ever used the custom layout and/or added other files to the XKB data files you will need to manually move those files from /usr/share/X11/xkb.rpmmoved/ to the new equivalent location. If you have never used that layout and/or modified local files you can just delete /usr/share/X11/xkb.rpmmoved. Of course, if you're on Wayland you shouldn't need to modify system directories anyway since you can do it in your $HOME.
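As a hedged example (the file name is hypothetical), recovering a locally added symbols file after the move could look like this:
sudo mv /usr/share/X11/xkb.rpmmoved/symbols/custom /usr/share/xkeyboard-config-2/symbols/custom
sudo rm -rf /usr/share/X11/xkb.rpmmoved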
There are corresponding issues on what to do for Arch and Gentoo; I'm not immediately aware of other distributions' issues, but if you search for them in your bug tracker you'll find them.
I love it when the algorithm delivers me a beloved song that I didn’t even remember existed, didn’t know who made it, and didn’t know the name of.
Thank you, algorithm.
Or should I thank the data scientist who organized music data into a space of about 20 to 50 dimensions and optimized and trained decision trees to infer the next song to play?
The summer is just flying by. Lots of folks taking vacations (I should too)
and things have been somewhat quiet this last week.
Outage on Monday
We had a nasty outage on Monday. It affected lots and lots of our services.
At first it looked like a DDoS against the proxies in our main datacenter.
They were seeing packet loss and were unable to process much of anything.
These proxies are the gateways to a lot of services that are just in that
one main datacenter (koji, etc). On digging some more, I was seeing a number
of connections to our registry stuck in 'sending to client' in apache.
They would pile up there and take up all the connection slots, then nothing
else would be able to get through. It was unclear if this was causing the
packet loss or the packet loss was causing this. I updated and rebooted the
proxies and things came back. I'm not sure if this was really a DDoS, or
if it was a kernel or apache bug or if it was caused by something else.
I guess the takeaway is that if you can't quickly find a cause for something,
updating and rebooting (ie, turning it off and back on again) is worth a
try.
F43 branching next week
Fedora 43 is branching off rawhide next week. So, we have started re-signing
everything in rawhide (and all new builds) with the Fedora 44 signing key.
This has resulted in some signing slowdowns, but hopefully it will make
branching more smooth next week.
F39/40 not properly moved to archives
When a release goes end of life, we move it to our archives and update
mirrormanager to point any users there. Some things went wrong on doing
that with f39/40, so mirrormanager was still pointing users at our normal
mirrors, many of which no longer had that content.
Of course you shouldn't run EOL releases, but this should be cleaned
up now if you have to for some weird reason.
Zodbot back on irc
I finally got around to building what I needed to bring our IRC bot
back up after the datacenter move. It definitely wasn't high on my list
but now it's back.
Some AI learning
Yesterday was a day of learning at Red Hat and I looked at a bunch of
AI-related stuff. I ended up mostly playing with Claude, and it was a mixed
bag.
On the plus side:
It got a process for adding a new host to ansible very right:
clear, spelled out what needed to be added where, and had examples.
It was quick for adding some debugging I wanted to add. Faster to ask
it than it would have taken me to type it out.
On the minus side:
It couldn't figure out an issue where some role was running on a host
that it shouldn't be. At first it said it would run there, then it
apologised and said it wouldn't. (I still need to track down why
it's happening.)
I asked it to fix a more complicated problem in a python app, and
it basically just added checks to avoid the problem without actually
fixing it. Likely I wasn't prompting it correctly to allow that.
tl;dr: I’m asking the biggest users of the LVFS to sponsor the project.
The Linux Foundation is kindly paying for all the hosting costs of the LVFS, and Red Hat pays for all my time — but as LVFS grows and grows that’s going to be less and less sustainable longer term. We’re trying to find funding to hire additional resources as a “me replacement” so that there is backup and additional attention to LVFS (and so that I can go on holiday for two weeks without needing to take a laptop with me).
This year there will be a fair-use quota introduced, with different sponsorship levels having a different quota allowance. Nothing currently happens if the quota is exceeded, although there will be additional warnings asking the vendor to contribute. The “associate” (free) quota is also generous, with 50,000 monthly downloads and 50 monthly uploads. This means that almost all the 140 vendors on the LVFS should expect no changes.
Vendors providing millions of firmware files to end users (and deriving tremendous value from the LVFS…) should really either be providing a developer to help write shared code, design abstractions and review patches (like AMD does) or allocate some funding so that we can pay for resources to take action for them. So far no OEMs provide any financial help for the infrastructure itself, although two have recently offered — and we’re now in a position to “say yes” to the offers of help.
I’ve written a LVFS Project Sustainability Plan that explains the problem and how OEMs should work with the Linux Foundation to help fund the LVFS.
I’m aware funding open source software is a delicate matter and I certainly do not want to cause anyone worry. We need the LVFS to have strong foundations; it needs to grow, adapt, and be resilient – and it needs vendor support.
Draft timeline, which is probably a little aggressive for the OEMs — so the dates might be moved back in the future:
APR 2025: We started showing the historical percentage “fair use” download utilization graph in vendor pages. As time goes on this will also be recorded into per-protocol sections too.
JUL 2025: We started showing the historical percentage “fair use” upload utilization, also broken into per-protocol sections:
JUL 2025: We started restricting logos on the main index page to vendors joining as startup or above level — note Red Hat isn’t sponsoring the LVFS with money (but they do pay my salary!) — I’ve just used the logo as a placeholder to show what it would look like.
AUG 2025: I created this blogpost and sent an email to the lvfs-announce mailing list.
AUG 2025: We allow vendors to join as startup or premier sponsors shown on the main page and show the badge on the vendor list
DEC 2025: Start showing over-quota warnings on the per-firmware pages
DEC 2025: Turn off detailed per-firmware analytics to vendors below startup sponsor level
APR 2026: Turn off access to custom LVFS API for vendors below Startup Sponsorship level, for instance:
/lvfs/component/{}/modify/json
/lvfs/vendors/auth
/lvfs/firmware/auth
APR 2026: Limit the number of authenticated automated robot uploads for vendors below the Startup Sponsorship level.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 04 Aug – 08 Aug 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
The moon has circled us a few times since that last post and some update is in order. First of all: all the internal work required for plugins was released as libinput 1.29 but that version does not have any user-configurable plugins yet. But cry you not my little jedi and/or sith lord in training, because support for plugins has now been merged and, barring any significant issues, will be in libinput 1.30, due somewhen around October or November. This year. 2025 that is.
Which means now is the best time to jump in and figure out if your favourite bug can be solved with a plugin. And if so, let us know and if not, then definitely let us know so we can figure out if the API needs changes.
The API Documentation for Lua plugins is now online too and will auto-update as changes to it get merged. There have been a few minor changes to the API since the last post so please refer to the documentation for details. Notably, the version negotiation was re-done so both libinput and plugins can support select versions of the plugin API. This will allow us to iterate the API over time while designating some APIs as effectively LTS versions, minimising plugin breakages. Or so we hope.
What warrants a new post is that we merged a new feature for plugins, or rather, ahaha, a non-feature. Plugins now have access to an API that allows them to disable certain internal features that are not publicly exposed, e.g. palm detection. The reason why libinput doesn't have a lot of configuration options has been explained previously (though we actually have quite a few options) but let me recap for this particular use-case: libinput doesn't have a config option for e.g. palm detection because we have several different palm detection heuristics and they depend on device capabilities. Very few people want no palm detection at all[1], so a plain disable option would give you a broken touchpad, and we would then get to add configuration options for every palm detection mechanism. And keep those supported forever because, well, workflows.
But plugins are different, they are designed to take over some functionality. So the Lua API has an EvdevDevice:disable_feature("touchpad-palm-detection") function that takes a string with the feature's name (easier to make backwards/forwards compatible this way). This example will disable all palm detection within libinput and the plugin can implement said palm detection itself. At the time of writing, the following self-explanatory features can be disabled: "button-debouncing", "touchpad-hysteresis", "touchpad-jump-detection", "touchpad-palm-detection", "wheel-debouncing". This list is mostly based on "probably good enough" so as above - if there's something else then we can expose that too.
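Based purely on the API names mentioned in this and the previous post (so treat it as a sketch; the VID/PID are placeholders), a plugin doing this could look like:
libinput:register(1)
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x1234 and device:pid() == 0x5678 then
        -- turn off libinput's built-in palm detection for this touchpad
        device:disable_feature("touchpad-palm-detection")
        -- the plugin's own palm handling would hook "evdev-frame" here
    end
end)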
So hooray for fewer features and happy implementing!
[1] Something easily figured out by disabling palm detection or using a laptop where palm detection doesn't work thanks to device issues
First of all, what's outlined here should be available in libinput 1.30 but I'm not 100% certain on all the details yet so any feedback (in the libinput issue tracker) would be appreciated. Right now this is all still sitting in the libinput!1192 merge request. I'd specifically like to see some feedback from people familiar with Lua APIs. With this out of the way:
Come libinput 1.30, libinput will support plugins written in Lua. These plugins sit logically between the kernel and libinput and allow modifying the evdev device and its events before libinput gets to see them.
The motivation for this is a handful of unfixable issues - issues we knew how to fix but we cannot actually implement and/or ship the fixes without breaking other devices. One example for this is the inverted Logitech MX Master 3S horizontal wheel. libinput ships quirks for the USB/Bluetooth connection but not for the Bolt receiver. Unlike the Unifying Receiver the Bolt receiver doesn't give the kernel sufficient information to know which device is currently connected. Which means our quirks could only apply to the Bolt receiver (and thus any mouse connected to it) - that's a rather bad idea though, we'd break every other mouse using the same receiver. Another example is an issue with worn out mouse buttons - on that device the behavior was predictable enough but any heuristics would catch a lot of legitimate buttons. That's fine when you know your mouse is slightly broken and at least it works again. But it's not something we can ship as a general solution. There are plenty more examples like that - custom pointer deceleration, different disable-while-typing, etc.
libinput has quirks but they are internal API and subject to change without notice at any time. They're very definitely not for configuring a device and the local quirk file libinput parses is merely to bridge over the time until libinput ships the (hopefully upstreamed) quirk.
So the obvious solution is: let the users fix it themselves. And this is where the plugins come in. They are not full access into libinput, they are closer to udev-hid-bpf but in userspace. Logically they sit between the kernel event devices and libinput: input events are read from the kernel device, passed to the plugins, then passed to libinput. A plugin can look at and modify devices (add/remove buttons for example) and look at and modify the event stream as it comes from the kernel device. For this, libinput changed internally to now process something called an "evdev frame", which is a struct that contains all struct input_events up to the terminating SYN_REPORT. This is the logical grouping of events anyway, but so far we didn't explicitly carry those around as such. Now we do and we can pass them through to the plugin(s) to be modified.
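As an aside, here is a minimal sketch of what such an evdev frame is, using the third-party python-evdev bindings rather than libinput itself; the device node is an assumption, purely for illustration:
#!/usr/bin/python3
# Illustration only: group kernel input events into "frames", i.e. everything
# up to and including the terminating SYN_REPORT event.
from evdev import InputDevice, ecodes
def frames(device_node="/dev/input/event0"):
    device = InputDevice(device_node)
    frame = []
    for event in device.read_loop():
        frame.append(event)
        if event.type == ecodes.EV_SYN and event.code == ecodes.SYN_REPORT:
            yield frame  # one complete evdev frame
            frame = []
if __name__ == "__main__":
    for frame in frames():
        print("frame with %d events" % len(frame))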
The aforementioned Logitech MX master plugin would look like this: it registers itself with a version number, then sets a callback for the "new-evdev-device" notification and (where the device matches) we connect that device's "evdev-frame" notification to our actual code:
libinput:register(1) -- register plugin version 1
libinput:connect("new-evdev-device", function (_, device)
    if device:vid() == 0x046D and device:pid() == 0xC548 then
        device:connect("evdev-frame", function (_, frame)
            for _, event in ipairs(frame.events) do
                if event.type == evdev.EV_REL and
                   (event.code == evdev.REL_HWHEEL or
                    event.code == evdev.REL_HWHEEL_HI_RES) then
                    event.value = -event.value
                end
            end
            return frame
        end)
    end
end)
This file can be dropped into /etc/libinput/plugins/10-mx-master.lua and will be loaded on context creation.
I'm hoping the approach using named signals (similar to e.g. GObject) makes it easy to add different calls in future versions. Plugins also have access to a timer so you can filter events and re-send them at a later point in time. This is useful for implementing something like disable-while-typing based on certain conditions.
So why Lua? Because it's very easy to sandbox. I very explicitly did not want the plugins to be a side-channel to get into the internals of libinput - specifically no IO access to anything. This ruled out using C (or anything that's a .so file, really) because those would run a) in the address space of the compositor and b) be unrestricted in what they can do. Lua solves this easily. And, as a nice side-effect, it's also very easy to write plugins in.[1]
Whether plugins are loaded or not will depend on the compositor: an explicit call to set up the paths to load from and to actually load the plugins is required. No run-time plugin changes at this point either, they're loaded on libinput context creation and that's it. Otherwise, all the usual implementation details apply: files are sorted and if there are files with identical names the one from the highest-precedence directory will be used. Plugins that are buggy will be unloaded immediately.
If all this sounds interesting, please have a try and report back any APIs that are broken, or missing, or generally ideas of the good or bad persuasion. Ideally before we ship it and the API is stable forever :)
[1] Benjamin Tissoires actually had a go at WASM plugins (via rust). But ... a lot of effort for rather small gains over Lua
FOSSY ( https://2025.fossy.us/ ) is in its 3rd year now.
I always wanted to attend, but in previous years it was right
around the time that Flock was, or some release event that was
very busy (or both!). This year happily it was not and I was able
to attend.
FOSSY was in Portland, Oregon, which is a 2ish hour drive from my
house. No planes, no jetlag! Hurray! The first day was somewhat of
a half day: registration started in the morning, but the keynote
wasn't until the afternoon. This worked great for me, as I was able
to do a few things in the morning and then drive up and park at
my hotel and walk over to the venue.
The venue was the 3rd floor of the PSU student union building.
I had a bit of confusion at first as the first set of doors I got
to were locked, with a sign saying to enter from one street direction
for events. However, the correct door you can enter by _also_
has that sign on it, so I passed it, thinking the right door would have
a nice "hey, this is the right door" sign. A nice walk around the
building and I figured it out.
Registration was easy, and then it was into the ballroom/main room with
keynotes and such, tables for some free software projects, as well as
coffee and snacks. I wished a few times there were some tables around to
help with juggling coffee and a snack, and perhaps to be a place to gather
folks. Somehow a table with 2 people standing at it and more room seems
like a chance to walk up and introduce yourself, whereas two people
talking seems like it would be intruding.
Some talks I attended:
Is There Really an SBOM Mandate?
The main surprise for me is that SBOM has no actual definition and
companies/communities are just winging it. I definitely love the idea
of open source projects making their complete source code and scripts
available as their SBOM.
Panel: Ongoing Things in the Kernel Community
I tossed a 'what do you think of AI contributions' question to get
things started and there was a lot of great discussion.
The Subtle Art of Lying with Statistics
Lots of fun graphs here and definitely something people should keep
in mind when they see some stat that doesn't 'seem' right.
The Future of Fixing Technology
This was noting how we should try and define the tech we get
and make sure it's repairable/configurable. I completely agree,
but I am really not sure how we get to that future.
The evening event was nice, got to catch up with several folks.
Friday we had:
keynote: Assessing and Managing threats to the Nonprofit Infrastructure of FOSS
This was an interesting discussion from the point of view of nonprofit
orgs and managing them. Today's landscape is pretty different than it was
even a few years ago.
After that they presented the Distinguished Service Award in Software Freedom
to Lance Albertson. Well deserved award! Congrats, Lance!
Starting an Open Mentorship Handbook!
This was a fun talk, it was more of a round table with the audience talking
about their experiences and areas. I'm definitely looking forward to
hearing more about the handbook they are going to put together and
I got some ideas about how we can up our game.
Raising the bar on your conference presentation
This was a great talk! Rich Bowen did a great job here. There's tons of
good advice on how to make your conference talk so much better. I found
myself checking things off after this when I saw them in other talks.
Everyone speaking should give this one a watch.
I caught next "Making P2P apps with Spritely Goblins".
Seemed pretty cool, but I am not so much into web design, so I was
a bit lost.
How to Hold It Together When It All Falls Apart:
Surviving a Toxic Open Source Project Without Losing it.
This was kind of a cautionary tale and a bit harrowing, but kudos
for sharing so others can learn from it. In particular I was struck by
"being available all the time" (I do this) and "people just expect
you to do things so no one does them if you don't" (this happens to me
as well). Definitely something to think about.
It's all about the ecosystem! was next.
This was a reminder that when you make a thing that an ecosystem
comes up around, it's that part that people find great. If you try and
make it harder for them to keep doing their ecosystem things they will
leave.
DevOps is a Foreign Language (or Why There Are No Junior SREs)
This was a great take on new devops folks learning things like
adults learn new languages. There were a lot of interesting parallels.
More good thoughts about how we could onboard folks better.
Saturday started out with:
Q&A on SFC's lawsuit against Vizio
There were some specifics I had not heard before; it was pretty interesting.
Good luck on the case, Conservancy!
DRM, security, or both? How do we decide?
Matthew Garrett went over a bunch of DRM and security measures.
Good background/info for those that don't follow that space.
Some general thoughts: It was sure nice to see a more diverse audience!
Things seemed to run pretty smoothly and Portland was a lovely place
for it. Next year, they are going to be moving to Vancouver, BC.
I think it might be interesting to see about having a fedora booth
and/or having some more distro related talks.
So some time back, I wrote a highly performant, network-wide, transparent-proxy service in C. It was incredibly fast because it could read 8192 bytes off of the client’s TCP sockets directly and proxy them in one write call over TCP directly to the VPN server, without needing a tunnel interface with a small MTU that bottlenecks reads and writes to <1500 bytes per function call.
I thought about it for a while and came up with a proof of concept to incorporate similar ideas into OpenVPN’s source code as well. The summary of improvements is:
Max-MTU which now matches the rest of your standard network clients (1500 bytes)
Bulk-Reads which are properly sized and multiply called from the tun interface (6 reads)
Jumbo-TCP connection protocol operation mode only (single larger write transfers)
Performance improvements made above now allow for (6 reads x 1500 bytes == 9000 bytes per transfer call)
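To make the batching idea concrete, here is a minimal Python sketch of the same pattern - not the actual C service or the OpenVPN patch; the interface name, the length-prefix framing and the server address are assumptions for illustration. It reads several full-MTU packets from a tun device and pushes them to the server in a single TCP write:
#!/usr/bin/python3
# Sketch of bulk tun reads combined into one jumbo TCP write.
import fcntl, os, select, socket, struct
TUNSETIFF = 0x400454ca
IFF_TUN = 0x0001
IFF_NO_PI = 0x1000
MTU = 1500           # full-sized MTU, matching regular network clients
READS_PER_BATCH = 6  # 6 x 1500 == 9000 bytes per transfer call
def open_tun(name=b"tun0"):
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, struct.pack("16sH", name, IFF_TUN | IFF_NO_PI))
    return fd
def proxy_loop(tun_fd, server):
    sock = socket.create_connection(server)
    while True:
        batch = bytearray()
        for i in range(READS_PER_BATCH):
            if i and not select.select([tun_fd], [], [], 0)[0]:
                break  # nothing else queued right now, send what we have
            packet = os.read(tun_fd, MTU)                     # one IP packet per read
            batch += struct.pack("!H", len(packet)) + packet  # length-prefix framing
        sock.sendall(batch)                                   # single jumbo TCP write
if __name__ == "__main__":
    proxy_loop(open_tun(), ("vpn.example.net", 1194))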
The speed test below was performed on a Linux VM running on my little Mac Mini, which pipes all of my network traffic through it, so the full-sized MTU that the client assumes doesn’t have to be fragmented or compressed at all!
Note: Also, the client/server logs show the multi-batched TUN READ/WRITE calls along with the jumbo-sized TCPv4 READ/WRITE calls.
Note-note: My private VPS link is 1.5G and my internet link is 1G, my upload speed is hard rate-limited by iptables, and this test was done via a WiFi network client and not the actual host VPN client itself, which to me makes it a bit more impressive.
I created a new GitHub repo with a branch+commit which has the changes made to the source code.
As everyone probably knows, Rust is considered a great language for secure programming, and hence a lot of people are looking at it for everything from low-level firmware to GPU drivers. In a similar vein, it can already be used to build UEFI applications.
Upstream in U-Boot we’ve been adding support for UEFI HTTP and HTTPS Boot, and it’s now stable enough that I am looking to enable this in Fedora. While I was testing the features on various bits of hardware, I wanted a small UEFI app I could pull across a network quickly and easily from a web server for testing devices.
Of course adding in display testing as well would be nice…. so enter UEFI nyan cat for a bit of fun!
Thankfully the Fedora rust toolchain already has the UEFI targets built (aarch64 and x86_64) so you don’t need to mess with third party toolchains or repos, it works the same for both targets, just substitute the one you want in the example where necessary.
I did most of this in a container:
$ podman pull fedora
$ podman run --name=nyan_cat --rm -it fedora /bin/bash
# dnf install -y git-core cargo rust-std-static-aarch64-unknown-uefi
# git clone https://github.com/diekmann/uefi_nyan_80x25.git
# cd uefi_nyan_80x25/nyan/
# cargo build --release --target aarch64-unknown-uefi
# ls target/aarch64-unknown-uefi/release/nyan.efi
target/aarch64-unknown-uefi/release/nyan.efi
From outside the container, copy the binary and add it to the EFI boot menu:
$ podman cp nyan_cat:/uefi_nyan_80x25/nyan/target/aarch64-unknown-uefi/release/nyan.efi ~/
$ sudo mkdir /boot/efi/EFI/nyan/
$ sudo cp ~/nyan.efi /boot/efi/EFI/nyan/nyan-a64.efi
$ sudo efibootmgr --create --disk /dev/nvme0n1 --part 1 --label "nyan" --loader \\EFI\\nyan\\nyan-a64.efi
Bastille, the lightweight jail (container) management system for FreeBSD, was already covered here. Recently, they released Bastille 1.0 and BastilleBSD, a hardened FreeBSD variant that comes with Bastille pre-installed.
What is Bastille?
Bastille is a lightweight jail management system for FreeBSD. Linux users might know jails as containers. FreeBSD jails arrived many years earlier. Just like on Linux, jails were originally not that easy to use. Bastille provides an easy-to-use shell script to work with FreeBSD jails, slightly similar to Podman (I will compare Bastille to Podman, as unlike Docker, Bastille does not have a continuously running daemon component). I wish it had already been available when I was managing tens of thousands of FreeBSD jails two decades ago :-)
BastilleBSD is a variant of FreeBSD by the creators of Bastille. The installation process looks pretty similar to the original FreeBSD installer, however it comes with security hardening enabled by default:
It also installs Bastille out of the box and configures the firewall, among other things, during installation. You can learn more about it and download it at https://bastillebsd.org/platform/
As the 14.3 FreeBSD release is already pre-loaded, there is no need for the bootstrapping steps from the previous blog. The pf firewall is also configured and ready to use. As a first step, we create the jail. We use the name alcatraz, the 14.3 FreeBSD release and a random IP address (AFAIR this was in the Bastille documentation, but it could be anything unique and private).
bastille create alcatraz 14.3-RELEASE 10.17.89.50
Next, we bootstrap (that is, checkout using Git) the syslog-ng template. Bastille templates are similar to Dockerfiles from a distance. They describe what packages to install and how to configure them. There are many other templates, but here we use the one for syslog-ng.
At the end of the process, syslog-ng is started within the jail. As the BastilleBSD template was created four years ago, it still has a version 3.30 syslog-ng configuration. It prints a warning on the terminal:
[2025-07-30T10:21:08.374200] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode. Please update it to use the syslog-ng 4.8 format at your time of convenience. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file; config-version='3.30'
The configuration in the template has several modifications compared to the regular FreeBSD syslog-ng configuration. It adds a tcp source on port 514 and disables a few lines in the configuration, which would try to print important logs to the console. The default configuration does not use any new features, so if you are OK with a warning message, you can leave it as-is. But it is easy to fix the config. Just open a shell within the jail:
bastille console alcatraz
And edit the syslog-ng configuration:
vi /usr/local/etc/syslog-ng.conf
Replace the version number with the actual syslog-ng version. At the time of this blog, it is 4.8. Restart syslog-ng for the configuration to take effect:
service syslog-ng restart
Exit from the jail, and configure the firewall, so outside hosts can also log to your syslog-ng server.
bastille rdr alcatraz tcp 514 514
Testing
First, do a local check that logging works: open a shell within the jail, and follow the /var/log/messages file.
This blog is just scratching the surface. In a production environment, you most likely want to build your own configuration, with encrypted connections and remote logs saved separately from local logs. Data is now stored within the jail, but you will most likely want to store it in a separate directory. And so on.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
The 46th annual TeX Users Group conference (TUG2025) took place in Kerala during July 18–20, 2025. I’ve attended and presented at a few of the past TUG conferences; but this time it was different, as I was to present a paper and help organize the conference. This is a personal, incomplete, (and potentially hazy around the edges) reflection of my experience organizing the event, which had participants from many parts of Europe, the US and India.
Preparations
The Indian TeX Users Group, led by CVR, has conducted TUG conferences in 2002 and 2011. We, a group of about 18 volunteers led by him, convened as soon as the conference plan was announced in September 2024, and started creating todo-lists and schedules, and assigning responsible persons for each.
STMDocs campus has excellent conference facilities including a large conference hall, audio/video systems, high-speed internet with fallback, redundant power supply etc., making it an ideal choice, as in 2011. Yet, we prioritized the convenience of the speakers and delegates to avoid travel from and to a hotel in the city — prior experience showed it is best to locate the conference facility closer to the stay. We scouted a few hotels with good conference facilities in Thiruvananthapuram city, and finalized the Hyatt Regency; even though we had to take on greater responsibility and coordination, as they had no prior experience organizing a conference with requirements similar to TUG. Travel and visit advisories were published on the conference web site as soon as details were available.
Projector, UPS, display connectors, microphones, WiFi access points and a lot of related hardware were procured. Conference materials such as t-shirt, mug, notepad, pen, tote bag etc. were arranged. Noted political cartoonist E.P. Unny graciously drew the beloved lion sketches for the conference.
Karl Berry, from the US, orchestrated mailing lists for coordination and communication. CVR, Shan and I assumed the responsibility of answering speaker & delegate emails. At the end of the extended deadline for submitting presentations and prerecorded talks, Karl handed over to us the archive of those to use with the audio/video system.
Audio/video and live streaming setup
I traveled to Thiruvananthapuram a week ahead of the conference to be present in person for the final preparations. One of the important tasks for me was to set up the audio/video and live streaming for the workshop and conference. The audio/video team and volunteers in charge did a commendable job of setting up all the hardware and connectivity on the 16th evening, and we tested presentation, video playing, projector, audio in/out, prompt, clicker, microphones and live streaming. There was no prompt at the hotel, so we split the screen-out to two monitors placed on both sides of the podium — this was much appreciated by the speakers later. In addition to the A/V team’s hardware and (primary) laptop, two laptops (running Fedora 42) were used: a hefty one to run the presentation & backup OBS setup; another for video conferencing the remote speakers’ Q&A sessions. The laptop used for presentation had a 4K screen resolution. Thanks to Wayland (specifically, KWin), the connected HDMI out could be independently configured for 1080p resolution; but it failed to drive the monitors split further for the prompt. Changing the laptop’s built-in display resolution also to 1080p fixed the issue (maybe changing from a 120 Hz refresh rate to 60 Hz might also have helped, but we didn’t fiddle any further).
Also met with Erik Nijenhuis in front of the hotel, who was hand-rolling a cigarette (which turned out to be quite in demand during and after the conference), to receive a copy of the book ‘The Stroke’ by Gerrit Noordzij he kindly bought for me — many thanks!
Workshop
The ‘Tagging PDF for accessibility’ workshop was conducted on 17th July at the STMDocs campus — the A/V systems & WiFi were set up and tested a couple of days prior. Delegates were picked up at the hotel in the morning and dropped off after the workshop. Registration of workshop attendees was done on the spot, and we collected speaker introductions to share with session chairs. Had interesting discussions with Frank Mittelbach and Boris Veytsman during lunch.
Reception & Registration
There was a reception at the Hyatt on the 17th evening, where almost everyone got registered and collected the conference material with the program pre-print, t-shirt, mug, notepad & pen, a handwritten (by N. Bhattathiri) copy of Daiva Daśakam, and a copy of the LaTeX tutorial. All delegates introduced themselves — but I had to step out at that exact moment to get into a video call to prepare for the live Q&A with Norman Gray from the UK, who was presenting remotely on Saturday. There were two more remote speakers — Ross Moore from Australia and Martin J. Osborne from Canada — with whom I conducted the same exercise, despite the inconvenient times for them. Frank Mittelbach needed to use his own laptop for presentation; so we tested the A/V & streaming setup with that too. Doris Behrendt had a presentation with videos; its setup was also tested & arranged.
An ode to libre software & PipeWire
Tried to use a recent MacBook for the live video conference of remote speakers, but it failed miserably to detect the A/V splitter connected via USB to pick up the audio in and out. Resorting to my old laptop running Fedora 42, the devices were detected automagically and PipeWire (plus WirePlumber) made those instantly available for use.
With everything organized and tested for A/V & live streaming, I went back to get some sleep to wake early on the next day.
Day 1 — Friday
Woke up at 05:30, reached hotel by 07:00, and met with some attendees during breakfast. By 08:45, the live stream for day 1 started. Boris Veytsman, the outgoing vice-president of TUG opened TUG2025, handed over to the incoming vice-president and the session chair Erik Nijenhuis; who then introduced Rob Schrauwen to deliver the keynote titled ‘True but Irrelevant’ reflecting on the design of Elsevier XML DTD for archiving scientific articles. It was quite enlightening, especially when one of the designers of a system looks back at the strength, shortcomings, and impact of their design decisions; approached with humility and openness. Rob and I had a chat later, about the motto of validating documents and its parallel with IETF’s robustness principle.
You may see a second stream for day 1, this is entirely my fault as I accidentally stopped streaming during tea break; and started a new one. The group photo was taken after a few exercises in cat-herding.
All the talks on day 1 were very interesting: with many talks about the PDF tagging project (those of Mittelbach, Fischer, & Moore); the state of CTAN by Braun — to which I had a suggestion that the inactive package maintainer process consider some Linux distributions’ procedures; Vrajarāja explained their use of XeTeX to typeset in multiple scripts; Hufflen’s experience in teaching LaTeX to students; Behrendt & Busse’s talk about the use of LaTeX in CrypTool; and CVR’s talk about the long-running project of archiving Malayalam literary works in TEI XML format using TeX and friends. The session chairs, speakers and audience were all punctual and kept their allotted time in check; with many follow-up discussions happening during the coffee breaks, which had ample time so the sessions did not feel rushed.
Ross Moore’s talk was prerecorded. As the video played out, he joined via a video conference link. The audio in/out & video out (for projecting on screen and for live streaming) were connected to my laptop, and we could hear him through the audio system, and the audience questions were relayed to him via microphone with no lag — this worked seamlessly (thanks to PipeWire). We had a small problem with pausing a video that locked up the computer running the presentation; but we quickly recovered — after the conference, I diagnosed it to be a nouveau driver issue (a GPU hang).
By the end of the day, Rahul & Abhilash were accustomed to driving the presentation and live streams, so I could hand over the reins and enjoy the talks. I decided to stay back at the hotel to avoid travel, and went to bed by 22:00, but sleep descended on this poor soul only by 04:30 or so; thanks to that cup of ristretto at breakfast!
Day 2 — Saturday
Judging by the ensuing laughs and questions, it appears not everyone was asleep during my talk. Frank & Ulrike suggested not to colour the underscore glyph in math, but instead to properly colour LaTeX3 macro names (which can have underscores and colons in addition to letters) in the font.
The sessions on the second day were also varied and interesting, in particular Novotný’s talk about static analysis of LaTeX3 macros; Vaishnavi’s fifteen-year-long project of researching and encoding the Tulu-Tigalari script in Unicode; and the bibliography processing talks, separately by Gray and Osborne (both appeared on video conferencing for live Q&A, which worked like a charm), etc.
In the evening, all of us walked (the monsoon rain was in respite) to the music and dance concert; both of which were a fantastic cultural & audio-visual experience.
Veena music, and fusion dance concerts.
Day 3 — Sunday
The morning session of the final day had a few talks: Rishi lamented the eroding typographic beauty in publishing (which Rob concurred with, and which Vrajarāja earlier pointed out as the reason for choosing TeX, …); Doris on the LaTeX village at CCC — and about ‘tuwat’ (to take action); followed by the TeX Users Group annual general body meeting presided over by Boris as the first session post lunch; then his talk on his approach to solving the editorial review process of documents in TeX; and a couple more talks: Rahul’s presentation about PDF tagging used our OpenType font for syntax highlighting (yay!); and the lexer developed by the Overleaf team was interesting. On Veeraraghavan’s presentation about challenges faced by publishers, I had a comment about the recurrent statement that “LaTeX is complex” — LaTeX is not complex, but the scientific content is complex, and LaTeX is still the best tool to capture and represent such complex information.
Two Hermann Zapf fans listening to one who collaborated with Zapf [published with permission].
Calligraphy
For the final session, Narayana Bhattathiri gave us a calligraphy demonstration, in four scripts — Latin, Malayalam, Devanagari and Tamil; which was very well received judging by the applause. I was deputed to explain what he does; and also to translate for the Q&A session. He obliged the audience’s request of writing names: of themselves, or spouse or children, even a bär, or as Hàn Thế Thành wanted — Nhà khủng lồ (the house of dinosaurs, name for the family group); for the next half hour.
Bhattathiri signing his calligraphy work for TUG2025.
Nijenhuis was also giving away swags by Xerdi, and I made the difficult choice between a pen and a pendrive, opting for the latter.
The banquet followed; where in between enjoying delicious food I could find time to meet and speak with even more people and say goodbyes and ‘tot ziens’.
Later, I had some discussions with Frank about generating MathML using TeX.
Many thanks
A number of people during the conference shared their appreciation of how well the conference was organized, this was heartwarming. I would like to express thanks to many people involved, including the TeX Users Group, the sponsors (who made it fiscally possible to run the event and support many travels via bursary), STMDocs volunteers who handled many other responsibilities of organizing, the audio-video team (who were very thoughtful to place the headshot of speakers away from the presentation text), the unobtrusive hotel staff; and all the attendees, especially the speakers.
Thanks particularly to those who stayed at and/or visited the campus, for enjoying the spicy food and delicious fruits from the garden, and for surviving the long techno-socio-eco-political discussions. Boris seems to have taken my request for a copy of the TeXbook signed by Don Knuth to heart — I cannot express the joy & thanks in words!
The TeXbook signed by Don Knuth.
The recorded videos were handed over to Norbert Preining, who graciously agreed to make the individual lectures available after processing. The total file size was ~720 GB; so I connected the external SSD to one of the servers and made it available to a virtual machine via USB-passthrough; then mounted and made it securely available for copying remotely.
Special note of thanks to CVR, and Karl Berry — who I suspect is actually a kubernetes cluster running hundreds of containers each doing a separate task (with apologies to a thousand gnomes), but there are reported sightings of him, so I sent personal thanks via people who have seen him in the flesh — for leading and coordinating the conference organizing. Barbara Beeton and Karl copy-edited our article for the TUGboat conference proceedings, which is gratefully acknowledged. I had a lot of fun and a lot less stress participating in the TUG2025 conference!
There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.
These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?
We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.
So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.
And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.
Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.
The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.
But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.
The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.
We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.
First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.
Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.
Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.
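For illustration, here is a minimal Python sketch of that chunky-to-planar conversion - not the author's actual code; the 320x200 resolution is an assumption for illustration, while the 6-plane depth is the EHB setup described above:
# Convert a "chunky" buffer (one 6-bit palette index per pixel) into 6
# Amiga bitplanes, each packing 8 pixels per byte.
WIDTH, HEIGHT, DEPTH = 320, 200, 6
def chunky_to_planar(chunky):
    row_bytes = WIDTH // 8
    planes = [bytearray(row_bytes * HEIGHT) for _ in range(DEPTH)]
    for y in range(HEIGHT):
        for x in range(WIDTH):
            pixel = chunky[y * WIDTH + x]
            byte_index = y * row_bytes + x // 8
            bit = 0x80 >> (x % 8)  # leftmost pixel lands in the high bit
            for plane in range(DEPTH):
                if pixel & (1 << plane):
                    planes[plane][byte_index] |= bit
    return planes
if __name__ == "__main__":
    planes = chunky_to_planar(bytes(WIDTH * HEIGHT))  # an all-black frame
    print(len(planes), "bitplanes of", len(planes[0]), "bytes each")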
And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.
Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which is traditionally a long way away from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok, that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.
Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.
So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.
[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM
Joplin is a free and open-source application for note-taking, task (to-do) management, and storing personal information. It is designed as a strong alternative to software such as Evernote, with an emphasis on user privacy and secure synchronization. Introducing Joplin: Joplin is a tool for writing and managing notes in a structured and secure way. […]
Yet another day, yet another need for testing a device I don't have. That's fine and that's why many years ago I wrote libinput record and libinput replay (more powerful successors to evemu and evtest). Alas, this time I had a dependency on multiple devices to be present in the system, in a specific order, sending specific events. And juggling this many terminal windows with libinput replay open was annoying. So I decided it's worth the time fixing this once and for all (haha, lolz) and wrote unplug. The target market for this is niche, but if you're in the same situation, it'll be quite useful.
Pictures cause a thousand words to finally shut up and be quiet so here's the screenshot after running pip install unplug[1]:
This shows the currently pre-packaged set of recordings that you get for free when you install unplug. For your use-case you can run libinput record, save the output in a directory and then start unplug path/to/directory. The navigation is as expected, hitting enter on the devices plugs them in, hitting enter on the selected sequence sends that event sequence through the previously plugged device.
Annotation of the recordings (which must end in .yml to be found) can be done by adding a YAML unplug: entry with a name and optionally a multiline description. If you have recordings that should be included in the default set, please file a merge request. Happy emulating!
[1] And allowing access to /dev/uinput. Details, schmetails...
At GnuTLS, our journey into optimizing GitLab CI began when we faced a significant challenge: we lost our GitLab.com Open Source Program subscription. While we are still hoping that this limitation is temporary, this meant our available CI/CD resources became considerably lower. We took this opportunity to find smarter ways to manage our pipelines and reduce our footprint.
This blog post shares the strategies we employed to optimize our GitLab CI usage, focusing on reducing running time and network resources, which are crucial for any open-source project operating with constrained resources.
CI on every PR: a best practice, but not cheap
While running CI on every commit is considered a best practice for secure software development, our experience setting up a self-hosted GitLab runner on a modest Virtual Private Server (VPS) highlighted its cost implications, especially with limited resources. We provisioned a VPS with 2GB of memory and 3 CPU cores, intending to support our GnuTLS CI pipelines.
The reality, however, was a stark reminder of the resource demands. A single CI pipeline for GnuTLS took an excessively long time to complete, often stretching beyond acceptable durations. Furthermore, the extensive data transfer involved in fetching container images, dependencies, building artifacts, and pushing results quickly led us to reach the bandwidth limits imposed by our VPS provider, resulting in throttled connections and further delays.
This experience underscored the importance of balancing CI best practices with available infrastructure and budget, particularly for resource-intensive projects.
Reducing CI running time
Efficient CI pipeline execution is paramount, especially when resources are scarce. GitLab provides an excellent article on pipeline efficiency, though in practice project-specific optimization is needed. We focused on three key areas to achieve faster pipelines:
Tiering tests
Layering container images
De-duplicating build artifacts
Tiering tests
Not all tests need to run on every PR. For more exotic or costly tasks, such as extensive fuzzing, generating documentation, or large-scale integration tests, we adopted a tiering approach. These types of tests are resource-intensive and often provide value even when run less frequently. Instead of scheduling them for every PR, they are triggered manually or on a periodic basis (e.g., nightly or weekly builds). This ensures that critical daily development workflows remain fast and efficient, while still providing comprehensive testing coverage for the project without incurring excessive resource usage on every minor change.
Layering container images
The tiering of tests gives us an idea of which CI images are more commonly used in the pipeline. For those common CI images, we transitioned to a more minimal base container image, such as fedora-minimal or debian:<flavor>-slim. This reduced the initial download size and the overall footprint of our build environment.
For specialized tasks, such as generating documentation or running cross-compiled tests that require additional tools, we adopted a layering approach. Instead of building a monolithic image with all possible dependencies, we created dedicated, smaller images for these specific purposes and layered them on top of our minimal base image as needed within the CI pipeline. This modular approach ensures that only the necessary tools are present for each job, minimizing unnecessary overhead.
De-duplicating build artifacts
Historically, our CI pipelines involved many “configure && make” steps for various options. One of the major culprits of long build times is repeatedly compiling source code, oftentimes resulting in almost identical results.
We realized that many of these compile-time options could be handled at runtime. By moving configurations that didn’t fundamentally alter the core compilation process to runtime, we simplified our build process and reduced the number of compilation steps required. This approach transforms a lengthy compile-time dependency into a quicker runtime check.
Of course, this approach cuts both ways: while it simplifies the compilation process, it can increase the code size and attack surface. For example, support for legacy protocol features such as SSL 3.0 or SHA-1, which may lower the overall security, should still be possible to switch off at compile time.
Another caveat is that some compilation options are inherently incompatible with each other. One example is that thread sanitizer cannot be enabled with address sanitizer at the same time. In such cases a separate build artifact is still needed.
The impact: tangible results
The efforts put into optimizing our GitLab CI configuration yielded significant benefits:
The size of the container image used for our standard build jobs is now 2.5GB smaller than before. This substantial reduction in image size translates to faster job startup times and reduced storage consumption on our runners.
9 “configure && make” steps were removed from our standard build jobs. This streamlined the build process and directly contributed to faster execution times.
By implementing these strategies, we not only adapted to our reduced resources but also built a more efficient, cost-effective, and faster CI/CD pipeline for the GnuTLS project. These optimizations highlight that even small changes can lead to substantial improvements, especially in the context of open-source projects with limited resources.
For further information on this, please consult the actual changes.
Next steps
While the current optimizations have significantly improved our CI efficiency, we are continuously exploring further enhancements. Our future plans include:
Distributed GitLab runners with external cache: To further scale and improve resource utilization, we are considering running GitLab runners on multiple VPS instances. To coordinate these distributed runners and avoid redundant data transfers, we could set up an external cache, potentially using a solution like MinIO. This would allow shared access to build artifacts, reducing bandwidth consumption and build times.
Addressing flaky tests: Flaky tests, which intermittently pass or fail without code changes, are a major bottleneck in any CI pipeline. They not only consume valuable CI resources by requiring entire jobs to be rerun but also erode developer confidence in the test suite. In TLS testing, it is common to write a test script that sets up a server and a client as a separate process, let the server bind a unique port to which the client connects, and instruct the client to initiate a certain event through a control channel. This kind of test could fail in many ways regardless of the test itself, e.g., the port might be already used by other tests. Therefore, rewriting tests without requiring a complex setup would be a good first step.
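As an illustration of the port-collision problem just mentioned, here is a minimal Python sketch (not GnuTLS’s actual test harness) of one common mitigation: ask the kernel for a currently free port by binding to port 0, then hand that port to the server under test.
#!/usr/bin/python3
# Reserve a free TCP port for a test server; this reduces (but does not fully
# eliminate) the chance of colliding with a port used by another test.
import socket
def reserve_port():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))  # port 0 means "pick any free port"
        return s.getsockname()[1]
if __name__ == "__main__":
    port = reserve_port()
    # a real test would now start the server on this port and point the
    # client at it, e.g. via an environment variable
    print("server should listen on 127.0.0.1:%d" % port)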
The NeuroFedora team has decided to make a couple of changes to the artefacts that we produce and maintain:
The Comp Neuro Lab ISO image has been dropped.
We are moving away from packaging Python software that is easily installable from PyPi into rpms for the Fedora repositories, and towards testing that it installs and works on Fedora.
Over the years that the NeuroFedora team has been maintaining neuroscience rpm packages for the Fedora repositories, we have amassed a rather large number: almost 500 packages.
That is great, and we are extremely pleased with our coverage of the neuroscience software ecosystem.
However, given that the team is composed of a few volunteers who can only dedicate limited amounts of their time to maintaining packages, we were beginning to find that we were unable to keep up with the increasing workload.
Further, we realised that the use case for including all neuroscience software in Fedora was no longer clear---was it really required for us to package all of it for users?
An example case is the Python ecosystem.
Usually, users/researchers/developers tend to install Python software directly from PyPi rather than relying on system packages that we provide.
The suggested use of virtual environments to isolate projects and their dependencies also requires the use of software directly from PyPi, rather than using our system packages.
Therefore, for software that can be installed directly, we argue that it is less important that we package them.
Instead, it is more useful to our user base if we thoroughly test that this set of software can be properly installed and used on Fedora---on all the different versions of Python that a Fedora release supports.
So, a new guideline that we now follow is:
prioritise packaging software that cannot be easily installed from upstream forges (such as PyPi)
Following this, we have made a start on the Python packages that we maintain:
we made a list of software that is easily installable from PyPi
we began dropping them from Fedora, and instead testing their usage from PyPi
The testing involves:
checking that the software and its extras can be successfully installed on Fedora using pip
checking that the modules that the software include can be successfully imported
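For illustration, a minimal sketch of that kind of check (not the team's actual test scripts; the package list here is a stand-in example, not the NeuroFedora package set):
#!/usr/bin/python3
# Install a package from PyPi with pip and verify that its modules import.
import importlib, subprocess, sys
PACKAGES = {
    # "pip name" (extras can be given as "name[extra]"): importable modules
    "requests": ["requests"],
}
def check(package, modules):
    subprocess.run([sys.executable, "-m", "pip", "install", package], check=True)
    for module in modules:
        importlib.import_module(module)  # raises ImportError on failure
if __name__ == "__main__":
    for package, modules in PACKAGES.items():
        check(package, modules)
    print("all packages installed and imported successfully")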
Our documentation has also been updated to reflect this change.
We now include two tables on each page.
One table provides information about the software that can be installed from PyPi, and so is not included in rpm form in Fedora.
The other provides information about the software that continues to be included in Fedora, because it cannot be easily installed from PyPi directly.
We will continue reporting issues to upstream developers as we have done before.
The difference now is that we work directly with what they publish, rather than our rpm packaged versions of what they publish.
You can follow this discussion and progress here.
Please refer to the lists in the documentation for the up to date list of packages we include/test.
After the Redis license change, most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems a serious one and was chosen as a replacement.
So starting with Fedora 41 or Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), redis is no longer available, but valkey is.
With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so Redis is back as an open-source project, but a lot of users have already switched and want to keep Valkey.
RPMs of Valkey version 8.1.3 are available in the remi-modular repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
So you now have the choice between Redis and Valkey.
1. Installation
Packages are available in the valkey:remi-8.1 module stream.
2. Modules
The Valkey module packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The Modules are automatically loaded after installation and service (re)start.
Some modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a set of modules, requiring some packaging changes already proposed for the Fedora official repository.
Redis may be proposed for reintegration and return to the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
So users will have the choice and can even use both.
ℹ️ Notice: Enterprise Linux 10.0 and Fedora have valkey 8.0 in their repositories. Fedora 43 will have valkey 8.1. CentOS Stream 9 also has valkey 8.0, so it should be part of EL-9.7.
Apache provides a pretty standard screen to display directory contents if you do not configure any mods. We post artifacts up to a local server that I later need to download. Here are my hacky notes using command line utilities. I probably will convert this to python next.
If you download a default page using curl you get something like this:
Since they are sorted by time, we can grab the first one on the list (assuming we want the latest, which I do). First, download it to a file so we don’t have to wait for a download each iteration.
You can use XPath to navigate most of the html. At first, I kinda gave up and used awk, but suspected I would get a cleaner solution with just XPath if I stayed on it. And I did:
I don’t think XPath will remove the quotes or the href, but I can deal with that.
Here is the same logic in Python:
#!/bin/python3
import requests
from lxml import etree
url = "https://yourserver.net/file/"
response = requests.get(url)
html_content = response.text
# Parse the HTML content using lxml
tree = etree.HTML(html_content)
# Use XPath to select elements
# Select the href of the first link following the "Parent Directory" entry
val = tree.xpath("//html/body/pre/a[text()='Parent Directory']/following::a[1]/@href")[0]
print(val)
#To keep going down the tree...
release_url=url + val
response = requests.get(release_url)
html_content = response.text
print(html_content)
LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.
First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.
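As a conceptual sketch (not real firmware code, and it skips the actual signature verification along the chain), the db/dbx decision described above boils down to something like this, with certificates represented as plain strings and purely hypothetical names:
#!/usr/bin/python3
# Model of the db/dbx trust decision only.
def is_trusted(binary_hash, chain, db, dbx):
    if binary_hash in dbx:
        return False  # the binary itself has been revoked
    if any(cert in dbx for cert in chain):
        return False  # an intermediate (or root) in the chain has been revoked
    # the firmware doesn't care how many intermediates there are, only that
    # the chain leads back to something in db
    return any(cert in db for cert in chain)
# hypothetical example: a binary signed via an intermediate chaining to a db root
db = {"Vendor UEFI Root CA"}
dbx = set()
chain = ["Some Signing Intermediate", "Vendor UEFI Root CA"]
print(is_trusted("sha256:example-binary", chain, db, dbx))  # True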
That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.
What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.
This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.
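As an aside, you can list the roots your own system trusts, and when they expire, with something along these lines (assuming mokutil is installed):
mokutil --db | grep -E 'Subject:|Not After'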
The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.
First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?
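One way to look at that chain yourself, assuming osslsigncode and openssl are installed and using the typical Fedora shim path:
osslsigncode extract-signature -in /boot/efi/EFI/fedora/shimx64.efi -out shim-sig.p7b
openssl pkcs7 -inform DER -in shim-sig.p7b -print_certs -text -noout | grep -E 'Subject:|Not (Before|After)'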
Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.
The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.
So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?
Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.
How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.
If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.
Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.
[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)
[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays
[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in the 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.
[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device
[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then
[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim
[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs
The JIT-enabled Firefox 128.13.0 ESR has been built for Fedora/ppc64le in my Talos COPR repository. It took longer than expected because of some COPR infra issues (builds have been timing out), but all F-41, F-42 and Rawhide builds are available now. The corresponding sources are in my fork of the official Fedora Firefox package and the JIT is coming from Cameron's work.
❝PDFs were created as a way to give a document an absolute, invariable design suitable for PRINT. It was never meant to be how we consumed documents on a screen.❞
And I must add:
We, the data professionals, hate PDFs. They might look good and structured to your human eyes, but the data inside them is a mess: unstructured and not suitable for processing by computer programs.
Although we still haven’t reached an agreement on ubiquitous formats, here are some better options:
ePub (which is basically packaged HTML + CSS + images) for long text such as articles, T&Cs or contracts. ePub is usually associated with books, but I hope it can be popularized for other uses, given its versatility.
YAML, JSON, XML including digital signatures as JWS, for structured data such as government issued documents.
SVG (Scalable Vector Graphics, which is an XML application) for high quality graphics, including paged and interactive content, such as exported presentation slides.
MPEG-4 for interactive sequences of images, including dynamic animations and SVG with JavaScript, for content such as slide shows. Although MPEG-4 is usually associated with video, it can do much more than that. Player support is extremely weak for these other possibilities, though.
SQLite for pure tabular and relational data. The SQLite engine is now ubiquitous, present in every browser and on every platform you can think of.
Eventually I want to be able to smoothly recall the chords at speed. However, when memorizing them, it helps to have a series of mnemonics, and to chunk them together. Just as you practice a song slow before you play it fast, you memorize the chords slow.
Here is my analysis of “All the Things You Are.”
Notation:
if there is no modifier on the chord, I mean a Maj7.
A minus sign means a minor 7th.
A lowercase o means diminished
A 0 means minor 7 flat 5
G- | C7 | F7 | Bb
Eb | E- A7| D | D
D- | G- | C | F
Bb | B- E7| A | A
B- | E7 | A | A
Ab- | Db7 | Gb | D7b13
G- | C7 | F7 | Bb
Eb | Eb- | D- | Dbo
C- | F7 | Bb | A0 D7b9
Start by noting that the whole thing is in the key of Bb and that is the landing chord, one measure before the end; the last measure is the turnaround that brings you back to the top and would not be played on the last iteration.
This song is all about moving in fourths. Remember that a ii-V-I progression is a series of fourths; that sequence is used a lot in this tune.
To chunk this, start at the line level: ii-V7 in the key of F, V7-I in Bb, the key of the tune. This is all set up to establish the tonal center. Essentially, it starts on the G-, whose root is the fifth of... no, let's keep it simple: each chord is a fourth from the one preceding it. To remember it, focus on the fact that each chord is designed to lead to the next, and resolve to the Bb (but not stay there). Finger the G, C, F and Bb keys as you would play them.
To transition to the next line, note that the Eb is also a fourth from the chord before it, and it targets the D major at the end of the line. It does this by chromatically stepping up to the E and using a ii-V-I. This is going to give a not-quite-resolved feeling (again) when you land on the D.
The transition from the second to third line is a move to the parallel minor. The root note stays the same, as does the fifth of the chord, but the third and the seventh both change, and these are the important notes for soloing. The difference in the scale you would play over the D major versus the D minor is that the F# and C# move to F natural and C natural.
The pattern is now the same as the first two lines…but starting on D minor. All of the relationships between the chords are the same, but down a fourth. The target chord here is now the A.
To transition to the bridge, we stay in the key of A, and, in standard jazz form, run it through a ii-V-I sequence for four bars. The sixth line is a half step down from the A to the Ab.
This leaves only the last measure of the line as a standalone chord. Tonality-wise, we need to move from a Gb to an F, which is a half step down. But since we have a measure to do it in, we need to somehow prolong the transition. The jump from Gb to the D is one of the few transitions that is not along the cycle of fourths. The best thing to note is that the Gb can be rewritten as an F#, and that is the third of the D. The flat 13th of the D is the Bb/A#, which is the third of the Gb. So you keep two of the notes of the chord constant while moving the root.
To transition back to the main theme: the D leads to the G- along the cycle of fourths. This is also a return to the beginning of the song, and this and the following four chords are identical to the start of the song.
Where things change is the seventh line, and it is this variation that makes the song different from most “song-form” songs (songs that follow the A-A-B-A pattern): it has an extra four bars in the last A section. You can think of the eighth line as that extra four bars. It is a descending line: Eb to Eb minor, then down by half steps through D minor and Db diminished. All of this targets the C minor as the ii of the final ii-V-I of the song, leading to the final resolution to Bb.
The last two bars are the turnaround, targeting a G minor. The A0 (minor 7 flat 5) is still in the key of Bb: it is built off the major 7th, and usually you play the Locrian mode over it. The D7b9 can be played with the same set of notes. This is a ii-V in G minor. G minor is the relative (not parallel) minor of Bb.
OK, that is a lot to think about. But again, you can chunk it:
Start on Gminor.
5 measures of cycle of fourths
1/2 step up ii-V-I.
Stay in that key, 5 measures of cycle of fourths,
1/2 step up ii-V-I. Stay in that key for the bridge 4 bars.
Half step up ii-V I (kinda) for the rest of the bridge.
Weird Transition back to Gminor.
5 measures of cycle of fourths.
Move to parallel minor.
Chromatic walkdown for 4 bars.
ii-V-I.
Turnaround.
That should be enough of a structure to help you recall the chords as you try to run through them mentally.
I have been working on memorizing chords for a bunch of Standards and originals. It helps tremendously. A couple things that have worked for me:
Focus on the roots first. There are usually patterns to the ways the roots move: ii-V and Whole steps up or down are the most common.
Think in terms of Key-centers. Often, the bridges are simpler chord sequences, all in one key. A long series of ii-V-I will chunk these together.
Keep an eye on the smaller ii-Vs that lead to key changes. These will help me tie the chunks together.
The goal is to solo over the tunes while playing saxophone. Finger the roots as I go through the progression. Instead of (or as well as) thinking “D-G”, air-finger all 6 fingers down and then just the three left hand fingers…it gives me an additional channel of memorization.
Run through the changes in my head, without looking at the sheet music. Chunk series of changes, 2 to 4 bars at a time.
Whistle the melody line, and air-finger the roots of the changes.
Once I have it down, I listen to my favorite version of the song, and keep the pattern of the changes running through my head. I will restart many times to keep track, especially with Bebop.
Play through the changes, but SLOW… start with playing just roots, then gradually expand out to thirds, fifths and sevenths, but make sure I keep track of the changes. I use iReal Pro or MuseScore for this. Stop the player if I get lost.
In my last post on
Fedora’s signing infrastructure, I ended with some protocol changes I would be
interested in making. Over the last few weeks, I’ve tried them all in a
proof-of-concept project and I’m fairly satisfied with most of them. In this
post I’ll cover the details of the new protocol, as well as what’s next.
Sigul Protocol 2.0
The major change to the protocol, as I mentioned in the last post, is that all
communication between the client and the server happens over the nested TLS
session. Since the bridge cannot see any of the traffic, its role is reduced
significantly and it is now a proxy server that requires client authentication.
I considered an approach where there was no communication from the client or
server to the bridge directly (beyond the TLS handshake). However, adding a
handshake between the bridge and the server/client makes it possible for all
three to share a few pieces of data useful for debugging and error cases. While
it feels a little silly to add this complexity primarily for debugging
purposes, the services are designed to be isolated and there’s no opportunity
for live debugging.
The handshake
After a server or client connects to the bridge, it sends a message in the
outer TLS session to the bridge. The message contains the protocol version the
server/client will use on the connection, as well as its role (client or
server). The bridge listens on two different ports for client and server
connections, so the role is only included to catch mis-configurations where the
server connects to the client port or vice versa.
The bridge responds with a message that includes a status code to indicate it
accepts the connection (or not), and a UUID to identify the connection.
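Purely as an illustration, the exchange could look something like this on the wire (the
real field names and encoding are whatever the proof-of-concept defines, so treat these
as placeholders):
client/server to bridge:  {"version": 2, "role": "client"}
bridge to client/server:  {"status": "ok", "connection_uuid": "0b9d2a6e-5f41-4c3a-9a77-2d8f6e1c4b10"}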
The bridge sends the same UUID to both the client and server so it can be used
to identify the inner TLS session on the client, bridge, and server. This makes
it easy to, for example, collect logs from all three services for a connection.
It also can be used during development with OpenTelemetry to trace a
request across all
three services.
Reasons the bridge might reject the connection include (but are not limited
to): the protocol version is not supported, the role the connection announced
is not the correct role for the port it connected to, or the client certificate
does not include a valid Common Name field (which is used for the username).
After the bridge responds with an “OK” status, servers accept the incoming inner
TLS session on the socket, and clients connect. All further communication is
opaque to the bridge.
Client/Server
In the last blog post, I discussed requests and responses being JSON
dictionaries, and content that needed to be signed could be base64 encoded, but
I didn’t go into the details of RPM header signatures. While I’d still love to
have everything be JSON, after examining the size of RPM headers, I opted to
use a request/response format closer to what Sigul 1.2 currently uses. The
reason is that base64-encoding data increases its size by around 33%. After
poking at a few RPMs with the
rpm-head-signing tool, I
concluded that headers were still too large (cloud-init’s headers were hundreds
of kilobytes) to pay a 33% tax for a slightly simpler protocol.
So, the way a request or response works is like this: first, a frame is sent.
This is two unsigned 64-bit integers. The first u64 is the size of the JSON,
and the second is the size of the arbitrary binary payload that follows it.
What that binary is depends on the command specified in the JSON. The
alternative was to send a single u64 describing the JSON size alone, so the
added complexity here is minimal. The binary size can also be 0 for commands
that don’t use it, just like the Sigul 1.2 protocol.
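To make the framing concrete, here is a minimal sketch in Python of how such a frame
could be written and read (this is illustrative only, not the actual implementation; in
particular, big-endian byte order is my assumption, and "sock" stands for the inner TLS
stream):
import json
import struct

def send_message(sock, command: dict, binary: bytes = b"") -> None:
    # Frame: two u64s (JSON size, binary payload size), then the JSON, then the binary.
    payload = json.dumps(command).encode("utf-8")
    sock.sendall(struct.pack("!QQ", len(payload), len(binary)))
    sock.sendall(payload)
    if binary:
        sock.sendall(binary)

def recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection mid-frame")
        buf += chunk
    return buf

def recv_message(sock):
    json_size, binary_size = struct.unpack("!QQ", recv_exact(sock, 16))
    command = json.loads(recv_exact(sock, json_size))
    binary = recv_exact(sock, binary_size) if binary_size else b""
    return command, binary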
Unlike Sigul 1.2, none of the messages need HMAC signatures since they all
happen inside the inner TLS session. Additionally, the protocol does not allow
any further communication on the outer TLS session, so the implementation to
parse the incoming requests is delightfully straightforward.
Authentication
Both the client and server authenticate to the bridge using TLS certificates.
Usernames are provided in the Common Name field of the client certificate. For
servers, this is not terribly useful, although the bridge could have an
allowlist of names that can connect to the server socket.
Clients and servers also mutually authenticate via TLS certificates on the
inner TLS session. Technically, the server and client could use a separate set
of TLS certificates for authentication, but in the current proof-of-concept it
uses the same certificates for both authenticating with the bridge and with the
server/client. I’m not sure there’s any benefit to introducing additional sets
of certificates, either.
For clients, commands no longer need to include the “user”: it’s pulled from
the certificate by the server. Additionally, there are no user passwords. Users
that want a password in addition to their client certificate can encrypt their
client key with a password. If this doesn’t sit well with users (mostly Fedora
Infrastructure) we can add passwords back, of course.
Users do still need to exist in the database or their requests will be
rejected, so administrators will need to create the users via the command-line
interface on the server or via a client authenticated as an admin user.
Proof of concept
Everything I’ve described has been implemented in my proof-of-concept
branch. I’m in
the process of cleaning it up to be merge-able, and command-line interfaces
still need to be added for the bridge and client. There’s plenty of TODOs still
sprinkled around, and I expect I’ll reorganize the APIs a bit. I’ve also made
no effort to make things fast. Still, it will happily process hundreds or
thousands of connections concurrently. I plan to add benchmarks to the test
suite prior to merging this so I can be a little less handwave-y about how
capable the service is.
What’s next?
Now that the protocol changes are done (although there’s still time to tweak
things), it’s time to turn our attention to what we actually want to do:
managing keys and signing things.
Management
In the current Sigul implementation, some management is possible remotely using
commands, while some are only possible from on the server. One thing I would
like to consider is moving much of the uncommon or destructive management tasks
to be done locally on the server rather than exposing them as remote commands.
These tasks could include:
All user management (adding, removing, altering)
Removing signing keys
Users could still do any non-destructive actions remotely. This includes
creating signing keys, read operations on users and keys, granting and revoking
access to keys for other users, and of course signing content.
Moving this to be local to the server makes the permission model for requests
simpler. I would love feedback on whether this would be inconvenient from
anyone in the Fedora Release Engineering or Infrastructure teams.
Signing
When Sigul was first written, Fedora was not producing so many different kinds
of things that needed signatures. Mostly, it was RPMs. Since then, many other
things have needed signatures. For some types of content (containers, for
example), we still don’t sign them. For other types, they aren’t signed in an
automated fashion.
Even the RPM case needs to be reevaluated. RPM 6.0 is on the horizon with a new
format that supports multiple signatures - something we will want in order to support
post-quantum algorithms. However, we still need to sign the old v4 format for
EPEL 8.0. We probably don’t want to continue to shell out to rpmsign on the
server.
Ideally, the server should not need to be aware of all these details. Instead,
we can push that complexity to the client and offer a few signature types. We
can, for example, produce SecureBoot signatures and container signatures the
same way. I need to
understand the various signature formats and specifications, as well as the
content we sign. The goal is to offer broad support without constantly having
to teach the server about new types of content.
Comments and Feedback
Thoughts, comments, or feedback greatly welcomed on Mastodon
Recently, several people have asked me about the syslog-ng project’s view on AI. In short, there is cautious optimism: we embrace AI, but it does not take over any critical tasks from humans. But what does this mean for syslog-ng?
Well, it means that syslog-ng code is still written by humans. This does not mean that we do not use AI tools at all, but we do not use AI tools to write code for two reasons.
Firstly, this is because of licensing. The syslog-ng source code uses a combination of GPLv2 and LGPLv2.1, and there are no guarantees that the code generated by AI tools would be compliant with the licensing we use. The second reason is code quality. Syslog-ng is built on high-performance C code and is used in highly secure environments. So even if the code generated by AI worked, there would be no guarantee that it was also efficient and secure. And optimizing and securing code later takes a lot more effort than writing it with those principles in mind from scratch.
Writing portable code is also difficult. X86_64 Linux is just one platform out of the many supported by syslog-ng. Additional platforms like ARM, RISC-V, POWER, s390 and many others are also supported, which means that the code also needs to run on big-endian machines. And not just on Linux, but also on macOS, FreeBSD and others.
So, where do we use AI tools, then? Well, we have tons of automated test cases and humans review each pull request before they are merged. However, various tools also analyze our code regularly both on GitHub and internally, and said tools give us recommendations on how we could improve code quality and security. Needless to say, they raise many false alarms. Still, it’s good to have them, as sometimes they spot valid problems. However, the decision on whether to apply their recommendations is always made by humans, as AI tools often do not have a full understanding of the code.
The other cornerstone of syslog-ng is its documentation. Some of our most active users decided to use syslog-ng because of the quality of its documentation, written by humans. Because of this, we also plan to use AI to help users find information in the documentation. We already had a proof-of-concept running on a laptop where users could ask questions of an AI tool and find information a lot quicker that way than by browsing the documentation.
It is not directly related to syslog-ng development, but I have learned just recently that sequence-rtg, the tool I use to generate syslog-ng PatternDB rules based on a large number of log messages, is also considered to be an AI tool by most definitions, even though its documentation never mentions AI. I guess this is probably because it was born before AI became an important buzzword… :-) You can learn more about sequence and how I used it at https://www.syslog-ng.com/community/b/blog/posts/sequence-making-patterndb-creation-for-syslog-ng-easier
So TL;DR: We use AI. Less than AI fanatics would love to see, but more than what AI haters can likely accept. We are on a safe middle ground, where AI does not replace our work, but rather augments it.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I own a Prusa Core One. Lately I had to print several things for my car (a camper van) in ASA. If you wonder about the reason: ASA is heat resistant, and temperatures can get quite high in a car.
ASA is really hard to print successfully. The reason is that it contracts when it cools. If you have an enclosed printer, you normally heat the chamber with the print bed, so the heating comes from below. The more area of the heat bed you cover with your print, the cooler it will get.
Print sheet lifted off the print bed (contraction forces).
As soon as you print flat surfaces especially, the middle of the surface cools down faster than the edges. If you print a box, the walls will also shield the heat from the print bed and you get even more cooling inside. More cooling means more contraction. Sooner or later your print will start to bend and lift off either the print sheet or the print bed. The only way to avoid that would be to have 50°C warm air pushing down on your print from above. I haven’t seen a solution for that yet.
If you have a five hour print job, you want your print to lift off as late as possible. Small jobs you can print successfully, but big jobs like 5×4 H10 Gridfinity boxes or something like that are nearly impossible to print 100% successfully.
However, here are all my tricks for mastering ABS/ASA:
Prerequisites
Satin Powder-coated Print Sheet
Magigoo Glue for PC (it's the only one that works at temperatures over 100°C)
10 mm dia × 5 mm N52 neodymium magnets
Masking tape (Krepp)
Slicer Settings
Infill: Gyroid 10% (This works the best and prints fast)
Outer brim: width 3 mm
For larger and longer prints which cover quite some area on the print sheet, after slicing add a “Pause” after 5 mm of printing. That would be, depending on layer height:
0.25mm = Pause at layer 21
0.20mm = Pause at layer 26
0.15mm = Pause at layer 35
Preparing the print
Clean the print sheet, put Magigoo on it and place it in the printer
Auto home the printer
Move the print bed to 40 mm
Select Custom Preheat and preheat to 120°C
You can use a hair dryer to heat the chamber below the print bed. BE CAREFUL! Do not point it at the door or it will melt. Make sure it sucks air from outside.
Close the door
Cover the gaps of the upper half of the printer with tape. The filtration system needs to suck air into the chamber. We want it to do that from below the print bed. See picture below.
Set the filtration fan to 35%! This is important or it will remove too much heat from the chamber.
Prusa Core One with taped gaps
The Print
Start the print; it will take some time to absorb heat, but it warms up to ~50°C. I have a thermometer at the top of the chamber inside. It normally reads ~45°C when it starts to print.
In the slicer you can see how long it takes until you reach the Pause you set. Once the printer starts, set a timer that reminds you when it will pause.
Adding Magnets
The Magigoo glue is very strong. If the contraction forces get too high, the print sheet will lift off the print bed. We can try to prevent that, or at least postpone the lift-off, by adding magnets around our model once the printer pauses.
Additional magnets to hold down the print sheet
Even with the added magnets from above: after 4 hours of printing, the contraction forces got so high that they lifted the print sheet off the print bed on both sides.
Let me know in the comments if you have a trick I don’t know yet!
Greaseweazle is a USB device and host tools allowing versatile floppy drive control, which comes in very handy when working with old computers. And as usual, I prefer software on my systems to be installed in the form of packages, so I have spent some time preparing greaseweazle RPMs; they are now available from my COPR repository. The repo also includes the latest python-bitarray, because version 3 or newer is needed by greaseweazle and it hasn't been updated in Fedora yet. The tools, written in Python, seem to work OK on my x86_64 laptop, but fail on my ppc64le workstation. Probably some serial port / terminal handling goes wrong in the pyserial library. There have been discussions about terminal handling and its PowerPC specifics on the glibc development mailing list for a month or two now. I need to keep digging into it ...
I recently found, under the rain, next to a book swap box, a pile of 90's “software magazines” which I spent my evenings cleaning, drying, and sorting in the days afterwards.
In the 90s, this written-press subsidy scheme was used by Diamond Editions[1] (a publisher related to tech shop Pearl, which French and German computer enthusiasts probably know) to publish magazines with just enough original text to qualify for those subsidies, bundled with the really interesting part, a piece of software on CD.
Some publishers (including the infamous and now shuttered Éditions Atlas) will even get you a cheap kickstart to a new collection, with the first few issues (and collectibles) available at very interesting prices of a couple of euros, before making that “magazine” subscription-only, with each issue becoming increasingly expensive (article from a consumer protection association).
Other publishers have followed suit.
I guess you can only imagine how much your scale model would end up costing with that business model (50 eurocent for the first part, 4.99€ for the second), although I would expect them to have given up the idea of being categorised as “written press”.
To go back to Diamond Editions, this meant the eventual birth of 3 magazines: Presqu'Offert, BestSellerGames and StratéJ. I remember me or my dad buying a few of those: an older but legit and complete version of ClarisWorks or CorelDraw, or a talkie version of a LucasArts point'n'click, was certainly a more interesting proposition than a cut-down warez version full of viruses when the budget was tight.
3 of the magazines I managed to rescue from the rain
This brings us back to today: while the magazines are still waiting for scanning, I tried to get a wee bit organised and start digitising the CDs.
Some of them have printing that covers the whole of the CD; a fair few use the foil/aluminium backing of the CD as a blank surface, which will give you pretty bad results when scanning them with a flatbed scanner: the light source keeps moving with the sensor, and what you'll be scanning is the sensor's reflection on the CD.
My workaround for this is to use a digital camera (my phone's 24MP camera), with a white foam board behind it, so the blank parts appear more light grey. Of course, this means that you need to take the picture from an angle, and that the CD will appear as an oval instead of perfectly circular.
I tried for a while to use GIMP's perspective tools, and “Multimedia” Mike Melanson's MobyCAIRO rotation and cropping tool. In the end, I settled on Darktable, which allowed me to do 4-point perspective deskewing; I just had to have those reference points.
So I came up with a simple "deskew" template, which you can print yourself, although you could probably achieve similar results with grid paper.
My janky setup
The resulting picture
After opening your photo with Darktable and selecting the “darkroom” tab, go to the “rotate and perspective” tool, select the “manually defined rectangle” structure, and adjust the rectangle to match the centers of the 4 deskewing targets. Then click on “horizontal/vertical fit”. This will give you a squished CD; don't worry, select the “specific” lens model and voilà.
Tools at the ready
Targets acquired
Straightened but squished
You can now export the processed image (I usually use PNG to avoid data loss at each step), open things up in GIMP and use the ellipse selection tool to remove the background (don't forget the center hole), the rotate tool to make the writing straight, and the crop tool to crop it to size.
And we're done!
The result of this example is available on Archive.org, with the rest of my uploads being made available on Archive.org and Abandonware-Magazines for those 90s magazines and their accompanying CDs.
[1]: Full disclosure, I wrote a couple of articles for Linux Pratique and Linux Magazine France in the early 2000s, that were edited by that same company.
One of the datacenters we have machines in is reorganizing things and
is moving us to a new vlan. This is just fine; it allows more separation,
consolidates IPs, and makes sense.
So, they added tagging for the new vlan to our switch there, and this week
I set up things so we could have a separate bridge on that vlan. There
was a bit of a hiccup with ipv6 routing, but that was quickly fixed.
Now we should be able to move VMs as we like to the new vlan by just changing
their IP address and moving them to use the new bridge instead of the old.
Once everything is moved, we can drop the old bridge and be all on
the new one.
IPV6 now live in main datacenter
Speaking of ipv6, we added ipv6 AAAA records to our new datacenter machines/applications
on Thursday. Finally, ipv6 users should be able to reach some services
that were not available before. Please let us know if you see any problems
with it, or notice any services that are not yet enabled.
Fun with bonding / LACP 802.3ad
Machines in our new datacenter use two bonded interfaces. This allows us to
reboot/upgrade switches and/or continue to operate normally if a switch dies.
This is great, but when initially provisioning a machine you want to be able
to do so on one interface until it's installed and the bonding setup is
configured. So, one of the two interfaces is set to allow this. On our x86_64
machines, it's always the lower numbered/sorting first interface. However,
this week, when I couldn't get any aarch64 machines to pxe boot, I found
that on those machines the initial live interface is THE HIGHER numbered one.
Frustrating, but easy to work with once you figure it out.
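For reference, the kind of bonded setup described above can be created with
NetworkManager roughly like this (a sketch with made-up interface names and
options, not our actual provisioning config):
nmcli con add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad,miimon=100"
nmcli con add type ethernet ifname eno1 con-name bond0-port1 master bond0
nmcli con add type ethernet ifname eno2 con-name bond0-port2 master bond0
nmcli con up bond0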
Mass rebuild
The f43 mass rebuild was started Wednesday per the schedule. There was a bit
of a hiccup Thursday morning as the koji DB restarted and stopped the
mass rebuild script from submitting, but things resumed after that.
Everything pretty much finished Friday night. This is a good deal faster than
in the past where it usually took until Sunday afternoon.
I expect we will be merging it into rawhide very soon, so look for...
everything to update.
Most Linux distributions chose to drop it from their repositories. Various forks exist; Valkey seems a serious one and was chosen as a replacement.
So starting with Fedora 41 and Enterprise Linux 10 (CentOS, RHEL, AlmaLinux, RockyLinux...), Redis is no longer available, but Valkey is.
With version 8.0, Redis Labs chose to switch to the AGPLv3 license, and so Redis is back as an open source project.
RPMs of Redis version 8.0.3 are available in the remi-modular repository for Fedora ≥ 40 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
1. Installation
Packages are available in the redis:remi-8.0 module stream.
These packages are weak dependencies of Redis, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are automatically loaded after installation and service (re)start.
The modules are not available for Enterprise Linux 8.
3. Future
Valkey also provides a similar set of modules, requiring some packaging changes already proposed for the Fedora official repository.
Redis may be proposed for unretirement and be back in the Fedora official repository, by me if I find enough motivation and energy, or by someone else.
I may also try to solve packaging issues for other modules (e.g. RediSearch). For now, the module packages are very far from the Packaging Guidelines, so obviously not ready for review.
One of my pet peeves around running local LLMs and inferencing is the sheer mountain of shit^W^W^W complexity of compute stacks needed to run any of this stuff in an mostly optimal way on a piece of hardware.
CUDA, ROCm, and Intel oneAPI all to my mind scream over-engineering on a massive scale at least for a single task like inferencing. The combination of closed source, over the wall open source, and open source that is insurmountable for anyone to support or fix outside the vendor, screams that there has to be a simpler way. Combine that with the pytorch ecosystem and insanity of deploying python and I get a bit unstuck.
What can be done about it?
llama.cpp to me seems like the best answer to the problem at present, (a rust version would be a personal preference, but can't have everything). I like how ramalama wraps llama.cpp to provide a sane container interface, but I'd like to eventually get to the point where container complexity for a GPU compute stack isn't really needed except for exceptional cases.
On the compute stack side, Vulkan exposes most features of GPU hardware in a possibly suboptimal way, but with extensions all can be forgiven. Jeff Bolz from NVIDIA's talk at Vulkanised 2025 started to give me hope that maybe the dream was possible.
The main issue I have is Jeff is writing driver code for the NVIDIA proprietary vulkan driver which reduces complexity but doesn't solve my open source problem.
Enter NVK, the open source driver for NVIDIA GPUs. Karol Herbst and myself are taking a look at closing the feature gap with the proprietary one. For mesa 25.2 the initial support for VK_KHR_cooperative_matrix was landed, along with some optimisations, but there is a bunch of work to get VK_NV_cooperative_matrix2 and a truckload of compiler optimisations to catch up with NVIDIA.
But since mesa 25.2 was coming soon I wanted to try and get some baseline figures out.
I benchmarked on two systems (because my AMD 7900XT wouldn't fit in the case), both with Ryzen CPUs. In the first system I put an RTX 5080, then an RTX 6000 Ada, and then the Intel A770. The second I used for the RX 7900 XT. The Intel SYCL stack unfortunately failed to launch inside ramalama, and I hacked llama.cpp to use the A770 MMA accelerators.
I picked this model at random, and I've no idea if it was a good idea.
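Benchmarks like these are typically gathered with llama.cpp's llama-bench tool, along these lines (a sketch: the model path is a placeholder and the flags are the common pp512/tg128 defaults, not necessarily exactly what I ran):
# build llama.cpp with the Vulkan backend
cmake -B build -DGGML_VULKAN=ON && cmake --build build -j
# prompt processing (pp512) and token generation (tg128), all layers offloaded to the GPU
./build/bin/llama-bench -m ./some-model.gguf -p 512 -n 128 -ngl 99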
Some analysis:
The token generation workload is a lot less matmul heavy than prompt processing, and it also does a lot more synchronising. Jeff has stated that CUDA wins here mostly due to CUDA graphs, and most of the work needed is operation fusion on the llama.cpp side. Prompt processing is a lot more matmul heavy; extensions like NV_coopmat2 will help with that (NVIDIA's Vulkan driver already uses it in the above), but there may be further work needed to help close the CUDA gap. On AMD, radv (the open source Vulkan driver) is already better at token generation than ROCm, but behind in prompt processing. Again, coopmat2-like extensions should help close the gap there.
NVK is starting from a fair way behind, we just pushed support for the most basic coopmat extension and we know there is a long way to go, but I think most of it is achievable as we move forward and I hope to update with new scores on a semi regular basis. We also know we can definitely close the gap on the NVIDIA proprietary Vulkan driver if we apply enough elbow grease and register allocation :-)
I think it might also be worth putting some effort into radv coopmat2 support; I think if radv could overtake ROCm for both of these it would remove a large piece of complexity from the basic user's stack.
As for Intel I've no real idea, I hope to get their SYCL implementation up and running, and maybe I should try and get my hands on a B580 card as a better baseline. When I had SYCL running once before I kinda remember it being 2-4x the vulkan driver, but there's been development on both sides.
(The graphs were generated by Gemini.)
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114999790880878918