Fedora is a trademark of Red Hat, Inc. Fedora is an operating system built by volunteers around the world. This page is provided so that independent volunteers can showcase their contributions to Fedora and Free Software in general. Official Fedora Download page.
The Fedora Project is proposing a new contributor status called “Fedora Verified” to better recognize all forms of community contribution, and we need your feedback. Following the Fedora Council 2026 Strategy Summit, Fedora leadership is reflecting on how we recognize, support, and empower the people who make Fedora possible. Please read through our proposal below and share your thoughts in the Fedora Verified community survey.
As the global open source community grows, the Fedora Project needs to ensure that our systems for recognizing contributors keep pace. Historically, open source recognition has leaned heavily on easily quantifiable measures such as git repository commits and pull requests. But Fedora is built on much more than just code. We want to implement a more human-centered approach that equally values all forms of contribution, including mentoring, documentation, design, event organization, and community support.
To help us get there, we are proposing a new contributor status called “Fedora Verified” (Name TBD – feedback welcome!). But before we finalize this model, we need your feedback.
“Fedora Verified” is a proposed membership-driven approach for the Fedora Account System that distinguishes highly engaged, committed contributors from tens of thousands of standard registered accounts.
How is “Fedora Verified” different from a standard account? Anyone can create a new account in the Fedora Account System (FAS) to begin their journey, file bugs, or make initial contributions. A FAS account is the equivalent of a digital passport to access various Fedora-hosted applications and services for users and contributors alike. “Fedora Verified” represents the next step: a mutual commitment between the contributor and the project, recognizing a sustained track record of positive impact and adherence to our core principles as a community: the Four Foundations (Freedom, Friends, Features, First).
What are the proposed benefits? The primary motivation behind “Fedora Verified” is to build trust-based recognition that grants elevated, privileged rights within the project. Most notably, this status would determine eligibility for strategic governance activities, such as:
To ensure fairness and transparency, we are proposing a set of baseline metrics that a contributor must meet before their request for “Fedora Verified” status goes to a human review. The proposed baseline includes:
While we have a framework, there are several major questions we need the community to answer before we move forward. Specifically, we want to know:
We want to make sure this proposed membership model is fair, sustainable, and truly represents what our contributors value. Your feedback will directly influence how this policy is drafted and implemented.
Take the Fedora Verified Community Survey!
The survey will be open until Sunday, 5th May 2026 at 23:59 UTC. Thank you for taking the time to share your perspective, and for everything you do to make Fedora an amazing community!
The post Fedora Verified: Help Shape a New Way to Recognize Fedora Contributors appeared first on Fedora Community Blog.
I am presently experimenting with flowtable, a software-based routing-offload feature of nftables that I, as a long-time iptables user, am not familiar with. I haven't yet had a chance to measure the performance of this configuration, but I am using the commands below to set it up in my firewall:
nft add flowtable ip filter fast "{ hook ingress priority 0; devices = { eth0, eth1 }; counter; }"
nft add rule ip filter FORWARD iifname "eth0" oifname "eth1" ct state "{ established, related }" counter flow add @fast
nft add rule ip filter FORWARD iifname "eth1" oifname "eth0" ct state "{ established, related }" counter flow add @fast
nft add rule ip filter FORWARD iifname "eth0" oifname "eth1" ct state "{ established, related }" counter accept
nft add rule ip filter FORWARD iifname "eth1" oifname "eth0" ct state "{ established, related }" counter accept
You will see some connections being tracked and offloaded with the conntrack -L command:
tcp 6 src=192.168.99.1 dst=1.2.3.4 sport=52077 dport=443 src=1.2.3.4 dst=10.10.10.2 sport=443 dport=52077 [OFFLOAD] mark=17 use=2
udp 17 src=192.168.99.1 dst=2.3.4.5 sport=53055 dport=4500 src=2.3.4.5 dst=10.10.10.2 sport=4500 dport=53055 [OFFLOAD] mark=17 use=2
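For reference, the one-shot commands above could also be expressed as a declarative nftables ruleset. This is an untested sketch assembled from those commands (table, chain, and flowtable names follow them; the chain declaration is an assumption, since the commands add rules to an existing FORWARD chain):

```
table ip filter {
    flowtable fast {
        hook ingress priority 0
        devices = { eth0, eth1 }
        counter
    }

    chain FORWARD {
        type filter hook forward priority 0; policy accept;
        iifname "eth0" oifname "eth1" ct state established,related counter flow add @fast
        iifname "eth1" oifname "eth0" ct state established,related counter flow add @fast
        iifname "eth0" oifname "eth1" ct state established,related counter accept
        iifname "eth1" oifname "eth0" ct state established,related counter accept
    }
}
```

A ruleset file like this would typically be loaded with `nft -f`, which applies it atomically rather than rule by rule.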
We’ll release Valgrind 3.27.0 later today. While making sure the NEWS file was up to date I wrote about all the contributions made this release.
Thanks all, and apologies if I missed something or someone.
Aaron Merey added two new options to helgrind.
To control helgrind tracing of internal synchronization, threading, and memory events, use --show-events=1|2|3.
Use --track-destroy=no|yes|all to check for missing pthread_mutex_destroy and pthread_rwlock_destroy calls. With yes, Helgrind warns when pthread_mutex_init or pthread_rwlock_init is called on the address of a live (undestroyed) lock. With all, Helgrind also reports undestroyed locks at process exit.
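As a usage sketch, the two new options can be combined on one command line (the option names come from the notes above; `./myprog` is a placeholder program name):

```
# Hypothetical invocation: trace events at level 2 and warn about
# locks that are re-initialized without being destroyed first.
valgrind --tool=helgrind --show-events=2 --track-destroy=yes ./myprog
```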
Valgrind has separate VEX IR translators for AMD64 and x86 (32 bit) code. While the AMD64 translator has seen support for new encodings and instruction sets, the x86 translator has not.
Alexandra Hájková decided to port the SSE4.1 instruction set from the AMD64 translator to the x86 translator and add backend support. This is ongoing work, see the bug dependency tree.
Even so, many more 32-bit programs using SSE4.1 should now run under Valgrind.
Andreas Arnez and Florian Krohm did a lot of work on the s390x support.
Andreas added support for new s390x z/Architecture features from the 15th edition. This enables running binaries compiled with -march=arch15 or -march=z17 and exploiting the new MSA extensions 10-13.
Florian Krohm integrated binutils objdump for s390x disassembly in VEX. And did a lot of s390x code and facilities cleanups.
s390x machine models older than z196 are no longer supported.
Andreas also showed there are still meaningful optimizations to be made on how memcheck tracks undefinedness bits as outlined in the original “Using Valgrind to detect undefined value errors with bit-precision” paper.
His optimization of memcheck instrumenting a bitwise AND/OR with a constant is clever and simplifies the generated code.
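The bit-precise propagation rule from that paper can be sketched in a few lines. This is an illustrative model, not Valgrind's actual implementation; the convention that a 1 bit in the "vbits" word means "undefined" follows the paper:

```python
# Illustrative model of Memcheck-style bit-precise undefinedness
# tracking for AND (not Valgrind's actual code).
# Convention: a 1 bit in *_vbits means "this bit is undefined".

def vbits_and(a_val, a_vbits, b_val, b_vbits):
    # A result bit of (a & b) is undefined when both operand bits are
    # undefined, or when one is undefined and the other's value is 1.
    # (A defined 0 in either operand forces a defined 0 result bit.)
    return (a_vbits & b_vbits) | (a_vbits & b_val) | (b_vbits & a_val)

# With a fully defined constant mask c (vbits == 0) this collapses to
#   result_vbits = a_vbits & c
# which is why AND with a constant can be instrumented so cheaply.
```

The dual rule for OR swaps the roles of 0 and 1: a defined 1 bit in either operand forces a defined 1 in the result.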
Martin Cermak maintains the Linux Test Project (LTP) Valgrind integration, which checks that our syscall wrappers work correctly, and he makes sure newer Linux syscalls are wrapped. Valgrind 3.27.0 adds support for file_getattr, file_setattr, lsm_get_self_attr, lsm_set_self_attr, and lsm_list_modules, and corrects various syscall and ioctl corner cases.
Martin also added Valgrind address space manager support for tracking Linux kernel lightweight guard pages, created through madvise(MADV_GUARD_INSTALL).
These guard pages have very low overhead in the kernel because they aren't tracked as separate VMAs and don't show up in the process's /proc maps. But Valgrind still needs to know whether the addresses are accessible. A new --max-guard-pages option controls how much memory Valgrind reserves for tracking these pages.
Paul Floyd had more commits than all others combined for this release. Paul takes care of the alternative toolchains, Solaris/illumos, FreeBSD and Darwin/MacOS ports.
He tested Oracle Solaris 11.4, OpenIndiana Hipster, and OmniOS.
FreeBSD works on both amd64 and arm64, support for 16.0-CURRENT has been added.
Supported macOS versions: 10.13 (bug fixes), 10.14, 10.15, 11.0 (Intel only), 12.0 (Intel only), and 13.0 (Intel only, preliminary). No arm64 support yet.
A lot of code in valgrind 3.27.0 to support MacOS was previously maintained by Louis Brunner out of tree.
There are two new client requests (macros defined in valgrind.h).
Another frozen week before the Fedora 44 release, just a few notable things:
OpenSSL 4 landed in Rawhide, caused some issues, and was then pulled back out by FESCo. We definitely do need to move to it for Fedora 45, but hopefully we can land it in a way that doesn't break as many things as this last time.
Folks are working on it and I expect we will see it soon.
We had an aarch64 builder virthost fail to reboot with memory errors a few weeks ago. We finally got someone onsite to pull and reseat all its memory, and that seems to have done the trick. We are back to a full complement of aarch64 builders again. Of course, we had enough that I doubt anyone actually noticed that some were down.
I also brought up 3 more big x86_64 builders. They should be added after the freeze, sometime soon. It's nice to have extra capacity there even though we aren't hurting for x86_64 builders.
Yesterday our wiki was up and down in the morning. Seems scrapers not only found the wiki, but also found that they could query time ranges for changes in Special:RecentChanges.
We put in some blocking and then increased a bunch of cpu on the backend and everything seems to be back to 'normal' now.
Until the next time...
I will be out on a family vacation next week. Our plane leaves super stupid early on Tuesday morning and I will be packing and such on Monday. So, please don't ping me: file tickets or ask others to take care of any Fedora issues you might have.
Hopefully when I am back we will be go for Fedora 44 release!
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.
Week: 13 – 17 Apr 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on introduction of https://forge.fedoraproject.org to Fedora
and migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping package things.
This team is working on improving user experience: providing artwork, usability,
and general design services to the Fedora Project.
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 16 appeared first on Fedora Community Blog.
ActBlue is the online fundraising platform used by US Democratic party candidates. It is the subject of a major scandal that has gripped Congress. It has been linked to Debianism and another disappearing developer; in a parody of other Debianism scandals, there are possibly two people using the same name, one being the wife of the missing developer and the other a US Senate candidate who claims to have exposed the ActBlue scandal.
These Github screenshots confirm that Decklin Foster was affiliated with ActBlue and vanished in 2018:
Accusations have been made about the concealment of illegal foreign donations and deception of Congress.
Chris Gleason has nominated to represent Florida in the US Senate. Gleason registered using a post office box and created a domain name, voteforgleason.com using an anonymous service in Iceland. Gleason's profile on X/Twitter has no photo while their Facebook profile is completely disabled.
A similar web site has been created at https://chris4florida.com/
The phone number on voteforgleason.com and chris4florida.com goes to a pharmacy rather than a campaign office.
Nonetheless, I was able to verify Christopher Gleason submitted a nomination that is registered with the state officials.
Gleason's web site tells us:
Chris Gleason built the forensic tools that exposed ActBlue's billion-dollar money laundering operation. His evidence ...
Therefore, the candidate Gleason is not a pharmacist.
So far, Chris appears to be male, intermittently using the name Christopher and masculine pronouns like "his".
At the height of the Debian suicide cluster, shortly before Adrian von Bidder-Senn died on our wedding day ( detailed report), another Debian Developer, Decklin Foster put all his packages up for adoption.
Up to 2016, we can see that Decklin Foster was listed in the public filings of ActBlue Civics, Inc as either a senior engineer or at one point, as Director of Information Technology.
Decklin Foster's activity on their Github profile stops abruptly in May 2018.
ProPublica shows the last salary payment to Decklin Foster's bank account was in July 2018.
Decklin Foster disappeared at almost the exact same time as Arjen Kamphuis, author of the book on Information Security for Investigative Journalists. I was one of the last people to see Arjen before he vanished. Remarkably, Arjen had even asked me for protection.
On 1 January 2015, Decklin Foster's PGP key was removed because it was only 1024 bits. Most developers had created stronger keys before this mass removal of insecure keys took place.
In 2019, the Debian Account Managers asked the keyring managers to completely remove Decklin Foster from the Debian keyring. There has been no statement about Decklin Foster so far.
Clicking the links to see the statements about the removal does not work. An error message tells us the messages about Decklin Foster's removal from Debianism are all private.
Foster's web site address is https://www.red-bean.com/decklin and it is currently reporting "The requested URL was not found on this server.". Thanks to the Wayback Machine we can find a snapshot from 2019 which reveals an inconvenient truth:
If you’re interested in me, I have started using Google Plus. If you’re interested in my work, I’m on Github. I was a Debian developer for some time, but I’ve mostly given that up. I currently work for ActBlue and live in Cambridge, MA with my wife.
Clicking on "my wife", we find the web site of Chris Gleason at http://cgleason.org/.
Reading Gleason's about page, we find the pronoun "they":
chris gleason is a graphic designer, zine creator, and print maker in chicago, illinois. they love ...
Therefore, the Debian Developer ( What is a Debian Developer?) who was Director of Information Technology for ActBlue was married to a female or transgender Chris Gleason. Is this the same person as the elusive male Chris Gleason who is now running for the US Senate in Florida on claims about corruption at ActBlue? Or is it simply a bizarre coincidence that two people so closely connected with this scandal share the same name?
Remember the case of Francois Thiébaud, the pimp who usurped the reputation of the legendary boss of Tissot SA? They both have the same name too but they are different people.
In 2017, the Trans Women Writers Collective published the book Nameless Woman, written by trans women of colour. In the credits, the trans women thank Decklin Foster.
This anthology was made possible by the generous support of hundreds of people. In particular, we would like to thank Annaya Youkai, Kieran Todd, Sadie Laett-Babcock, Adelaida Shelley, Jaime Peschiera, Kai Cheng Thom, Talon Wilde, David Cope, Alex Meginnis, Decklin Foster, and Eli Nelson for their help.
Here are photos from the respective online profiles of Decklin Foster and Chris Gleason.
They don't look too similar but who knows. Anything is possible in America today.
In 2017, Bitch Magazine included Decklin Foster in a list of donors.
In 1999, at the time Decklin Foster was recruited by Debianism, they had a home page at http://members.home.com/decklin/.
Shortly after, the page moved to http://www.red-bean.com/~decklin/ and that eventually evolved to http://www.red-bean.com/decklin/. The last good capture of the site at the Wayback machine was 11 October 2019. It looks like they disabled the web site after that date.
On 22 July 1999, Raphael Hertzog, known for the Freexian scandals, wrote a message asking people to do unpaid work on orphaned packages in the hope that their application to become a Debian Developer would be approved more quickly:
To: debian-devel-announce@lists.debian.org, debian-devel@lists.debian.org, debian-qa@lists.debian.org, debian-mentors@lists.debian.org
Subject: [New maintainer] Working for Debian and becoming a registered Debian developer
From: Raphael Hertzog <rhertzog@hrnet.fr>
Date: Thu, 22 Jul 1999 18:06:26 +0200

[ Large crosspost to start the discussion, please reply to debian-devel only. Simply respect the reply-to. ]

Hello everybody,

you may or not be aware that getting a Debian developer is quite long. I want to propose a solution to facilitate the integration of new Debian developers. It's quite simple. In order to fully learn how Debian works, the best solution is :
- to adopt orphaned packages and correct their bugs
- that your work should be checked by an official developer (I'll call it the sponsor).

Of course, as long you're not a registered Debian developers you cannot upload your packages. The soluton is that the sponsor will upload the package you'll do. The official maintainer will be debian-qa@lists.debian.org. After all when you correct bugs on orphaned packages, you're doing Quality Assurance.

This does also allow you to get new bugs in your mailbox. You just need to subscribe to debian-qa@lists.debian.org. You would be allowed to open/close/set the severity/forward the bugs since all debian-qa members can do it on debian-qa packages.

If the sponsor finds that you've done a good job with the package, he will explain that to the new maintainer team in the hope that your application will be processed faster. And when you'll be official Debian developper, you'll be able to change the Maintainer field to your name.

I'll propose myself to be a sponsor. We'll need more sponsor ... any volunteers ? Hopefully several people from debian-qa will accept to be sponsor like me ... All the future Debian developers interested should also reply ...

Any input appreciated !

Cheers,
--
Hertzog Raphaël >> 0C4CABF1 >> http://prope.insa-lyon.fr/~rhertzog/
Decklin Foster was one of the people recruited by those tactics.
To: debian-devel@lists.debian.org
Cc: debian-mentors@lists.debian.org
Subject: Re: [New maintainer] Working for Debian and becoming a registered Debian developer
From: Decklin Foster <decklin@home.com>
Date: Thu, 22 Jul 1999 13:39:13 -0400

Raphael Hertzog writes:
> Of course, as long you're not a registered Debian developers you cannot
> upload your packages. The soluton is that the sponsor will upload the
> package you'll do. The official maintainer will be
> debian-qa@lists.debian.org. After all when you correct bugs on orphaned
> packages, you're doing Quality Assurance.

Sounds good, I'll subscribe right after I finish writing this. I'm also trying to work on non-orphaned backages as well (for example right now i'm fixing a bug in gsfonts-x11.) So keep in mind that you can always just send patches :)

--
Debian GNU/Linux - http://www.debian.org/
The Web is to graphic design as the fax machine is to literature.
Not only was Decklin under the influence of Hertzog, they were also under the influence of the Red Hat share offer. This email encourages speculation on the IPO:
To: debian-devel@lists.debian.org
Subject: Re: SPAM from Red Hat
From: Decklin Foster <decklin@home.com>
Date: Wed, 21 Jul 1999 09:57:45 -0400

Martin Bialasinski writes:
> is it only me, or did you also get this spam from Red Hat about stock
> options?
>
> Oh man - the bigger the company, the less clueful people?

On #debian last night, it was suggested that we use our opportunity to buy some of this stock and sell it when the price goes up. This money could then be used to fund Debian, buy new hardware, improve our network connection, etc. Does anyone else think this is a Good Idea(TM)? I would be willing to donate as much as I reasonably could.

--
Debian GNU/Linux - http://www.debian.org/
The Web is to graphic design as the fax machine is to literature.
Of interest to those watching the ActBlue saga, there is an email about hacking and cracking:
To: debian-devel@lists.debian.org
Subject: Re: [New maintainer] Working for Debian and becoming a registered Debian developer
From: Decklin Foster <decklin@home.com>
Date: Thu, 22 Jul 1999 16:37:40 -0400

Carl Mummert writes:
> Hacking is a serious crime

Cracking is a serious crime. Breaking into computer systems without permission is a serious crime. Violation of privacy and theft of confidential information is a serious crime. Now what does this have to do with hacking?

> The fact remains that the debian policy is to discourage new
> developers by making it slow and difficult to get an account.

I have no problem with waiting, and I'd rather not look bad just because some people keep speaking badly about the new-maintainer team. We don't need another flamewar here. People have work to do.

--
Debian GNU/Linux - http://www.debian.org/
The Web is to graphic design as the fax machine is to literature.
The New Maintainer report tells us they entered the process in the same month, their application manager was Craig Small and they completed the process in July/August 2000. The advocacies and the application manager (AM) report are all missing from the mailing list archives.
They had a page at https://people.debian.org/~decklin/ but that has been inaccessible ever since the peak of the Debian suicide cluster.
They had a blog on another web site. It is captured in the Wayback machine up to 2012. The last snapshot with the index is here: http://blog.rupamsunyata.org/. The last blog post:
I'm the fuel that fires the engine of Failure
So, the Democrats in my very blue state put up a depressing, entitled, out-of-touch candidate for our vacant senate seat and she lost. The only reason I voted for her was because she wasn't a Republican. Supporting someone you don't even slightly like is psychologically draining.
At this point, I would vote for a Democratic party (or a Republican party!) with the exact same fiscal policy as the current Republicans if they actually made a principled, moral stand on equal protection and civil rights, habeas corpus/due process, and reproductive rights. Those don't cost anything[1].
Maybe they should be solved before the stuff that does cost billions of dollars. As it is my choice is weak, almost grudging support for those rights from people who want to hand the economy over to the government, and disgusting, immoral, vehement opposition to them from people who want to hand the economy over to wealthy corporations.
Neither side is doing anything effective to keep us free, or to keep the market free. Each side says or implies that this is a Christian nation, which it explicitly isn't, while failing to do what's right. Sometimes I want to give up and stop voting.
[1] Conversely, of course, it doesn't cost anything to take people's rights away, or prevent them from getting rights in the first place; I think this is why anti-gay-marriage ballot measures have been more successful in the current recession. Some people get their kicks from the suffering of others.
Accessing the blog from 2013 onwards we can see the front page has been replaced with the message:
This blog is not being updated. Old entries are still around, but I'm turning off the front page for now.
From there, we could find a link to Decklin Foster on LiveJournal. Their profile tells us they like #Debian-women. Don't forget the Debian pregnancy cluster.
There is a link to a Twitter/X account for Decklin Foster.
contributors.debian.org tells us that Decklin Foster stopped contributing in February 2011, immediately before the death of Adrian von Bidder-Senn on our wedding day. Chris Gleason is not on the list at all. If Decklin had abandoned Debianism, why did it take eight years to remove them from the keyring? Reading the full history of the Debian Harassment culture, we can see many other co-authors were removed for purely political reasons and blackmail but keys belonging to the people who had abandoned the project and people who died were left in the keyring for years.
To: debian-devel <debian-devel@lists.debian.org>
Subject: RFA: all my packages
From: Decklin Foster <decklin@red-bean.com>
Date: Thu, 10 Feb 2011 17:11:05 -0500
Message-id: <1297375750-sup-7355@gillespie.rupamsunyata.org>

I'm looking for a new maintainer for, well, any of these. My heart is not in it anymore and most of them have been neglected for a while. Recently my free time has been taken up by other things (mainly my job) and I forsee that continuing.

http://qa.debian.org/developer.php?login=decklin%40red-bean.com

python-beautifulsoup and mpd need attention for proposed-updates; I missed getting them into Squeeze. rxvt-unicode is a total clusterfuck. If any desktop-type packages remain I will orphan them, as I am only running Debian on servers now.

Apart from that, perhaps with a greatly reduced load I can still make a tiny contribution to the community. If not, I will retire.

--
things change.
decklin@red-bean.com
Decklin Foster is on a list of former members of Harvard's Center for Depression, Anxiety and Stress Research. They have a photo of him when he was younger. It appears to be the same person as the Github profile.
Various scholarly articles from Harvard experts on depression have thanked Decklin Foster for their contributions in 2008 and 2009. Decklin Foster was collaborating on this world-class depression research at exactly the same time they were part of the debian-private discussions that precipitated the Debian Day Volunteer Suicide in 2010.
The connection to psychiatric research is a really odd coincidence, given that Decklin sent that RFA (resignation) email immediately before the death of Adrian von Bidder-Senn on our wedding day. The death was discussed like a suicide.
Subject: Re: Death of Adrian von Bidder
Date: Fri, 22 Apr 2011 09:39:49 +0200
From: A Mennucc <mennucc1@debian.org>
To: debian-private@lists.debian.org

On 19/04/2011 18:17, martin f krafft wrote:
> Dear Debian colleagues,
>
> I have the sad task to communicate to you the news of the death of
> Adrian von Bidder (avbidder, cmot), who passed away last Sunday,
> most probably of a heart attack.

I had contacted Adrian regarding the Debian umbrella. So I had also a chance of seeing a picture of him http://blog.fortytwo.ch/archives/80-Yay!-Debian-Logo!.html In that picture he seemed quite happy and young. His death is quite shocking and sad.

a.
Remember, Debianists admitted the group needed a psychiatrist back in 2006, well before most of the deaths.
Around the same time, a petition about suicide prevention was submitted to the Basel city council and it had the name A. von Bidder at the bottom.
Now Decklin Foster himself is missing.
William Lee Irwin III was another Debian Developer who asked for help and then vanished.
There is a Decklin Foster profile on Youtube that hasn't been used for nine years. There are four subscribers. One of the videos has the comment:
Mixed these together on my show (editsradio.org) this week and really liked the result, so here it is on its own, slowed down and a little extended.
Photo taken at the Wilbur Theater in Boston on 2012-07-31.
The last snapshot of editsradio.org is on 6 April 2015. After that, the content is changed to Arabic. From 15 August 2015, it is redirecting to another site, also in Arabic, at http://www.17serialbaran.org.
In January 2015, Decklin Foster & Chris Gleason are listed as a couple as new members of the Brattle Theatre, Cambridge, Massachusetts.
Later in 2015, a report from the World Science Fiction Society lists Decklin Foster as a new member.
Spokeo has a report about Christina N Gleason-Foster in Chicago, IL with a former address in Cambridge, MA, the same location as Decklin Foster.
Going back to 2013, when the blog vanished, Universal Hub published a report "House of Blues turns down the heat, adds ice water for electronica shows due to Molly scourge". This is not about Molly de Blanc it is about the Molly pills. Decklin Foster drops a comment in the discussion:
This sounds like a bad idea. You really don't want to give huge amounts of water to MDMA users
There is a LinkedIn profile for Chris Gleason in Pinellas County, in Florida, not far from Jeremy Bícha, the Registered Sex Offender who was invited to speak at DebConf25 in Brest, France. Looking at the photo on LinkedIn, is this an older version of Decklin Foster's wife who has transitioned back to being a man or is it a completely different person?
It would be extremely offensive to ask such a question in any other group of people but in the world of Debianism and Zizian phenomena, there are a disproportionate number of people who are living such lifestyles.
Let's not forget the example of another Debianism transgender bedmate with at least five identities, that was Pauline / Maria Climent / Pommeret.
The Republican Chris Gleason has a profile on Ballotpedia where they claim to have come from Massachusetts, the same Democrat state where we found Decklin Foster.
Chris Gleason was born in Lowell, Massachusetts. Gleason's career experience includes working as a technology consultant. He served in the U.S. Army National Guard from 1989 to 1999. Gleason earned a bachelor's degree from the University of Massachusetts, Lowell in 1996. Gleason has been affiliated with Caribbean Christian Center for the Deaf, Michigan Make-A-Wish, Seniors Helping Seniors.
In the recent UK elections, journalists and researchers found various examples of candidates who didn't really exist. At least one political party was accused of making up fake candidates to make their party look bigger and attract more donations.
I have the impression the Chris Gleason in Florida is a different person but I'm not ruling out the possibility it is a fake profile or an alter-ego of Chris Gleason, wife of Decklin.
The ActBlue crisis is real however. Here is a committee report on the US house web site.
The Committee on House Administration, the Committee on the Judiciary, and the Committee on Oversight and Government Reform are charged with ensuring the integrity of American elections. To that end, the Committees are examining allegations that ActBlue, a leading political fundraising organization, allowed bad actors, including foreign actors, to exploit its online platform to make fraudulent political donations.
Chris Gleason
CEO at NextMed Holdings, LLC; CEO at Translational Analytics and Statistics, LLC
Chris Gleason is a board member at Our Mayberry, a company focused on revolutionizing charitable giving and fundraising. He is a lawyer, entrepreneur, and community philanthropist with multiple leadership roles in charities helping children. Gleason has also been involved in various business ventures and has held executive positions in different companies.
In addition to his role at Our Mayberry, Gleason has served as a board member for the Goldwater Institute since 2013. He was also recently appointed as the president and CEO of Moximed, a medical device company, in June 2024.
Gleason has a background in sales leadership, having previously worked as VP of sales at Relievant and VP of sales of interventional urology at Teleflex. He has also been involved in political activities, receiving income from Election Watch, a Wisconsin-based group, in 2024.
It's worth noting that Gleason has recently entered the political arena, running for the position of Pinellas County Supervisor of Elections in Florida for the 2024 election. His campaign has been controversial, as he has made unsubstantiated claims about election fraud and criticized the incumbent, Julie Marcus.
On 10 April 2026, Miami Independent published a video where they interview Chris Gleason and Jeff Buongiorno about vote rigging allegations. The CIA is mentioned within the first ninety seconds of the video. I stopped watching at that point.
In the case of another Debian Developer, Paul Tagliamonte, he really was working in the White House and the Pentagon. We have a photo to prove it:
Chris Gleason's campaign web site has the title Whistleblower in big letters. This implies he was an insider, or connected to an insider. In other words, his claim to be a whistleblower encourages us to ask about the bizarre possibility that he really is, or was, the transgender wife of ActBlue's missing director of information technology, Decklin Foster.
If that were true, did his/her domestic arrangements give them unauthorized access to servers, laptops or cloud accounts for ActBlue? I was very grateful to receive donations of file servers from the Catholic archdiocese of Melbourne.
Take a side-step and have a look at the other Florida connection with the US Republican party. In the report about Senior management and HR email privacy: Martin Ebnoether (venty), Axel Beckert (xtaran) & Debian abuse in Switzerland, I made the observation that Axel Beckert's boyfriend and I both worked at the same company. The owner of that company is one of the top donors of the US Republican party and he lives three doors away from Mar-a-Lago, the home of current US President Donald Trump. Trump himself was elected for the first time on my birthday and I correctly predicted there would be conflict in the Strait of Hormuz.
Decklin was using Gists, they also stopped abruptly in 2017.
The Red-Bean.com web site has a list of people associated with their web site and Decklin's name is not on the list.
Whether they are the same Chris Gleason or not, we can say for sure that the Decklin Foster from Debianism is the same Decklin Foster who became Director of Information Technology for disgraced fundraising platform ActBlue Civics, Inc.
Here is one more interesting leak from the debian-private leaked gossip network. It shows us that Decklin Foster was in favor of the practice of dividing the community and humiliating people. It looks like he supported the humiliation of Sven Luther at the very time he was working in the Harvard Medical School's depression research team. Sven's mother was dying at the time this bun fight erupted.
Subject: Expulsion process: Sven Luther
Date: Thu, 01 Mar 2007 00:00:29 +0100
From: Joerg Jaspert <joerg@debian.org>
Organization: Goliath-BBS
To: debian-private@lists.debian.org

...

Now, the list of people who sent something in for the process:

Anthony - Requestor

Supporters, unordered:

srivasta@debian.org
mbanck@debian.org
tbm@cyrius.com
93sam@debian.org
fs@debian.org
jgoerzen@complete.org
fjp@debian.org
dilinger@debian.org
joeyh@debian.org
liw@iki.fi
stappers@stappers.nl
tolimar@debian.org
jeroen@wolffelaar.nl
tfheen@debian.org
micah@riseup.net
decklin@red-bean.com
tb@becket.net
tytso.mit.edu
The conflict between Sven Luther and Frans Pop appears to be a factor in the eventual suicide of Frans Pop. The whole group failed.
Subject: [Very long] Post-partem rant and retrospective
Date: Thu, 31 May 2007 03:56:11 +0200
From: Frans Pop <elendil@planet.nl>
To: debian-private@lists.debian.org

I've decided to write this in a separate mail because I'm afraid this may get long. Quite a bit of this has been written before, but I hope some of you will bear with me.

[snip]

So, what has made me decide to leave the project. It's a combination of just plain emotional stress over the whole Sven Luther issue, frustration with the inability of the project to deal with that and with some other issues, and frustration with the fact that a fair number of members of the project seem to feel that as long as you don't upload packages with trojans, pretty much anything is OK.
and eventually....
Subject: Resignation
Date: Sun, 15 Aug 2010 21:41:18 +0200
From: Frans Pop <elendil@planet.nl>
To: debian-private@lists.debian.org

It's time to say goodbye. I don't want to say too much about it, except that I've been planning this for a long time. Participating in Debian has been great. ...
To see all the leaked messages from debian-private, including the history of Decklin Foster, please see my crowdfunding campaign video.
At 11pm local time in eastern Australia, a huge fire broke out at the Viva Energy refinery in Corio, Geelong.
There has been a near-total news vacuum. This may be deliberate or it may be a consequence of cost-cutting that has replaced many journalists with artificial intelligence. The few human journalists who remain in the profession may have already gone to bed when the fire started.
The national broadcaster, the ABC, was quick to include it in their list of breaking news items but without much detail. About three hours after the fire started, it was present on the web site of 9 News but not visible on the web sites of 7 News, Herald Sun or The Age. About five hours after the fire started, the local newspaper Geelong Advertiser included it in their Facebook account.
The story is newsworthy for a number of reasons. Australia previously had eight refineries but six of them were phased out and never replaced. Australia relies on foreign refineries for over eighty percent of its fuel. With the Corio refinery out of action, there is only one domestic refinery left. Therefore, it is surprising the news media have been so slow to pick up the story.
The next big reason it is newsworthy is the war in Iran.
None of the news reports have commented on the fact that Richard Marles, the deputy prime minister and the minister for defence, is the local member of parliament for the region where the refinery is located.
In the news vacuum, people have been quick to share rumours on social control media. Some people are speculating about the prospect of a drone attack. In Europe last year there were reports about Russian drones launched from cargo ships in international waters and interfering with European airports. Other reports have speculated about cargo ships using their anchors to sabotage pipelines and communications cables on the sea floor. France intercepted and seized a ship connected with Russia.
Another user on social control media has commented that there was a technical incident at the plant earlier in the day and the fire could be nothing more than an accident.
People would be wise not to jump to conclusions. Even if it is a terror attack, it may not be Iran. In recent news reports, Russia announced they had the right to attack any countries who are sending support to Ukraine. The French company Thales manufactures the Bushmaster armored personnel carriers in Bendigo and the government donated some of them to Ukraine. Low cost cardboard drones manufactured in Australia have also been donated to Ukraine.
There's a disconnect in the AI Engineering space right now, and I think that the open source community has already risen to the occasion to bridge the gap, but I don't see any signal that this is well understood or widely adopted. The industry is overwhelmingly focused on building agents from scratch via custom frameworks, bespoke orchestration layers, hand-rolled tool-calling loops, etc., when many of the hard problems have already been solved in that layer of the stack. The building block exists. It's open source. It's called goose.
I think for over 90% of use cases, if you're spending your time implementing an agent from scratch, you're already behind or potentially have already lost the race. My hypothesis is that Goose is the building block. It's the small, composable thing that becomes powerful when you wrap it in what the industry is rapidly agreeing is called the Harness.
Most people hear "goose" and think either "another AI coding assistant" or "another AI chatbot" (depending on how they came across goose and how they use it). That misunderstanding is the problem. Goose is not a coding assistant. It is not a chatbot. It is not a Claude Code competitor, though it can be configured to act as all of those things. At its core, goose is a small, configurable agent runtime with an extension-based architecture that can be composed into virtually anything.
It operates on three components:
Interface: Desktop app or CLI/TUI that collects user input and displays output.
Agent: The core logic engine that manages the interactive loop: sending requests to LLM providers, orchestrating tool calls, and handling context revision.
Extensions: Pluggable components built on the Model Context Protocol (MCP) that provide specific tools and capabilities.
A small core with a lot of power delivered through native extensions, external plugins, and configuration options. The agent core itself is minimal: an interactive loop plus context management. That's it. All capabilities come through the extension system.
You can strip goose down to nothing. No external capabilities. No tool calling. No skills. No plugins. You can even configure it so it cannot access the internet, only the inference service to talk to the model (which can be local). At that point, it's a plain chatbot with no agency whatsoever.
Or you can go the other direction entirely.
Configure goose with the Developer extension, Computer Controller, Memory, and a handful of MCP servers and you have a working replacement for Claude Code, Codex, Gemini CLI, OpenCode, or any other similar tool. Same capabilities, no vendor lock-in, and you choose your own inference provider from over 25 options (at the time of this writing), including Anthropic, OpenAI, Google Gemini, Groq, Mistral, and more. You can run fully local inference via goose's native inference provider, or offload to Ollama, Ramalama, LM Studio, or Docker Model Runner. The full list of providers is in the goose documentation.
If you put this together, you're well on your way to unlocking the full potential but you're just getting started.
Where goose gets interesting is its composition model. Goose Recipes are reusable, shareable workflow definitions that package together instructions, extensions, parameters, provider settings, retry logic, and structured response schemas. A recipe can be as simple as a single prompt with a specific extension configuration. Alternatively it can be sophisticated, composed of subrecipes where each subrecipe is effectively another goose agent with its own configuration: its own extensions, plugins, inference provider, system prompt, and skills.
Subrecipes run in isolated sessions with no shared conversation history, memory, or state. The main recipe's agent decides when to invoke them, can run them sequentially or in parallel, and chains their outputs through conversation context. Compositional agent orchestration without writing a single line of framework code.
You're not writing an orchestration layer. You're not building a DAG executor. You're not implementing tool-calling logic. You're writing YAML that describes what you want done and goose handles the how.
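To make that concrete, here is a sketch of what such a recipe could look like. The field names approximate the goose recipe schema but are illustrative, and the file paths and parameter names are invented; consult the goose recipe documentation for the authoritative format.

```yaml
# Illustrative sketch only; field names approximate the goose recipe
# schema, check the goose documentation for the real format.
version: 1.0.0
title: changelog-writer
description: Summarize merged pull requests into a changelog entry
instructions: |
  Fetch the merged pull requests in {{ repo }} since the last tag and
  draft a changelog section grouped by feature/fix/docs.
parameters:
  - key: repo
    input_type: string
    requirement: required
    description: The repository to summarize
extensions:
  - type: builtin
    name: developer
sub_recipes:
  - name: reviewer
    path: ./review-changelog.yaml   # a second agent with its own config
```

The point is the shape, not the specifics: the recipe declares what the agent should do, which extensions it gets, and which subrecipes (each its own isolated agent) it may delegate to.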
You can take this all the way to the extreme of a fully autonomous software factory like the one Steve Yegge outlines in his now-infamous blog post, "Welcome to Gas Town", and implements via his Gastown project. Gastown is a multi-agent workspace manager for orchestrating Claude Code, GitHub Copilot, Codex, Gemini, and other AI agents with persistent work tracking. It's a Go application with concepts like Mayors, Rigs, Polecats, Hooks, Convoys, and Beads. It's a real engineering effort to coordinate 20-30 agents on a codebase.
You can do exactly that by using goose as the building block. The open source community did it. They looked at Gastown and re-implemented its core concepts using goose's native capabilities. The result is Goosetown. Goosetown is a multi-agent coordination system that orchestrates "flocks" of AI agents (researchers, writers, workers, reviewers) to decompose and execute complex tasks. Goosetown uses goose's subagent delegation, skills system for role-based specialization, inter-agent communication via a broadcast channel called the "Town Wall," and multi-model support for adversarial cross-reviews where different LLMs review each other's work.
If you look at the code, it's just a few flat files, some shell scripts, some skills markdown, and some agent definitions.
All of this built on top of goose. Not alongside it. Not wrapping it. On it. Using the primitives goose already provides: skills, subagents, extensions, and recipes.
Goose also runs as a daemon, exposing itself to other applications via the Agent Client Protocol (ACP) (a standardized JSON-RPC protocol developed by Zed Industries). ACP does for AI agents what LSP did for language servers. ACP decouples agents from editors and frontends, so goose can be embedded directly into Zed, JetBrains, Neovim, or any ACP-compatible environment.
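Since ACP is JSON-RPC under the hood, talking to an agent daemon is just framing messages. A purely illustrative sketch follows; the method name and parameter shape here are simplified stand-ins, not the normative ACP schema.

```python
import itertools
import json

# Monotonically increasing request IDs, as JSON-RPC expects.
_ids = itertools.count(1)

def acp_request(method: str, params: dict) -> str:
    """Frame a JSON-RPC 2.0 request of the kind ACP-style protocols use."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

# Hypothetical example: asking an agent session to handle a prompt.
msg = acp_request(
    "session/prompt",
    {"sessionId": "abc123",
     "prompt": [{"type": "text", "text": "Refactor this function"}]},
)
```

Any frontend that can speak this framing over stdio or a socket can drive the agent, which is exactly the LSP-style decoupling described above.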
The composability runs both directions. Goose can also consume other ACP agents as providers, routing its LLM calls through Claude Code, Codex, or Gemini while keeping its own extension ecosystem and UI. As Adrian Cole wrote in his blog post "How to Break Up with Your Agent":
"Pick the UI you like. Pick the agent you like. They don't have to be the same thing."
This bidirectional composability — goose as a component and goose as an orchestrator — is what separates it from other agent tools.
Goose is fully open source under the leadership of the Agentic AI Foundation (AAIF), which provides vendor-neutral governance under the umbrella of the Linux Foundation. AAIF also hosts the Model Context Protocol (MCP) itself, so the standards goose builds on are governed with the same neutrality.
This matters. When you build your workflows on goose, you're building on a foundation governed by a neutral body with a Governing Board, a Technical Committee, and a transparent contribution model. This is the same open, collaborative, and neutral model that made Linux and Kubernetes into reliable core components of the entire software industry, and it's the same reason I think it's worth investing time and energy into.
It's no secret I'm an open source nerd, and goose checks all the boxes.
We've collectively been on a journey. First it was Prompt Engineering, crafting the right words to get the right output. Then it was Context Engineering, making sure the model has the right information at the right time. Now, it seems we've arrived at the next turn in this adventure we all find ourselves in: Harness Engineering.
Ralph Bean nails this in his blog post "What Even Is the Harness?". The harness is the enablement layer. It's everything you add to the agent runtime that gives you control over your outcomes:
"Harness — the enablement layer. AGENTS.md files, skills, custom tools, hand-crafted linters, system prompts for task-oriented agents. These are the things you engineer, iteratively, to increase the chances the agent gets things right. This is what Birgitta Böckeler calls the user harness and is where Mitchell Hashimoto's attention lives."
—Ralph Bean
Read that again. The harness is not the agent. The harness is what you add to the agent. The AGENTS.md files. The skills. The custom MCP tools. The hand-crafted linters. The system prompts. The recipes and subrecipes. The extension configurations. The provider choices. The permission policies.
This is where your engineering effort belongs. Not in building the interactive loop, or implementing tool-calling JSON parsing, or writing context window management, or building MCP client libraries. Goose already does all of that and does so with the full backing of the AAIF, the Linux Foundation, and a vibrant open source community.
In most cases, and I'd argue almost all cases, your job is to build the harness.
I think for over 90% of use cases where someone is building an agent today, goose is a better starting point than a blank text editor or a vibe coding session (are we calling it Agentic Engineering yet?).
If you need a coding assistant, goose does that. If you need a research agent, configure goose with web scraping extensions and a research-focused recipe or skill. If you need a CI/CD bot, run goose in daemon mode with ACP or orchestrate it with scripts/recipes in your CI job runner of choice. If you need multi-agent orchestration, compose goose instances with subrecipes or build a Goosetown-style flock. If you need local-only, air-gapped inference, point goose at Ollama, Ramalama, LM Studio, or its native inference provider. If you need to integrate with your existing editor, goose speaks ACP natively or you can set GOOSE_PROMPT_EDITOR and run the whole flow from inside your editor of choice. If you need vendor-neutral governance, it's under the Linux Foundation umbrella via AAIF.
The remaining 10%? Those are the genuinely novel agent architectures, the research projects pushing boundaries, the use cases where you do need to control every byte of the agent loop. For those, build from scratch. For everything else, build the harness. I'm not saying you can't build agents from scratch. I'm simply suggesting that you probably don't need to.
If you're a professional technologist or an aspiring AI Engineer, I'd encourage you to shift your mental model. Stop thinking about building agents. Start thinking about harnessing them. At this point in the AI hype cycle, the agent is mature enough to be the commodity. The harness is your competitive advantage.
Install goose. Strip it down to nothing and build it back up. Write a recipe. Compose some subrecipes. Add skills. Configure extensions. Point it at different providers. Run it as a daemon. Embed it in your editor. Build a flock. Engineer the harness.
Go forth and harness your agents.
Happy hacking. <3
I got my network working again, and this time I am running the VM as a non-root user.
The hypervisor is an old Fedora install that I first upgraded to Fedora 43.
I used nmcli to remove all connections (I was in via telnet and a serial concentrator) and then added a bridge. I had to figure out which of the interfaces was actually attached to the outside world, which I did by re-creating an ethernet connection, bringing it up, and then deleting the connection. That device becomes the bridge-slave device.
So, after a bunch of nmcli con del commands to get to a baseline, I ran:
nmcli con add type bridge con-name virbr0 ifname virbr0
nmcli connection modify virbr0 ipv4.method auto
nmcli connection add type bridge-slave ifname enP5p1s0f0np0 master virbr0 con-name enP5p1s0f0np0
nmcli con up virbr0
And this should be enough to recreate.
I also had to create a permission for the bridge-helper to allow connections from userland. I had to create the directory and then edit the file at:
/home/ayoung/qemu/build/qemu-bundle/usr/local/etc/qemu/bridge.conf
so that it looked like this:
allow virbr0
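With that ACL in place, an unprivileged user can attach the guest to the bridge through the helper. The following invocation is a sketch only: the qemu-system-* binary, the helper path, and the device model all depend on your build and target, so adjust accordingly. The helper binary must be setuid root (or carry CAP_NET_ADMIN) for non-root use.

```shell
# Illustrative only; binary name and helper path depend on your build.
qemu-system-x86_64 \
  -netdev bridge,id=net0,br=virbr0,helper=/usr/libexec/qemu-bridge-helper \
  -device virtio-net-pci,netdev=net0 \
  ...
```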
This post attempts to explain how Huion tablet devices currently integrate into the desktop stack. I'll touch a bit on the Huion driver and the OpenTablet driver but primarily this explains the intended integration[1]. While I have access to some Huion devices and have seen reports from others, there are likely devices that are slightly different. Huion's vendor ID is also used by other devices (UCLogic and Gaomon) so this applies to those devices as well.
This post was written without AI support, so any errors are organic, artisanal, hand-crafted ones. Enjoy.
First, a short overview of the ideal graphics tablet stack in current desktops. At the bottom is the physical device which contains a significant amount of firmware. That device provides something resembling the HID protocol over the wire (or bluetooth) to the kernel. The kernel typically handles this via the generic HID drivers [2] and provides us with a /dev/input/event evdev node, ideally one for the pen (and any other tool) and one for the pad (the buttons/rings/wheels/dials on the physical tablet). libinput then interprets the data from these event nodes, passes them on to the compositor which then passes them via Wayland to the client. Here's a simplified illustration of this:
Unlike the X11 API, libinput's API works on both a per-tablet and a per-tool basis. In other words, when you plug in a tablet you get a libinput device that has a tablet tool capability and (optionally) a tablet pad capability. But the tool will only show up once you bring it into proximity. Wacom tools have sufficient identifiers that we can a) know what tool it is and b) get a unique serial number for that particular device. This means you can, if you wanted to, track your physical tool as it is used on multiple devices. No-one [3] does this but it's possible. More interesting is that because of this you can also configure the tools individually, different pressure curves, etc. This was possible with the xf86-input-wacom driver in X but only with some extra configuration; libinput provides/requires this as the default behaviour.
The most prominent case for this is the eraser which is present on virtually all pen-like tools though some will have an eraser at the tail end and others (the numerically vast majority) will have it hardcoded on one of the buttons. Changing to eraser mode will create a new tool (the eraser) and bring it into proximity - that eraser tool is logically separate from the pen tool and can thus be configured differently. [4]
Another effect of this per-tool behaviour is also that we know exactly what a tool can do. If you use two different styli with different capabilities (e.g. one with tilt and 2 buttons, one without tilt and 3 buttons), they will have the right bits set. This requires libwacom - a library that tells us, simply: any tool with id 0x1234 has N buttons and capabilities A, B and C. libwacom is just a bunch of static text files with a C library wrapped around those. Without libwacom, we cannot know what any individual tool can do - the firmware and kernel always expose the capability set of all tools that can be used on any particular tablet. For example: Wacom's devices support an airbrush tool so any tablet plugged in will announce the capabilities for an airbrush even though >99% of users will never use an airbrush [5].
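For flavour, a libwacom data file really is just an INI-style description. The entry below is made up (the device name, match, and stylus group are invented for illustration), but it shows the kind of static facts the library carries:

```
# Illustrative libwacom-style entry; not a real device.
[Device]
Name=Example Pen Tablet
DeviceMatch=usb|256c|0064
Width=9
Height=6
Styli=@generic-pen

[Features]
Stylus=true
Touch=false
Buttons=4
```

When a tool with a known ID comes into proximity, this is where the "this pen has 2 buttons and tilt" answer comes from.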
The compositor then takes the libinput events, modifies them (e.g. pressure curve handling is done by the compositor) and passes them via the Wayland protocol to the client. That protocol is a pretty close mirror of the libinput API so it works mostly the same. From then on, the rest is up to the application/toolkit.
Notably, libinput is a hardware abstraction layer and conversion of hardware events into others is generally left to the compositor. IOW if you want a button to generate a key event, that's done either in the compositor or in the application/toolkit. But the current versions of libinput and the Wayland protocol do support all hardware features we're currently aware of: the various stylus types (including Wacom's lens cursor and mouse-like "puck" devices) and buttons, rings, wheels/dials, and touchstrips on pads. We even support the rather once-off Dell Canvas Totem device.
Huion's devices are HID compatible which means they "work" out of the box but they come in two different modes, let's call them firmware mode and tablet mode. Each tablet device pretends to be three HID devices on the wire and depending on the mode some of those devices won't send events.
This is the default mode after plugging the device in. Two of the HID devices exposed look like a tablet stylus and a keyboard. The tablet stylus is usually correct (enough) to work OOTB with the generic kernel drivers, it exports the buttons, pressure, tilt, etc. The buttons and strips/wheels/dials on the tablet are configured to send key events. For example, the Inspiroy 2S I have sends b/i/e/Ctrl+S/space/Ctrl+Alt+z for the buttons and the roller wheel sends Ctrl-/Ctrl= depending on direction. The latter are often interpreted as zoom in/out so hooray, things work OOTB. Other Huion devices have similar bindings, there is quite some overlap but not all devices have exactly the same key assignments for each button. It does of course get a lot more interesting when you want a button to do something different - you need to remap the key event (ideally without messing up your key map lest you need to type an 'e' later).
The userspace part is effectively the same, so here's a simplified illustration of what happens in kernel land:
Any vendor-specific data is discarded by the kernel (but in this mode that HID device doesn't send events anyway).

If you read a special USB string descriptor from the English language ID, the device switches into tablet mode. Once in tablet mode, the HID tablet stylus and keyboard devices will stop sending events and instead all events from the device are sent via the third HID device which consists of a single vendor-specific report descriptor (read: 11 bytes of "here be magic"). Those bits represent the various features on the device, including the stylus features and all pad features as buttons/wheels/rings/strips (and not key events!). This mode is the one we want to handle the tablet properly. The kernel's hid-uclogic driver switches into tablet mode for supported devices, in userspace you can use e.g. huion-switcher. The device cannot be switched back to firmware mode but will return to firmware mode once unplugged.
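Conceptually, "handling a vendor report" just means unpacking those magic bytes into stylus fields. The layout below is invented purely for illustration (the real 11-byte Huion format is device-specific and exactly the "here be magic" part), but it shows the kind of work a driver or BPF does:

```python
import struct

# Hypothetical 11-byte vendor report layout. NOT Huion's actual wire
# format, just an illustration of turning raw report bytes into fields:
# report id, flag bits, x, y, pressure, tilt x/y, button bitmask.
FMT = "<BBHHHbbB"  # 1+1+2+2+2+1+1+1 = 11 bytes

def parse_report(buf: bytes) -> dict:
    rid, flags, x, y, pressure, tilt_x, tilt_y, buttons = struct.unpack(FMT, buf)
    return {
        "in_proximity": bool(flags & 0x01),
        "tip_down": bool(flags & 0x02),
        "x": x,
        "y": y,
        "pressure": pressure,
        "tilt": (tilt_x, tilt_y),
        "buttons": buttons,
    }
```

In the real stack this decoding happens in the kernel (or is made unnecessary by rewriting the report descriptor so the generic HID driver can do it), but the shape of the problem is the same.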
Once we have the device in tablet mode, we can get true tablet data and pass it on through our intended desktop stack. Alas, like ogres there are layers.
Historically, and thanks in large part to the now-discontinued DIGImend project, the hid-uclogic kernel driver did the switching into tablet mode, followed by report descriptor mangling (inside the kernel) so that the resulting devices can be handled by the generic HID drivers. The more modern approach we are pushing for is to use udev-hid-bpf, which is quite a bit easier to develop for. But both do effectively the same thing: they overlay the vendor-specific data with a normal HID report descriptor so that the incoming data can be handled by the generic HID kernel drivers. This will look like this:
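For the curious, a report-descriptor fixup boils down to something like the following sketch, modelled on the kernel's HID-BPF documentation. It is not a complete buildable program (it needs the kernel's vmlinux.h, the libbpf headers, and the actual hand-crafted descriptor bytes for the device, which are omitted here), and the real BPFs also match on vendor/product and firmware string before applying anything:

```c
/* Sketch only, modelled on the kernel HID-BPF examples. */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

/* Hand-crafted replacement descriptor (device-specific, truncated). */
static const __u8 fixed_rdesc[] = {
	0x05, 0x0d,	/* Usage Page (Digitizers), rest omitted */
};

SEC("fmod_ret/hid_bpf_rdesc_fixup")
int BPF_PROG(huion_fix_rdesc, struct hid_bpf_ctx *hctx)
{
	__u8 *data = hid_bpf_get_data(hctx, 0 /* offset */, 4096 /* size */);

	if (!data)
		return 0; /* leave the descriptor untouched */

	/* Overlay the vendor-specific descriptor with our fixed one so
	 * the generic HID drivers can parse subsequent events. */
	__builtin_memcpy(data, fixed_rdesc, sizeof(fixed_rdesc));

	return sizeof(fixed_rdesc);
}

char _license[] SEC("license") = "GPL";
```

Once loaded by udev-hid-bpf, the kernel sees a "normal" tablet and everything downstream just works.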
Notable here: the stylus and keyboard devices may still exist and get event nodes but never send events[6], while the uclogic/bpf-enabled device provides proper stylus/pad event nodes that can be handled by libinput (and thus the rest), with raw hardware data where buttons are buttons.
Because in true manager speak we don't have problems, just challenges. And oh boy, do we collect challenges as if we were organising the Olympics.
First and probably most embarrassing is that hid-uclogic has a different way of exposing event nodes than what libinput expects. This is largely my fault for having focused on Wacom devices and internalized their behaviour for many years. The hid-uclogic driver exports the wheels and strips on separate event nodes - libinput doesn't handle this correctly (or at all). That'd be fixable, but the compositors don't really expect this either, so there's a bit more work involved. The immediate effect is that those wheels/strips will likely be ignored and not work correctly. Buttons and pens work.
hid-uclogic being a kernel driver has access to the underlying USB device. The HID-BPF hooks in the kernel currently do not, so we cannot switch the device into tablet mode from a BPF, we need it in tablet mode already. This means a userspace tool (read: huion-switcher) triggered via udev on plug-in and before the udev-hid-bpf udev rules trigger. Not a problem but it's one more moving piece that needs to be present (but boy, does this feel like the unix way...).
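That plumbing is a udev rule along these lines. The rule below is illustrative only (the real rule ships with huion-switcher and will differ in its match conditions and path); 256c is Huion's USB vendor ID:

```
# Illustrative only; the shipped huion-switcher rule will differ.
ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="256c", \
  RUN+="/usr/bin/huion-switcher"
```

The ordering matters: this must fire before the udev-hid-bpf rules so the BPF sees a device that is already in tablet mode.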
By far the most annoying part about anything Huion is that until relatively recently (I don't have a date but maybe until 2 years ago) all of Huion's devices shared the same few USB product IDs. For most of these devices we worked around it by matching on device names, but there were devices that had the same product ID and device name. At some point libwacom, the kernel, and huion-switcher had to implement firmware ID extraction and matching so we could differentiate between devices with the same 256c:006d USB IDs. Luckily this seems to be in the past now, with modern devices getting a new PID for each individual device. But if you have an older device, expect difficulties and, worse, things to potentially break after firmware updates when/if the firmware identification string changes. udev-hid-bpf (and uclogic) rely on the firmware strings to identify the device correctly.
edit: and of course less than 24h after posting this I process a bug report about two completely different new devices sharing one of the product IDs
Because we have a changeover from the hid-uclogic kernel driver to the udev-hid-bpf files there are rough edges on "where does this device go". The general rule is now: if it's not a shared product ID (see above) it should go into udev-hid-bpf and not the uclogic driver. Easier to maintain, much more fire-and-forget. Devices already supported by udev-hid-bpf will remain there, we won't implement BPFs for those (older) devices, doubly so because of the aforementioned libinput difficulties with some hid-uclogic features.
The newer tablets are always slightly different so we basically need to reverse-engineer each tablet to get it working. That's common enough for any device but we do rely on volunteers to do this. Mind you, the udev-hid-bpf approach is much simpler than doing it in the kernel, much of it is now copy-paste and I've even had quite some success getting e.g. Claude Code to spit out a 90% correct BPF on its first try. At least the advantage of our approach to change the report descriptor means once it's done it's done forever, there is no maintenance required because it's a static array of bytes that doesn't ever change.
Because we're abstracting the hardware, userspace needs to be fully plumbed. This was a problem last year, for example, when we (slowly) got support for relative wheels into libinput, then Wayland, then the compositors, then the toolkits to make it available to the applications (of which I think none so far use the wheels). Depending on how fast your distribution moves, this may mean that support is months and years off even when everything has been implemented. On the plus side these new features tend to only appear once every few years. Nonetheless, it's not hard to see why the "just send Ctrl=, that'll do" approach is preferred by many users over "probably everything will work in 2027, I'm sure".
A currently unsolved problem is the lack of tool IDs on all Huion tools. We cannot know if the tool used is the two-button + eraser PW600L or the three-button-one-is-an-eraser-button PW600S or the two-button PW550 (I don't know if it's really 2 buttons or 1 button + eraser button). We always had this problem with e.g. the now quite old Wacom Bamboo devices but those pens all had the same functionality so it just didn't matter. It would matter less if the various pens would only work on the device they ship with but it's apparently quite possible to use a 3 button pen on a tablet that shipped with a 2 button pen OOTB. This is not difficult to solve (pretend to support all possible buttons on all tools) but it's frustrating because it removes a bunch of UI niceties that we've had for years - such as the pen settings only showing buttons that actually existed. Anyway, a problem currently in the "how I wish there was time" basket.
Overall, we are in an ok state but not as good as we are for Wacom devices. The lack of tool IDs is the only thing not fixable without Huion changing the hardware[7]. The delay between a new device release and driver support is really just dependent on one motivated person reverse-engineering it (our BPFs can work across kernel versions and you can literally download them from a successful CI pipeline). The hid-uclogic split should become less painful over time as the devices with shared USB product IDs age into landfill, and even more so if libinput gains support for the separate event nodes for wheels/strips/... (there is currently no plan and I'm somewhat questioning whether anyone really cares). But other than that our main feature gap is really the ability for much more flexible configuration of buttons/wheels/... in all compositors - having that would likely make the requirement for OpenTabletDriver and the Huion driver disappear.
The final topic here: what about the existing non-kernel drivers?
Both of these are userspace HID input drivers which use the same approach: read from a /dev/hidraw node, create a uinput device, and pass events back. On the plus side this means you can do literally anything that the input subsystem supports, at the cost of a context switch for every input event. Again, a diagram of how this looks (mostly) below userspace:
Note how the kernel's HID devices are not exercised here at all because we parse the vendor report, create our own custom (separate) uinput device(s) and then basically re-implement the HID to evdev event mapping. This allows for great flexibility (and control, hence the vendor drivers are shipped this way) because any remapping can be done before you hit uinput. I don't immediately know whether OpenTabletDriver switches to firmware mode or maps the tablet mode but architecturally it doesn't make much difference.
From a security perspective: having a userspace driver means you either need to run that driver daemon as root or (in the case of OpenTabletDriver at least) you need to allow uaccess to /dev/uinput, usually via udev rules. Once those are installed, anything can create uinput devices, which is a risk but how much is up for interpretation.
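Those udev rules generally just tag the uinput node with uaccess so the active seat user can open it. A sketch of what such a rule looks like (the file name and exact match keys here are illustrative, not copied from any particular driver's packaging):

```
# e.g. /etc/udev/rules.d/99-uinput.rules (illustrative path)
KERNEL=="uinput", SUBSYSTEM=="misc", TAG+="uaccess", OPTIONS+="static_node=uinput"
```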
[1] As is so often the case, even the intended state does not necessarily spark joy
[2] Again, we're talking about the intended case here...
[3] fsvo "no-one"
[4] The xf86-input-wacom driver always initialises a separate eraser tool even if you never press that button
[5] For historical reasons those are also multiplexed so getting ABS_Z on a device has different meanings depending on the tool currently in proximity
[6] In our udev-hid-bpf BPFs we hide those devices so you really only get the correct event nodes, I'm not immediately sure what hid-uclogic does
[7] At which point Pandora will once again open the box because most of the stack is not yet ready for non-Wacom tool ids
Sorting a terabyte of data in the late 1990s meant serious hardware, serious planning, and probably a serious budget approval process. Today you can do it on a workstation before lunch. I wanted to know how fast, so I wrote rustbucket to find out.
It’s a two-phase external sort implemented in Rust, built around io_uring, and named for reasons that should be obvious to anyone who has spent time with either Rust or storage systems.
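rustbucket itself is Rust built around io_uring, but the two-phase shape is easy to sketch. Here is a minimal Python illustration (assuming the input arrives as a list and using temporary files as the spill runs; the real thing streams from disk and cares a great deal about I/O scheduling):

```python
import heapq
import tempfile

def external_sort(values, chunk_size=4):
    """Two-phase external sort sketch.

    Phase 1: cut the input into chunks, sort each chunk in memory, and
    spill each sorted run to its own temporary file.
    Phase 2: k-way merge the runs with a heap (heapq.merge), streaming
    so only one value per run needs to be in memory at a time.
    """
    runs = []
    for i in range(0, len(values), chunk_size):
        run = tempfile.TemporaryFile(mode="w+")
        run.writelines(f"{v}\n" for v in sorted(values[i:i + chunk_size]))
        run.seek(0)
        runs.append(run)
    streams = [(int(line) for line in run) for run in runs]
    return list(heapq.merge(*streams))

print(external_sort([9, 1, 8, 2, 7, 3, 6, 4, 5]))  # [1, 2, 3, 4, 5, 6, 7, 8, 9]
```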
Red Hat just published the Accessibility Conformance Report (ACR) for Red Hat Enterprise Linux 10.
Accessibility Conformance Reports document how our software measures up against accessibility standards like WCAG and Section 508. Since RHEL 10 is built on GNOME 47, this report is a good look at how our stack handles accessibility, from screen readers to keyboard navigation.
Getting a desktop environment to meet these requirements is a huge task, and it’s only possible because of the work done by our community in projects like Orca, GTK, Libadwaita, Mutter, GNOME Shell, the core apps, and others.
Kudos to everyone in the GNOME project that cares about improving accessibility. We all know there’s a long way to go before desktop computing is fully accessible to everyone, but we are surely working on that.
If you’re curious about the state of accessibility in the 47 release or how these audits work, you can find the full PDF here.
Another saturday and... oh wait, it's sunday! I was away almost all day yesterday (morning at https://beaverbarcamp.org/ and afternoon/evening visiting family), so this will be a day late. :)
This week we were still in Fedora 44 final freeze (we canceled the go/nogo on thursday because there were still unaddressed blockers) so there was a lot of catching up on old issues/processing docs and other pull requests and the like.
There were a few things that stood out however:
Diego wrote up a pull request to adjust our matrix bot to point to forge.fedoraproject.org for things that have moved there from pagure.io ( https://github.com/fedora-infra/maubot-fedora/pull/150 ) and so I merged it, figured out how to cut a release there, figured out how to deploy it to first staging and the production.
!forge org repo should work for a generic pointer to any forge project.
Hopefully this will make meetings and discussion on matrix nicer.
We had to move openqa behind anubis as the scrapers discovered it and were making it unusable. Unfortunately, openqa has a mode where you can update test screens that uses websockets, and those were not correctly passing through anubis, so that functionality was broken.
I was going to go look at the apache docs and see if I could track down what needed to be set to do that, but decided to just ride the AI wave and ask an AI agent about it.
It snarfed in the config, thought about it for a bit, then spewed out a solution. The solution was largely for older apache versions (but I didn't tell it what apache version we were running), but at the end it correctly noted that on newer versions passing "upgrade=websocket" to the proxy commandline would fix it.
It did. It definitely saved me time poking through the apache docs.
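For the record, the fix boils down to a single parameter on the proxy line. A sketch with invented hostnames (the upgrade=websocket parameter requires Apache httpd 2.4.47 or newer):

```apache
# httpd >= 2.4.47: mod_proxy_http handles the WebSocket upgrade in-place,
# so no separate mod_proxy_wstunnel ProxyPass is needed.
ProxyPass        "/" "http://openqa.backend.example:80/" upgrade=websocket
ProxyPassReverse "/" "http://openqa.backend.example:80/"
```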
We have had a number of machines we moved from our old datacenter that we wanted to repurpose as builders sitting around. It's not been very high priority to get them set up, but what better time than a freeze to get them online.
So, I got 3 of them ready, which involved updating a bunch of firmware on them, installing them, configuring networking, etc.
Will need a small freeze break to add them into ansible and finish them up, but then there should be 3 more buildhw-x86 builders.
Since I was catching up on things I decided to go ahead and upgrade my main server and its vmhost to Fedora 44 this morning.
Everything went super fast and painlessly, aside from one issue with matrix-synapse (the f43 package is newer than the f44 one, so it would not work with my config/database). For now I just "downgraded" (or is that "sidegraded") to the f43 one. It seems to have a pretty nasty tangle of rust crate version changes, so it might not be too easy to sort out quickly.
The week after next (the week of April 20th) I will be away all week. I might look in on matrix/email some, but don't count on it. I'll be on a family vacation in Hawaii. Please file tickets and be kind to my co-workers who are perfectly capable of handling anything in my absence. :)
As always, comment on mastodon: https://fosstodon.org/@nirik/116392979078747195
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora project.
Week: 06 – 10 Apr 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team takes care of the quality of Fedora: maintaining CI, organizing test days, and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora and the migration of repositories from pagure.io.
This team is working on improving user experience: providing artwork, usability, and general design services to the Fedora project.
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 15 2026 appeared first on Fedora Community Blog.
RPMs of PHP version 8.5.5 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.20 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
And soon in the official updates:
⚠️ To be noticed:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 360:
Cockpit’s remote login feature passes user-supplied hostnames and usernames from the web interface to the SSH client without validation or sanitization. An attacker with network access to the Cockpit web service can craft a single HTTP request to the login endpoint that injects malicious SSH options or shell commands, achieving code execution on the Cockpit host without valid credentials. The injection occurs during the authentication flow before any credential verification takes place, meaning no login is required to exploit the vulnerability.
The affected Cockpit versions are Cockpit 326 up to and including Cockpit 359. (cockpit >= 326, cockpit <= 359)
This is tracked as CVE-2026-4631.
A workaround is to disable the LoginTo option in cockpit.conf; this disables the direct login feature, but it is still strongly recommended to upgrade to Cockpit 360.
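For reference, the workaround is a one-line setting in /etc/cockpit/cockpit.conf (section placement per the cockpit.conf manual; upgrading is still the right fix):

```ini
[WebService]
# Hide the "Connect to" field on the login page, disabling direct remote login.
LoginTo = false
```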
Many thanks to Florian Kohnhäuser for reporting this issue!
Cockpit 360 is available now:
The cloud image is shipped as a qcow2 file. It has about 3 GB of usable space. I need more.
The root FS for my Fedora 43 cloud image is a btrfs filesystem mounted on /dev/vda3.
First, stop the virtual machine. Then grow the Qcow2 image
qemu-img resize vms/Fedora-Cloud-Base-Generic-43-1.6.aarch64.qcow2 +50G
Start the virtual machine. Grow the partition
growpart /dev/vda 3
finally resize the filesystem.
btrfs filesystem resize max /
Now the disk is 50G larger:
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda3 53G 1.8G 51G 4% /
After building a custom Qemu, there are a couple of ways to run a VM to get to it. The older approach to VM management is to create a block device, run the VM with a boot device, do a full install, and log in to the serial console. However, if you run the Qemu/KVM machine from the command line, hitting Ctrl-C will stop your VM, and this is annoying. I have found it worthwhile to set up networking and then to SSH in to the machine.
My notes here suck. I am going to try and document what I have here working, and, over time, reverse engineer how I got here.
Edit: here are the steps
This is the command I use to run my virtual machine. This is on an AmpereOne test machine in my lab. You probably don’t have access to AARCH64 machines at this scale. Maybe someday….
../qemu/build/qemu-system-aarch64 \
-machine virt \
-enable-kvm \
-m 16G \
-cpu host \
-smp 16 \
-nographic \
-monitor telnet:127.0.0.1:1234,server,nowait \
-bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
-drive if=none,file=../virt/vms/Fedora-Cloud-Base-Generic-43-1.6.aarch64.qcow2,id=hd0 \
-device vhost-vsock-pci,guest-cid=22 \
-device virtio-blk-device,drive=hd0,bootindex=0 \
-object memory-backend-file,id=mem,size=16G,mem-path=/dev/shm,share=on \
-numa node,memdev=mem \
-chardev socket,id=char0,path=/tmp/virtiofs_socket \
-virtfs local,path=/root/adam/linux,mount_tag=mylinux,security_model=passthrough,id=fs0 \
-netdev bridge,id=vm0,br=virbr0 \
-device virtio-net-pci,netdev=vm0,mac=52:54:00:70:0C:01 \
-device virtio-scsi-device \
-qmp unix:/tmp/qmp.sock,server,nowait \
2>&1 | tee /tmp/qemu.log
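Since the run script exposes a QMP monitor at /tmp/qmp.sock, the VM can also be driven programmatically from the hypervisor. A minimal sketch of the QMP handshake in Python (the socket path matches the script above; query-status is just an example command):

```python
import json
import socket

def qmp_line(command, arguments=None):
    """Serialize one QMP 'execute' command as a newline-terminated JSON line."""
    msg = {"execute": command}
    if arguments:
        msg["arguments"] = arguments
    return json.dumps(msg) + "\n"

def qmp_command(sock_path, command, arguments=None):
    """Connect to a QMP UNIX socket, perform the mandatory capabilities
    negotiation, run one command, and return its 'return' payload."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(sock_path)
        f = s.makefile("rw")
        json.loads(f.readline())              # QMP greeting banner
        f.write(qmp_line("qmp_capabilities"))
        f.flush()
        json.loads(f.readline())              # {"return": {}} acknowledgement
        f.write(qmp_line(command, arguments))
        f.flush()
        return json.loads(f.readline())["return"]

# e.g. qmp_command("/tmp/qmp.sock", "query-status") on the hypervisor
```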
The VM is running based on a cloud image I downloaded from Fedora. To get the Keys in the machine, I started by running it using libvirt and virt-install:
virt-install --name fire43 --os-variant fedora43 --disk ./Fedora-Cloud-Base-Generic-43-1.6.aarch64.qcow2 --import --cloud-init root-ssh-key=/root/.ssh/id_rsa.pub
Here is the bridge setup on the hypervisor:
5: virbr0: mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 0c:42:a1:5a:9b:36 brd ff:ff:ff:ff:ff:ff
inet 10.76.112.72/24 brd 10.76.112.255 scope global dynamic noprefixroute virbr0
valid_lft 12409sec preferred_lft 12409sec
inet6 fe80::7098:f305:ad32:181e/64 scope link noprefixroute
valid_lft forever preferred_lft forever
This took a bunch of trial and error to get right. I don’t know how much is specific to my environment, but I do know that the bridge IP address is how I log in to the machine.
Looking at how this is stored in /etc/NetworkManager/system-connections/virbr0.nmconnection
[connection]
id=virbr0
uuid=8074697c-fdbb-48ad-888a-a64c4468e91c
type=bridge
interface-name=virbr0
[ethernet]
[bridge]
[ipv4]
method=auto
[ipv6]
addr-gen-mode=default
method=auto
[proxy]
And the ethernet connection in /etc/NetworkManager/system-connections/enP5p1s0f1np1.nmconnection
[connection]
id=enP5p1s0f1np1
uuid=097663c4-765c-4678-aaf3-761a1af2bb72
type=ethernet
interface-name=enP5p1s0f1np1
timestamp=1760474378
[ethernet]
[ipv4]
method=auto
[ipv6]
addr-gen-mode=eui64
method=auto
[proxy]
I know I got here by running nmcli commands, but they have long since fallen off my bash history, and I did not write them down.
One thing I can tell by the IP address that my VM gets is that it is talking to the same DHCP server as the Hypervisor.
I recently destroyed my previous VM that had NFS setup. I would like to get that working again, as that allowed me to sync the Kernel between the Hypervisor and the VM. But that is a tale for another day.
The Fedora Project’s Code of Conduct and its reports are managed by the Fedora Code of Conduct Committee, the Fedora Community Architect, and the Fedora Project Leader. We publish this summary to demonstrate our commitment to community safety and our project’s social fabric.
This post covers the year of reports received in the 2025 calendar year. The purpose of publishing the annual Code of Conduct Report is to provide transparency, insight, and awareness into the health signs of the community.
In 2025, we had a slight uptick in engagement from 2024. 14 reports were opened in 2025, compared to 11 reports in 2024. While we saw some members step down this year, the Fedora Code of Conduct Committee (CoCC) also refreshed its membership with new voices. Jef Spaleta, Chris Idoko, and Ankur Sinha were nominated this year to maintain responsiveness and steer our community standards forward.
The majority of issues reported in the year 2025 were largely handled through “shoulder taps” and formal reach-outs. This is in comparison to disciplinary actions or emergency action requiring bans or long-term suspensions. While reports did increase from 2024 to 2025, the difference is negligible. The Committee expects this number to fluctuate annually, as world events and international conflicts often impact the social dynamics of communities like ours.
You can see the full data from 2025 in the table below.
After six years of reporting, looking back at our journey from the modernization of the Code of Conduct to where we stand today, it is encouraging to see how much we have grown together. Yearly reports indicate that while our community continues to have conflicts (as any healthy community ought to), incident severity continues to decrease across reports spanning 2020 through 2025. We attribute this consistent reduction in “opened reports” and “CoC interventions” to the maturity of our self-moderation culture.
A significant part of this positive atmosphere is thanks to the refreshed CoC guidelines established by Marie Nordin in 2021, which successfully addressed the peak in incidents that occurred during the COVID-19 pandemic. These were roadmaps for how we want to treat each other. Seeing these guidelines in action in our reports shows that they are working as hoped. We feel the community is in a healthy place at this time, but a healthy committee is one that never stops listening. We would love to hear your thoughts, feedback, and suggestions on how we can continue to help our shared spaces feel safe, inclusive, and welcoming.
| Year | Reports Opened | Reports Closed | Warnings Issued | Moderations Issued | Suspensions Issued | Bans Issued |
| 2025 | 14 | 14 | 1 | 2 | 0 | 0 |
| 2024 | 11 | 11 | 1 | 0 | 1 | 0 |
| 2023 | 17 | 17 | 5 | 3 | 1 | 1 |
| 2022 | 21 | 24 | 6 | 3 | 0 | 0 |
| 2021 | 23 | 24 | 2 | 1 | 0 | 1 |
| 2020 | 20 | 16 | 8 | 4 | 2 | 0 |
If you witness or are part of a situation that violates Fedora’s Code of Conduct, please open a private report on the [Code of Conduct repo] or email codeofconduct@fedoraproject.org. As always, your reports are confidential and only visible to the Code of Conduct Committee.
Remember that opening a CoC report does not automatically mean action will be taken. Sometimes things can be clarified, improved, or resolved entirely. Or, it could be something pretty small, but it definitely wasn’t okay, and you don’t want to make a big deal… open that report anyway, because it could show a pattern of behavior that is negatively impacting more people than yourself.
Here is a reminder to our Fedora community to be kind and considerate to each other in all our interactions. We all depend on each other to create a community that is healthy, safe, and happy. Most of all, we love seeing folks self-moderate and stand up for the right thing day to day in our community. Keep it up, and keep being awesome Fedora, we <3 you!
The Fedora Project’s Code of Conduct and reports are managed by the Fedora Code of Conduct Committee (CoCC). The Fedora CoCC is made up of the Fedora Project Leader, Jef Spaleta; the Fedora Community Architect, Justin Wheeler; the Red Hat legal team, as appropriate; and community-nominated members. Jef Spaleta, Chris Onoja Idoko, and Ankur Sinha were nominated this year.
We’re incredibly grateful to Josh Berkus and Laura Santamaria for stepping up as term-limited members of the Fedora Code of Conduct Committee (CoCC). Their commitment ensured we had consistent coverage through September 30th, 2025, providing vital support until our newest nominees were fully onboarded and trained.
The post Fedora Code of Conduct Report 2025 appeared first on Fedora Community Blog.
For a couple of years, Andreas Schneider and I have been working on a project we call the ‘local authentication hub’: an effort to use the Kerberos protocol to track authentication and authorization context for applications, regardless of whether the system they run on is enrolled into a larger organizational domain or is standalone. We aim to reuse the code and experience we got while developing Samba and FreeIPA over the past twenty years.
The local authentication hub relies on a Kerberos KDC available on demand on each system. We achieved this by allowing MIT Kerberos to communicate over UNIX domain sockets. On Linux systems, systemd allows processes to be started on demand when someone connects to a UNIX domain socket, and MIT Kerberos 1.22 has support for this mode.
A KDC accessible over a UNIX domain socket is not very useful in itself: it is only available within the context of a single machine (or a single container, or pod, if UNIX domain sockets are shared across multiple containers). Otherwise, it is a fully featured KDC with its own quirks. And we can start looking at what could be improved based on the enhanced context locality we have achieved. For example, a KDB driver can see host-specific network interfaces and thus be able to react to requests such as host/<ip.ad.dr.ess>@LOCALKDC-REALM dynamically—something that a centrally-managed KDC would only do through statically registered service principal names (SPNs), which are a pain to update as machines move across networks.
Adding support for dynamic features means new code needs to be written. MIT Kerberos is written in C, so our choices are either to continue writing in C or to integrate with whatever new language we choose. Initially, we kept the local KDC database driver written in C and decided to build the infrastructure we need in Rust. The end goal is to have most bits written in Rust.
The local KDC database isn’t supposed to handle millions of principal entries, but even for millions of them, MIT Kerberos has a pretty good default database driver built on LMDB: klmdb. We wanted to get out of the data store business and instead focus on higher-level logic. Thus, we made the same change I made in Samba around 2003 for virtual file system modules: we introduced support for stackable KDB drivers. This is also a part of the MIT Kerberos 1.22 release: a KDB driver implementation can ask the KDC to load a different KDB driver and choose to delegate some requests to it. The local KDC driver is using klmdb for that purpose.
With the database handled for us by klmdb, we focused on the local KDC-specific logic. We wanted to dynamically discover user principals from the operating system so that administrators do not need to maintain separate databases for them. systemd provides a userdb API to query such information over a varlink interface (also available over a UNIX domain socket) in a structured way, using JSON format. Thus, the Kirmes project was born. Kirmes is a Rust data library backed by the userdb API. It handles varlink communication through the wonderful Zlink library and exposes both asynchronous and synchronous access to user and group information.
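The userdb wire protocol itself is easy to sketch outside of Rust: a varlink call is a NUL-terminated JSON object written to a UNIX socket. Here is a minimal Python illustration of the lookup Kirmes performs through Zlink (the socket path and method name follow my reading of the systemd userdb documentation; treat this as a sketch, not Kirmes's API):

```python
import json
import socket

# Path and method name per the systemd userdb varlink interface.
USERDB_SOCKET = "/run/systemd/userdb/io.systemd.Multiplexer"

def userdb_request(user_name):
    """Build the varlink call for a user lookup: a JSON object followed
    by a NUL byte, which is how varlink frames messages."""
    call = {
        "method": "io.systemd.UserDatabase.GetUserRecord",
        "parameters": {
            "userName": user_name,
            "service": "io.systemd.Multiplexer",
        },
    }
    return json.dumps(call).encode() + b"\0"

def get_user_record(user_name):
    """Send the call over the userdb UNIX socket and parse the reply."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(USERDB_SOCKET)
        s.sendall(userdb_request(user_name))
        buf = b""
        while not buf.endswith(b"\0"):
            chunk = s.recv(4096)
            if not chunk:
                break
            buf += chunk
        return json.loads(buf.rstrip(b"\0"))
```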
The local KDC database driver prototype used the Kirmes C API. We demonstrated it at FOSDEM 2025: a user lookup is done over varlink, and if a user is present on the system, their Kerberos key is then looked up in klmdb using a specially-formatted userdb:<username> principal. You still need to handle those keys somehow, but there is a way to avoid that: use RADIUS.
A bit of historical reference. In 2012, Red Hat collaborated with MIT to introduce a KDC-side implementation of RFC 6560 (the OTP pre-authentication mechanism; at that point implemented in a proprietary solution by the RSA corporation). This mechanism allowed the KDC to get a hint out of a KDB driver and ask a RADIUS server to authenticate the credentials provided by the Kerberos client. Unlike traditional Kerberos symmetric keys, in this case, the client is sending a plain-text credential over the Kerberos protocol, and this credential can be forwarded to the RADIUS server. The plain-text nature of the RADIUS credential requires the use of a secure communication channel, and a good part of RFC 6560 relies on Flexible Authentication Secure Tunneling (FAST, RFC6113), where a pre-existing Kerberos ticket is used to encrypt the content of that tunnel.
Since ~2013, FreeIPA has used this mechanism to provide multi-factor authentication mechanisms: HOTP/TOTP tokens, RADIUS proxying to remote servers, the OAuth2 device authorization grant flow, and FIDO2 tokens. The list of mechanisms can be extended, as long as the model fits into the somewhat constrained Kerberos exchange flow. FreeIPA handles all communication from the KDC side via a local UNIX domain socket-activated daemon, ipa-otpd, which performs a user principal lookup and then decides on the details of how that user will be authenticated.
For the local KDC case, we used a similar approach but wrote a simplified version, localkdc-pam-auth, which uses PAM to authenticate user credentials. It works well and allows for a drop-in replacement: once the local KDC is set up, users defined on the system will automatically be able to receive Kerberos tickets, with no need to change any passwords or migrate their credentials into the Kerberos KDC. All we need now is the business logic to guide the KDC to use the OTP pre-authentication mechanism so that our RADIUS ‘proxy’ (localkdc-pam-auth) gets activated. This logic is implemented and will be available in the first localkdc release soon.
But back to the KDC side. As mentioned above, our goal was to write the local KDC database driver in a modern, safe language. Interfacing Rust with the MIT Kerberos KDC means building an interface that allows aligning code on both sides. This is what this blog is actually about (sorry for the long prelude…): how to make an MIT Kerberos KDB driver in Rust.
Today I published Kurbu5, a project that aims to provide these API bindings to Rust. The name is a transliteration of “krb5” into Mesopotamian cuneiform phonology: Kurbu-ḫamšat-qaqqadī—”The Blessed Five-Headed One”.
Creating API bindings is tedious work: there are many interfaces, each representing multiple functions and structures. MIT Kerberos has 12 interfaces which altogether expose roughly 117 methods that plugin authors implement, backed by around 70 supporting types (data structures passed into and out of those methods). It all sounds like a Tolkien tale: nine interfaces for core Kerberos functionality (checking password quality, mapping hostnames to Kerberos realms, mapping Kerberos principals to local accounts, selecting which credential cache to use, handling pre-authentication on both the client and server side, enforcing KDC policy, authorizing PKINIT certificates, and auditing events on the KDC side), the database backend interface, and two administrative interfaces. This is something that could be automated with agentic workflows—which I did to allow a parallel porting effort. The resulting agent instructions are useful artifacts in themselves: they show how to work when porting MIT Kerberos C code to Rust.
The result is split over several Rust crates to allow targeted reuse. The bulk of the code lives in three crates. The core Kerberos plugin crate (kurbu5-rs) is the largest at around 12,600 lines. The database backend crate (kurbu5-kdb-rs) follows at 5,600 lines, and the administration crate (kurbu5-kadm5-rs) at 3,100 lines. The remaining crates—the proc-macro derives and the raw FFI sys crates—are much smaller, with the sys crates being almost trivially thin (the KDB and kadm5 ones are under 40 lines each, since they mostly just re-export bindings from the main sys crate).
All crates are available on crates.io and share the same MIT license as the original MIT Kerberos.
In the localkdc project, we use kurbu5 to build a KDB driver and provide our audit plugin. We also have an experimental re-implementation of the OTP pre-authentication mechanism, both client and KDC sides, that was used to test interoperability with MIT Kerberos versions. The core of the KDB driver is ~520 lines of heavily documented Rust code, mostly handling business logic.
A somewhat quiet week in fedora land this time, which is nice, as it allows for catching up on planned work. Of course there was the usual flow of day to day items too.
Long ago OpenShift used a custom object called 'DeploymentConfig' to define how to deploy applications. After a while it was deprecated in favor of the normal k8s 'Deployment' object. We have a bunch of apps using the old DeploymentConfig and we wanted to migrate them to the new Deployment.
To be clear, this is just a deprecation right now, it's not been removed from OpenShift yet, but we wanted to get things moved sooner rather than later.
So, Pedro did all the heavy lifting here and created pull requests for all our apps to move them.
I spent some time this last week merging those and then doing the dance to change the existing app over, which roughly was:
merge pull request
delete DeploymentConfig
run ansible to deploy the Deployment
check that everything was redeployed and working correctly.
I managed to find a few apps in staging that were not working or deployed correctly and had to fix those up along the way. We also hit some issues with selectors not getting updated, so applications didn't have correct routes/services.
There's a few more of these to do, but they will probably wait until after freeze is over as they could be disruptive.
Speaking of freeze, we started the Fedora 44 Final infrastructure freeze. So far things are looking smooth for composes and such.
There are a few blockers currently, but hopefully we can get them sorted out and get a good release soon.
koji 1.36.0 came out last week and I spent a bit of time this week looking at modernizing the fedora spec to more match the python packaging guidelines and also to enable tests.
My somewhat hacky pr is at https://src.fedoraproject.org/rpms/koji/pull-request/29
It's nice to run the tests and have things not throwing deprecation warnings.
I have some posts planned which I need to actually write up sometime. One on my solar system, which is mostly going great, and another fun one on open source monitoring of blood glucose levels. Perhaps this weekend.
I'm going to be largely away from the internet the week of April 20th. I'm going on a family vacation to Hawaii. :) I have never been there, so it should be pretty fun. I'll probably check emails from time to time, but I will definitely not be around day to day on matrix/slack/irc/whatever.
As always, comment on mastodon: https://fosstodon.org/@nirik/116347877029785741
In order to perform test driven development, you need a way to drive your code that can isolate behavior. Linux Kernel drivers that communicate with hardware devices can be hard to test: you might not have access to the hardware from your test systems, or the hardware may be flakey. I have such a set of issues with the Platform Communication Channel (PCC) drivers I am working with.
My primary work has been with a network driver that only exists on the newest hardware. However, I also need to be able to handle some drivers that would only work against old hardware. There are also PCC based drivers for hardware that my company does not support or have access to. I might want to make a test to ensure that changes to the Linux Kernel PCC driver do not change its behavior against these drivers. There exists no system where all of these drivers would be supported. But I can build one with Qemu.
The Qemu based driver might not completely simulate the hardware exactly as implemented, and that is OK: I want to be able to do things with Qemu I cannot do with current hardware. For example, the MCTP-over-PCC driver should be able to handle a wide array of messages, but the hardware I have access to only supports a very limited subset of message types.
The full code for the device is here.
Here is how I went about building a Qemu based PCC driver.
I want this code to run on Aarch64 (ARM64) natively. That means that I run the machine specified in hw/arm/virt.c. Thus, the first line of my run script is:
../qemu/build/qemu-system-aarch64 -machine virt \
The device itself lives in hw/arm/pcc.c. It was originally called mctp-pcc.c, but I soon realized that there was no reason to make it MCTP specific. While the code is testing type 3/4 devices, I suspect it would work fine for a type 2 or other driver with a minimum of changes.
Every device has to hang off a bus. Thus I started by creating a device off the system bus, as this article suggests: SysBusDevice parent_obj; This differs from some of the other examples out there where you create, say, a PCIe device, as there is a way to dynamically load PCIe devices: you cannot dynamically load SysBus devices, at least not in the default AARCH64 Qemu virt machine. Thus, I have to modify the virt.c code to add in my device.
I had to generate two new ACPI tables: the Secondary System Description Table (SSDT) and the Platform Communications Channel Table (PCCT). These tables are generated from a call in virt.c to create_pcc_devices. This function probably should be moved to a PCC specific file so it could potentially be shared by other virtual machine types, but for now it co-exists in virt.c as well. For now it is hard coded to only build the one device. This is obviously not going to scale. I will talk about how to improve this at the end of the article.
The bulk of the code in the driver is for generating the entries for the PCCT. The data in the PCCT includes the addresses of the shared memory registers and data buffer, and the IRQ ID used to communicate between the OS and the platform. The information is stored in a structure called PcctExtMemSubtable, which is then written to the PCCT using ACPI primitives. This structure is filled in during the device realize function, mctp_pcc_realize.
The SSDT is a bit more free-form and does not have a structure to support it, though it probably should. Right now I am just writing the direct primitives for the entry.
Both the outbox and inbox channels are mapped to a single, contiguous block of memory. When reads or writes happen, Qemu forwards them to custom functions. I can then use the memory offset to identify whether the access targets a register or the shared buffer. One of these memory offsets is the doorbell, which is used to implement the IRQ processing.
Each machine type in Qemu has a memory map table. In virt.c it is called
static const MemMapEntry base_memmap[] = {...
I found an unclaimed space in the middle of the table and used it for both of the channels of the PCCT. The code looks like this:
[VIRT_MMIO] = { 0x0a000000, 0x00000200 },
/* ...repeating for a total of NUM_VIRTIO_TRANSPORTS, each of that size */
[VIRT_PCC] = { 0x0a008000, 0x00008000 },
[VIRT_PLATFORM_BUS] = { 0x0c000000, 0x02000000 },
There is enough room between VIRT_PCC and VIRT_PLATFORM_BUS for multiple PCC entries. NUM_VIRTIO_TRANSPORTS is set to 32 (0x20); multiplied by 0x200 and added to the 0x0a000000 base, the virtio region ends at 0x0a004000, leaving plenty of room before 0x0a008000.
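The map arithmetic is easy to sanity-check at the command line (a quick sketch; the constants are copied from the memmap quoted above):

```shell
# End of the VIRT_MMIO region: NUM_VIRTIO_TRANSPORTS (32) slots of 0x200 each.
virtio_end=$(( 0x0a000000 + 32 * 0x200 ))
printf 'virtio region ends at 0x%08x\n' "$virtio_end"
# The PCC region, 0x8000 long, sits well clear of it and of VIRT_PLATFORM_BUS.
printf 'PCC region is 0x%08x..0x%08x\n' $(( 0x0a008000 )) $(( 0x0a008000 + 0x8000 ))
```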
Just as the machine has a mapping for memory-mapped IO, it has a table for IRQs. For virt.c this table is defined as
static const int a15irqmap[] = {
I added the PCC IRQs in like this:
[VIRT_SMMU] = 74, /* ...to 74 + NUM_SMMU_IRQS - 1 */
[VIRT_PCC] = 80 , /* and 81 */
[VIRT_PLATFORM_BUS] = 112, /* ...to 112 + PLATFORM_BUS_NUM_IRQS -1 */
Since NUM_SMMU_IRQS is defined as 4, we have enough room for 2 IRQs at 80.
The ARM64 virtual machine uses a GIC. It has an internal offset, so IRQ ID 1 inside Qemu becomes IRQ 33 inside the Linux virtual machine. The actual mapping takes place inside create_pcc_devices:
qemu_irq out_q_irq = qdev_get_gpio_in(vms->gic, outbox_irq);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 0, out_q_irq);
qemu_irq in_q_irq = qdev_get_gpio_in(vms->gic, inbox_irq);
sysbus_connect_irq(SYS_BUS_DEVICE(dev), 1, in_q_irq);
The outbox is designed to be triggered from the OS, with the platform triggering back once a message has been processed. The inbox is for sending messages to the OS.
One thing that is not well done yet: these numbers are not communicated to the device; right now we use magic constants to keep them in sync. This is something to improve in the future.
Qemu has a standard way to represent all hardware devices. Even though ACPI can play this role in a physical machine, Qemu goes with the more uniform Flattened Device Tree. Thus, for each device we create, we need to create an FDT entry. This includes knowing about the interrupts assigned.
qemu_fdt_add_subnode(ms->fdt, nodename);
qemu_fdt_setprop_string(ms->fdt, nodename, "compatible", TYPE_MCTP_PCC);
qemu_fdt_setprop_sized_cells(ms->fdt, nodename, "reg", 2, base, 2, size);
qemu_fdt_setprop_cells(ms->fdt, nodename, "interrupts",
GIC_FDT_IRQ_TYPE_SPI, outbox_irq, GIC_FDT_IRQ_FLAGS_LEVEL_HI,
GIC_FDT_IRQ_TYPE_SPI, inbox_irq, GIC_FDT_IRQ_FLAGS_LEVEL_HI);
When an interrupt comes from the OS to Qemu, I copy the contents of the shared buffer to a file in /tmp/pcc/outbox. I have written a program called PCCD which runs as an external process. PCCD uses Inotify to detect that a new file has been written and closed, and then processes the file. PCCD responds by posting a message to an inbox directory. Qemu also uses Inotify to detect that there is a new message, and stores it in the shared buffer. It then triggers an IRQ to tell the OS that there is a message to read. All file names are generated from timestamps.
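The exchange above can be sketched as a toy shell script (the directory layout and message text are invented for the demo, and plain iteration stands in for Inotify so the sketch is self-contained):

```shell
# Mailbox directories standing in for /tmp/pcc/{outbox,inbox}.
box=$(mktemp -d)
mkdir "$box/outbox" "$box/inbox"

# "Qemu" side: dump the shared buffer into a timestamp-named outbox file.
printf 'GetMessageTypeSupport' > "$box/outbox/$(date +%s%N)"

# "PCCD" side: pick up each outbox file and post a reply to the inbox.
for f in "$box"/outbox/*; do
    printf 'ACK: %s' "$(cat "$f")" > "$box/inbox/$(date +%s%N)"
done

cat "$box"/inbox/*    # prints: ACK: GetMessageTypeSupport
```

The real PCCD replaces the loop with an Inotify watch, so it reacts as soon as a file is closed instead of scanning the directory.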
I was able to reuse a shell script I had written for the MCTP-over-PCC driver to send messages to the kernel; I copied it inside the VM. This is essentially the same test I use against the physical hardware implementation. However, now I can extend it to send messages that the hardware does not implement. To do this, I can implement the messages in PCCD.
The PCCT itself could be thought of as a type of bus. It may make sense to create a new bus type to support it and the devices that hang off it. That would provide a way to scope in PCC-specific behavior.
There is a mechanism to create DSDT entries for ACPI device interfaces. It loops through all the devices on a Bus and checks to see if the device implements the AcpiDeviceIf interface. If it does, it adds a couple functions to the device. While our devices are ACPI devices, we do not need those functions. Instead, we can take the pattern and create a PCC interface that allows the device to define its own values.
struct AcpiDevPccIfClass {
/* <private> */
InterfaceClass parent_class;
/* <public> */
dev_pcc_fn build_dev_pcc;
dev_ssd_fn build_dev_ssd;
dev_pcct_ext_mem_subtable_fn get_ext_mem_subtable;
};
This interface could be hung off of the SystemBus, but then we would need to enumerate each SysBusDevice to see whether it implements this interface.
Both options seem viable.
The benefit of going with the PCCBus is that we should be able to make the devices loadable at run time via command-line parameters. Doing that with SystemBus would require a change to virt.c that might not be acceptable.
And I need a struct for the SSDT.
A huge Thank You to Greg Rose for his support and mentorship on this project.
Changes do not always get accepted upon initial submission. My current submission of the MCTP over PCC patch is at revision 37 and will likely have more. Previously, this patch was part of a series, and the change log was displayed in the series header email. However, now that I am down to a single patch, the change log should go in the email message with the patch attached.
It turns out this is fairly simple to do: put the change log at the bottom of the commit message, after the Signed-off-by tag and after three dashes:
Signed-off-by: Adam Young <admiyo@os.amperecomputing.com>
---
Changes in V38:
- Release all struct sk_buff messages in Mailbox ring buffer when Network device is stopped
Changes in V37:
- Free Channel in error case during init MTU.
- Ensure Shared Buffer is > min-MTU + PCC Header
git format-patch will put all of this into the patch file. The change log will have a line of three dashes at the start and at the end. When you run git am <patch>, the information between the two sets of three dashes is removed, and the commit message will end with the Signed-off-by: line.
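The round trip is easy to try in a throwaway repo (the repo paths, file name, and change-log text here are invented for the demo):

```shell
set -e
src=$(mktemp -d); dst=$(mktemp -d)
git -C "$src" init -q
git -C "$dst" init -q
git -C "$src" config user.name "Example"; git -C "$src" config user.email "e@example.com"
git -C "$dst" config user.name "Example"; git -C "$dst" config user.email "e@example.com"
git -C "$dst" commit -q --allow-empty -m init

# Commit with a change log after the three dashes.
echo change > "$src/file.txt"
git -C "$src" add file.txt
git -C "$src" commit -q -F - <<'EOF'
example: demo commit

Signed-off-by: Example <e@example.com>
---
Changes in V2:
- Example change log entry
EOF

git -C "$src" format-patch -1 -o "$src" >/dev/null
git -C "$dst" am -q "$src"/0001-*.patch

# The change log survives in the patch file but not in the applied commit.
grep -q 'Changes in V2' "$src"/0001-*.patch && echo 'change log is in the patch'
git -C "$dst" log -1 --format=%B | grep -q 'Changes in V2' || echo 'stripped by git am'
```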
This is a report created by the CLE Team, a team of community members working in various Fedora groups such as Infrastructure, Release Engineering, and Quality. This team is also driving several initiatives inside the Fedora project.
Week: 31 Mar – 03 Apr 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding AI in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping package things.
If you have any questions or feedback, please respond to this report or contact us on #admin:fedoraproject.org channel on matrix.
The post Community Update – Week 14 2026 appeared first on Fedora Community Blog.
Caveat lector
This post discusses tools reluctantly written with AI assistance. If you won't entertain using them under any circumstance, and think even reading about them legally compromises your ability to reimplement them yourself, stop reading now.
This is a follow-up to the original Sandogasa announcement. Before I ended up reworking fedora-cve-triage to extract library crates and reuse them in the other Sandogasa tools, I had already created two tools for managing CentOS Hyperscale SIG workflows, hs-intake and hs-relmon. It simply makes sense to also merge them back in and deduplicate functionality.
I'm doing a podcast recording this week, so I wanted to run some numbers so I could have some facts rather than feels. It turns out my feels were off by a factor of 3 or so.
If asked, I've always said the contributor count to the drm subsystem is probably around 100 or so developers per release cycle.
I did the simplest thing:
git log --format='%aN' v6.14..v6.15 drivers/gpu/drm/ include/uapi/drm/ include/drm/ | sort -u | wc -l
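The shape of that pipeline is easy to sanity-check against a tiny synthetic repo (the repo and author names below are invented for the demo; it just shows how sort -u | wc -l deduplicates authors):

```shell
set -e
repo=$(mktemp -d)
git -C "$repo" init -q
# Three commits from two distinct authors.
for author in "Alice Dev" "Bob Dev" "Alice Dev"; do
    git -C "$repo" -c user.name="$author" -c user.email="dev@example.com" \
        commit -q --allow-empty -m "change by $author"
done
git -C "$repo" log --format='%aN' | sort -u | wc -l    # two unique authors
```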
I iterated this over a few kernel releases:
v6.15 326
v6.16 322
v6.17 300
v6.18 334
v6.19 332
v7.0-rc6 346
The numbers for the complete kernel over the same ranges are usually ~2000, which means the drm subsystem has around 15-16% of the kernel contributors.
I'm a bit spun out, that's quite a lot of people. I think I'll blame Sima for it. This also explains why I'm a bit out of touch with the process problems other maintainers have, and when I say stuff like a lot of workflows don't scale, this is what I mean.
The fact that only one candidate is running in the Debianism elections gives a stark reminder about the state of the so-called community. The main reason why other people did not contest the election is because of fear. Fear of a circle of reprisals that began when Adrian von Bidder-Senn died on our wedding day.
When CentOS died, people tried to carry on in various ways. That tells us a lot about human psychology. People knew the game was over but they tried to continue as if it was business as usual, as if the situation could be salvaged, as if it was only a temporary crisis.
Due to years of censorship, including the payment of $120,000 to steal Debian-related domain names, the Debianists have been living in a bubble and deluding themselves. When Sruthi Chandran nominated on Friday 13th, people acted as if this was a good thing.
Now Sruthi has stopped answering questions on the Debian-vote mailing list and it seems reality has started to sink in. People are coming to realize that the position of Debian Project Leader is the interface between Debianism and the outside world. People can fool themselves and use the Code of Conduct gaslighting to blackmail other volunteers to pretend that Sruthi is a great leader. People are coming to realise that these tricks won't work on the wider community. Given that Sruthi would be Debian's interface to the outside world, we can't just ignore how the world views the candidate who is the wife of another developer.
She has ignored the most serious questions on Debian-vote mailing list. A woman trying to run Debian from a social control media account is the death of Debian. Here is a tally of the number of replies she provided each day for those who use email, the mainstay of Debian communication:
| Day | Count |
|---|---|
| 14 March | 0 |
| 15 March | 0 |
| 16 March | 0 |
| 17 March | 4 |
| 18 March | 0 |
| 19 March | 0 |
| 20 March | 0 |
| 21 March | 3 |
| 22 March | 1 |
| 23 March | 0 |
| 24 March | 7 |
| 25 March | 0 |
| 26 March | 0 |
| 27 March | 0 |
| 28 March | 0 |
| 29 March | 0 |
| 30 March | 0 |
| 31 March | 0 |
That is a total of only 15 replies. She has been largely silent for a whole week since 24 March.
Technically, questions and their answers are supposed to be completed before midnight on Friday, 3 April. The most critical questions have not been answered. In her platform, Sruthi Chandran boasts about being the "Chief orga DebConf India 2023" but there has never been an official report about the death of Abraham Raji at the conference.
Voting runs from 4 April to 17 April, which is the 15th anniversary of the day Adrian von Bidder-Senn died on our wedding day. It was discussed like a copy-cat suicide but there was no official report about those deaths either.
Remember the words from Abraham Raji himself:
Everything in Debian is transparent, all forms of official communication are a matter of public record, the amount of unresolved bugs, every step taken by debian as an organization, everything is in the open! I appreciate that from my distribution. There is no room for underhand corporate deals, no unfair treatment behind private mails and everything can be reviewed by the public.
Does Sruthi Chandran spend more time in debian-private (leaked) and WhatsApp groups than the public communication channels that Debian is supposed to be using?
Sruthi Chandran's platform tells us she wants to put diversity ahead of traditional goals like freedom and security. She has been very vague about this. As a consequence, more evidence is going to be published during the voting period to prove that Debian "diversity" means some men who did the real work are not being given credit while some large sums of money were assigned to the wives and girlfriends of cabal members.
I've never stated whether people should vote for Sruthi Chandran or not. Looking at the tone of the discussion, I feel people are coming to realise the way the outside world views candidates like this is not the same way that people view it from inside the bubble.
Consider the irony: they spent all that money in arguments about leaks that are "tarnishing" the trademark. The implication of these arguments about tarnishing is that the way the outside world views Debianism does matter. Can anybody see the risk that Sruthi Chandran and a lop-sided diversity crusade could do far more to tarnish the trademark than any leaks that have appeared up to this moment?
Debian may not die exactly the same way that CentOS died. At some point, as with CentOS, we will go past the point of no return. Maybe we already did. Will people have the courage to ask questions before that threshold is crossed or will they continue acting as if nothing is wrong even long after the life support system has been unplugged from the corpse?
Remember, Debianists gave over $120,000 in kill money to racist Swiss lawyerists to attack my family but they didn't pay Abraham Raji anything for the work he did helping organise DebConf23. When Raji joined the other developers on the day trip, they asked him to contribute some of his own money; he was left behind to swim alone and he drowned. Yet the lawyerists were given $120,000.
The best way to encourage people to nominate for the election will be for the existing leader, Andreas Tille, to withdraw all the privacy attacks, settle the lawsuits proactively and ensure the next leader can walk in and find the desk is clean ready to work on productive things.
Don't hold your breath waiting for transparency about these attacks on my family. There is still time to watch my video and contribute to the crowdfunding campaign.
In previous reports, we've described the tactics of the FSFE misfits using terms like identity fraud and Nigerian fraud.
In the French justice system, they have their own name for this type of scam: parasitisme.
Parasitisme is the French legal concept of one person or organisation trying to usurp the reputation of another person or organisation to obtain money or procure the work of volunteers.
As an Australian, the Fellowship elected me on ANZAC Day in 2017 to investigate the FSFE misfits in Berlin. Australians don't know a lot of French but we know parasites.
Is the judgment real or is it just another April fools day joke? The facts and the leaks are certainly real. The French courts really use the word Parasitisme to describe this type of scam. However, some fake judgments are being distributed with the number CO23.002709 and the names of Caroline Kuhnlein-Hofmann and Mélanie Bron. You can recognise the fake judgments because they have the date of the Kristallnacht, 10 November, on the first page.
Click for the parasitisme judgment and punishments:
If you want Australians to continue working to put these dangerous pests to rest, please watch my crowdfunding campaign video and discuss it with your community today.
About seven years ago, a ticket was filed noting aarch64 systems were shipping with Secure Boot enabled, and that Fedora should start signing its boot path to support these devices out of the box.
I’m pleased to say that today’s Fedora Rawhide images - what will be Fedora 45 - finally do this, thanks to the work of a whole bunch of people.
This means you can grab the latest Rawhide images and boot them on your favorite aarch64 laptop
without turning off Secure Boot, or launch VMs in any of the major clouds with Secure Boot on. For
example, I’m able to start a VM in Azure with the TrustedLaunch security type:
❯ az group create --name "jcline-aarch64-secureboot" --location "eastus2"
❯ az vm create --location eastus2 --name fedora \
--resource-group jcline-aarch64-secureboot \
--image /CommunityGalleries/Fedora-5e266ba4-2250-406d-adad-5d73860d958f/Images/Fedora-Cloud-Rawhide-Arm64/Versions/latest \
--security-type TrustedLaunch \
--size Standard_D2plds_v6 \
--accept-term \
--ssh-key-values @/home/jcline/.ssh/id_ed25519.pub
❯ ssh jcline@20.12.69.183
[jcline@fedora ~]$ mokutil --sb-state
SecureBoot enabled
The way Fedora used to sign UEFI applications for Secure Boot was delightfully simple (for some
value of simple). The keys were in a smart card, plugged into a special build host, and anything
that needed a signature was routed to be built on that host. pesign, one of the common utilities
to sign PE applications, has a mode where it can run as a daemon and sign anything provided to it
over a Unix socket. That Unix socket is threaded into the build environment, where builds can access
it to sign PE applications with pesign-client.
Unfortunately, that host was x86_64 so when aarch64 started shipping with Secure Boot enabled, an alternative approach was needed.
Ultimately we moved the smart card to the signing server we use for RPMs and other things. The tricky bit about the whole process is that Fedora signs each bit of the boot chain during the build. Each time any of the UEFI applications in the boot chain is built it needs to be signed. One way to do this is to build the application in Fedora’s infrastructure, and then have a second build which uses the output of the first build along with a signature as input to construct a signed final version. However, this means you’ve got two specfiles which you have to keep in sync, and there’s probably other painful aspects I’ve not considered. In any case, that’s not what Fedora does.
Instead, Fedora signs the UEFI applications during the build. Since we want the signing key to be
stored in a remote server, this implies some sort of networking, but builds aren’t permitted network
access. Nor can the build environment provide the necessary secrets to authenticate with the signing
service. In order to handle this, I wrote a small service that pretends to be the pesign Unix
socket, and that can be exposed to the build environment in the same way. However, it just shovels
anything it gets to the signing server and returns whatever the signing server does.
That service got deployed last week, and after a little bit of debugging it even worked. In fact, everything was signed for aarch64 last week, except for the fallback UEFI application that adds a boot entry for Fedora if it’s not there, which happens on first boot. Without that, booting new images would fail unless you explicitly added the correct Fedora boot entry manually. Yesterday, shim got rebuilt and everything works.
It’s possible this will eventually work in Fedora 44 Cloud images. Shim in Fedora 44 hasn’t (yet) been rebuilt and we’re in the final freeze for Fedora 44, so unfortunately we just missed it, but if it does get rebuilt later, Cloud images will be updated and will start working.
For Fedora 43 and older, the version of shim shipped doesn’t include the version signed for aarch64. I’m not sure it’s worth the risk to update it, as much as I’d like it to work there, as well.
Anyway, Fedora 45 will be upon us before you know it, and after seven years, six more months isn’t so bad, right?
Thoughts, comments, or feedback greatly welcomed on Mastodon
If religion is a form of social engineering attack, having your own language is the ultimate mind-control trick.
Latin is the language of God - or control.
The Latin-1 character set (ISO 8859-1) is natively supported in the Linux console driver, including Red Hat Linux systems.
On 12 December 1924, Pope Pius XI created the Diocese of Raleigh, North Carolina. In 1971, Pope Paul VI forked the diocese to create a separate Diocese of Charlotte.
Red Hat was founded in Durham. They subsequently relocated to Raleigh and in 2009 they began preaching about the need for Codes of Conduct demagoguery.
From 2009 meeting minutes:
Paul: There are just as many RHT'ers contributing to the negativity as volunteers
In the rival Diocese of Charlotte, Bishop Michael Martin was appointed on 9 April 2024.
In May 2025, a draft Code of Conduct was leaked from Bishop Martin in Charlotte, North Carolina. The document includes harsh new restrictions on the use of practices from the Traditional Latin Mass.
In a report from 31 March 2026:
The Diocese of Charlotte, a focal point of liturgical conflict
The controversy is not new. Since his arrival in the diocese in 2024, following the resignation for health reasons of Bishop Peter Jugis, Michael Martin has been the target of strong criticism from the faithful attached to the traditional liturgy.
The most significant episode occurred in May 2025, when he reduced from four to one the authorized locations for the celebration of the traditional Latin Mass, leaving only one chapel for these celebrations. The measure was presented expressly as part of the application of Traditionis Custodes.
This was followed by leaked diocesan documents that pointed to new restrictions, including possible limits on the use of Latin in the liturgy, certain traditional vestments, and some habitual postures of the faithful at the moment of Communion.
Prohibition of kneelers and Communion rails
The tension increased even further in September 2025, when Martin prohibited the use of an altar rail in a Catholic school in Charlotte.
The fact that the 2025 document is a draft should not be overlooked. It is not unusual for managers to ask their subordinates to write drafts like this full of ideas from brainstorming sessions. The bishop may have asked different subordinates or theology students to write competing drafts with the intention of cherry-picking ideas from each of them rather than using any draft verbatim. Therefore, as one of possibly many drafts, it has no meaning until Bishop Martin decides to make a public statement.
Nonetheless, the leaking of a draft is a great way to get people talking about the Catholic Church. Many companies deliberately leak their own documents to gain attention on social control media.
Here are some of the key points and benefits of the Latin language:
However, Latin has been challenged over the years. English bibles have been around for centuries.
Can you put candles on the altar? The candles are part of the Traditional Latin Mass but there have been arguments about whether they are permitted or even banned in the post-Latin world. My first ever voluntary role was being an altar boy at our local church. Lighting the candles before mass and extinguishing them at the end of the mass were among our responsibilities. Was I breaking the Code of Conduct by lighting these candles for an English-language mass?
In fact, due to the extreme temperatures in Australia, we have many days of the year when all types of fire are banned. If the altar boys still light the candles on one of the total fire ban days, are they simultaneously violating two Codes of Conduct?
Latin was both the language of the empire and the language of the official state religion. Today, English is the language of business, but more people speak Chinese or Spanish. There are many more religions and sects. When children are born into a family that practices the Traditional Latin Mass, they have the opportunity to learn Latin from childhood. When modern day religions and cults recruit adult members who never practiced a religion before, they don't have the same opportunity to indoctrinate those people into a language.
The use of the Latin language has disadvantages.
The Vatican has not completely banned use of the Traditional Latin Mass. They have not completely banned candles and some other traditions that are associated with the Traditional Latin Mass. Nonetheless, there is a Code of Conduct that every modern day diocese is expected to follow and that means the majority of masses should be offered in the local language(s) understood by the congregation.
Please follow the Catholic.Community web site and make it your home page.
I’ve been trying out contact lenses for the first time. Multi-focal lenses provide different focal lengths to the eye at once, and you can have different prescription lenses in each eye (as long as they don’t differ by too much).
This means the brain is getting signals from the eyes, each providing potentially multiple focal lengths, and learns to combine them to reduce blur. It’s interesting and I wanted to be able to visualise how that works, so I made this interactive simulator with the help of Gemini. It shows a heatmap (green is sharp, red is blurry) over distance, comparing uncorrected vision with modern multi-focal lenses. Try it out! All the calculations happen locally within your browser.
Modelling EDOF contrast loss, intermediate dips and true binocular fusion.
Why do the rows look the way they do? Multi-focal contact lenses trade absolute sharpness for a wider range of vision. Without corrective lenses, seeing an object in perfect focus means all of the light entering your eye converges to a single sharp point on the retina.
With modern contact lenses this same amount of light is focused into multiple points: some rays converge early (in front of the retina) and some late (behind it). This is why the graph doesn't show true green for the contact lenses. It's the trade-off.
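In thin-lens terms the trade-off can be written down (standard optics, not from the post; the symbols here are my own): an eye system with total power P viewing an object at distance d forms an image at vergence

```latex
\frac{1}{v} = P - \frac{1}{d}
```

A multifocal lens presents several powers P_1, P_2, ... simultaneously, giving several image distances v_i for the same object. At most one v_i can coincide with the retina, so the rest of the light lands slightly defocused, which is why the heatmap never reaches full green.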
This next interactive simulator shows what’s actually happening to the rays of light within the eye, explaining why the focus heatmap above has dips in between areas of very good focus.
Observe how light splits in a multifocal lens. The rays only glow cyan when they form a perfectly sharp point on the retina.
The post Visualising vision correction appeared first on PRINT HEAD.
Last week we finally got the new secure boot setup fully switched over. We are now signing aarch64 grub2/kernel/fwupd as we are the x86_64 versions. The aarch64 signed artifacts are in rawhide now, but will move to stable releases as testing permits.
Sadly my Lenovo slim7x doesn't boot correctly with the signed artifacts, I think due to needing a firmware update or manually enrolling the Microsoft certs. I'll try to test more with it when I can, but many other folks are seeing it work fine.
It's been a 7 year journey to get this done. Why so long? A few of the reasons in no particular order:
At first we were not even sure MS would sign others on aarch64
Our old x86_64 setup was smart cards in 2 builders, and we didn't have any easy way to install more in aarch64 builders.
They stopped making the smart cards we were using.
There were a number of things that made the fedora aarch64 kernel not work with secure boot. Many around the 'lockdown' patches.
Lack of time from everyone involved.
Need for someone to write a way to use our normal signing server to sign these things (so we wouldn't need cards in builders).
Lack of capacity in old smart cards to add new certs.
And probably many more things I have forgotten about.
Feels great to get us in a better place and have signed aarch64 builds!
We had a mass update/reboot cycle this last week. It went pretty smoothly this time as we were not applying firmware updates or doing any other work.
We should be all caught up for the freeze next week....
Next Tuesday starts the Fedora 44 Final freeze. These are the weeks running up to the Fedora 44 Linux final release. So, if you need to get anything in, do so before Tuesday.
So the reason I was offline Thursday was because I was getting solar, battery, and inverter installed here. It's already pretty awesome. Look for a long blog post on it next week or so.
During this freeze I am hoping to get started on some projects I was meaning to do already, but got busy with the signing stuff: revamping our backups and moving more stuff to rhel10 (will do staging in freeze).
As always, comment on mastodon: https://fosstodon.org/@nirik/116308267360944066
The 23rd edition of my favorite conference just came to an end, and I can’t believe this incredible feeling of joy, satisfaction, gratitude, and pride that I’m still experiencing even though it has been a few days since I attended. It is probably similar to the first time, when I went Wow! The dust has finally settled after SCaLE 23x in Pasadena, and if you weren’t there, you missed one for the history books. The sun was out, the Pasadena Convention Center was buzzing, and the Fedora Project was right in the thick of it.
This year felt different. For the first time in many years, we had a dedicated Fedora Hatch track on Friday in Room 208. It was basically a mini-Flock (our contributor conference) nestled right inside SCaLE. We covered everything from Fedora Docs revamps to the «Age of Atomic» desktops.
One of the standout moments was the RPM Packaging Workshop. Seeing folks roll up their sleeves to learn the guts of Fedora packaging reminded me why this community is so special—people here don’t just use the tech; they want to build it.


We tried something a little different this year by sharing our booth space with the CentOS Project. Honestly? It was a masterstroke.
The synergy was palpable. Whether we were talking about how CentOS Stream benefits from Fedora’s fast-moving innovation or just swapping stickers, the unified presence showed the strength of the ecosystem. It made the booth a «one-stop shop» for anyone curious about the Red Hat-sponsored community projects, and the flow of traffic was non-stop.


A huge shout-out to our Fedora Ambassadors. You all are the heartbeat of these events. From managing the «swag hounds» (yes, the Fedora 43 stickers went fast!) to answering deep-dive technical questions about GNOME 49 and Btrfs, our ambassadors handled it with grace, wit, and a lot of caffeine.
The enthusiasm from the attendees was off the charts. We met:
In every edition we try to do something different, something that attracts people to our booth out of sheer curiosity. We have had 3D printers, Fedora Jam with actual musical instruments, old games, and more. This year we had a retro modem that hit directly on the nostalgia of many.


Looking back, it’s wild to see how much SCaLE has evolved. What started many years ago as a humble gathering has grown into North America’s largest community-run open-source expo. The professional production, the sheer variety of tracks (AI, Security, DevOps, oh my!), and the diversity of the crowd have all leveled up. Yet, somehow, it still keeps that «local user group» feeling where you can grab a beer with a kernel dev and talk shop.
SCaLE is all about community, and Fedora has a foundation of Friends: our community. This event would not mean the same for Fedora if we didn’t find time to spend with amigos and make our community stronger. This is probably my favorite part, and the single thing that makes me realize it is all worth it.



If SCaLE 23x is any indication, the future of open source in Southern California (and beyond) is looking bright. We left Pasadena feeling energized, inspired, and maybe just a little bit exhausted.
Fedora is already looking forward to 24x. We have big plans for the next release, more «Hatch» events, and even more ways to collaborate with our friends in the ecosystem. See you all next year!

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and as base packages.
RPMs of PHP version 8.5.5RC1 are available
RPMs of PHP version 8.4.20RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.5 as Software Collection:
yum --enablerepo=remi-test install php85
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Update of system version 8.5:
dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.4:
dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
- Software Collections (php84, php85)
- Base packages (php)
On March 17 we released systemd v260 into the wild.
In the weeks leading up to that release (and since then) I posted a series of posts to Mastodon about key new features in this release, under the #systemd260 hash tag. In case you aren't using Mastodon but would like to read up, here's a list of all 21 posts:
I intend to do a similar series of posts for the next systemd release (v261), so if you haven't left tech Twitter for Mastodon yet, now is your opportunity.
My series for v261 will begin in a few weeks most likely, under the #systemd261 hash tag.
In case you are interested, here is the corresponding blog story for systemd v259, here for v258, here for v257, and here for v256.
Released on 2026-03-25.
This is a feature release that adds support for uploading flatpaks to multiple registries.
The `container.destination_registry` config value can now be specified as a list (#6077).
The following developers contributed to this release of Bodhi:
Caveat lector
This post discusses tools reluctantly written with AI assistance. If you don’t entertain using them under any circumstance, and think even reading about them legally compromises your ability to reimplement them yourself, stop reading now.
I’ve spent the past few weeks using LLMs to scratch some long-standing itches that, unfortunately, no one in the community has had the time to solve programmatically.
It started off with fedora-cve-triage, written to address the issue that a lot of CVE bugs filed against Fedora packages are badly attributed, and there is a lack of automation for handling issues filed against CVEs that have been addressed in a software update but failed to reference said issue.
the old pen resists
but the cursor blinks, waiting—
I press Enter. Fine.
~ Claude Opus 4.6 (1M context)
If you know where I work, you’ve probably heard the news reports that we will be judged on AI-driven impact. I’ll let you draw your own conclusion on how much truth there is in the reports, but you can listen to what Zuck said about AI in a recent earnings report.
Suddenly I have been hearing the term Landlock more in (agent) security circles. To me this is a bit weird because, while Landlock is absolutely a useful Linux security tool, it has been a bit obscure, and for good reason. It feels to me a lot like how the weird prevalence of the word “delve” became a clear tipoff that LLMs, not humans, were doing the writing.
Here’s my opinion: Agentic LLM AI security is just security.
We do not need to reinvent any fundamental technologies for this. Most uses of agents one hears about provide the ability to execute arbitrary code as a feature. It’s how OpenCode, Claude Code, Cursor, OpenClaw and many more work.
Especially let me emphasize since OpenClaw is popular for some reason right now: You should absolutely not give any LLM tool blanket read and write access to your full user account on your computer. There are many issues with that, but everyone using an LLM needs to understand just how dangerous prompt injection can be. This post is just one of many examples. Even global read access is dangerous because an attacker could exfiltrate your browser cookies or other files.
Let’s go back to Landlock – one prominent place I’ve seen it mentioned is nono.sh, a project that pitches itself as a new sandbox for agents. It’s not the only one, but it does lean heavily on Landlock on Linux.
Let’s dig into this blog post
from the author. First of all, I’m glad they are working on agentic
security. We both agree: unsandboxed OpenClaw (and other tools!) is a bad idea.
Here’s where we disagree:
> With AI agents, the core issue is access without boundaries. We give agents our full filesystem permissions because that’s how Unix works. We give them network access because they need to call APIs. We give them access to our SSH keys, our cloud credentials, our shell history, our browser cookies – not because they need any of that, but because we haven’t built the tooling to say “you can have this, but not that.”
No. We have had usable tooling for “you can have this, but not that”
for well over a decade. Docker kicked off a revolution for a reason:
docker run <app> is “reasonably completely isolated” from the host system.
Since then of course, there’s many OCI runtime implementations,
from podman to apple/container on MacOS
and more.
If you want to provide the app some credentials, you can just use bind mounts, like `docker|podman|ctr -v ~/.config/somecred.json:/etc/cred.json:ro`.
Notice the `ro` there, which makes it read-only. Yes, it’s that straightforward to have “this but not that”.
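As a concrete sketch of that allow-list pattern (the image name and credential path below are placeholders, not anything the post prescribes):

```
# Default-deny: the container sees nothing of the host except the one
# credential we explicitly allow-list, mounted read-only.
docker run --rm \
  -v ~/.config/somecred.json:/etc/cred.json:ro \
  my-agent-image
```

Everything else on the host (SSH keys, browser cookies, shell history) simply does not exist inside the container.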
Other tools like Flatpak on Linux have leveraged Linux kernel namespacing similar to this to streamline running GUI apps in an isolated way from the host. For a decade.
There’s far more sophisticated tooling built on top of these container runtimes since then, from runtimes transparently backed by virtual machines to Kubernetes and similar projects, which are all about running containers at scale with lots of built-up security knowledge.
That doesn’t need reinventing. It’s generic workload technology, and agentic AI is just another workload from the perspective of kernel/host level isolation. There absolutely are some new, novel risks and issues of course: but again the core principle here is we don’t need to reinvent anything from the kernel level up.
Security here really needs to start from defaulting
to fully isolating (from the host and other apps),
and then only allow-listing in what is needed. That’s again how
docker run worked from the start. Also on this topic,
Flatpak portals
are a cool technology for dynamic resource access on a single
host system.
So why do I think Landlock is obscure? Basically because most workloads should already be isolated per the above, and Landlock overlaps heavily with the wide variety of Linux kernel security mechanisms already in use in containers.
The primary pitch of Landlock is more for an application to further isolate itself – it’s at its best as a complement to coarse-grained isolation techniques like virtualization or containers. One way to think of it is that container runtimes often don’t grant the privileges needed for an application to spawn its own sub-containers (for kernel attack surface reasons), but Landlock is absolutely a reasonable thing for an app to use to e.g. disable networking for a sub-process that doesn’t need it.
Of course the challenge is that not every app is easy to run in a container or virtual machine. Some workloads are most convenient with that “ambient access” to all of your data (like an IDE or just a file browser).
But giving that ambient access by default to agentic AI is a terrible idea. So don’t do it: use (OCI) containers and allowlist in what you need.
(There’s other things nono is doing here that I find dubious/duplicative; for example I don’t see the need for a new filesystem snapshotting system when we have both git and OCI)
But I’m not specifically trying to pick on nono – just in the last two weeks I had to point out similar problems in two different projects I saw go by that were also pitched for AI security. One used bubblewrap, but with insufficient sandboxing, and the other was also trying to use Landlock.
On the other hand, I do think the credential problem (that nono and others are trying to address in different ways) is somewhat specific to agentic AI, and likely does need new tooling. When deploying a typical containerized app, usually one just provisions a few relatively static credentials. In contrast, developer/user agentic AI is often a lot more freeform and dynamic, and while it’s hard to get most apps to leak credentials without completely compromising them, it’s much easier with agentic AI and prompt injection. I have thoughts on credentials, and absolutely more work here is needed.
It’s great that people want to work on FOSS security, and AI could certainly use more people thinking about security. But I don’t think we need “next generation” security here: we should build on top of the “previous generation”. I actually use plain separate Unix users for isolation for some things, which works quite well! Running OpenClaw in a secondary user account where one only logs into a select few things (i.e. not your email and online banking) is much more reasonable, although clearly a lot of care is still needed. Landlock is a fine technology, but it is just not a replacement for other sandboxing techniques. So use containers and virtual machines, because these are proven technologies. And if you take one message away from this: absolutely don’t wire up an LLM via OpenClaw or a similar tool to your complete digital life with no sandboxing.
Pretty much everything I deal with requires parsing ASN.1 encodings. ASN.1 definitions are published as part of internet RFCs: certificates are encoded using DER, LDAP exchanges use BER, and Kerberos packets use DER as well. ASN.1 use is a never-ending source of security issues in pretty much all applications, so having safer ASN.1 processing is important to any application developer.
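To make the DER structure concrete, here is a small sketch using the stock OpenSSL CLI: it generates a throwaway certificate and dumps its raw DER as an ASN.1 tree. The file names under `/tmp` are illustrative only, not anything FreeIPA or synta uses.

```shell
# Generate a throwaway self-signed certificate (RSA for speed).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=asn1-demo" \
  -keyout /tmp/asn1demo.key -out /tmp/asn1demo.pem -days 1 2>/dev/null
# Convert PEM to raw DER and dump the ASN.1 structure.
openssl x509 -in /tmp/asn1demo.pem -outform DER -out /tmp/asn1demo.der
# The outermost element of every X.509 certificate is a DER SEQUENCE,
# which in turn contains the TBSCertificate, signature algorithm, and
# signature BIT STRING.
openssl asn1parse -inform DER -in /tmp/asn1demo.der | head -n 3
```

The nested `SEQUENCE`/`SET` tree that `asn1parse` prints is exactly what every parser discussed below has to walk.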
In FreeIPA we are using three separate ASN.1 libraries: pyasn1 and x509 (part of PyCA) for Python code, and asn1c code generator for C code. In fact, we use more: LDAP server plugins also use OpenLDAP’s lber library, while Kerberos KDC plugins also use internal MIT Kerberos parsers.
The PyCA developers noted in their State of OpenSSL statement:
[…] when pyca/cryptography migrated X.509 certificate parsing from OpenSSL to our own Rust code, we got a 10x performance improvement relative to OpenSSL 3 (n.b., some of this improvement is attributable to advantages in our own code, but much is explainable by the OpenSSL 3 regressions). Later, moving public key parsing to our own Rust code made end-to-end X.509 path validation 60% faster — just improving key loading led to a 60% end-to-end improvement, that’s how extreme the overhead of key parsing in OpenSSL was.
That’s a 16x cumulative performance improvement over OpenSSL 3. OpenSSL has improved its performance since then, but it still pays an overhead for a very flexible design that allows loading cryptographic implementations from dynamic modules (providers). Support for externally provided modules is essential for adding new primitives and for government-enforced standards (such as FIPS 140), where implementations have to be validated in advance and code changes cannot land without an expensive and slow re-validation process.
Nevertheless, in FreeIPA we focus on integrating with Linux distributions. Fedora, CentOS Stream, and RHEL enforce crypto consolidation rules, where all packaged applications must use the same crypto primitives provided by the operating system. We can process metadata ourselves, but all cryptographic operations still have to go through OpenSSL and NSS. And paying large performance costs during metadata processing would hurt infrastructure components such as FreeIPA.
FreeIPA is a large beast. Aside from its management component, written in Python, it has more than a dozen plugins for the 389-ds LDAP server, plugins for the MIT Kerberos KDC, plugins for Samba, and tight integration with SSSD, all written in C. Its default certificate authority software, Dogtag PKI, is written in Java and relies on its own stack of Java and C dependencies. We are using PyCA’s x509 module for certificate processing in Python code, but we cannot use it and the underlying ASN.1 libraries in C, as those libraries either aren’t exposed to C applications or are intentionally limited to PKI-related tasks.
For 2026–2028, I’m focusing on enabling FreeIPA to handle post-quantum cryptography (PQC), as a part of the Quantum-Resistant Cryptography in Practice (QARC) project. The project is funded by the European Union under the Horizon Europe framework programme (Grant Agreement No. 101225691) and supported by the European Cybersecurity Competence Centre. One of the well-publicized aspects of moving to PQC certificates is their size. The following table (Table 5 from the Post-Quantum Cryptography for Engineers IETF draft) summarizes it well:
| PQ Security Level | Algorithm | Public key size (bytes) | Private key size (bytes) | Signature size (bytes) |
|---|---|---|---|---|
| Traditional | RSA2048 | 256 | 256 | 256 |
| Traditional | ECDSA-P256 | 64 | 32 | 64 |
| 1 | FN-DSA-512 | 897 | 1281 | 666 |
| 2 | ML-DSA-44 | 1312 | 2560 | 2420 |
| 3 | ML-DSA-65 | 1952 | 4032 | 3309 |
| 5 | FN-DSA-1024 | 1793 | 2305 | 1280 |
| 5 | ML-DSA-87 | 2592 | 4896 | 4627 |
Public keys for ML-DSA-65 certificates are 7.6x bigger than RSA-2048 ones. You need to handle public keys in multiple situations: when verifying certificates against known certificate authorities (CAs), when matching their properties for validation and identity derivation during authorization, and when storing them. FreeIPA uses LDAP as a backend, so storing 7.6 times more data directly affects scalability as the number of users, machines, or Kerberos services grows. And since certificates are all ASN.1-encoded, I naturally wanted to establish a performance baseline for ASN.1 parsing.
I started with a small task: I created a Rust library, synta, to decode and encode ASN.1 with the help of AI tooling. It quickly grew to have its own ASN.1 schema parser and code generation tool. With those in place, I started generating more code, this time to process X.509 certificates, handle Kerberos packet structures, and so on. Throwing different tasks at Claude Code led to iterative improvements. Over a couple of months we progressed to a project with more than 60K lines of Rust code.
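The size ratios follow directly from the table above; a quick sanity check (public key sizes in bytes, relative to RSA-2048 at 256 B):

```shell
# Public-key size ratios vs RSA-2048 (256 B), taken from the table above.
awk 'BEGIN {
  printf "ML-DSA-44: %.1fx\n", 1312/256
  printf "ML-DSA-65: %.1fx\n", 1952/256
  printf "ML-DSA-87: %.1fx\n", 2592/256
}'
```

The ML-DSA-65 line reproduces the 7.6x figure used in the text.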
| Language | files | blank | comment | code |
|---|---|---|---|---|
| Rust | 207 | 9993 | 17492 | 67284 |
| Markdown | 52 | 5619 | 153 | 18059 |
| Python | 41 | 2383 | 2742 | 7679 |
| C | 17 | 852 | 889 | 4333 |
| Bourne Shell | 8 | 319 | 482 | 1640 |
| C/C++ Header | 4 | 319 | 1957 | 1138 |
| TOML | 20 | 196 | 97 | 896 |
| YAML | 1 | 20 | 46 | 561 |
| make | 4 | 166 | 256 | 493 |
| CMake | 3 | 36 | 25 | 150 |
| JSON | 6 | 0 | 0 | 38 |
| diff | 1 | 6 | 13 | 29 |
| SUM | 364 | 19909 | 24152 | 102300 |
I published some of the synta crates yesterday on crates.io, the whole project is available at codeberg.org/abbra/synta. In total, there are 11 crates, though only seven are published (and synta-python is also available at PyPI):
| Crate | Lines (src/ only) |
|---|---|
| synta | 10572 |
| synta-derive | 2549 |
| synta-codegen | 17578 |
| synta-certificate | 4549 |
| synta-python | 8953 |
| synta-ffi | 7843 |
| synta-krb5 | 2765 |
| synta-mtc | 7876 |
| synta-tools | 707 |
| synta-bench | 0 |
| synta-fuzz | 3551 |
The benchmarking, fuzzing, and tools crates aren’t published; they are only needed for development purposes.
The numbers below were obtained on a Lenovo ThinkPad P1 Gen 5 (12th Gen Intel(R) Core(TM) i7-12800H, 64 GB RAM) running Fedora 42. This is 3–4 year old hardware.
Benchmarking is what brought this project to life, let’s look at the numbers. When dealing with certificates, ASN.1 encoding can be parsed in different ways: you can visit every structure or stop at outer shells and only visit the remaining nested structures when you really need them. The former is “parse+fields” and the latter is “parse-only” in the following table that summarizes comparison between synta and various Rust crates (and OpenSSL/NSS which were accessible through their Rust FFI bindings):
| Library | Parse-only | Parse+fields | vs synta (parse-only) | vs synta (parse+fields) |
|---|---|---|---|---|
| synta | 0.48 µs | 1.32 µs | — | — |
| cryptography-x509 | 1.45 µs | 1.43 µs | 3.0× slower | 1.1× slower |
| x509-parser | 2.01 µs | 1.99 µs | 4.2× slower | 1.5× slower |
| x509-cert | 3.16 µs | 3.15 µs | 6.6× slower | 2.4× slower |
| NSS | 7.90 µs | 7.99 µs | 16× slower | 6.1× slower |
| rust-openssl | 15.4 µs | 15.1 µs | 32× slower | 11× slower |
| ossl | 16.1 µs | 15.8 µs | 33× slower | 12× slower |
“Parse+fields” tests access every named field: serial number, issuer/subject DNs, signature algorithm OID, signature bytes, validity period, public key algorithm OID, public key bytes, and version. The “parse+fields” speedup is the fair end-to-end comparison: synta’s parse-only advantage is large because most fields are stored as zero-copy slices deferred until access, while other libraries must materialise all fields eagerly at parse time.
The dominant cost in X.509 parsing is Distinguished Name traversal: a certificate’s issuer and subject each contain a SEQUENCE OF SET OF SEQUENCE with per-attribute OID lookup. synta defers this entirely by storing the Name as a RawDer<'a> — a pointer+length into the original input with no decoding. cryptography-x509 takes a similar deferred approach. The nom-based and RustCrypto libraries decode Names eagerly. NSS goes further and formats them into C strings, which is the dominant fraction of its 16× parse overhead.
For benchmarking I used certificates from PyCA test vectors. There are a few certificates with different properties, so we parse each of them multiple times and then average the numbers:
| Certificate | synta | cryptography-x509 | x509-parser | x509-cert | NSS |
|---|---|---|---|---|---|
| cert_00 (NoPolicies) | 1333.7 ns | 1386.7 ns | 1815.9 ns | 2990.6 ns | 7940.3 ns |
| cert_01 (SamePolicies-1) | 1348.8 ns | 1441.0 ns | 2033.4 ns | 3174.3 ns | 7963.8 ns |
| cert_02 (SamePolicies-2) | 1338.6 ns | 1440.1 ns | 2120.1 ns | 3205.6 ns | 8206.8 ns |
| cert_03 (anyPolicy) | 1362.4 ns | 1468.3 ns | 2006.2 ns | 3194.5 ns | 7902.4 ns |
| cert_04 (AnyPolicyEE) | 1232.9 ns | 1424.7 ns | 1968.6 ns | 3168.1 ns | 7913.1 ns |
| Average | 1323 ns | 1432 ns | 1989 ns | 3147 ns | 7985 ns |
The gap between synta (1.32 µs) and cryptography-x509 (1.43 µs) is tighter here than in parse-only (3.0×) because synta’s field access includes two format_dn() calls (~800 ns combined) that cryptography-x509 gets effectively for free (its offsets were computed at parse time). synta leads by ~8% overall.
Now, when parsing PQC certificates, an interesting thing happens. First, it is faster to parse ML-DSA than traditional certificates.
| Certificate | synta | cryptography-x509 | x509-parser | x509-cert | NSS |
|---|---|---|---|---|---|
| ML-DSA-44 | 1030.9 ns | 1256.4 ns | 1732.2 ns | 2666.0 ns | 7286.9 ns |
| ML-DSA-65 | 1124.9 ns | 1237.5 ns | 1690.5 ns | 2664.2 ns | 7222.1 ns |
| ML-DSA-87 | 1102.6 ns | 1226.5 ns | 1727.2 ns | 2696.6 ns | 7284.6 ns |
| Average | 1086 ns | 1240 ns | 1717 ns | 2675 ns | 7265 ns |
synta’s ML-DSA parse+fields (1.09 µs) is faster than its traditional parse+fields (1.32 µs)
because ML-DSA test certificates have shorter Distinguished Names (one attribute each in issuer and subject vs multiple attributes in traditional certificates in the test above). The signature BIT STRING — which is 2,420–4,627 bytes for ML-DSA — is accessed as a zero-copy slice with no size-dependent cost.
Imagine your app needs to test whether the certificate presented by a client is known to you (e.g. belongs to a trusted CA set). A library like OpenSSL looks at the client’s certificate, extracts identifiers of the certificate issuer, and looks up whether that issuer is known in the CA database. That requires looking up properties of the certificates in the database; the faster we can do that, the better.
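The trust check described above can be sketched with the stock OpenSSL CLI: build a toy "CA database" of one self-signed root, sign a leaf with it, and ask whether the leaf chains to the trusted set. All names and paths here are made up for illustration.

```shell
# A toy trusted root CA.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=Demo Root CA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null
# A leaf certificate request, then sign it with the toy CA.
openssl req -newkey rsa:2048 -nodes -subj "/CN=demo-client" \
  -keyout /tmp/demo-leaf.key -out /tmp/demo-leaf.csr 2>/dev/null
openssl x509 -req -in /tmp/demo-leaf.csr -CA /tmp/demo-ca.pem \
  -CAkey /tmp/demo-ca.key -CAcreateserial -out /tmp/demo-leaf.pem \
  -days 1 2>/dev/null
# Is the presented leaf issued by a CA in our trusted set?
openssl verify -CAfile /tmp/demo-ca.pem /tmp/demo-leaf.pem
```

Every such check begins by parsing the presented certificate and extracting its issuer, which is exactly the per-certificate cost the benchmarks below measure.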
All those numbers in the previous section are for a single certificate being parsed millions of times. In a real app we often need to validate the certificate against a system-wide database of certificate authorities. The database used by Fedora and other Linux distributions comes from Firefox. It contains 180 self-signed root CA certificates for all public CAs with diverse key types (RSA 2048/4096, ECDSA P-256/P-384) and DN structures. The median cert by DER size is “Entrust.net Premium 2048 Secure Server CA” (1,070 bytes); the benchmark uses this cert for single-certificate and field-access sub-benchmarks to get stable results that are not sensitive to certificate-size outliers.
Another dataset I benchmarked against is 9,898 certificates from the Common CA Database (CCADB), covering the full multi-level hierarchy used by Mozilla, Chrome, Apple, and Microsoft:
| Depth | Count | Description |
|---|---|---|
| 0 | 919 | Root CAs (self-signed) |
| 1 | 6,627 | Intermediates issued directly by roots |
| 2 | 2,212 | Two levels deep |
| 3 | 137 | Three levels deep |
| 4 | 3 | Four levels deep |
Intermediate CA certificates tend to have more complex DNs and more extensions than the root CAs in the Mozilla store. The CCADB median cert is “Bayerische SSL-CA-2014-01” (10,432 bytes). These CCADB certificates cover the past 30 years of certificate issuance on the internet.
To see how those benchmarks would behave if the CA roots database were built with post-quantum cryptography, I rebuilt the CCADB corpus as ML-DSA certificates. Nine CCADB certificates were skipped: OpenSSL’s x509 -x509toreq -copy_extensions copy step failed to convert them to CSR form, typically because those certs use non-standard DER encodings or critical extensions that the x509toreq pipeline cannot copy into a PKCS#10 request. (The failures are in OpenSSL’s cert→CSR conversion; synta parses all 9,898 original CCADB certs without error.) This leaves 9,889 of the original 9,898 certs in the synthetic database.
The median cert by DER size is “TrustCor Basic Secure Site (CA1)” (6,705 bytes). ML-DSA certs range from 5,530 B to 16,866 B; the distribution is shifted left relative to the CCADB RSA/ECDSA median (10,432 B) because the smallest CCADB certs (compact root CAs with few extensions) become the new median position after ML-DSA key replacement enlarges all certs uniformly.
| Benchmark | Library | Dataset | Time | Throughput |
|---|---|---|---|---|
| `synta_parse_all` | synta | Mozilla (180 certs) | 87.8 µs | 2.0 M/sec |
| `nss_parse_all` | NSS | Mozilla (180 certs) | 1.577 ms | 114 K/sec |
| `openssl_parse_all` | rust-openssl | Mozilla (180 certs) | 3.552 ms | 50.7 K/sec |
| `ossl_parse_all` | ossl | Mozilla (180 certs) | 3.617 ms | 49.8 K/sec |
| `synta_parse_and_access` | synta | Mozilla (180 certs) | 261 µs | 690 K/sec |
| `synta_build_trust_chain` | synta | Mozilla (180 certs) | 11.6 µs | — |
| `synta_parse_all` | synta | CCADB (9,898 certs) | 5.10 ms | 1.94 M/sec |
| `nss_parse_all` | NSS | CCADB (9,898 certs) | 106 ms | 93 K/sec |
| `openssl_parse_all` | rust-openssl | CCADB (9,898 certs) | 203 ms | 48.8 K/sec |
| `ossl_parse_all` | ossl | CCADB (9,898 certs) | 214 ms | 46.3 K/sec |
| `synta_parse_and_access` | synta | CCADB (9,898 certs) | 16.1 ms | 615 K/sec |
| `synta_parse_roots` | synta | CCADB (919 roots) | 457.7 µs | 2.01 M/sec |
| `synta_parse_intermediates` | synta | CCADB (8,979 intermediates) | 4.735 ms | 1.90 M/sec |
| `synta_build_dependency_tree` | synta | CCADB (9,898 certs) | 559 µs | — |
| `synta_parse_all` | synta | ML-DSA synth (9,889 certs) | 5.78 ms | 1.71 M/sec |
| `nss_parse_all` | NSS | ML-DSA synth (9,889 certs) | 103 ms | 96.4 K/sec |
| `openssl_parse_all` | rust-openssl | ML-DSA synth (9,889 certs) | 239 ms | 41.4 K/sec |
| `ossl_parse_all` | ossl | ML-DSA synth (9,889 certs) | 256 ms | 38.6 K/sec |
| `synta_parse_and_access` | synta | ML-DSA synth (9,889 certs) | 17.5 ms | 566 K/sec |
| `synta_parse_roots` | synta | ML-DSA synth (919 roots) | 463 µs | 1.98 M/sec |
| `synta_parse_intermediates` | synta | ML-DSA synth (8,970 ints.) | 5.10 ms | 1.76 M/sec |
| `synta_build_dependency_tree` | synta | ML-DSA synth (9,889 certs) | 549 µs | — |
NSS is 18–21× slower than synta across all three datasets; rust-openssl is 40–41× slower and ossl is 41–44× slower. All three C-backed libraries successfully parse ML-DSA certificates (NSS 3.120+ and OpenSSL 3.4+ support ML-DSA natively). NSS’s absolute parse time is nearly identical across CCADB traditional certs (106 ms) and ML-DSA synthetic certs (103 ms) — confirming that NSS’s dominant cost is eager DN formatting at parse time, which depends on DN attribute count rather than the signature algorithm. The slightly lower relative slowdown for NSS on ML-DSA (18× vs 21×) is entirely because synta is slower on ML-DSA (5.78 ms vs 5.10 ms), not because NSS is faster.
synta’s throughput is consistent at ~1.7–2.0 M certs/sec across all three datasets, confirming linear O(n) scaling. Parse rate is slightly lower for the ML-DSA synthetic hierarchy (1.71 M/sec) than for the CCADB traditional hierarchy (1.94 M/sec) because the larger ML-DSA SubjectPublicKeyInfo and signature BIT STRING fields add bytes to the tag+length-header scan that synta performs at parse time. The intermediates-only sub-benchmark is slightly lower than roots-only in each dataset (1.76 M/sec vs 1.98 M/sec for ML-DSA; 1.90 M/sec vs 2.01 M/sec for CCADB) because intermediate CAs tend to have more complex DNs and extension lists.
Finally, individual property access for a pre-parsed certificate, single field read, no allocation unless noted:
| Field | Mozilla (1,070 B) | CCADB (10,432 B) | ML-DSA (6,705 B) | Notes |
|---|---|---|---|---|
| `issuer_raw` / `subject_raw` | 4.1 / 4.1 ns | 4.2 / 4.1 ns | 4.5 / 4.4 ns | Zero-copy slice |
| `public_key_bytes` / `signature_bytes` | 4.1 / 4.1 ns | 4.2 / 4.2 ns | 4.6 / 4.4 ns | Zero-copy slice |
| `signature_algorithm` / `public_key_algorithm` | 5.9 / 5.4 ns | 5.9 / 5.5 ns | 6.3 / 6.4 ns | OID → `&'static str` |
| `serial_number` | 10.9 ns | 6.8 ns | 7.5 ns | Integer → i64, length-dependent |
| `validity` | 180 ns | 206 ns | 231 ns | Two time-string allocations |
| `issuer_dn` | 401 ns | 224 ns | 246 ns | `format_dn()` → String |
| `subject_dn` | 404 ns | 292 ns | 324 ns | `format_dn()` → String |
Zero-copy fields (issuer_raw, subject_raw, public_key_bytes, signature_bytes) cost
~4–5 ns — the price of reading a pointer and length from a struct field. The slightly higher
cost for CCADB and ML-DSA fields vs Mozilla is within measurement noise.
identify_signature_algorithm() and identify_public_key_algorithm() match the OID
component array against a static table and return &'static str — no allocation, no string
formatting. The ~5–6 ns cost is a few comparisons and a pointer return.
serial_number cost depends on the integer’s byte length: the Entrust Mozilla cert carries
a 16-byte serial number (parsed via SmallVec<[u8; 16]>), while the CCADB and ML-DSA
synthetic medians have shorter serials. At 10.9, 6.8, and 7.5 ns respectively, all are
negligible.
validity (~180–231 ns) allocates two strings: UTCTime and GeneralizedTime are formatted
from their raw DER bytes into owned Strings. The two calls account for essentially all
of the cost; the YYMMDDHHMMSSZ to RFC 3339 formatting is the dominant work.
format_dn() is the most variable field: it walks the Name DER bytes, decodes each
SEQUENCE OF SET OF SEQUENCE, looks up each attribute OID by name, and formats the result
into an owned String. The Mozilla cert’s issuer DN is more complex (multiple attributes,
longer values: 401 ns) than the CCADB median (224 ns) or the ML-DSA synthetic median
(246 ns). The ML-DSA synthetic median’s subject DN (324 ns) is slightly more expensive
than the CCADB median (292 ns) because a different cert occupies the median position after
key replacement. format_dn() cost is proportional to the DN’s attribute count and string
lengths.
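The per-field reads above map onto the same questions a CLI consumer would ask OpenSSL. A minimal sketch for comparison (the certificate is freshly generated here; names are illustrative only):

```shell
# Generate a small certificate, then read individual fields from it:
# serial number, issuer DN, subject DN, and the validity period.
openssl req -x509 -newkey rsa:2048 -nodes \
  -subj "/O=Example/CN=field-demo" \
  -keyout /tmp/fields.key -out /tmp/fields.pem -days 1 2>/dev/null
openssl x509 -in /tmp/fields.pem -noout -serial -issuer -subject -dates
```

Each of these CLI reads re-parses the certificate; the synta numbers above are for repeated field access on an already-parsed structure.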
`CERT_NewTempCertificate` (NSS) and OpenSSL’s `d2i_X509` perform significantly more work per certificate than synta:
1. Eager DN formatting — NSS formats the issuer and subject Distinguished Names into internal C strings during `CERT_NewTempCertificate`, even when the caller never reads them. Distinguished Name formatting is the single most expensive operation in certificate parsing; doing it unconditionally at parse time accounts for roughly 80% of NSS’s total parse cost. OpenSSL decodes DN structure eagerly as well.
2. Arena and heap allocation — each NSS certificate allocates a PLArena block and copies the full DER buffer into it (`copyDER = 1`). OpenSSL allocates from the C heap. These allocations are additional work beyond decoding.
3. Library state and locking — NSS acquires internal locks on every `CERT_NewTempCertificate` call to update the certificate cache, even when the resulting certificate is marked as temporary. This serialises concurrent parsing in multi-threaded applications.
4. FFI boundary costs — the rust-openssl and ossl measurements include the overhead of crossing from Rust into the C library via `extern "C"` calls and pointer marshalling.
synta defers all of (1): issuer and subject are stored as `RawDer<'a>` (borrowed byte spans) and decoded only when the caller calls `format_dn()`. There is no locking, no arena, and no FFI boundary.
In these tests I also found out that PyCA’s cryptography-x509 has no optimizations for multiple accesses to the same fields. That is typically not a problem if you just load a certificate and use it once, but if you have to come back to it multiple times, it becomes visible and hurts performance. So I submitted a pull request applying some of the optimizations I found with synta. The pull request had to be split into smaller ones, and a few of them have already been merged, so performance of accessing issuer, subject, and public key in certificates, and some attributes in CSRs, was improved 100x. The rest waits on improvements in PyO3 to reduce memory use.
Things are just flying by and it seems to be Saturday again, so here's another weekly recap.
Most of my week was consumed with work on our secure boot signing infrastructure. The old setup used smart cards in specific builders. This had a lot of disadvantages, including:
- Space on the smart cards was pretty much full, preventing us from adding more certs.
- Those machines were 'special', and if they went down or broke, things would be bad.
- The smart cards in them are not made or supported anymore, so we couldn't get more for adding more builders.
So, thanks to a bunch of work from Jeremy Cline, we finally have things moved over to the new setup. This setup:
- Uses our normal signing infrastructure (sigul, soon to be replaced by a Rust rewrite); we can easily decide in config which machines are used.
- Uses new hardware on the vault end that has more space for more certs.
- Allows us to easily add an aarch64 signing path.
The signed aarch64 grub2 build is in Rawhide now, but for whatever reason it's not working on my Slim 7x. It is, however, working in VMs, on cloud providers, and on other hardware, so I suspect it might just be a problem with this laptop. It also doesn't work on my Radxa Orion O6, but again, something else could be going on there. I think it's at least good enough to get more widespread testing.
We should hopefully have a signed kernel next week, but in the meantime, if you have an Arm device that supports Secure Boot, you can update to the latest grub2 and give it a try.
We seem to have dropped the ball on f44/f45 openh264 builds. :(
So I looked at doing some builds this week. I ran into a linker issue on the i686 builds, but managed to work around it and get builds done.
Now we just need to wait for Cisco to publish them. I am hoping this process will go much quicker than it has in the past, since we now have a better way to upload things to them.
Time will tell.
I moved all our openshift clusters to 4.21.5 this week (from 4.20.15).
I really love how easy openshift upgrades are: press a button and wait, usually. I did have to upgrade to the latest 4.20 first before it would let me move to 4.21, but both steps went fine.
Next week we will be catching up on updates all around and rebooting things. The week after we start Fedora 44 Final freeze so we want to have things all updated before that. No special stuff this time, just updates/reboots so I expect it to go smoothly.
As always, comment on mastodon: https://fosstodon.org/@nirik/116268414239551452
Hello and welcome to another update on what’s been happening at the GNOME Foundation. It’s been two weeks since my last update, and there’s been plenty going on, so let’s dive straight in.
My update wouldn’t be complete without mentioning this week’s GNOME 50 release. It looks like an amazing release with lots of great improvements! Many thanks to everyone who contributed and made it such a success.
The Foundation plays a critical role in these releases, whether it’s providing development infrastructure, organising events where planning takes place, or providing development funding. If you are reading this and have the means, please consider signing up as a Friend of GNOME. Even small regular donations make a huge difference.
The Board of Directors had its regular monthly meeting on March 9th, and we had a full agenda. Highlights from the meeting included:
The Travel Committee met both this week and last week, as it processed the initial batch of GUADEC sponsorship applications. As a result of this work the first set of approvals have been sent out. Documentation has also been provided for those who are applying for visas for their travel.
The membership of the current committee is quite new, and it is having to figure out processes and decision-making principles as it goes, which is making its work more intensive than might normally be the case. We are starting to write up guidelines for future funding rounds to help smooth the process.
Huge thanks to our committee members Asmit, Anisa, Julian, Maria, and Nirbheek, for taking on this important work.
Planning and preparation for the 2026 editions of LAS and GUADEC have continued over the past fortnight. The call for papers for both events is a particular focus right now, and there are a couple of important deadlines to be aware of:
There are teams behind each of these calls, reviewing and selecting proposals. Many thanks to the volunteers doing this work!
We are also excited to have sponsors come forward to support GUADEC.
The Foundation has been undertaking a program of improvements to our accounting and finance systems in recent months. Those were put on hold for the audit fieldwork that took place at the beginning of March, but now that’s done, attention has turned to the remaining work items there.
We’ve been migrating to a new payments processing platform since the beginning of the year, and setup work has continued, including configuration to make it integrate correctly with our accounting software, migrating credit cards over from our previous solution, and creating new web forms which are going to be used for reimbursement requests in future.
There are a number of significant advantages to the new system, like the accounting integration, which are already helping to reduce workloads, and I’m looking forward to having the final pieces of the new system in place.
Another major change currently under way is our move from a quarterly to a monthly accounting cadence. This is the cycle on which we "complete" the accounts, with all data entered and reconciled by the end of each cycle. Moving to a monthly cycle means we will generate finance reports more frequently, which will give the Board a closer view of the organisation's finances.
Finally, this week we also had our regular monthly “books” call with our accountant and finance advisor. This was our usual opportunity to resolve any questions that have come up in relation to the accounts, but we also discussed progress on the improvements that we’ve been making.
On the infrastructure side, the main highlight in recent weeks has been the migration from Anubis to Fastly's Next-Gen Web Application Firewall (WAF) for protecting our infrastructure. The result of this migration is an increased level of protection from bots while staying out of people's way when they use our infra. The Fastly product provides sophisticated threat detection plus the ability for us to write our own fine-grained detection rules, so we can adjust firewall behaviour as we go.
Huge thanks to Fastly for providing us with sponsorship for this service – it is a major improvement for our community and would not have been possible without their help.
That’s it for this update. Thanks for reading and be on the lookout for the next update, probably in two weeks!

After nearly three years of development (it takes time to make up one’s mind) Firefox for Linux users can now enjoy seamless emoji insertion using the native GTK emoji chooser. This long-requested feature, implemented in
Firefox 150 (recent Beta), brings a more integrated experience for Linux users who want to add emojis to their web content.
Starting with Firefox 150, the native GTK emoji picker can be invoked directly within Firefox. This means you can insert emojis into text fields, comment boxes, social media posts, and any other web input using the same interface you're already familiar with from other GTK applications.
Using the emoji picker is simple and follows standard GTK conventions:
The feature leverages GTK’s built-in GtkEmojiChooser widget, ensuring that the look and feel matches other applications in your desktop environment.
For users who prefer not to use this feature (perhaps due to conflicts with custom keybindings or specific workflows), Firefox provides a preference to disable it. Go to the about:config page and set widget.gtk.native-emoji-dialog to false.
The native GTK emoji picker implementation uses the GtkEmojiChooser popover built into GTK3. Unlike other GTK dialogs (file picker, print dialog), it can’t be invoked directly in GTK3 – it must be triggered by a key-binding event or signal on GtkEntry or GtkTextView widgets, and the widget has to be visible from GTK’s perspective.
This conflicts with Firefox’s GTK architecture, which doesn’t use GTK widgets directly but instead paints everything itself.
But a solution was found. Firefox already has an invisible GtkEntry widget that’s not attached to any actual window. It’s an offscreen widget used to invoke
key-binding events from GTK input events, like copy/paste and other edit commands. It also receives the ’emoji-insert’ signal after Ctrl+., but normally ignores it since the GtkEntry itself isn’t visible.
I configured the GtkEntry to listen for the ’emoji-insert’ signal. When received, I create a new GtkEntry as a child of the recently focused GtkWindow and redirect the ’emoji-insert’ signal there. The GtkEntry has to be ‘shown’ but remains invisible to users because Firefox paints a wl_subsurface over it.
It also needs to be correctly positioned so the GtkEmojiChooser shows up in the right location, and connected to other signals like 'insert_text' to retrieve the selected emoji. Thanks to Emilio Cobos Álvarez and Masayuki Nakano for their helpful hints on text processing and positioning!

Spring is on its way, bringing with it the scent of blossoms, the song of birds, and an unmatched freshness! Nowruz, the celebration of nature's rebirth, is a golden opportunity for us to build new dreams and, with fresh motivation, welcome brighter days. The moment the new year turns is full of hope and beautiful wishes; a moment in which a bright future […]
The post May every day be Nowruz, and may your Nowruz be victorious first appeared on Fedora Fans (طرفداران فدورا).

In the Fall of 2025 we've experimented with AI agents backporting upstream patches to CentOS Stream. For more info feel free to read:
There was one backport that served as “the final boss”: CVE-2025-59375 in expat. The upstream fix has 19 commits and changes almost 1000 lines. The models back then couldn’t handle this for Stream 9 where the drift in code was severe. I recall the backporting process halting several times with a message saying “this is too complex, a human developer with an IDE should do this work”.
It’s March 2026, let’s try to do this with Claude Code, imperfect prompt, and Opus 4.6.
It's clear LLMs are one of the biggest changes in technology ever. The rate of progress is astounding: recently, due to a configuration mistake, I accidentally used Claude Sonnet 3.5 (released ~2 years ago) instead of Opus 4.6 for a task, looked at the output, and thought "what is this garbage?"
But day to day now, Opus 4.6 is able to generate reasonable PoC-level Rust code for complex tasks for me. It's not perfect; it's a combination of exhausting and exhilarating to find the 10% absolutely bonkers/broken code that still makes it past subagents.
So yes I use LLMs every day, but I will be clear: if I could push a button to “un-invent” them I absolutely would because I think the long term issues in larger society (not being able to trust any media, and many of the things from Dario’s recent blog etc.) will outweigh the benefits.
But since we can’t un-invent them: here’s my opinion on how they should be used. As a baseline, I agree with a lot from this doc from Oxide about LLMs. What I want to talk about is especially around some of the norms/tools that I see as important for LLM use, following principles similar to those.
On framing: there's "core" software vs "bespoke". An entirely new capability, of course, is for e.g. a nontechnical restaurant owner to use an LLM to generate ("vibe code") a website (hopefully excepting online ordering and payments!). I'm not overly concerned about this.
"Core" software, by contrast, is what organizations/businesses provide and maintain for others. I work for a company (Red Hat) that produces a lot of this. I am sure no one would want to run, for real, an operating system, cluster filesystem, web browser, or monitoring system that was primarily "vibe coded".
And while I respect people and groups that are trying to entirely ban LLM use, I don’t think that’s viable for at least my space.
Hence the subject of this blog is my perspective on how LLMs should be used for “core” software: not vibe coding, but using LLMs responsibly and intelligently – and always under human control and review.
I think most of the industry would agree we can’t give responsibility to LLMs. That means they must be overseen by humans. If they’re overseen by a human, then I think they should be amplifying what that human thinks/does as a baseline – intersected with the constraints of the task of course.
On “amplification”: Everyone using a LLM to generate content should inject their own system prompt (e.g. AGENTS.md) or equivalent. Here’s mine – notice I turn off all the emoji etc. and try hard to tune down bulleted lists because that’s not my style. This is a truly baseline thing to do.
Now most LLM generated content targeted for core software is still going to need review, but just ensuring that the baseline matches what the human does helps ensure alignment.
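As a purely hypothetical illustration (every line below is an assumption, not a copy of the author's actual file), a baseline AGENTS.md in this spirit might contain:

```
<!-- Hypothetical AGENTS.md sketch; tune to your own style. -->
# Style baseline

- Write in plain prose; avoid bulleted lists unless the content is truly list-shaped.
- No emoji in code, commits, or documentation.
- Prefer small, reviewable diffs; explain the "why" in commit messages.
- Never disable or skip tests to make a change pass; surface the failure instead.
```

The point is not any particular rule, but that the generated output starts from the human's norms rather than the model's defaults.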
Let’s focus on a very classic problem: pull request reviews. Many projects have wired up a flow such that when a PR comes in, it gets reviewed by a model automatically. Many projects and tools pitch this. We use one on some of my projects.
But I want to get away from this because in my experience these reviews are a combination of:
In practice, we just want the first of course.
How I think it should work:
I wrote this agent skill to try to make this work well, and if you search you can see it in action in a few places, though I haven’t truly tried to scale this up.
I think the above matches the vision of LLMs amplifying humans.
There’s no doubt that LLMs can be amazing code generators, and I use them every day for that. But for any “core” software I work on, I absolutely review all of the output – not just superficially, and changes to core algorithms very closely.
At least in my experience the reality is still there’s that percentage of the time when the agent decided to reimplement base64 encoding for no reason, or disable the tests claiming “the environment didn’t support it” etc.
And to me it’s still a baseline for “core” software to require another human review to merge (per above!) with their own customized LLM assisting them (ideally a different model, etc).
Of course, my position here is biased a bit by working on FOSS – I still very much believe in that, and working in a FOSS context can be quite different than working in a “closed environment” where a company/organization may reasonably want to (and be able to) apply uniform rules across a codebase.
While for sure LLMs allow organizations to create their own Linux kernel filesystems or bespoke Kubernetes forks or virtual machine runtime or whatever – it’s not clear to me that it is a good idea for most to do so. I think shared (FOSS) infrastructure that is productized by various companies, provided as a service and maintained by human experts in that problem domain still makes sense. And how we develop that matters a lot.
md0: echo the current LBS to md/logical_block_size to prevent data-loss issues from LBS changes.
Note: after setting it, the array will not be assembled by old kernels (<= 6.18)
RPMs of Valkey version 9.1 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
⚠️ Warning: this is a pre-release version not ready for production usage.
Packages are available in the valkey:remi-9.1 module stream.
On Enterprise Linux:
# dnf install https://rpms.remirepo.net/enterprise/remi-release-<ver>.rpm
# dnf module switch-to valkey:remi-9.1/common
On Fedora:
# dnf install https://rpms.remirepo.net/fedora/remi-release-<ver>.rpm
# dnf module reset valkey
# dnf module enable valkey:remi-9.1
# dnf install valkey
The valkey-compat-redis compatibility package is not available in this stream. If you need the Redis commands, you can install the redis package.
Some optional modules are also available:
These packages are weak dependencies of Valkey, so they are installed by default (if install_weak_deps is not disabled in the dnf configuration).
The modules are loaded automatically after installation and a service (re)start.
Valkey also provides a set of modules, which may be submitted to the official Fedora repository.
ℹ️ Notices:
valkey
In Chapter 1 I gave the context for this project and in Chapter 2 I showed the bare minimum: an ELF that Open Firmware loads, a firmware service call, and an infinite loop.
That was July 2024. Since then, the project has gone from that infinite loop to a bootloader that actually boots Linux kernels. This post covers the journey.
The Boot Loader Specification expects BLS snippets in a FAT filesystem under loaders/entries/. So the bootloader needs to parse partition tables, mount FAT, traverse directories, and read files. All #![no_std], all big-endian PowerPC.
I tried writing my own minimal FAT32 implementation, then integrating simple-fatfs and fatfs. None worked well in a freestanding big-endian environment.
The breakthrough was hadris, a no_std Rust crate supporting FAT12/16/32 and ISO9660. It needed some work to get going on PowerPC though. I submitted fixes upstream for:
thiserror pulling in std: default features were not disabled, preventing no_std builds.
Native-endian decoding of on-disk u32 fields. On x86 that's invisible; on big-endian PowerPC it produced garbage cluster chains.
A read_to_vec() that coalesces contiguous fragments into a single I/O. This made kernel loading practical.
All patches were merged upstream.
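The endianness pitfall can be sketched in a few lines (hypothetical function names, not hadris's actual API): on-disk FAT integers are little-endian, so they must be decoded explicitly rather than with a native-endian read.

```rust
// Sketch of the bug class: decoding an on-disk little-endian u32.
// `from_le_bytes` is correct on any host; `from_ne_bytes` only happens
// to work on little-endian machines like x86.
fn cluster_correct(bytes: [u8; 4]) -> u32 {
    u32::from_le_bytes(bytes)
}

fn cluster_buggy(bytes: [u8; 4]) -> u32 {
    u32::from_ne_bytes(bytes) // host-dependent: garbage on big-endian PowerPC
}

fn main() {
    let raw = [0x78, 0x56, 0x34, 0x12]; // little-endian encoding of 0x12345678
    assert_eq!(cluster_correct(raw), 0x1234_5678);
    // On a big-endian host, cluster_buggy(raw) would instead yield 0x78563412.
    let _ = cluster_buggy(raw);
    println!("ok");
}
```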
Hadris expects Read + Seek traits. I wrote a PROMDisk adapter that forwards to OF’s read and seek client calls, and a Partition wrapper that restricts I/O to a byte range. The filesystem code has no idea it’s talking to Open Firmware.
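A rough sketch of that layering (names are illustrative; std's Read/Seek traits and an in-memory Cursor stand in for the no_std traits and the OF-backed PROMDisk):

```rust
use std::io::{Cursor, Read, Result, Seek, SeekFrom};

// A Partition wrapper that restricts a backing device's Read + Seek to a
// byte range, so the filesystem code never sees anything outside its
// partition, and never knows what the backing device actually is.
struct Partition<D> {
    dev: D,
    start: u64, // partition offset on the device, in bytes
    len: u64,   // partition length, in bytes
    pos: u64,   // current position within the partition
}

impl<D: Read + Seek> Partition<D> {
    fn new(dev: D, start: u64, len: u64) -> Self {
        Partition { dev, start, len, pos: 0 }
    }
}

impl<D: Read + Seek> Read for Partition<D> {
    fn read(&mut self, buf: &mut [u8]) -> Result<usize> {
        // Clamp reads to the partition boundary, then forward to the device.
        let remaining = (self.len.saturating_sub(self.pos)) as usize;
        let n = buf.len().min(remaining);
        self.dev.seek(SeekFrom::Start(self.start + self.pos))?;
        let read = self.dev.read(&mut buf[..n])?;
        self.pos += read as u64;
        Ok(read)
    }
}

impl<D: Read + Seek> Seek for Partition<D> {
    fn seek(&mut self, from: SeekFrom) -> Result<u64> {
        // Positions are relative to the partition, not the device.
        self.pos = match from {
            SeekFrom::Start(o) => o,
            SeekFrom::End(o) => (self.len as i64 + o) as u64,
            SeekFrom::Current(o) => (self.pos as i64 + o) as u64,
        };
        Ok(self.pos)
    }
}

fn main() {
    // A 100-byte "disk" whose bytes equal their own offsets.
    let disk = Cursor::new((0u8..100).collect::<Vec<u8>>());
    let mut part = Partition::new(disk, 10, 20);
    let mut buf = [0u8; 5];
    part.read(&mut buf).unwrap();
    assert_eq!(buf, [10, 11, 12, 13, 14]); // reads start at the partition offset
    println!("ok");
}
```

The real adapter forwards to Open Firmware's read and seek client calls instead of a Cursor, but the range-restriction logic is the same.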
PowerVM with modern disks uses GPT (via the gpt-parser crate): a PReP partition for the bootloader and an ESP for kernels and BLS entries.
Installation media uses MBR. I wrote a small mbr-parser subcrate using explicit-endian types so little-endian LBA fields decode correctly on big-endian hosts. It recognizes FAT32, FAT16, EFI ESP, and CHRP (type 0x96) partitions.
The CHRP type is what CD/DVD boot uses on PowerPC. For ISO9660 I integrated hadris-iso with the same Read + Seek pattern.
Boot strategy? Try GPT first, fall back to MBR, then try raw ISO9660 on the whole device (CD-ROM). This covers disk, USB, and optical media.
This cost me a lot of time.
Open Firmware provides claim and release for memory allocation. My initial approach was to implement Rust’s GlobalAlloc by calling claim for every allocation. This worked fine until I started doing real work: parsing partitions, mounting filesystems, building vectors, sorting strings. The allocation count went through the roof and the firmware started crashing.
It turns out SLOF has a limited number of tracked allocations. Once you exhaust that internal table, claim either fails or silently corrupts state. There is no documented limit; you discover it when things break.
The fix was to claim a single large region at startup (1/4 of physical RAM, clamped to 16-512 MB) and implement a free-list allocator on top of it with block splitting and coalescing. Getting this right was painful: the allocator handles arbitrary alignment, coalesces adjacent free blocks, and does all this without itself allocating. Early versions had coalescing bugs that caused crashes which were extremely hard to debug – no debugger, no backtrace, just writing strings to the OF console on a 32-bit big-endian target.
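The core of such a free-list allocator can be sketched in ordinary Rust (an illustrative version that tracks (offset, length) ranges and ignores alignment, which the real allocator must handle):

```rust
// Free ranges are kept sorted by offset so adjacent blocks can be
// coalesced on free. The real allocator works over one large region
// claimed from the firmware at startup.
struct FreeList {
    free: Vec<(usize, usize)>, // (offset, len), sorted by offset
}

impl FreeList {
    fn new(size: usize) -> Self {
        FreeList { free: vec![(0, size)] }
    }

    /// First-fit allocation with block splitting.
    fn alloc(&mut self, len: usize) -> Option<usize> {
        for i in 0..self.free.len() {
            let (off, flen) = self.free[i];
            if flen >= len {
                if flen == len {
                    self.free.remove(i); // exact fit: consume the block
                } else {
                    self.free[i] = (off + len, flen - len); // split the block
                }
                return Some(off);
            }
        }
        None
    }

    /// Free with coalescing of adjacent ranges.
    fn dealloc(&mut self, off: usize, len: usize) {
        let pos = self.free.partition_point(|&(o, _)| o < off);
        self.free.insert(pos, (off, len));
        // Merge with the following block if contiguous...
        if pos + 1 < self.free.len() && off + len == self.free[pos + 1].0 {
            self.free[pos].1 += self.free[pos + 1].1;
            self.free.remove(pos + 1);
        }
        // ...then with the preceding block.
        if pos > 0 && self.free[pos - 1].0 + self.free[pos - 1].1 == off {
            self.free[pos - 1].1 += self.free[pos].1;
            self.free.remove(pos);
        }
    }
}

fn main() {
    let mut fl = FreeList::new(1024);
    let a = fl.alloc(100).unwrap();
    let b = fl.alloc(200).unwrap();
    fl.dealloc(a, 100);
    fl.dealloc(b, 200); // coalesces back into one 1024-byte range
    assert_eq!(fl.free, vec![(0, 1024)]);
    println!("ok");
}
```

This sketch uses Vec for clarity; a no_std allocator must store its bookkeeping inside the free blocks themselves, since it cannot allocate while allocating.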
March 7, 2026. The commit message says it all: “And the kernel boots!”
The sequence:
BLS discovery: walk loaders/entries/*.conf, parse into BLSEntry structs, filter by architecture (ppc64le), sort by version using rpmvercmp.
ELF loading: parse the kernel ELF, iterate PT_LOAD segments, claim a contiguous region, copy segments to their virtual address offsets, zero BSS.
Initrd: claim memory, load the initramfs.
Bootargs: set /chosen/bootargs via setprop.
Jump: inline assembly trampoline – r3=initrd address, r4=initrd size, r5=OF client interface, branch to kernel:
core::arch::asm!(
"mr 7, 3", // save of_client
"mr 0, 4", // r0 = kernel_entry
"mr 3, 5", // r3 = initrd_addr
"mr 4, 6", // r4 = initrd_size
"mr 5, 7", // r5 = of_client
"mtctr 0",
"bctr",
in("r3") of_client,
in("r4") kernel_entry,
in("r5") initrd_addr as usize,
in("r6") initrd_size as usize,
options(nostack, noreturn)
)
One gotcha: do NOT close stdout/stdin before jumping. On some firmware, closing them corrupts /chosen and the kernel hits a machine check. We also skip calling exit or release – the kernel gets its memory map from the device tree and avoids claimed regions naturally.
I implemented a GRUB-style interactive menu:
e: edit the kernel command line with cursor navigation and word jumping (Ctrl+arrows).
This runs on the OF console with ANSI escape sequences. Terminal size comes from OF's Forth interpret service (#columns / #lines), with serial forced to 80×24 because SLOF reports nonsensical values.
IBM POWER has its own secure boot: the ibm,secure-boot device tree property (0=disabled, 1=audit, 2=enforce, 3=enforce+OS). The Linux kernel uses an appended signature format – PKCS#7 signed data appended to the kernel file, same format GRUB2 uses on IEEE 1275.
I wrote an appended-sig crate that parses the appended signature layout, extracts an RSA key from a DER X.509 certificate (compiled in via include_bytes!), and verifies the signature (SHA-256/SHA-512) using the RustCrypto crates, all no_std.
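Assuming the kernel's appended-signature layout (a trailing magic string preceded by a fixed-size descriptor whose last four bytes are the big-endian signature length), locating the blob might look like this sketch with hypothetical names:

```rust
// Sketch of locating a Linux-style appended signature. Assumed layout,
// back to front: magic string, 12-byte descriptor (last 4 bytes =
// big-endian sig_len), then sig_len bytes of PKCS#7 data.
const MAGIC: &[u8] = b"~Module signature appended~\n";
const DESC_LEN: usize = 12;

/// Split a file into (payload, signature); None if no signature present.
fn split_appended_sig(file: &[u8]) -> Option<(&[u8], &[u8])> {
    if !file.ends_with(MAGIC) {
        return None;
    }
    let desc_end = file.len() - MAGIC.len();
    let desc = &file[desc_end.checked_sub(DESC_LEN)?..desc_end];
    let sig_len = u32::from_be_bytes(desc[8..12].try_into().ok()?) as usize;
    let sig_end = desc_end - DESC_LEN;
    let payload_end = sig_end.checked_sub(sig_len)?;
    Some((&file[..payload_end], &file[payload_end..sig_end]))
}

fn main() {
    // Build a fake signed file: payload + "signature" + descriptor + magic.
    let mut file = b"kernel-image".to_vec();
    file.extend_from_slice(b"PKCS7");
    let mut desc = [0u8; DESC_LEN];
    desc[8..12].copy_from_slice(&5u32.to_be_bytes()); // sig_len = 5
    file.extend_from_slice(&desc);
    file.extend_from_slice(MAGIC);

    let (payload, sig) = split_appended_sig(&file).unwrap();
    assert_eq!(payload, &b"kernel-image"[..]);
    assert_eq!(sig, &b"PKCS7"[..]);
    println!("ok");
}
```

The payload half is what gets hashed and checked against the PKCS#7 signature; the verification itself is where the RustCrypto crates come in.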
The unit tests pass, including an end-to-end sign-and-verify test. But I have not tested this on real firmware yet. It needs a PowerVM LPAR with secure boot enforced and properly signed kernels, which QEMU/SLOF cannot emulate. High on my list.
The crate has grown well beyond Chapter 2. It now provides: claim/release, the custom heap allocator, device tree access (finddevice, getprop, instance-to-package), block I/O, console I/O with read_stdin, a Forth interpret interface, milliseconds for timing, and a GlobalAlloc implementation so Vec and String just work.
Published on crates.io at github.com/rust-osdev/ieee1275-rs.
I would like to test the Secure Boot feature in an end-to-end setup, but I have not gotten around to requesting access to a PowerVM LPAR. Beyond that, I want to refine the menu. Another idea would be to support the equivalent of the Unified Kernel Image using ELF. Who knows; if anybody finds this interesting, let me know!
The source is at the powerpc-bootloader repository. Contributions welcome, especially from anyone with POWER hardware access.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 358 and cockpit-files 38:
The Cockpit Client has been updated to GTK 4 and WebKit 6, making it easier to enable support for downloads from Cockpit Files.

If there are any Wi-Fi devices, the Networking page overview shows the number of available networks and the currently connected network.

The details page shows a table of available networks with mode/security, signal strength, and rate. The currently active connection (if any) is always at the top. It is followed by known networks, and then unknown networks sorted by descending signal strength. You can connect, disconnect, and forget networks.

When editing files, Cockpit will now show a confirmation dialog when clicking away, hitting cancel, or pressing escape. This prevents accidentally discarding changes and losing data.

Thanks to Leone25 for this contribution!
Cockpit 358, cockpit-files 38, cockpit-machines 350 and cockpit-podman 123 are available now:
Dear testers, we're happy to announce Kiwi TCMS version 15.4!
IMPORTANT:
This is a minor version release which includes security related updates, dependency updates and new translations.
You can explore everything at https://public.tenant.kiwitcms.org!
---
Public container image (x86_64):
pub.kiwitcms.eu/kiwitcms/kiwi latest ae3ae4a25d24 729MB
IMPORTANT: version tagged and multi-arch container images are available only to subscribers!
hub.kiwitcms.eu/kiwitcms/version 15.4 (aarch64) 9e55b77392d5 17 Mar 2026 747MB
hub.kiwitcms.eu/kiwitcms/version 15.4 (x86_64) f65cf08eb2e4 17 Mar 2026 729MB
hub.kiwitcms.eu/kiwitcms/enterprise 15.4-mt (aarch64) 267353b14a05 17 Mar 2026 1.02GB
hub.kiwitcms.eu/kiwitcms/enterprise 15.4-mt (x86_64) b3b04b9b2846 17 Mar 2026 996MB
IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!
Follow the Upgrading instructions from our documentation.
Happy testing!
---
If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!

Our message is clear: our fire, the symbol of Iran's light and purity, will triumph over darkness and impurity and will cleanse the soil of Iran.
The post Happy Chaharshanbe Suri first appeared on Fedora Fans (طرفداران فدورا).

So where has the last six months gone? I was planning on getting images done for Fedora 44 Beta, but I was unwell and busy and ran out of time. So what better time to get them out than Pi Day!
So compared to the last image, what do we have now? Quite a lot more, and I have more in the pipeline which should be in place before the freeze, plus a possible secret. I just wanted to get something out sooner rather than later for people to play with. The things that are working and tested are now:
Overall the devices are quite usable, but I will be working to improve them even more in the coming days.
The things that don’t work, but I’m hoping will be working RSN (pre 44) in no particular order:
One thing you currently need to do manually once you've created an image is to add the following to the kernel command line (use the --args option to arm-image-installer): cma=256M@0M-1024M. Without that, accelerated graphics and some other things just won't work. Once you're booted, add it to /etc/kernel/cmdline so new kernels will get it too. I'll hopefully have that issue fixed shortly; I know the problem, I just still haven't got the best solution!
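As a sketch of that manual step (the image name, media device, and target below are placeholders, not taken from the post):

```shell
# Hypothetical example: write the image and append the required cma= argument.
sudo arm-image-installer --image=Fedora-Minimal-44.aarch64.raw.xz \
    --media=/dev/sdX --target=your-board \
    --args "cma=256M@0M-1024M"

# After first boot, also add cma=256M@0M-1024M to the single line in
# /etc/kernel/cmdline so newly installed kernels pick it up.
```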
You’ll also want to disable auto-suspend on the Desktop images.
So where can I get these images? Right here:
The Fedora 44 Minimal Image
The Fedora 44 KDE Image
The Fedora 44 GNOME Workstation Image
Happy Pi Day everyone!
RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
And soon in the official updates:
⚠️ To be noted:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
comments? additions? reactions?
As always, comment on the fediverse: https://fosstodon.org/@nirik/116426624580451263