
Our message is clear: our fire, the symbol of Iran's light and purity, will triumph over darkness and wickedness and will cleanse the soil of Iran.
The post چهارشنبه سوری خجسته باد (Happy Chaharshanbe Suri) first appeared on طرفداران فدورا (Fedora Fans). Fedora is a trademark of Red Hat, Inc. Fedora is an operating system built by volunteers around the world. This page is provided so that independent volunteers can showcase our contributions to Fedora and Free Software in general. Official Fedora Download page.
In the analysis of Sruthi Chandran's nomination for Debian Project Leader, we began to examine her relationship with another Debian Developer, Pirate Praveen Arimbrathodiyil.
I know many people from India and some of them are now very ashamed of this behaviour. For others, it is business as usual.
When NR Narayana Murthy founded one of India's biggest IT companies, Infosys, he decided there would be no nepotism and no jobs for family members. The press tells the story over and over again, even after he is dead, to help India learn the lesson.
It’s Not All In The Family
Murthy is the only member of his family at Infosys, and the same holds true for all the other co-founders as well. A decision taken by all the partners.
The tech honcho reportedly told his author and philanthropist wife Sudha that only one of them could be with the company.
The idea was to remove nepotism of any sort, and nurture talent.
As noted in the earlier blog, when Pirate Praveen wrote an advocacy for Sruthi Chandran to become a Debian Developer, he did not make any statement about his relationship with this woman.
Sruthi Chandran did not make any declaration about her conflicts of interests when writing her platforms for previous DPL elections.
When did they become boyfriend and girlfriend? When did they become husband and wife?
Despite not wanting to say anything about their own relationships, remember Pirate Praveen is one of the people who wanted to make big public statements about Daniel Baumann from Bern, Switzerland.
These are people who speak English as a second language. When they are interacting in a mailing list in English, the last thing they need is some group of Indians poring over their messages and nitpicking about their "tone".
What Pirate Praveen demanded was a cyberattack on the reputation of an unpaid volunteer in neutral Switzerland.
Don't hold your breath waiting for a statement about romantic conflicts of interest.
If one of these people is elected, will we see them making statements about other volunteers on a weekly basis?
Subject: Re: Call for moderation and mediation: debian-live vs. debian-live-ng
Date: Wed, 11 Nov 2015 20:58:05 +0530
From: Pirate Praveen <praveen@debian.org>
To: debian-private@lists.debian.org

On 2015, നവംബർ 11 8:46:15 PM IST, Joachim Breitner <nomeata@debian.org> wrote:
>Hi,
>
>Am Mittwoch, den 11.11.2015, 14:19 +0100 schrieb Miriam Ruiz:
>> Just for the record, I don't feel myself capable of acting a
>> moderator able to help in this situation, particularly when it seems
>> that what's essentially asked for is for someone to talk with Daniel [Baumann]
>> and convince him not to get mad and to not make a fuss out of it, and
>> to deal with it in a civilized way. Or maybe I'm wrong, but the
>> impression I already have about the situation, being an outsider and
>> having made up my opinion mostly from what I've seen, heard or read
>> throughout the years, is that there is nothing to negotiate or
>> moderate here, the decision is taken -for the sensible reasons that
>> have already been explained, I'm not complaining about that or
>> anything- and the point is essentially trying to convince Daniel [Baumann] et
>> al. not to get angry about how things are being done. Am I wrong?
>
>it might be part of what I am hoping for, but not everything.
>
>There are a few people out there (and maybe in here as well) who see
>(parts of) this story and draw conclusions about how Debian treats
>contributors. Conclusions that I hope are in general false, and
>conclusions that I’d not like to pervade our image.
>
>So, a bit more concretely, here are thinks that I such a neutral report
>could state. (Read every line with an “if deemed appropriate by the
>mediator” – I certainly do not know enough about the issue to make any
>such call, and therefore some of these are deliberately contradictory)
>
> * Outline the history of events that led to this outcome.
> * Allow Daniel [Baumann] to not lose his face. This might involve
>   - acknowledging his work, and thanking for it
>   - apologize to him, if he was treated wrong on a social level
>   - outline a way forward to collaborate, or at least to allow
>     the projects to exist side-by-side in a friendly manner
> * Explain why, despite the public perception, nobody has been wronged,
>   neither technically and socially.
> * Explain that “the project” did the technical correct thing, but did
>   it wrong on a social level, and state that this was a mistake, and
>   we are all humans, and the project in general does not approve such
>   behaviour.
>
>A bit more profanely, one could say that we have a slight PR problem,
>and PR problems should better be handled actively.

I agree. There has to be a public statement and this is a good starting point.
Nominations closed on Friday the 13th and the voting finishes on 17 April 2026, the anniversary of a notorious death that was discussed like a suicide, a copy-cat suicide. The victim died on our wedding day. Nobody ever asked for a public statement from the coroner. The victim's widow, Diana von Bidder-Senn became the mayor of Basel in Switzerland.
There was never any statement about why Abraham Raji died after they asked him to contribute his own money to the kayak trip at DebConf23. Over $120,000 from Debian bank accounts was used to attack my family and try to hide the Debian suicide cluster.
The best way to encourage people to nominate for the election will be for the existing leader, Andreas Tille, to withdraw all the privacy attacks, settle the lawsuits proactively and ensure the next leader can walk in and find the desk is clean ready to work on productive things.
Don't hold your breath waiting for transparency about these attacks on my family. There is still time to watch my video and contribute to the crowdfunding campaign.
The nominations closed on Friday the 13th, with only one candidate submitting a nomination around midnight in her timezone.
If Sruthi Chandran was aware of the creepy date and all the other controversy then she has been brave to submit the nomination in those circumstances. Nominating also means she is subject to a higher level of scrutiny than regular volunteers. Even if we have concerns about some aspects of her candidacy, we should also be respectful of those things that are positive about it.
The voting finishes on 17 April 2026, the anniversary of a notorious death that was discussed like a suicide, a copy-cat suicide. The victim died on our wedding day. No report was ever published. The victim's widow, Diana von Bidder-Senn became the mayor of Basel in Switzerland.
I've seen all sorts of odd phenomena in the elections of student unions and the ALP. They pale in comparison to this. What we already know about this Debian election is beyond bad.
Previous leaders have used their position to blackmail and humiliate victims and our families. The next person elected into the role is volunteering to inherit the legacy of toxic conflict. There are various police investigations and lawsuits in progress. Would any sane person really want to accept this role knowing the name of their family is going to be dragged into those disputes too?
Debianism now suffers from a disproportionate number of gays and transgenders. They have the Zizian mindset, in other words, they have tried to have stright white males molested and killed. They did that after my father died. So far, they failed. The next person to accept any leadership role in the group is inheriting all these vendettas.
I don't criticise the corruption in Debianism for the sake of it. It is necessary to call it out for all the people who currently suffer while other people take credit for our work. Some of the rogue Debianists are total imposters. When they remove the names of real Debian Developers and replace us with somebody's wife or girlfriend or an AI, that is plagiarism.
People are saying she is somebody's wife. Few details have been provided.
Look at how Phil Wyett waited seven years while his Debian Developer application was in limbo. He eventually got fed up and quit. Many other people quietly abandoned Debianism in similar circumstances without making a public comment.
Now look at the New Maintainer report for Sruthi Chandran. She applied to be listed as a Debian Developer in January 2019 and the process was completed in May 2019. Four months for Sruthi Chandran to have her name added to the credits as opposed to seven years for Phil Wyett to get nothing.
Sruthi Chandran received multiple messages from Indian men advocating her advancement. Praveen Arimbrathodiyil, who uses the pseudonym Pirate Praveen has written this advocacy:
I have sponsored almost all of her 200+ packages and also conducted packaging workshops with her.
There are reports about the trips they took together. Here is a report from 2019:
With Narmadh's help, we could start the function on time by bringing Mr. Praveen and Mrs. Sruthi from Palakkad Junction railway station.
They shared a photo:
Look at the Red Hat logo on Pirate Praveen's shirt. Is Debianism really a philosophical movement of some kind or is that a charade used to trick people into doing unpaid work beta-testing systemd?
Pirate Praveen was a candidate for the Indian Pirate Party. Is an Indian political party grooming a woman to be their stooge in Debianism? This is not a comment directed uniquely at the candidacy of Sruthi Chandran. Political parties actually do this type of thing all the time.
If anybody asks why somebody's wife, girlfriend or niece jumped the queue and was selected as a speaker at her first tech conference, the Debianists avoid the question and spread extreme lies about sexual harassment.
Sruthi regularly boasts about being first female Debian Developer in India. In a country with close to a billion people, it is worth asking why other Indian women don't volunteer. Is it because other Indian women have been smart enough to figure out the Debian "family" fallacy is a charade used to exploit people?
Recent blogs about the Debian pregnancy cluster tell the truth about women in Debianism.
When the first blog in the Debian pregnancy cluster series appeared, people were suspicious about the possibility that I was overstating the significance of this problem. Now we can see more than half the Debian women are either spouses or part of the transgender group.
When new women arrive at a conference they see the other women are all holding hands with their partners. The new women immediately understand what they have to do to get ahead in a group like this. Many women don't come to a second conference.
The leaked DebConf room lists take the story even further. Many of the Debian couples share rooms with other Debian couples. These couples are very intimate with each other and very rude to newcomers. Some people say it is like a student union. Other people call it a clique. Whatever you call it, it is not really Debian any more and it is definitely not real diversity.
Sruthi Chandran's platforms from previous elections do not contain any conflict of interest declarations.
She belatedly mentions her husband elsewhere, for example, in the Ada Lovelace Day interview.
I got introduced to the free software world and Debian through my husband. I attended many Debian events with him. During one such event, out of curiosity, I participated in a Debian packaging workshop.
Techrights expressed concerns about her relationships in a recent article:
When Debian wanted to stage a seemingly legitimate election it needed to have more than one candidate running; so eventually the female partner of a geek rose to the challenge (had no coding skills at all, no technical history in Debian) and lost to the "incumbent German".
The Techrights article alleges "no technical history" yet Sruthi Chandran is actually producing packages. It is important to clarify the extent of her personal contribution to those packages. As we saw with the notorious SSH package vulnerability, some package maintainers are simply cutting-and-pasting patches without understanding what they are looking at.
In 2018, DebConf was in Taiwan. A picture appeared of Lior Kaplan with his arm around one of the Albanian women. A few months later the same women had a job at GNOME.
In 2019, DebConf was in Brazil. My last intern from Google Summer of Code told me people pressured her to denounce me. She refused to get involved in gossip and dirty politics.
At the same conference, at the formal dinner, the two women sitting closest to the former Debian leader Chris Lamb are both from Albania. The woman closest to Lamb is Anisa Kuci. A few weeks later, she was given an Outreachy internship, followed by a job at Wikimedia Italia, giving her a residence permit. After people revealed details about men who dated her, she had to change jobs again and she is now at the GNOME Foundation. They are hiding her name and face.
At the same time this woman arrived in the GNOME foundation, they began denouncing a real developer, Sonny Piers.
This has been an awkward pattern in free software communities. Women with minimal or no technical experience are placed in a position of authority and they are used to denounce certain men.
At Easter in 2021, the misfits used Molly de Blanc to create and promote a highly offensive petition attacking a male volunteer, Dr Richard Stallman.
Based on these patterns, there is concern that if Sruthi Chandran is elected as leader of Debianism, the misfits will use her to spread gossip about Dr Richard Stallman or some other man, just as they used Molly de Blanc.
In April 2025, during the previous election, a huge thread appeared
on the debian-vote mailing list with the subject line
Why Debian is dying.
Talking about death and dying, Sruthi Chandran, the wife of a male developer who hides conflicts of interest submitted her nomination on Friday the 13th.
The voting period ends on 17 April, the 15th anniversary of Adrian von Bidder's death, which had been discussed like a copy-cat suicide.
Adrian's wife became the mayor of Basel, which seems to be a factor in the extraordinary amount of money, over $120,000, spent attacking my family and I.
For the dirty men who do censorship and blackmail conspiracies, they may see Sruthi Chandran, a woman, as the perfect front man. The dirty men can hide in the shadows while the leader pretends to be innocent and naive. In other words, if they think they can manipulate Sruthi such that she doesn't disrupt business-as-usual, they may well vote for her and let her feel she is in charge for a year.
In India, who cares if another white male commits suicide in Europe because they can't feed their family? Sruthi Chandran is one of the DebConf organisers. Together, Sruthi Chandran and Pirate Praveen Arimbrathodiyil recruited and wrote the advocacy emails for Abraham Raji before he became the first person to die at DebConf.
The best way to encourage people to nominate for the election will be for the existing leader, Andreas Tille, to withdraw all the privacy attacks, settle the lawsuits proactively and ensure the next leader can walk in and find the desk is clean ready to work on productive things.
Don't hold your breath waiting for transparency about these attacks on my family. There is still time to watch my video and contribute to the crowdfunding campaign.
So where have the last six months gone? I was planning on getting images done for Fedora 44 Beta, but I was unwell and busy and ran out of time. So what better time to get them out than Pi Day!
So compared to the last image, what do we have now? Quite a lot more, and I have more in the pipeline which should be in place before the freeze, plus a possible secret. I just wanted to get something out sooner rather than later for people to play with. So the things that are working and tested are now:
Overall the devices are quite usable, but I will be working to improve them even more in the coming days.
The things that don’t work, but I’m hoping will be working RSN (pre 44) in no particular order:
One thing you do currently need to do manually once you've created an image is to add the following to the kernel command line (use the --args option to arm-image-installer): cma=256M@0M-1024M. Without that, accelerated graphics and some other things just won't work. Once you're booted, add it to /etc/kernel/cmdline so new kernels will get it too. I'll hopefully have that issue fixed shortly; I know the problem, I just still haven't got the best solution!
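As a concrete sketch of that manual step (the image file name, the --target board, and /dev/sdX are placeholders; substitute your own image and SD card device):

```shell
# Write the image, appending the CMA reservation to the kernel command line
sudo arm-image-installer --image=Fedora-Minimal-44.aarch64.raw.xz \
    --target=rpi4 --media=/dev/sdX \
    --args "cma=256M@0M-1024M"

# After first boot, persist it for future kernels: /etc/kernel/cmdline
# is a single line, so append the argument to that line in place
sudo sed -i '1s/$/ cma=256M@0M-1024M/' /etc/kernel/cmdline
```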
You’ll also want to disable auto-suspend on the Desktop images.
So where can I get these images? Right here:
The Fedora 44 Minimal Image
The Fedora 44 KDE Image
The Fedora 44 GNOME Workstation Image
Happy Pi Day everyone!
Another Saturday, another weekly recap.
Monday and Tuesday were all about the Fedora 44 Beta release. Things went mostly smoothly, aside from the magazine article publishing early, so some outlets announced the release before the website was updated, which caused a bit of confusion.
Hopefully everyone is trying out 44 Beta and reporting bugs and issues so we can have a good final release.
We were in infra freeze around the Beta release, so a bunch of pull requests and changes piled up waiting for that to end. With the beta out the door, we unfroze and I spent time this week (along with others) pushing out many of those changes. A short / incomplete list:
Merged pr for pkgs to perhaps fix sporadic core dumps ( https://forge.fedoraproject.org/infra/tickets/issues/12670 )
Merged pr to attempt to fix koji 502's ( https://forge.fedoraproject.org/infra/ansible/pulls/3173 )
Merged pr to fix a bunch of pagure/forge move links (mostly in comments) ( https://forge.fedoraproject.org/infra/ansible/pulls/3174 )
Merged pr to move fedoraloveskde from pagure to forge ( https://forge.fedoraproject.org/infra/ansible/pulls/3183 )
Created a pr to update our security.txt file ( https://forge.fedoraproject.org/infra/ansible/pulls/3210 )
Merged openshift-readonly pr ( https://forge.fedoraproject.org/infra/ansible/pulls/3188 )
new pr to drop haproxy for src.fp.o ( https://forge.fedoraproject.org/infra/ansible/pulls/3211 )
A pull request moving us to using lmdb instead of hash for postfix configuration (rhel10 drops bdb): ( https://forge.fedoraproject.org/infra/ansible/pulls/3120 )
and more. We got a lot moved forward, and there were a number of pull requests from new folks or folks who don't normally submit them, and that's been great to see!
Thursday morning we had an outage of the kojipkgs servers. It all happened before I was awake, but I think I have a good idea of what happened:
Someone/scrapers/whoever requested some urls under our ostree tree via our cloudfront distribution.
These were for objects directories (the directories themselves)
These directories have around 32k object files in them.
So, dutifully, apache generated a pretty index of them for the client.
This required each request to stat all 32k files in order to display them in an index.
This took... minutes for each request
Requests filled up the request queue
haproxy then marked the backends as down
clients started getting 503's
I have now forbidden directory indexes on these directories, so hopefully that will prevent this from happening again.
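The fix might look something like this in the Apache configuration (a sketch only; the directory path is a placeholder and the real vhost layout will differ):

```apacheconf
# Hypothetical fragment: disable autoindex generation for the ostree
# object directories so Apache never has to stat all 32k files to
# render a pretty index for a crawler
<Directory "/srv/web/ostree">
    Options -Indexes
</Directory>
```

With indexes off, a request for a bare directory returns 403 immediately instead of tying up a worker for minutes.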
Lest we forget that they are still around, scrapers made their presence known again toward the end of the week. Two things they were doing:
They started hitting our hotspot.txt file over and over. This is a small static file containing just "OK" that is used to detect whether you are behind a captive portal. It's hard to imagine that they extract any value from their scraping when they are this mind-numbingly bad at writing a distributed crawler. I guess they make up for it by just having way more clients than they need, so they don't have to bother with being efficient at all. This one is particularly annoying because we don't want to put it behind Anubis or block it, or it will break its entire function.
They started hitting Koji's 'search' endpoint with pretty exacting queries. These caused database load to go through the roof and caused the application to stop responding. I disabled search for Friday and have just re-enabled it. I hope they have moved on to /dev/null now.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team is also moving forward some initiatives inside the Fedora Project.
Week: 09 – 13 March 2026
This team is taking care of day to day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day to day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day to day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker

This is the summary of the work done regarding the RISC-V architecture in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days,
and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora
and the migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping package things.
This team is working on improving user experience: providing artwork,
usability, and general design services to the Fedora Project.
If you have any questions or feedback, please respond to this report or contact us on the #admin:fedoraproject.org channel on Matrix.
The post Community Update – Week 11 appeared first on Fedora Community Blog.
Today is Friday the 13th and it is also the last day for people to nominate in the election for a Debian Project Leader.
In political parties, people are usually very keen to nominate for any vacancy on a committee. In most other organisations, it is not unusual for people to be shy about elections.
The role involves doing a lot of administrative work that you will never get paid for. Knowing that you will not get paid for this work, people accept these chores in the hope they can improve the organisation and have some sort of legacy.
The mindless outbreak of nitpicking against former leader Branden Robinson has changed the equation dramatically. It shows us that a leader's legacy is disposable. Look at how Andreas Tille got rid of Branden Robinson.
In other voluntary organisations, like sports clubs, they typically have a big board on the wall with the names of former leaders. There is a contract between generations. Former leaders are entitled to give their opinion in open meetings, just like anybody else.
When they censored Branden Robinson, the misfits bounced a cheque. People are even less likely to volunteer for things in future, whether it is the leadership role or any other chore.
In Debianism, each team is somewhat independent. Theoretically, the mailing list managers, the IRC managers, the Planet service managers or the Community Team nazis can each make their own decision to censor the leader if they wanted to. This means the leader never really has genuine freedom to act and he or she can always be blackmailed, manipulated and forced to dumb-down to the mindlessness of the crowd.
The FSFE Fellowship elected me as their representative, a leadership role, in 2017. Even while I was in the role, the cowards who run the mailing lists were already nitpicking what I say and hiding things from the community. They didn't have the guts to run in the elections themselves because they are cowards. They didn't even wait for my term to finish before censoring certain topics, like the theft of a bequest, which is a censored topic on mailing lists.
The best way to encourage people to nominate for the election will be for the existing leader, Andreas Tille, to withdraw all the privacy attacks, settle the lawsuits proactively and ensure the next leader can walk in and find the desk is clean ready to work on productive things.
Don't hold your breath waiting for transparency about these attacks on my family. There is still time to watch my video and contribute to the crowdfunding campaign.
RPMs of PHP version 8.5.4 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.19 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ️ Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
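After either installation mode, a quick sanity check confirms which interpreter you ended up with (a sketch; adjust the phpXY wrapper name to the Software Collections version you actually installed):

```shell
# Default PHP after switching module streams
php -v

# Parallel Software Collections install; the command name
# follows the package name (php84, php85, ...)
php84 -v
```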
And soon in the official updates:
⚠️ To be noted:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
We will never know the names of the people who thought about contesting the Debian Project Leader elections but decided not to raise their hand. In fact, the people who spend time thinking about it remain totally invisible.
Anybody who does volunteer for this position is going to inherit all the conflicts created by previous leaders. One of the most recent conflicts was the case of Phil Wyett. Mr Wyett, a British Army veteran, bravely shared all the private insults sent to him by various misfits. Wyett had tried to privately escalate the snobbiness to the current leader, Andreas Tille. In a parallel incident, we saw former leader Branden Robinson exposed to snobbiness. He tried to escalate concern about snobbiness on the debian-project mailing list. Andreas Tille's response was laced with more snobbiness, Tille did nothing to resolve the dispute. Wyett and Robinson are two more festering wounds in the culture of Debianism that every future leader is going to inherit.
A genuinely good candidate for the role is going to understand the risks created by these previous conflicts are too great. The candidate has to risk their own time, their money and their reputation and they will not be paid anything in return for that.
We already have damning evidence about the Debian suicide cluster to reflect on.
As I was reading the news yesterday about the murder-suicide on a Postbus in Switzerland, I was inevitably reminded about the death of Adrian von Bidder in Basel on our wedding day (detailed history) .
If you were elected leader of Debian in April and one of the victims like Phil Wyett or Branden Robinson committed suicide a few weeks later, how would you handle that? What if one of the victims went even further and engaged in a deplorable murder-suicide?
Another big story in the news this week is the story of Iran laying sea mines in the Straight of Hormuz. They tell us at least a dozen mines have been hidden below the water. Anybody who becomes leader of Debianism today is inheriting a whole bunch of virtual landmines, nasty surprises that could blow up in their face.
The best candidates will think about that and decide the risk of being in the captain's chair on the day somebody else dies is too big a risk for their reputation.
Andreas Tille is one of the older developers and he appears to be very close to retirement age. His children appear to be fully grown adults who can support themselves. Therefore, Andreas Tille's family may suffer a reputational risk but not a very big financial risk if he loses his job one or two years before he was going to retire anyway.
For younger candidates with a large mortgage to pay and small children to support, the thought of inheriting more Debian deaths is an unacceptable risk. Such people will not nominate.
That leaves us with the people near retirement age, the girlfriends who are fully supported by their spouse and the people using fake names.
There is no clear binary test to determine if an organisation is a cult or not. Nonetheless, when leadership candidates have to contemplate inheriting the legacy of snobbiness and suicides, when the set of possible leaders is restricted to those willing to tolerate and stubbornly ignore the stigma of this culture, we have clearly reached the point where most outside observers can agree that Debianism has become a cult.
Cult members themselves typically won't realise they are in a cult until after they quit.
The best way to encourage people to nominate for the election will be for the existing leader, Andreas Tille, to withdraw all the privacy attacks, settle the lawsuits proactively and ensure the next leader can walk in and find the desk is clean ready to work on productive things.
Don't hold your breath waiting for transparency about these attacks on my family. There is still time to watch my video and contribute to the crowdfunding campaign.
On the afternoon of Tuesday, 10 March 2026, a man receiving treatment for an unknown condition vanished from a hospital in the Canton of Bern, Switzerland. We are told this was a regular hospital, not a secure psychiatric hospital. It is not clear if he was being treated for a psychiatric disorder or if he was in the hospital for an unrelated health problem. The hospital alerted the police that a patient was missing. It is not clear whether they alerted the police out of concern for the patient himself or out of concern for a risk to the public at large.
The police told us they immediately began searching for the man and put out a nationwide alert. The police did not seem to know about the nature of any risk and made inquiries with relatives to try and understand the situation.
Remember the Basel suicide petition? We found the name A. von Bidder on the petition and Adrian von Bidder died on our wedding day. The petition admits there are some cases where the authorities are failing to help people. We also know there are other cases where they become too proactive and put too many people in the psychiatric hospital, for example, to protect homeless people from the cold or using people with mild or temporary conditions to fill empty beds. Filling empty beds ensures the hospital can charge extra fees to the health insurance system. As a consequence, some of the psychiatric hospitals are always at one hundred percent capacity and some people who need real care are left waiting a few days in a regular hospital or emergency room.
On the evening of Tuesday, 10 March 2026, the man boarded a Swiss Postbus and set himself on fire. He killed six people, including the driver and himself.
Reports have suggested up to fifty people were injured, with five suffering injuries that are grave or life threatening.
Everybody who lives in Switzerland has used a Postbus. Postbusses generally operate in rural areas and mountains popular with tourists and hikers. Most people associate the Postbus with their leisure or free time. Therefore, the incident touches a nerve in everybody throughout the country. When people go up to the mountains next Saturday, some of them will use a Postbus and they will remember the Postbus catastrophe at Kerzers.
The morning after the incident, all evidence of the Postbus tragedy had vanished. The street was left empty.
The day after the incident, on the afternoon of 11 March 2026, the Swiss authorities made various announcements that focus on the man and make no reference to his environment and other people who are part of his life story.
In other words, they are using the man as a scapegoat to hide everything else that precipitated this incident.
They told us he is known to the health authorities, he was marginalised and disturbed. They insist he was psychiatrically unstable. The skeleton of the burnt out bus has been towed away, the case is closed, it is all this man's fault. We need to check that.
Let's dissect all of those statements and show how the Swiss police and other authorities have not really told us anything useful at all.
He was known to the health authorities
In the State of Victoria, Australia, my aunt is a nurse, two of my cousins are nurses, the Health Complaints Commissioner is my cousin and a banned psychiatrist was a former classmate. The Health Minister who appointed my cousin as Health Complaints Commissioner? He was a neighbor in the region where I used to live. The Minister and I know each other through the ALP. I am certainly known to the health authorities but I never burnt a Postbus. I am a big fan of public transport.
he was marginalised
Do people marginalise themselves? The statement implies the involvement of third parties.
Who marginalised him? Did a rogue employer make up a rumour about him as a way to avoid giving him a redundancy payment when downsizing? This is a typical problem that comes up frequently in Switzerland. The value of the Swiss franc went up dramatically in the years of the credit crisis and many firms had to downsize. Sadly, some employers will say anything to avoid paying out the three months' salary that is due on redundancy.
If the police insist he was marginalised, they need to tell us how that started, we need to hear both sides of the story.
he was disturbed
Could this simply be a consequence of somebody marginalising him?
Did somebody disturb him deliberately? If he was known to the authorities, as they suggest, had they abused him in custody or in a healthcare setting? Amnesty International published a detailed report about the abuse of Trevor Kitchen, a whistleblower who has been stalked by the Swiss authorities for thirty years:
Trevor Kitchen, a 41-year-old British citizen resident in Switzerland, was arrested by police in Chiasso (canton of Ticino) on the morning of 25 December 1992 in connection with offences of defamation and insults against private individuals. In a letter addressed to the Head of the Federal Department of Justice and Police in Berne and to the Tribunal in Bellinzona (Ticino) on 3 June 1993 he alleged that two police officers arrested him in a bar in Chiasso and, after handcuffing him, accompanied him to their car in the street outside. They then bent him over the car and hit him around the head approximately seven times and carried out a body search during which his testicles were squeezed. He claimed he was then punched hard between the shoulder blades several times. He said he offered no resistance during the arrest.
He was then taken to a police station in Chiasso where he was questioned in Italian (a language he does not understand) and stated that during the questioning "The same policeman that arrested me came into the office to shout at me and hit me once again around the head. Another policeman forced me to remove all of my clothes. I was afraid that they would use physical force again; they continued to shout at me. The one policeman was pulling at my clothes and took my trouser belt off and removed my shoe laces. Now I stood in the middle of an office completely naked (for 10 minutes) with the door wide open and three policemen staring at me, one of the policemen put on a pair of rubber surgical gloves and instructed me to crouch into a position so that he could insert his fingers into my anus, I refused and they all became angry and started shouting and demonstrating to me the position which they wanted me to take, laughing, all were laughing, these police were having a good time. They pointed at my penis, making jokes, hurling abuse and insults at me, whilst I stood completely still and naked. Finally, when they finished laughing, one of the policemen threw my clothes onto the floor in front of me. I got dressed."
He was transferred to prison some hours later and in his letter claimed that during the night he started to experience severe pains in his chest, back and arms. He asked a prison guard if he could see a doctor but the request was refused and he claimed the guard kicked him. He was released on 30 December 1993. Medical reports indicated that since his release he had been experiencing recurrent pain in the area of his chest and right shoulder and had been receiving physiotherapy for an injury to the upper thoracic spine and his right shoulder girdle.
In 2018, years after Mr Kitchen left Switzerland, they used a European Arrest Warrant to have the police abduct and molest him again in Portugal, where he had retired. What this tells us is that the police and the jurists have a pathological obsession with certain people.
he was psychiatrically unstable
I asked various search engines about the diagnosis of psychiatric instability. Here is what the artificial intelligence told me:
Psychiatric instability is not a formal medical diagnosis
It looks like "psychiatrically unstable" is a term that was created by some creative person in the police propaganda office. It is not a term used by real doctors.
A real doctor would tell us the name of the illness, the date it was diagnosed, whether he had recovered from the illness and whether or not he was currently undergoing treatment.
Looking at the details of psychiatric illness, many people who suffer from illnesses like schizophrenia and bipolar disorder tend to do very random things, like running around naked with a chainsaw. If people with these conditions are under medical supervision, which includes regular therapy sessions and medication, these outcomes are avoided.
Yet what we saw on Tuesday night was not a random guy setting himself on fire in the middle of a park. He chose to set himself on fire at a place and time where he would inflict the maximum damage on those around him. This is not the typical behaviour of somebody who harms themselves because of schizophrenia or bipolar disorder.
Schizophrenia is a mental disorder characterized variously by hallucinations, delusions, disorganized thinking or behavior, and flat or inappropriate affect.
He has made deliberate decisions about the place, the time and the use of an accelerant to maximise the spread of the fire. Even inside the bus, it appears he chose a position to ignite himself where he blocked the exit and may have damaged the door to make it inoperable. While he may not have a religious motive, his planning and execution of the attack suggests conscious deliberation on par with that of a terrorist, not somebody acting spontaneously under mental illness.
The Postbus incident resembles a pre-planned and calculated murder-suicide, not unlike the Germanwings crash in which the pilot deliberately flew the plane into the French Alps.
The information provided by the Swiss authorities is very disparaging to the man. It is necessary for them to show us his full history, including employer disputes, harassment from Switzerland's amateur judges and the actual medical diagnosis. If we don't know all of these things then the disparaging statement from the police is quite useless.
Remember the Basel suicide petition? We found the name A. von Bidder on the petition and Adrian von Bidder died on our wedding day. His wife became the mayor of Basel but nobody ever explained why he died. It is really important to read all that again in conjunction with this latest crisis.
Read more about how Swiss judges and jurists are humiliating people in the JuristGate reports.
News reports have appeared about a bus catching fire in Kerzers, Switzerland, which is also known as Chiètres in French. Reports suggest one person set himself on fire. Police confirm it was deliberate, and they say it was not terrorism. The possibility that remains for us to contemplate is that it was somebody who had been humiliated by their employer or by the jurists in a tribunal.
This web site has gone to great lengths to reveal the extent to which jurists take money from people and then fail to provide any solution to the needs of their clients. In many cases, the jurists make disputes even worse in the hope that parties on both sides of the dispute, or their insurers, will end up paying even more legal fees.
When employers make somebody redundant in Switzerland, they normally have to give the workers three to six months notice or a lump sum payment, sometimes both. To avoid making these payments, some employers and some of the agencies responsible for contract employees will try to blame one of the workers. Many of these disputes are settled by a tribunal. Both the employer and employee want to save face and so the judgments are kept totally secret.
There are parallel cases in Germany. Galia Mancheva's abuse report describes her experience of taking the FSFE misfits to an employment tribunal. The tribunal forced her to listen to lies from the men and then they humiliated her by telling her she can't have justice because she had been in the job for less than twelve months.
Due to the JuristGate scandal, when the illegal legal fees insurance scheme vanished overnight, up to 20,000 clients suddenly found themselves without a lawyer. Some of those people were in the middle of legal disputes with employers or employees. Some of those disputes involve sensitive matters, such as accusations of sexual impropriety and employee illness. FINMA has not made any comment on how these case files were protected or compromised. Matthieu Parreaux, the founder of the rogue firm, suggested that all the case files have fallen into the hands of other lawyers.
When we began examining the Adrian von Bidder death in Basel, where his wife subsequently became the mayor, we looked at a petition to the Basel Kanton about the way they are hiding the number of suicides and the reasons for them. We were shocked to find the name A. von Bidder on the petition shortly before Adrian von Bidder-Senn died at age 32. No official reasons have been given. The only fact we know is that he died eight months after the Debian Day volunteer suicide and people discussed it like a copy-cat suicide. Nonetheless, if Debianism was not the only influence in his death, we need to start asking if he had simultaneously been the victim of rogue employment practices or incompetence by jurists.
As noted in the Basel Kanton suicide petition, many of these cases involve total secrecy, and the employment tribunals also operate in secrecy. From time to time, news reports appear about cases where multiple people are attacked or killed, for example, a chainsaw attack at the office of CSS insurance in Kanton Schaffhausen. I went into some detail to explain the incompetence, corruption and racism we encountered when somebody fell on Carla at a Zurich yoga studio and the people at the accident insurance spent two years giving us excuses.
Police confirm the Postbus fire occurred deliberately. It happened at peak hour when the bus had the maximum number of passengers. The victim chose to die in a Postbus. These buses are iconic of Swiss rural life and the tourism industry. Did the victim of this suicide intend to send some kind of signal, like the distinctive three-tone horn of the bus itself?
Will the victims of this tragedy spend two years waiting for an answer or will they never get answers at all?
Read more about JuristGate, the illegal legal insurance that Swiss regulator FINMA knew about for five years.
About three months ago I started working on the RISC-V port of Fedora Linux. Many things have happened during that time.
I went through the Fedora RISC-V tracker entries, triaged most of them (at the moment 17 entries are left in NEW) and tried to handle whatever was possible.
My usual way of working involves fetching the sources of a Fedora package (fedpkg clone -a) and then building it (fedpkg mockbuild -r fedora-43-riscv64). After some time, I check whether it built, and if not, I go through the build logs to find out why.
The result? So far, 86 pull requests sent for Fedora packages, from heavy packages like “llvm15” to simple ones like “iyfct” (a simple game). At the moment most of them have been merged, and most of those got built for Fedora 43. We can then build them on our side as we follow the ‘f43-updates’ tag on Fedora koji.
Work on packages brings up a hard, sometimes controversial topic: speed. Or rather, the lack of it.
You see, RISC-V hardware at the moment is slow, which results in terrible build times. Look at the details of the binutils 2.45.1-4.fc43 package I took from koji (Fedora and RISC-V Fedora):
| Architecture | Cores | Memory | Build time |
|---|---|---|---|
| aarch64 | 12 | 46 GB | 36 minutes |
| i686 | 8 | 29 GB | 25 minutes |
| ppc64le | 10 | 37 GB | 46 minutes |
| riscv64 | 8 | 16 GB | 143 minutes |
| s390x | 3 | 45 GB | 37 minutes |
| x86_64 | 8 | 29 GB | 29 minutes |
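To put the table in numbers: the riscv64 build takes roughly five times as long as the x86_64 one. A quick sanity check using the figures above:

```python
# Build times (in minutes) for binutils 2.45.1-4.fc43, taken from the table above
build_minutes = {
    "aarch64": 36,
    "i686": 25,
    "ppc64le": 46,
    "riscv64": 143,
    "s390x": 37,
    "x86_64": 29,
}

# Slowdown of each architecture relative to x86_64
slowdown = {arch: t / build_minutes["x86_64"] for arch, t in build_minutes.items()}

print(f"riscv64: {slowdown['riscv64']:.1f}x the x86_64 build time")
# → riscv64: 4.9x the x86_64 build time
```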
That build ran on a StarFive VisionFive 2 board; while it has other strengths (such as upstreamed drivers), it is not the fastest board available. I asked around, and one of the porters did a build on a Milk-V Megrez: it took 58 minutes.
Also worth mentioning: the current RISC-V Fedora port is built with LTO disabled, to cut memory usage and build times.
RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on the board). Those cores are usually compared to Arm Cortex-A55 cores, the slowest CPU cores in today’s Arm chips.
The UltraRISC UR-DP1000 SoC, present on the Milk-V Titan motherboard, should improve the situation a bit (and can have 64 GB of RAM). The same goes for SpacemiT K3-based systems (but only 32 GB of RAM). Both will be an improvement, but not the final solution.
We need hardware capable of building the “binutils” package above in under one hour, with LTO enabled system-wide and so on, to be on par with the other architectures. This is the speed-related requirement.
There is no point in going for inclusion with slow builders, as this will make package maintainers complain. You see, in Fedora, build results are released into the repositories only when all architectures finish, and we had maintainers complaining about the slow AArch64 builders in the past. Some developers may start excluding the RISC-V architecture from their packages so they do not have to wait.
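Excluding an architecture is a one-line change in a package's spec file, so the barrier to opting out is low. A hypothetical fragment (the directive is standard RPM; the context is invented):

```spec
# Hypothetical spec file fragment: stop building this package on riscv64
# because the builders are too slow.
ExcludeArch:    riscv64
```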
Any future builders also need to be rackable and manageable like any other boring server (put it in a rack, connect the cables, install it, and never touch it again), because no one will go into a data centre to manually reboot an SBC-based builder.
Without systems fulfilling both requirements, we cannot even plan for the RISC-V 64-bit architecture to become one of the official, primary architectures in Fedora Linux.
Such long build times are what make QEMU useful to me. My AArch64 desktop has 80 cores, so with QEMU userspace riscv64 emulation I can build the “llvm15” package in about 4 hours. Compare that to 10.5 hours on a Banana Pi BPI-F3 builder (it may be quicker on a P550-based one).
The LLVM packages make real use of both the available cores and memory. I wonder how fast it would go on the 192/384 cores of an Ampere One-based system.
Still, I use QEMU for local builds and testing only. Fedora, like several other distributions, does native builds only.
We plan to start building Fedora Linux 44. If things go well, we will use the same kernel image on all of our builders (the current ones run a mix of kernel versions). LTO will still be disabled.
As for the lack of speed: there are plans to bring in new, faster builders, and probably to assign some of the heavier packages to them.
Logseq is great for dumping daily notes, but finding them again later can be a pain. If you’re looking for notes on a “connection timeout” but originally wrote “increasing the socket keepalive”, a standard keyword search will give you nothing. You end up having to guess the exact phrasing your past self used.
I wanted a way to search my graph by concept rather than exact text matches. That’s why I put together the Logseq Semantic Search plugin.
The upcoming database version of Logseq actually has semantic search built-in. But since I’m still using the standard Markdown version for my day-to-day workflow, I wanted to get that capability right now.
The plugin uses text embeddings to find conceptually similar blocks. But just embedding individual bullet points doesn’t work well for outliners. A block that just says “needs refactoring” is useless on its own.
If you’ve seen my Logsqueak project, you’ll recognise the indexing approach here. Every block is indexed along with its complete structural lineage—the page name, properties, and the full chain of parent blocks above it.
Because it captures this nested context, the search index knows that a vague bullet point nested under billing-service → Database Connection Pool is actually about your Postgres setup. Searching for “optimizing billing db” will pull that specific child block right to the top of the results.
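The indexing idea is simple to sketch: rather than embedding a block's text alone, embed the page name and the chain of ancestor blocks joined with it. A simplified illustration (not the plugin's actual code; the function name and separator are made up):

```python
def build_index_text(page: str, ancestors: list[str], block: str) -> str:
    """Join a block with its structural lineage before embedding it."""
    return " > ".join([page, *ancestors, block])

# A vague child block gains meaning from its lineage:
text = build_index_text(
    "billing-service",
    ["Database Connection Pool"],
    "needs refactoring",
)
print(text)  # → billing-service > Database Connection Pool > needs refactoring
```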
Since a Logseq graph is essentially a private brain dump, I wanted this to run entirely locally. By default, the plugin connects to Ollama using the lightweight nomic-embed-text model. It’s smart enough to only re-embed blocks that have changed, so it’s relatively fast even without a GPU. (If you prefer, you can also point it at any OpenAI-compatible endpoint in the settings).
I run Fedora Workstation and prefer to keep my host system clean, so I run Ollama via Podman. It’s incredibly straightforward to set up:
# Start the Ollama container, exposing the default port
# and persisting data
podman run -d \
--name ollama \
-p 11434:11434 \
-e "OLLAMA_ORIGINS=*" \
-v ollama:/root/.ollama \
docker.io/ollama/ollama
# Pull the lightweight embedding model
podman exec ollama ollama pull nomic-embed-text
Because we mapped port 11434, the Semantic Search plugin can talk to the container seamlessly at http://localhost:11434 right out of the box. No dependency issues, just a private embedding server ready to run in the background.
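Once the vectors exist, ranking results comes down to cosine similarity between the query embedding and each stored block embedding. A minimal sketch with toy three-dimensional vectors (the plugin itself gets real, much longer vectors from the embedding model):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors: the query points in roughly the same direction as block_1
query = [1.0, 0.5, 0.0]
block_1 = [0.9, 0.6, 0.1]   # conceptually similar block
block_2 = [0.0, 0.1, 1.0]   # unrelated block
assert cosine_similarity(query, block_1) > cosine_similarity(query, block_2)
```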
You can grab the plugin directly from the Logseq Marketplace. Once it’s installed, hit Alt+K (or click the toolbar icon) to open the search modal. Try typing a natural language query—like “notes about debugging pipeline failures”—and it will surface the relevant blocks even if you didn’t use the word “debugging.”
The source code is up on GitHub if you want to poke around or contribute.
The post Searching Logseq by Concept, Not Keystrokes appeared first on PRINT HEAD.
One of the major things at work is XML, due to all things identity. Yes, XML and SAML are very much alive. SWAMID is the identity federation for research and higher education in Sweden, and eduGAIN is the global interfederation connecting 80+ participant federations, with over 10,000 identity and service providers. All of these are based on SAML.
In the last few weeks I released two libraries in Rust, along with Python bindings for them using pyo3. uppsala is a zero-dependency XML library and pyuppsala is its Python binding.
It also provides an XmlWriter for constructing output without a DOM. Read the full documentation for details.
bergshamra is the pure Rust XML Security library implementing the W3C XML Digital Signatures (XML-DSig), XML Encryption (XML-Enc), and XML Canonicalization (C14N) specifications. Built entirely on the RustCrypto ecosystem with Uppsala for XML parsing, and pybergshamra is the python binding.
Other features include support for the AssertionID attribute as a default ID, the cid: URI scheme for WS-Security MIME references, and #![forbid(unsafe_code)] across every crate. Supported algorithms:

| Category | Algorithms |
|---|---|
| Digest | SHA-1, SHA-224/256/384/512, SHA3-224/256/384/512, MD5†, RIPEMD-160† |
| Signature (RSA) | RSA PKCS#1 v1.5 (SHA-1/224/256/384/512, MD5†, RIPEMD-160†), RSA-PSS (SHA-1/224/256/384/512, SHA3-224/256/384/512) |
| Signature (EC) | ECDSA (P-256/P-384/P-521 × SHA-1/224/256/384/512, SHA3-224/256/384/512, RIPEMD-160†) |
| Signature (other) | DSA (SHA-1, SHA-256), Ed25519, HMAC (SHA-1/224/256/384/512, MD5†, RIPEMD-160†) |
| Post-quantum | ML-DSA-44/65/87 (FIPS 204), SLH-DSA SHA2-128f/128s/192f/192s/256f/256s (FIPS 205) |
| Block cipher | AES-128/192/256-CBC, AES-128/192/256-GCM, 3DES-CBC |
| Key wrap | AES-KW-128/192/256 (RFC 3394), 3DES-KW (RFC 3217) |
| Key transport | RSA PKCS#1 v1.5, RSA-OAEP (SHA-1/224/256/384/512 digest, MGF1-SHA-1/224/256/384/512) |
| Key agreement | ECDH-ES (P-256/P-384/P-521), X25519, DH-ES (X9.42) |
| Key derivation | ConcatKDF, HKDF (SHA-256/384/512), PBKDF2 |
| C14N | Inclusive 1.0/1.1, Exclusive 1.0, each ± comments |
| Transforms | Enveloped signature, Base64, XPath, XPath Filter 2.0, XSLT (identity), OPC Relationship |
| Key formats | PEM, DER, PKCS#8, PKCS#12, X.509, xmlsec keys.xml, raw HMAC/AES/3DES |
† MD5 and RIPEMD-160 are behind the legacy-algorithms feature flag.
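To give a feel for one small piece of what these specifications involve: in XML-DSig, the canonicalized bytes of each reference are hashed, and the digest is base64-encoded into a DigestValue element. A conceptual sketch using Python's standard library (this is not bergshamra's actual API; real code would run C14N on the input first):

```python
import base64
import hashlib

def digest_value(canonical_xml: bytes, algorithm: str = "sha256") -> str:
    """Compute the base64 DigestValue for already-canonicalized XML bytes."""
    h = hashlib.new(algorithm, canonical_xml)
    return base64.b64encode(h.digest()).decode("ascii")

# A tiny, already-canonical fragment
print(digest_value(b"<a>hello</a>"))
```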
Bergshamra is tested against the full xmlsec interoperability test suite (1157 test steps across DSig and Enc). These are the same tests used by the xmlsec1 C library, covering test vectors from the W3C, Merlin, Aleksey, IAIK, NIST, and Phaos interop suites.
| Suite | Passed | Failed | Total | Pass Rate |
|---|---|---|---|---|
| Enc | 701 | 0 | 701 | 100% |
| DSig | 447 | 9 | 456 | 98% |
| Total | 1148 | 9 | 1157 | 99.2% |
The 9 DSig failures are GOST algorithm tests (GOST R 34.10-2001, GOST R 34.10-2012-256, GOST R 34.10-2012-512) which require special OS cryptographic libraries not available in the RustCrypto ecosystem.
Those are the libraries; hopefully you will see the tools and services built on top of them in the coming months.
I was chatting with Brian “redbeard” Harrington recently, and a certain turn of phrase came up during our conversation I thought was worth sharing:
“Referred pain is deferred pain.”
What does this mean?
A lot of open source projects, and programming projects in general, start from a single developer scratching an itch: solving a problem they personally experience. As a unit, the person developing the code and the person experiencing the pain are one and the same. Software that solves clearly-defined issues with the perspective of those directly experiencing those issues is much better positioned to succeed than software meant to solve problems its builders don't understand or experience firsthand.
So projects started by one or two or a small group of developers who really deeply understand the problem domain – first-hand experience with it is best – are ones that tend to succeed.
When things get more complicated – a larger-scale project, folks building solutions who don't live and breathe or even understand the problem domain, solving problems for people they don't know – that's when your ability to build something truly great, that has a fantastic user experience, that gains user adoption, that really solves real problems is compromised. Because the pain is referred.
(Wait, what’s “referred pain?” It’s when you experience pain in one area of the body for which the source of injury is a completely different body part. A lot of folks experience back pain, for example, that is referred pain from another body part.)
My career-long approach to software engineering is from a human-centered perspective. In that role, I’ve served as the bridge between users – the folks directly experiencing pain – and the builders creating solutions for them. Open source itself enables direct links (without user experience type practitioners in the middle.) In open source, the folks experiencing pain can connect directly with the folks building the tool. If they are so inclined, they can even contribute to the code itself! It’s one of the things that I truly enjoy about working on open source community projects: that direct connection and tighter feedback loop. There are folks that need to be connected that for various reasons do not participate in this direct way, though. So I have described my role at times as serving as an ambassador for those users to the developers and other team members planning and building things. I serve as their voice.
Open source was a first milestone in democratizing the means of software production.
Generative AI tools are perhaps the next, because they lower the bar in terms of skill sets and access in producing software. (We can niggle about the quality and maintainability and the software engineering skill sets that are still critical and valuable in that production. I hope we can all agree these tools are a great enabler of prototype code to help communicate needs / address pain / etc., though.)
The power I’m talking about here, is kind of looping back to open source’s roots… folks with the skill set to build, building things to solve problems they are familiar with and face the pain of. Now we have folks who previously could not build for themselves, with different pains, able to scratch their own itches. This is a more diverse community. Their pains, their problems, are from a broader set of domains than developers directly experience. There’s potential, consequentially, that we could see more innovative and novel forms of software emerging.
I speak a little to this shift we may see in user engagement with open source projects in the latest episode of my Red Hat video series, “What’s on Mo’s Mind?” I talk about how Gen AI is a new abstraction layer in computing that lowers the bar for participation and in doing so enables a broader set of perspectives to contribute to software. Take a look if you like!
(Note: The feature image was generated with Gemini 3.0, using a prompt I drafted manually and character design artwork from my own portfolio. A frustrating process, by the way, but here we are.)
Here we are in the first week of March 2026 already. This was a pretty quiet week for me, partly due to the Fedora 44 Beta freeze and partly I think due to people traveling/being away. In any case it was welcome to me to have a chance to work on some planned work instead of day to day or fighting fires.
This week I finally got our GPU machine all set up, which has been a very long road. Last year we thought it would be very handy to have a machine with desktop GPUs in it that we could use to test, build, and explore things that could make use of them. We didn't want a server with fancy datacenter GPUs; we wanted the kind of hardware Fedora users might have. This is of course tricky, since it entails a desktop-like machine in a datacenter.
After some looking around, we found the Dell Precision 7960 Rack, which is a rackmount machine, but sort of a desktop too.
We got a loaner to test things out with, and finally decided to buy one and use it. There have been so many little delays with this thing (the wrong network card, so we needed a new one; the time of the people involved to set up the testing; the DRAC license was wrong and I couldn't install it; and more).
But finally this week it's up. We will see how useful it becomes and what new exciting things it opens up.
We had our Fedora 44 Beta go/no-go meeting on Thursday, and amazingly we were go for release on Tuesday. The second beta candidate had no accepted blockers. I'm always a bit surprised when things go so smoothly, but I will take it!
I also made some more progress on my secure boot signing setup, but then I hit a blocker. I was able to sign grub and the kernel for aarch64, but it doesn't actually boot. (I have my Lenovo Slim 7x and another aarch64 box that supports secure boot to test with.) Hopefully we can get to the bottom of that soon so we can switch things on. I really hope we can have it running before the Fedora 44 final freeze.
This also has been a long road.
I'm getting a solar system with batteries and home backup installed late this month, and I'm really looking forward to it.
Unfortunately, my electric co-op just informed me that there is going to be a 4-hour power outage on Monday for maintenance work. If it had only been next month, I could have just ignored it. Oh well, one more time for the generator! :)
As always, comment on mastodon: https://fosstodon.org/@nirik/116189196834124254
Do you remember the most important characteristics you should look for in a good laptop? In the following order:
1⃣ A high-resolution, high-density display: 3K or 4K, far beyond HD or Full HD
2⃣ A battery that lasts all day
3⃣ Fast storage (SSD)
4⃣ Light, thin, and elegant
It’s not the CPU.
It’s not AI.
It’s not having huge storage capacity.
It’s not a large physical size.
It’s not having more than 8 GB of memory (memory is not storage).
It’s not having a stylus, tablet convertible, or having a detachable or articulated keyboard.
And it’s definitely not having a numeric keypad on the side.
Until last week, the best and most affordable laptop on the market with these characteristics was the $1100 MacBook Air. But now Apple has launched the MacBook Neo, which delivers all these qualities (display, battery, storage, lightness, and elegance) for 45% less: $600.
❝A laptop for me is just for browsing the internet, email, editing documents, messaging, watching movies, and relaxing with games like Solitaire or Roblox.❞
Congratulations, you’re like 99.9% of humanity. The MacBook Neo delivers the best value for you.
In the Windows laptop universe, these truly important characteristics (display quality, lightness, etc.) are usually found only in the most expensive product lines. To justify the high price, their marketing shifts the focus to things that are largely irrelevant: unnecessarily powerful CPUs, unnecessarily large storage, unnecessarily large memory, tablet modes, styluses, and so on.
All unnecessary for 99.9% of humanity.
And even in those expensive lines, the battery rarely lasts more than two hours, let alone all day. The reason: inefficient CPU.
Don’t be misled when choosing your next laptop. Pay attention to the characteristics that really matter: a high-resolution display, battery life, fast storage, lightness, and elegance. A general rule is to avoid laptops that use Intel CPUs.
This also applies to the laptops that companies give to their employees.
This post is the latest in my series of GNOME Foundation updates. I’m writing these in my capacity as Foundation President, where I’m busy managing a lot of what’s happening at the organisation at the moment. Each of these posts is a report on what happened over a particular period, and this post covers the current week as well as the previous one (23rd February to 6th March).
I’ve mentioned the GNOME Foundation’s audit on numerous occasions previously. This is being conducted as a matter of routine, but it is our first full formal audit, so we have been learning a lot about what’s involved.
This week has been the audit fieldwork itself, which has been quite intense and a lot of work for everyone involved. The audit team consists of 5 people, most of whom are accountants of different grades. Our own finance team has been meeting with them three times a day since Tuesday, answering questions, doing walkthroughs of our systems, and providing additional documents as requested.
A big part of the audit is cross-referencing and checking documentation, and we have been busy responding to requests for information throughout the week. On last count, we have provided 140 documents to the auditors this week alone, on 20 different themes, including statements, receipts, contracts, invoices, sponsorship agreements, finance reports, and so on.
We’re expecting the draft audit report in about three weeks. Initial signs are good!
Planning activity for GUADEC 2026 has continued over the past two weeks. That includes organising catering, audio visual facilities, a photographer, and sponsorship work.
Registration for the event is now open. The Call for Papers is also open and will close on 13 March – just one week away! If you would like to present this year, please submit an abstract!
If you would like travel sponsorship for GUADEC, there are two deadlines to submit a request: 15th March (for those who need to book travel early, such as if they need a visa) and 24th May (for those with less time pressure).
This year’s Linux App Summit is happening in Berlin, on the 16th and 17th May, and is shaping up to be a great event. As usual we are co-organizing the event with KDE, and the call for proposals has just opened. If you’d like to present, you have until 23rd March to submit a paper.
The Travel Committee will be accepting travel applications for LAS attendees this year, so if you’d like to attend and need travel assistance, please submit a request no later than 13th April.
On the infrastructure side, GNOME’s single sign-on service has been integrated with blogs.gnome.org, which is great for security, as well as meaning that you won’t need to remember an extra password for our WordPress instance. Many thanks to miniOrange for providing us with support for their OAuth plugin for WordPress, which has allowed this to happen!
That’s it for my update this week. In addition to the highlights that I’ve mentioned, there are quite a number of other activities happening at the Foundation right now, particularly around new programs, some of which we’re not quite ready to talk about, but hope to provide updates on soon.
Do you want to join us for our annual contributor conference? We want to see you there! However, we know that traveling to a global event is a big trip. It costs real money. To help out, the Flock Organizing Team offers Flock 2026 financial assistance. We want to make sure money does not stop our active contributors from attending.
This is your final reminder. You must submit your form by Sunday, March 8th, 2026. The organizing team starts looking at the data on Monday morning. Because of this fast timeline, we cannot accept any late forms. Sunday is a hard stop.
What does this funding actually cover? We can help you pay for your travel. This includes your airfare or train tickets. We can also help cover your hotel room at the main event venue. We have a limited budget. Because of this, we cannot fully fund every person who applies. Your peers on the organizing team review all the forms. They look at your community impact to make these tough choices.
Are you giving a talk this year? We are excited to hear from you! But please remember one important rule. Being an accepted speaker does not give you guaranteed funding. You still need to ask for help. All speakers must fill out the Flock 2026 financial assistance form if they need travel support.
Applying is easy. Just follow these steps:
We want to bring as many builders and contributors together as possible. Please do not wait until the last minute. If you need support to join us, fill out the application today!
The post Final Reminder: Flock 2026 Financial Assistance Applications Close Sunday, March 8th appeared first on Fedora Community Blog.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.
Week: 02 – 06 March 2026
This team is taking care of day-to-day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day-to-day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This is the summary of the work done regarding AI in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days, and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora and the migration of repositories from pagure.io.
This team is working on improving user experience: providing artwork, usability, and general design services to the Fedora Project.
If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.
The post Community Update – Week 10 2026 appeared first on Fedora Community Blog.
I’ve been using Logseq for a year now and it’s become the backbone of my workflow. I have pages dedicated to specific topics, concepts, projects, meetings… all sorts of things.
During my day, when I want to note something down or write something out to think about it, the daily Logseq journal is the obvious place for it to go. It has been an invaluable habit to build. But there’s a catch: the journal can easily become a black hole. It ends up as a chaotic mix of meeting notes, fleeting thoughts, random ideas, task lists and the occasional moment of genuine insight.
Most of the time, I try to link journal items to the relevant pages. Sometimes I remember to update those pages in light of new information. But other times I forget, and those insights get buried in the timeline, only resurfacing if I explicitly search for them.
All of those things belong in the journal, but some of them also belong in permanent pages. I wanted a way to filter the signal from the noise and capture things that I can integrate into my pages, in a way that makes them traceable back to the journal, without leaving the keyboard.
Enter Logsqueak: a proof-of-concept experiment to see if a local AI model can act as an automated gardener for a Personal Knowledge Management (PKM) system.

It’s a Python-based terminal UI built with Textual, using RAG (Retrieval-Augmented Generation) via Ollama. Because PKM data is highly personal, my aim was to be able to build a tool that can run entirely on a local GPU, meaning your private journal entries never have to leave your machine. (Though you can certainly connect it to much larger cloud models if you prefer.)
The workflow is broken down into 3 phases:
In this phase, Logsqueak reads your Logseq journal and helps identify which items are ephemeral daily noise (e.g., “Morning standup at 9am”) and which are actual knowledge or insight worth keeping.
Temporal context is stripped away, and additional context from parent bullet points is added in.
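The shape of this pre-processing step can be sketched in a few lines of Python. This is an illustrative sketch only, not Logsqueak's actual API: the real pipeline delegates the transformation to an LLM, and the function names here are invented.

```python
import re

# Strip a leading clock-time marker like "14:30 - " from a journal bullet.
TIME_PREFIX = re.compile(r"^\s*\d{1,2}:\d{2}\s*(?:am|pm)?\s*[-–]?\s*", re.IGNORECASE)

def strip_temporal(bullet: str) -> str:
    """Drop a leading clock-time marker from a journal bullet."""
    return TIME_PREFIX.sub("", bullet).strip()

def with_parent_context(parents: list[str], bullet: str) -> str:
    """Prepend parent bullet text so the insight stands on its own."""
    context = " > ".join(strip_temporal(p) for p in parents)
    leaf = strip_temporal(bullet)
    return f"{context}: {leaf}" if context else leaf

print(with_parent_context(
    ["Project X debugging"],
    "14:30 - the chart re-renders on every keystroke",
))  # prints: Project X debugging: the chart re-renders on every keystroke
```

The point is just that the bullet loses its "when" and gains enough "what" from its parents to be filed away as standalone knowledge.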
For example, a journal bullet like “useEffect hook was missing the empty dependency array” is rewritten as the timeless statement “The useEffect hook for the main chart includes an empty dependency array.”

In this final phase, the most semantically relevant pages in your Logseq graph are tracked down, and the best insertion point is identified. Logsqueak will suggest exactly which page and heading the new insight belongs under.
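The retrieval step boils down to ranking candidate pages by how close their embeddings are to the new insight's embedding. The toy sketch below illustrates the ranking idea only; the real tool obtains embeddings from a local model via Ollama, whereas these vectors and page names are invented for the example.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_page(insight_vec: list[float], pages: dict[str, list[float]]) -> str:
    """Return the page whose embedding is closest to the insight."""
    return max(pages, key=lambda name: cosine(insight_vec, pages[name]))

# Made-up embeddings for two pages in the graph.
pages = {
    "React Hooks":   [0.9, 0.1, 0.0],
    "Cooking Notes": [0.0, 0.2, 0.9],
}
print(best_page([0.8, 0.2, 0.1], pages))  # prints: React Hooks
```

Real embeddings have hundreds of dimensions, but the argmax-over-similarity logic is the same.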
Logseq is built around powerful block properties, so crucially this is where the traceability happens. When an insight is integrated, Logsqueak adds an extracted-to:: property to the original journal block, linking it directly to the new block. The new block on the target page gets an id:: property linking back. This means you can always jump from your polished knowledge base straight back to the original journal entry to see the full context of what you were doing that day.
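As a rough illustration (the file names and the block ID below are invented for this example), the linked pair of blocks might look like this in Logseq's Markdown:

```markdown
<!-- journals/2026_03_05.md (original journal block) -->
- Fixed the chart re-render bug today
  extracted-to:: ((665f1c2a-aaaa-bbbb-cccc-000000000001))

<!-- pages/React Hooks.md (target page) -->
- The useEffect hook for the main chart needs an empty dependency array
  id:: 665f1c2a-aaaa-bbbb-cccc-000000000001
```

The `id::` property makes the target block addressable, and the `((…))` reference on the journal block jumps straight to it.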
All writes are performed using a custom engine specifically built for Logseq’s Markdown format, ensuring your notes stay safe. Because this is a proof-of-concept, all writes are guarded by explicit user approval—Logsqueak won’t change your files without you saying “yes.”
Logsqueak requires Python 3.11+ and an AI assistant. You can use Ollama to run everything locally.
If you’re on Fedora, getting the prerequisites running is incredibly straightforward. Since Fedora Workstation ships with recent Python versions out of the box, you’re already halfway there. You just need to grab Ollama to run the models locally, set up a virtual environment, and you’re good to go:
# Assuming you've installed Ollama
git clone https://github.com/twaugh/logsqueak.git
cd logsqueak
./setup-dev.sh
source venv/bin/activate
logsqueak init
This tool can help you turn a pile of daily logs into a structured, searchable knowledge base. Although it can’t yet create new pages from scratch or be given custom instructions about how best to integrate things into the graph, it’s already useful enough for me to use in my daily routine.
It’s very much a proof-of-concept though, and I’d love to get some feedback from other developers and knowledge management enthusiasts. You can check out the code on GitHub.
Building Logsqueak made me realise just how much time I spend thinking about note-taking friction. While Logsqueak handles my fast, keyboard-driven daily logging, I actually do a lot of my deep thinking away from the screen on a Ratta Supernote e-ink tablet.
I recently found myself trying to solve a similar “black hole” problem over there. The result is Slipstream: a Zettelkasten framework to let you build infinitely nested idea networks by hand.
If you happen to be an e-ink user who prefers a stylus to a keyboard when you need to disconnect and focus, you might find it an interesting contrast. As a bonus, because Slipstream has a structured convention, exporting those handwritten notes to plain text makes them perfectly readable for the exact kind of LLM processing Logsqueak relies on. It’s analogue thinking, ready for the AI age.
The post Logsqueak: Rescuing Insights from the Logseq Journal appeared first on PRINT HEAD.
At the Uniqlo flagship store in Ginza, Tokyo, there was this T-shirt with an encoded shell script.


Well, I had to decode it and see the result.
I took a photo with my iPhone and used its built-in OCR, which confused 0 (zero), O (capital O) and 8, mixed up 1 and l (lowercase L), and rendered many characters as visually similar glyphs from higher Unicode ranges, which are invalid in Base64. It took me some time to fix it all. The final corrected text is this:
#!/bin/bash
eval "$(base64 -d <<< 'IyEvYmluL2Jhc2gKCiMgQ29uZ3Jhd
HVsYXRpb25zISBZb3UgZm91bmQgdGhlIGVhc3RlciBlZ2chIOKdpO+4jwojIOOBiuO
CgeOBp+OBqOOBhuOBlOOBluOBhOOBvuOBme+8gemaoOOBleOCjOOBn+OCteODl+OD
qeOCpOOCuuOCkuimi+OBpOOBkeOBvuOBl+OBn++8geKdpO+4jwoKIyBEZWZpbmUgd
GhlIHRleHQgdG8gYW5pbWF0ZQp0ZXh0PSLimaVQRUFDReKZpUZPUuKZpUFMTOKZpV
BFQUNF4pmlRk9S4pmlQUxM4pmlUEVBQ0XimaVGT1LimaVBTEzimaVQRUFDReKZpUZ
PUuKZpUFMTOKZpVBFQUNF4pmlRk9S4pmlQUxM4pmlIgoKIyBHZXQgdGVybWluYWwg
ZGltZW5zaW9ucwpjb2xzPSQodHB1dCBjb2xzKQpsaW5lcz0kKHRwdXQgbGluZXMpC
gojIENhbGN1bGF0ZSB0aGUgbGVuZ3RoIG9mIHRoZSB0ZXh0CnRleHRfbGVuZ3RoPS
R7I3RleHR9CgojIEhpZGUgdGhlIGN1cnNvcgp0cHV0IGNpdmlzCgojIFRyYXAgQ1RS
TCtDIHRvIHNob3cgdGhlIGN1cnNvciBiZWZvcmUgZXhpdGluZwp0cmFwICJ0cHV0I
GNub3JtOyBleGl0IiBTSUdJTlQKCiMgU2V0IGZyZXF1ZW5jeSBzY2FsaW5nIGZhY3R
vcgpmcmVxPTAuMgoKIyBJbmZpbml0ZSBsb29wIGZvciBjb250aW51b3VzIGFuaW1hd
Glvbgpmb3IgKCggdD0wOyA7IHQrPTEgKSk7IGRvCiAgICAjIEV4dHJhY3Qgb25lIGN
oYXJhY3RlciBhdCBhIHRpbWUKICAgIGNoYXI9IiR7dGV4dDp0ICUgdGV4dF9sZW5nd
Gg6MX0iCiAgICAKICAgICMgQ2FsY3VsYXRlIHRoZSBhbmdsZSBpbiByYWRpYW5zCiA
gICBhbmdsZT0kKGVjaG8gIigkdCkgKiAkZnJlcSIgfCBiYyAtbCkKCiAgICAjIENhb
GN1bGF0ZSB0aGUgc2luZSBvZiB0aGUgYW5nbGUKICAgIHNpbmVfdmFsdWU9JChlY2
hvICJzKCRhbmdsZSkiIHwgYmMgLWwpCgogICAgIyBDYWxjdWxhdGUgeCBwb3NpdGl
vbiB1c2luZyB0aGUgc2luZSB2YWx1ZQogICAgeD0kKGVjaG8gIigkY29scyAvIDIpIC
sgKCRjb2xzIC8gNCkgKiAkc2luZV92YWx1ZSIgfCBiYyAtbCkKICAgIHg9JChwcmlu
dGYgIiUuMGYiICIkeCIpCgogICAgIyBFbnN1cmUgeCBpcyB3aXRoaW4gdGVybWluY
WwgYm91bmRzCiAgICBpZiAoKCB4IDwgMCApKTsgdGhlbiB4PTA7IGZpCiAgICBpZi
AoKCB4ID49IGNvbHMgKSk7IHRoZW4geD0kKChjb2xzIC0gMSkpOyBmaQoKICAgICM
gQ2FsY3VsYXRlIGNvbG9yIGdyYWRpZW50IGJldHdlZW4gMTIgKGN5YW4pIGFuZCAyM
DggKG9yYW5nZSkKICAgIGNvbG9yX3N0YXJ0PTEyCiAgICBjb2xvcl9lbmQ9MjA4CiA
gICBjb2xvcl9yYW5nZT0kKChjb2xvcl9lbmQgLSBjb2xvcl9zdGFydCkpCiAgICBjb
2xvcj0kKChjb2xvcl9zdGFydCArIChjb2xvcl9yYW5nZSAqIHQgLyBsaW5lcykgJSBj
b2xvcl9yYW5nZSkpCgogICAgIyBQcmludCB0aGUgY2hhcmFjdGVyIHdpdGggMjU2L
WNvbG9yIHN1cHBvcnQKICAgIGVjaG8gLW5lICJcMDMzWzM4OzU7JHtjb2xvcn1tIiQ
odHB1dCBjdXAgJHQgJHgpIiRjaGFyXDAzM1swbSIKCiAgICAjIExpbmUgZmVlZCB0b
yBtb3ZlIGRvd253YXJkCiAgICBlY2hvICIiCgpkb25lCgo= ')"
When base64-decoded, this bash script appears:
#!/bin/bash
# Congratulations! You found the easter egg! ❤
# おめでとうございます!隠されたサプライズを見つけました!❤
# Define the text to animate
text="♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥PEACE♥FOR♥ALL♥"
# Get terminal dimensions
cols=$(tput cols)
lines=$(tput lines)
# Calculate the length of the text
text_length=${#text}
# Hide the cursor
tput civis
# Trap CTRL+C to show the cursor before exiting
trap "tput cnorm; exit" SIGINT
# Set frequency scaling factor
freq=0.2
# Infinite loop for continuous animation
for (( t=0; ; t+=1 )); do
# Extract one character at a time
char="${text:t % text_length:1}"
# Calculate the angle in radians
angle=$(echo "($t) * $freq" | bc -l)
# Calculate the sine of the angle
sine_value=$(echo "s($angle)" | bc -l)
# Calculate x position using the sine value
x=$(echo "($cols / 2) + ($cols / 4) * $sine_value" | bc -l)
x=$(printf "%.0f" "$x")
# Ensure x is within terminal bounds
if (( x < 0 )); then x=0; fi
if (( x >= cols )); then x=$((cols - 1)); fi
# Calculate color gradient between 12 (cyan) and 208 (orange)
color_start=12
color_end=208
color_range=$((color_end - color_start))
color=$((color_start + (color_range * t / lines) % color_range))
# Print the character with 256-color support
echo -ne "\033[38;5;${color}m"$(tput cup $t $x)"$char\033[0m"
# Line feed to move downward
echo ""
done
The original encoded text, when executed in my Linux terminal, gives a beautiful animation, similar to this:

After decoding and executing the script, I found other people doing the same:
But I swear I decoded it myself first, without any help.
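If I ever do this again, I'd script part of the OCR cleanup instead of fixing it all by hand. The sketch below is my own throwaway approach, not anything from the original workflow: the substitution table is just a guess at the usual look-alike suspects, and anything left over that can't appear in Base64 gets flagged for manual review.

```python
import base64
import string

# The full Base64 alphabet, including padding.
B64_ALPHABET = set(string.ascii_letters + string.digits + "+/=")

# Map common Unicode look-alike glyphs back to plain ASCII.
LOOKALIKES = str.maketrans({
    "О": "O",  # Cyrillic capital O
    "Ι": "I",  # Greek capital iota
    "ℓ": "l",  # script small l
})

def suspicious_chars(text: str) -> set[str]:
    """Characters that cannot appear in Base64 and still need manual review."""
    return {c for c in text if c not in B64_ALPHABET and not c.isspace()}

cleaned = "aGVsbG8gd29ybGQ=".translate(LOOKALIKES)
assert not suspicious_chars(cleaned)
print(base64.b64decode(cleaned).decode())  # prints: hello world
```

This wouldn't catch the 0/O/8 and 1/l swaps, which are all valid Base64 characters; those still need eyeballing (or brute-force decoding of the candidates).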
Fedora Documentation translations will be put on hold from March 4th, as the Fedora Localization Team has started the process of migrating from pagure.io to the Fedora Forge. From that date, translation projects of the documentation (with ‘fedora-docs-l10n’ in the name) will be gradually locked on the Fedora translation platform. Translation automation of the Docs website will also be stopped in the Fedora infrastructure. Consequently, no translation updates will appear in the language versions of the Fedora Documentation.
The migration involves all repositories which support and ensure the availability of translations of the Fedora Documentation. The migration cannot be performed ‘on the fly’, as changes to the repositories, the related scripts, and the continuous integration with the translation platform cannot be dealt with independently. Therefore the translation process of the Fedora Documentation is kept on hold.
We regretfully ask Fedora contributors, our translation community, to refrain from translating the Fedora Documentation and to wait until the translation automation of the documentation resumes.
The progress of migration can be followed in the localization tracker as issue #52.
The post Fedora Documentation translations not available from March 4th, 2026 appeared first on Fedora Community Blog.
After reading about the billions spent on AI infrastructure, I kept wondering: how much of that storage is just… the same bytes over and over? So I decided to find out. As I’m involved with https://github.com/device-mapper-utils/blk-archive, I thought it would be good to understand how much storage blk-archive can realistically save when pointed at AI-style datasets.
Let me be clear upfront: this isn’t a speed test or hardware benchmark. I’m only interested in one question: how many bytes go in, and how many come out?
I do not work with AI tools. This is not advice from experience of working with AI. It is advice from working with access controls in general.
Any agent has responsibility and authority. Responsibility is what it is required to produce. Authority is the set of resources that you provide to that agent. This does not change whether the agent is human or automation, and AI agents fall into the latter category.
The way to limit what an agent can do is to allow it access to nothing, and then see what it requests access to. If that resource is reasonable, provide access.
The best example I can point to for a workflow like this is SELinux. When a new program is added to Fedora or a comparable OS, it requires an update to the SELinux policy to say what files it can read, write, or execute.
To generate this policy, the developer runs the program on a scratch system in permissive mode and records the AVC (access vector cache) denials, the places where the policy would have denied the program access to a resource. The engineer can then look at the set of resources and build a new policy that allows access to those resources, and only those. If one of the requests is suspect, the SELinux policy team is unlikely to accept the updated policy.
You should not require an agent to self-limit its access. We don't trust humans to do that; we certainly should not expect automation to do it.
An agent should not be able to make any web request by default. No posts to GitHub or GitLab, or Wells Fargo, or the NSA. Every URL, every host should be denied until authorized.
However, yes-no access alone may not be sufficient. A request to read or write or make a web call may be perfectly innocuous. And a payment made for a small resource may be perfectly acceptable. But filling up a hard disk, or deleting files, or emptying a bank account are all issues of scale. Any resource that can be exhausted requires limits. Quotas are hard, and delegation of quotas to other systems is even harder. But not impossible. I wrote about it a while back: https://adam.younglogic.com/2018/05/tracking-quota/
If your agent is supposed to write code, let it write code in a sandbox. Let the human delegating to the agent take the responsibility of promoting that code to a live system. Do not allow it to delete database schemas that you would not let a human delete.
The year is rolling along, and here we are at the end of Feb.
There were a lot of small day to day investigations and incoming requests, along with a pretty large amount of pull requests for our ansible repo. Since we are in Beta freeze some of them will have to wait, but some we can test out in staging now. It's great to see people submitting fixes and enhancements.
There were also some small fun to debug issues this week, including:
The https://whatcanidoforfedora.org/ site was sometimes alerting that its SSL cert had expired. It turns out this was caused by that domain having old IPs for two proxies that had moved datacenters. So sometimes the check hit those, timed out, and just assumed the cert was bad. So, it was DNS. :)
The fedorapeople.org web server started being very slow to respond. It turns out scrapers were hitting the cgit interface there and downloading xz snapshots of every commit, forcing the server to compress things over and over again. So, for now I just disabled those links and increased resources on the web server. Scrapers continue to be the gift that keeps on giving.
Much of my time this week has been spent working on our new secure boot signing workflow. This is really really overdue and something I was hoping to finish mid last year, but things kept coming up and it kept getting pushed back.
The new setup leverages our existing signing infrastructure (sigul) so there's no need for special build hardware anymore. It also removes some constraints in the existing setup allowing us to do something we have wanted for a long time, namely sign aarch64 boot loader artifacts for secure boot.
Kudos to Jeremy Cline for all his work on the code to make this possible. This uses siguldry-bridge, a Rust-based server, to talk to sigul, and hopefully before too long we can replace the sigul server side with the new Rust-based server too.
I got everything deployed, I am now able to sign things, but in testing on my aarch64 laptop, there's still some issue with grub2 that needs to be sorted out. Hopefully it's something not too difficult to track down and we can move to this new setup after beta freeze once and for all.
As always, comment on mastodon: https://fosstodon.org/@nirik/116149442549416772
This guide consolidates the steps required to establish a professional-grade LaTeX environment into a single, clean workflow. Following this chronological order ensures that directories exist, dependencies are met, and the environment is correctly configured before the software attempts to utilize them.
By installing TeX Live in user-space (within ~/.local/opt), we adhere to a minimalist philosophy. This approach avoids cluttering the system root, removes the need for sudo during package management via tlmgr, and ensures you are using the most recent upstream binaries rather than potentially outdated distribution packages.
Fedora's "minimal" Perl and X11 profiles often lack specific libraries required by the TeX Live installer and its graphical management tools. While we aim for a lean system, these Perl modules are essential for the installer's checksum verification and the tlmgr interface.
Install these requirements using dnf:
sudo dnf install perl-File-Find perl-File-Copy perl-Digest-MD5 perl-Tk perl-Term-ANSIColor xdotool
We will create a "user-space system" directory. Utilizing ~/.local/opt keeps your home directory organized while mimicking the Linux Filesystem Hierarchy Standard (FHS).
mkdir -p ~/.local/bin
mkdir -p ~/.local/opt/texlive
To avoid "checksum mismatch" errors common with unstable local mirrors, we will point the installer directly to the Comprehensive TeX Archive Network (CTAN) global repository.
cd ~/Downloads
wget http://mirror.ctan.org/systems/texlive/tlnet/install-tl-unx.tar.gz
tar -xzf install-tl-unx.tar.gz
cd install-tl-*
Launch the installer using a reliable global mirror:
./install-tl -repository http://mirror.ctan.org/systems/texlive/tlnet/
Fedora needs to be instructed where to find these new binaries. We will create a modular profile script in ~/.bashrc.d/ to keep our ~/.bashrc clean.
Create the configuration file:
vi ~/.bashrc.d/texlive.sh
Paste the following content into the file:
# TeX Live 2025 Path Configuration
TEX_BIN_PATH="/home/porfirio/.local/opt/texlive/2025/bin/x86_64-linux"
if [ -d "$TEX_BIN_PATH" ]; then
# Prioritize TeX binaries and your local bin folder
export PATH="$TEX_BIN_PATH:/home/porfirio/.local/bin:$PATH"
export INFOPATH="/home/porfirio/.local/opt/texlive/2025/texmf-dist/doc/info:$INFOPATH"
export MANPATH="/home/porfirio/.local/opt/texlive/2025/texmf-dist/doc/man:$MANPATH"
fi
Apply the changes to your current session:
source ~/.bashrc
With the paths set and Perl dependencies satisfied, manually execute the tasks that occasionally fail during the installer's finalization stage due to environment restrictions:
# Set default paper size
tlmgr paper letter
# Build the format files (Engines)
fmtutil-sys --all
To ensure your system is ready for document production, perform a "smoke test" by checking the binary location and compiling a dummy document.
which pdflatex
This should return a path pointing to your ~/.local/bin/ or ~/.local/opt/ folder rather than /usr/bin/.
echo "\documentclass{article}\begin{document}LaTeX 2025 is Live on Fedora!\end{document}" > test.tex
pdflatex test.tex
For future maintenance on your system, keep these paths in mind:
You now have a professional-grade, upstream LaTeX installation that is completely independent of Fedora's system packages, making it easy to back up or migrate to a new machine in the future.
I have released version 0.4.5 of patchutils and also built it in Fedora rawhide. This is a stability-focused update fixing compatibility issues and bugs, some of which had been introduced in 0.4.4.
Version 0.4.4 added support in the filterdiff suite for Git’s extended diff format. Git diffs without content hunks (such as renames, copies, mode-only changes and binary files) were included in the output. This broke compatibility with 0.4.3.
For 0.4.5 this functionality is now gated with a --git-extended-diffs=include|exclude parameter. The default for 0.4.x is to exclude files in Git extended diffs with no content. There were also some fixes relating to file numbering for these types of diffs.
Note: in 0.5.x this default will change to include.
Previously grepdiff --status showed ! for all matching files, but now it correctly reports them as additions (+), removals (-) or modifications (!).
As always, bug reports and feature requests are welcome on GitHub. Thanks to everyone who reported issues and helped to test fixes!
The post Patchutils 0.4.5 released appeared first on PRINT HEAD.
This is a report created by the CLE Team, a team of community members working in various Fedora groups, for example Infrastructure, Release Engineering, and Quality. This team also moves forward some initiatives inside the Fedora Project.
Week: 20 Feb – 27 Feb 2026
This team is taking care of day-to-day business regarding Fedora Infrastructure.
It’s responsible for services running in Fedora infrastructure.
Ticket tracker
This team is taking care of day-to-day business regarding CentOS Infrastructure and CentOS Stream Infrastructure.
It’s responsible for services running in CentOS Infrastructure and CentOS Stream.
CentOS ticket tracker
CentOS Stream ticket tracker
This team is taking care of day-to-day business regarding Fedora releases.
It’s responsible for releases, retirement process of packages and package builds.
Ticket tracker
This is the summary of the work done regarding the RISC-V architecture in Fedora.
This is the summary of the work done regarding AI in Fedora.
This team is taking care of the quality of Fedora: maintaining CI, organizing test days, and keeping an eye on the overall quality of Fedora releases.
This team is working on the introduction of https://forge.fedoraproject.org to Fedora and the migration of repositories from pagure.io.
This team is working on keeping EPEL running and helping to package things.
This team is working on improving user experience: providing artwork, usability, and general design services to the Fedora Project.
If you have any questions or feedback, please respond to this report or contact us in the #admin:fedoraproject.org channel on Matrix.
The post Community Update – Week 9, 2026 appeared first on Fedora Community Blog.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.5.4RC1 are available
RPMs of PHP version 8.4.19RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.3 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.5 as Software Collection:
yum --enablerepo=remi-test install php85
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Update of system version 8.5:
dnf module switch-to php:remi-8.5
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.4:
dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
Software Collections (php84, php85)
Base packages (php)
I was interviewed on Veritasium about the rise of Linux and the XZ hack.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 357, cockpit-files 37, cockpit-machines 350 and cockpit-podman 123:
To help with accessibility and user-specific items in your browser context menu, we now follow the behavior seen
in Firefox: holding the Shift key while right-clicking will skip our own context menu and instead follow the
default behavior of your browser. Note that while Cockpit Files uses the Cockpit context menu, it did not
receive this update before the release, but will get it in the next one.
Cockpit now supports the --extra-args option of virt-install. This allows you to pass arguments to the
kernel when the installer is booted during the creation of a new virtual machine. Installers like Anaconda and
Agama can be controlled in various ways via these arguments.
Thanks to Nykseli for this contribution!
The image details tab now shows a select set of labels from the OpenContainers Annotations specification.

Cockpit 357, cockpit-files 37, cockpit-machines 350 and cockpit-podman 123 are available now:
Here is a quick how-to for upgrading the default PHP version provided on Fedora, RHEL, CentOS, AlmaLinux, Rocky Linux, or other clones to the latest version, 8.5.
You can also follow the Wizard instructions.
The repository is available for x86_64 (Intel/AMD) and aarch64 (ARM).
On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS), the Extra Packages for Enterprise Linux (EPEL) and CodeReady Builder (CRB) repositories must be configured.
dnf install https://rpms.remirepo.net/fedora/remi-release-44.rpm
dnf install https://rpms.remirepo.net/fedora/remi-release-43.rpm
dnf install https://rpms.remirepo.net/fedora/remi-release-42.rpm
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-10.rpm
subscription-manager repos --enable codeready-builder-for-rhel-10-x86_64-rpms
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
subscription-manager repos --enable codeready-builder-for-rhel-9-x86_64-rpms
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-10.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-10.rpm
crb enable
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
crb enable
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-8.rpm
crb enable
With Fedora and EL, you can simply use the remi-8.5 stream of the php module
With Fedora (dnf5 has partial module support)
dnf module reset php
dnf module enable php:remi-8.5
dnf install php-cli php-fpm php-mbstring php-xml
Other distributions (dnf4)
dnf module switch-to php:remi-8.5/common
By choice, the packages have the same name as in the distribution, so a simple update is enough:
dnf update
That's all :)
$ php -v
PHP 8.5.3 (cli) (built: Feb 10 2026 18:25:51) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository #StandWithUkraine
Zend Engine v4.5.3, Copyright (c) Zend Technologies
with Zend OPcache v8.5.3, Copyright (c), by Zend Technologies
The upgrade can fail (by design) when some installed extensions are not yet compatible with PHP 8.5.
See the compatibility tracking list: PECL extensions RPM status
If these extensions are not mandatory, you can remove them before the upgrade; otherwise, you must be patient.
Warning: some extensions are still under development, but it seems useful to provide them so that more people can upgrade and give feedback to the authors.
If you prefer to install PHP 8.5 beside the default PHP version, this can be achieved using the php85 prefixed packages, see the PHP 8.5 as Software Collection post.
You can also try the configuration wizard.
This is also documented as the community way to install PHP 8.5 on the official PHP web site.
The packages available in the repository were used as sources for Fedora 44.
By providing a full-featured PHP stack with about 150 available extensions and 11 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and serving 300,000 downloads per day, the remi repository has over the last 21 years become a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).
See also:
Hitler became Chancellor of Germany in January 1933.
On 10 February 1933, Hitler gave his first official speech as Chancellor.
He used the occasion to attack his predecessors for everything that had transpired against German interests since the end of World War One.
Many copies of the speech are available online:
That’s what Germany looks like today! Under the rule of these parties who have ruined our Volk for fourteen years. The only question is, for how much longer?
...
And this brings us thus to our sixth item, clearly the goal of our struggle: the preservation of this Volk and this soil, the preservation of this Volk for the future, in the realization that this alone can constitute our reason for being.
The word Volk, which means People or Population, appears forty-two times in the speech.
On 10 February 2026, the Debianist leader Andreas Tille reprimanded one of his predecessors, G. Branden Robinson, just as Adolf Hitler had reprimanded the predecessors who ruled Germany in the fourteen years before the Nazis took power.
While Hitler claimed to represent the Volk / People, Andreas Tille claims to represent the user, whoever that is, in this spiteful reply:
... directly harms our ability to serve our users
...
the project owes first and foremost to its users.
...
Andreas Tille was, on two successive occasions, elected on Hitler's birthday. Go figure.
Read more about Nazi comparisons.
Well, another Saturday, time for another bit of longer-form recapping of what has been going on in Fedora infrastructure and other areas for me.
We started the beta freeze in infrastructure. This is to make sure that we don't cause any problems for the release building and distributing pipeline. We require some acks for any changes to things that might impact those things until the day after the Beta is released.
I think this has served us fine over the years. Every once in a while I wonder if we could just stop doing it, as we are usually pretty good about not breaking things day to day, but I think having the extra eyes on changes and slowing down a bit is a good thing.
We have been busy working on migrating things from pagure.io to forge.fedoraproject.org. On Tuesday, just before the freeze, we finally got our ansible repo moved over. I've really been looking forward to this, as the review interface in forgejo is a good deal nicer than the pagure one. I've already used it to great effect.
We do still have a few more things to migrate, but overall it's moving along nicely.
We finally finished off the last of the things (at least that I am aware of) we moved last December from rdu2-cc to rdu3.
There was a very strange and difficult to figure out problem for copr builders on ipv6 that I wasn't able to track down, but luckily Pavel worked with networking and finally did so! It seems to have been an odd caching bug in the switches. Hopefully it's now gone once and for all.
There were some hardware issues to sort out: some bad network cards that had to be replaced, a machine that didn't actually move when it was supposed to, etc.
Anyhow, I hope all that work is finally done.
Finally got back to deploying / testing the new signing path for secure boot signing. I got it all deployed, just need to get things tested now and hopefully we can switch over after the freeze.
This should hopefully allow us to sign aarch64 kernels for secure boot as well as removing reliance on an old smart card for signing.
As always, comment on mastodon: https://fosstodon.org/@nirik/116110354434738317
Potential GSoC contributors may reach out with questions about our project ideas or GNOME internships in general. Please direct them to gsoc.gnome.org to learn more.
You can find our proposed project ideas at gsoc.gnome.org/2026.
Project proposal submissions are open from March 16th to 31st.
Welcome to another GNOME Foundation update post, covering highlights from the past two weeks (this week and last week). It’s been a busy time, particularly due to conference planning and our upcoming audit – read on to find out more!
We were thrilled to be able to announce the location and dates of this year’s Linux App Summit this week. The conference will happen in Berlin on the 16th and 17th of May, at Betahaus Berlin. More information is available on the LAS website.
As usual, we are very pleased to be collaborating with KDE on this year’s LAS. Our partnership on LAS has been a real success that we hope to continue.
Travel sponsorship for LAS 2026 is available for Foundation members through the Travel Committee, so head over to the travel page if you would like to attend and need financial support.
The Board of Directors held its regular monthly meeting last week, on 9th February. Highlights from the meeting included:
The next Board meeting is scheduled for March 9th.
As I’ve mentioned in previous updates, the GNOME Foundation is due to be audited very soon. This is a routine occurrence for non-profits like us, but this is our first formal audit, so there’s a good deal of learning and setup to be done.
Last week was the deadline to submit all the documentation for the audit, which meant that many of us were extremely busy finalising numbers, filling in spreadsheets, and tidying up other documentation ready to send it all to the auditors.
Our finance team *really* went the extra mile for us to get everything ready on time, so I’d like to give them a huge thank you for helping us out.
The audit inspection itself will happen in the first week of March, so preparations continue, as we assemble and organise our records, update our policies, and so on.
Planning for this summer’s conference has continued over the past two weeks. In case you missed it, the location and dates have been announced, and accommodation bookings are open at a reduced rate. In the background we are gearing up to open the call for papers, and the sponsorship effort is on its way. Now is a good time to start thinking about any talk proposals that you’d like to submit.
A cool community effort is currently underway to provide certificates for GNOME Foundation members. This is a great idea in my opinion, as it will allow contributors to get official recognition which can be used for job applications and so on. More volunteers to help out would definitely be welcome.
That’s it for this week. Thanks for reading, and feel free to ask questions in the comments.
This page explains what Kiwi TCMS Community Edition is, how it ships and what risks are associated with it. Please read about the details below.
This is the official version of the Kiwi TCMS application as produced by
our own team with the help of many contributors. It may also be
referred to as the upstream or community edition version, and comes
packaged as a container image which is publicly available and can be downloaded via the
docker pull pub.kiwitcms.eu/kiwitcms/kiwi command!
Community Edition is suitable for developers, teams and organizations which are happy to run Kiwi TCMS without any warranty and safeguards for development, testing and even production purposes!
You get access to the publicly available container image, public documentation and source code, but not much else!
Upstream container image: Kiwi TCMS is packaged as a single container image. See Running Kiwi TCMS as a container to get started!
Only x86_64 build: the community edition container is built on Linux, suitable for Intel/AMD 64bit processors only.
No version tags: the community edition container is always the latest version!
There are no other versions available!
GPL-2.0 license: Kiwi TCMS is open source software with a very long history; its primary core is licensed under the GNU GPL-2.0 license!
No warranty: the community edition version of Kiwi TCMS is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
EthicalAds: the community edition version of Kiwi TCMS comes with built-in advertisements from EthicalAds, the rewards from which are paid out to opencollective.com/kiwitcms for transparency.
You assume all risk: you must understand and accept that open source comes with associated risks. To name just a few: our team disappears and stops development; security issues stay unresolved; code slowly becomes outdated & may incur technical debt in the case of one-off patches! All of these have happened in the past with the predecessor of what is now Kiwi TCMS.
There is no charge but please understand that software isn't zero cost! You pay for it indirectly with time and effort invested in engineering, operations, maintenance and resolving any issues which may arise from the use of said software, regardless of whether it is open source or not!
IMPORTANT: open source, and Kiwi TCMS by extension, provides you with more options when it comes to mitigating risks and cost compared to proprietary software. However, the easiest way to secure the future of Kiwi TCMS is to become a customer and help us sustain development!
Happy Testing!
If you like what we're doing and how Kiwi TCMS helps you, please help us!
This page explains what Private Tenant Extras is and how it brings more value to your existing Kiwi TCMS subscription. Please read about the details below.
This is an optional subscription tier which combines our existing Private Tenant SaaS hosting with access to the underlying data in its raw format!
IMPORTANT: any Private Tenant subscription with unit count > 1 entitles you to this Extras add-on! Please contact support for setup!
Private Tenant Extras is suitable for teams using the SaaS version of Kiwi TCMS which require access to their underlying data in machine readable format!
Everything from lower tier subscription plans plus access to a database and file exports in case you would like to keep your own backup copy or provide in-house integration with other tools.
Raw SQL database export: you will receive a database export in SQL format,
suitable for the Postgres database engine. It includes all tables which constitute your
own namespace under the *.tenant.kiwitcms.org domain name. This also includes
information about user accounts authorized to access your tenant!
IMPORTANT: due to technical and security limitations we cannot give you direct access to the underlying database cluster in real-time!
File uploads included: means exactly that - all attachments uploaded to your private tenant will be included as part of this subscription!
Encrypted access: all data we export is stored encrypted and may be accessed using the popular open source tool restic. We will provide you with read-only level access and a unique public/private keypair!
IMPORTANT:
Storage region of your choice: means that we can publish to a geographic region and data center of your choice. Exact locations are subject to availability.
NOTE: At the time of writing our preferred storage backend is Amazon S3.
Happy Testing!
If you like what we're doing and how Kiwi TCMS helps you, please help us!
Another weekly recap of happenings around fedora for me.
I spent a fair bit of time looking at one of our proxies. We have them all do a reload (aka 'graceful restart') every hour when we update a ticketkey on them. For the vast majority of them, that's fine and works as expected. However, proxy11 decided to start taking a while (like 12-15 seconds) to reload, causing our monitoring to alert that it was down... then back up.
In the end, it seemed the problem was somehow related to some old tls certificates that were present, but not used anywhere. All I can think of is that it's doing some kind of parsing of all certs and somehow those old ones cause it undue processing time. I removed those old certs and reload times went way back down again.
I'm tempted to try and figure out what it's doing exactly here, but I already spent a fair bit of time on it and it's working again now, so I guess I will just shrug and move on.
A while back I had to hurriedly deploy anubis in front of our download servers. This was due to the scrapers deciding to just download every rpm / iso from every Fedora release since the dawn of time at a massive concurrency. This was saturating one of our 10G links completely, and making another somewhat full. So, I deployed anubis and it dropped things back to 'normal' again.
Fast forward to this last week, and my rush in deploying anubis came back to bite me. We have a CloudFront distribution that uses our download servers as its 'origin'. Then we point all AWS network blocks to use that for any Fedora instances in AWS. This is a win for us, as everything for them is then cached on the AWS side, saving bandwidth, and a win for AWS users, as that traffic is 'local' to them, so it's faster and doesn't cause them to be billed for ingress either.
Last week, anubis started blocking CloudFront, so users in AWS would get an anubis challenge page instead of the actual content they were expecting. But why did this just happen now? Well, as near as I could determine, someone/scrapers were hitting the CloudFront endpoints and crawling our download server (fine, no problem there), but then they hit a directory that they handled poorly.
The directory was last used/updated about 11 years ago, with a readme file explaining that the content had moved and was no longer there. Great. However, it also had its previous subdirectories as links to '.' (ie, the current directory). Since scrapers don't use any of the 20 years of crawling code, and instead just brute force things, this resulted in a bunch of requests like:
GET /foo/
GET /foo/foo/
GET /foo/foo/foo/
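The self-referential layout is easy to reproduce locally; here is a minimal sketch (the /tmp/trap path and the 'foo' name are invented for illustration):

```shell
# A stale directory whose old subdirectory entry is a symlink back to
# '.' (its own parent):
mkdir -p /tmp/trap
ln -sfn . /tmp/trap/foo
# Each extra /foo component resolves to the same directory, so a crawler
# that blindly appends path components never runs out of "new" URLs:
[ "$(cd /tmp/trap/foo/foo/foo && pwd -P)" = "$(cd /tmp/trap && pwd -P)" ] && echo "same directory"
```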
and so on. These are all really small (just a directory listing), which meant they could make requests really, really fast. So, at some point anubis started challenging those CloudFront connections and boom.
So, the problem with the hurried deployment I had made there was that the policy file I had deployed was not actually being used. I had allowed CloudFront, but it didn't seem to help any, and it took me far too long to figure out that anubis was starting up, printing one error about not being able to read the policy file, and just running with the default configuration. ;( It turned out to be a podman/selinux interaction and is now fixed.
I also removed those . links and set that directory tree to just 403 all requests to it.
Also this week, folks were reporting problems with our new forgejo forge. Anubis was doing challenges when people were trying to submit comments and it was messing them up.
In the end here, I just needed to adjust the config to allow POSTs through. At least right now scrapers aren't doing any POSTs, so just allowing those seems to fix the issues people were having.
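For reference, a sketch of the kind of rule involved — this assumes anubis's expression-based bot policies, and the rule name is made up; check the exact syntax against the anubis documentation:

```yaml
# botPolicies.yaml fragment (hypothetical): let non-challenge-worthy
# POST requests through, since scrapers currently only issue GETs.
bots:
  - name: allow-post-requests
    action: ALLOW
    expression: method == "POST"
```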
Friday we had them hitting release-monitoring.org. This time it was what I am calling a 'type 0' scraper. It was all coming from one cloud ip and I could just block them.
This morning a bit ago, we had a group hit/find the 'search' button on koji.fedoraproject.org, taking it offline. I was able to block the endpoint for a few hours and they went away, but no telling if they will be back. These were the 'type 2' kind (a botnet using users' IPs/browsers from hundreds of thousands of different IPs).
I am sad that the end game here sounds like there's not going to be much of an open internet anymore. ie, in self-defense, sites will all have to go to requiring registration of some kind before working. I can only hope business models change before it comes to that.
As always, comment on mastodon: https://fosstodon.org/@nirik/116070476999694239
This topic came up at the kernel maintainers summit, and some other groups have been playing around with it, particularly the BPF folks, and Chris Mason's work on kernel review prompts[1] for regressions. Red Hat have asked engineers to investigate some workflow enhancements with AI tooling, so I decided to let the vibecoding off the leash.
My main goal:
- Provide AI led patch review for drm patches
- Don't pollute the mailing list with them at least initially.
This led me to wanting to use lei/b4 tools, and public-inbox. If I could push the patches with message-ids and the review reply to a public-inbox I could just publish that and point people at it, and they could consume it using lei into their favorite mbox or browse it on the web.
I got claude to run with this idea, and it produced a project [2] that I've been refining for a couple of days.
I started by trying to use Chris' prompts, but screwed that up a bit due to sandboxing; then I started iterating on them and diverged.
The prompts are very directed at regression testing and single patch review, the patches get applied one-by-one to the tree, and the top patch gets the exhaustive regression testing. I realised I probably can't afford this, but it's also not exactly what I want.
I wanted a review of the overall series, but also a deeper per-patch review. I didn't really want to have to apply the patches to a tree, as for drm patches it is often difficult to figure out the base tree. I did want to give claude access to a drm-next tree so it could try to apply the patches; if that worked it might improve the review, and if not it would fall back to just using the tree as a reference.
Some holes claude fell into: when run in batch mode, claude has limits on the turns it can take (opening patch files, opening kernel files for reference, etc.), and giving it a large context can sometimes not leave it enough space to finish reviews on large patch series. It tried to inline patches into the prompt before I pointed out that would be bad, and it tried to use the review instructions and open a lot of drm files, which ran out of turns. In the end I asked it to summarise the review prompts with some drm-specific bits and produce a working prompt. I'm sure there is plenty of tuning left to do with it.
Anyways I'm having my local claude run the poll loop every so often and processing new patches from the list. The results end up in the public-inbox[3], thanks to Benjamin Tissoires for setting up the git to public-inbox webhook.
I'd like for patch submitters to use this for some initial feedback, but it's also something that you should feel free to ignore. That said, if we find regressions in the reviews and they've been ignored, then I'll start suggesting it more strongly. I don't expect reviewers to review it unless they want to. It was also suggested that perhaps I could fold review replies, as they happen, into another review; this might have some value, but I haven't written it yet. If there are replies at the time of the initial review of a patch it will parse them, but it won't do so later.
[1] https://github.com/masoncl/review-prompts
[2] https://gitlab.freedesktop.org/airlied/patch-reviewer
[3] https://lore.gitlab.freedesktop.org/drm-ai-reviews/
RPMs of PHP version 8.5.3 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.4.18 are available in the remi-modular repository for Fedora ≥ 42 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ These versions are also available as Software Collections in the remi-safe repository.
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for versions 8.2.30 and 8.3.30.
Version announcements:
ℹ� Installation: Use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.5 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.5/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.5
dnf update
Parallel installation of version 8.5 as Software Collection
yum install php85
Replacement of default PHP by version 8.4 installation (simplest):
On Enterprise Linux (dnf 4)
dnf module switch-to php:remi-8.4/common
On Fedora (dnf 5)
dnf module reset php
dnf module enable php:remi-8.4
dnf update
Parallel installation of version 8.4 as Software Collection
yum install php84
And soon in the official updates:
⚠️ To be noticed:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84 / php85)
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 356:
The systemd timer units created by Cockpit now run their command via
the shell. Previously, the text you entered into the “Command” field
was used directly for the ExecStart value in the systemd service unit
for the timer, and was thus subject to systemd specifier expansion and
other idiosyncrasies. Now the text you enter is executed directly by /bin/sh.
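Roughly, the change looks like this in the generated service unit (the command shown is invented for illustration):

```ini
# Before: the command text went straight into ExecStart, so systemd
# specifiers like %H were expanded and shell syntax was not understood:
#   ExecStart=echo backup for %H
# Now: the text is handed to the shell, so quoting, $VARS, pipes and
# command substitution behave the way you'd expect from a terminal:
ExecStart=/bin/sh -c 'echo backup for $(hostname)'
```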
Systemd timer units that have been created by this version of Cockpit (or later) can now also be edited in Cockpit.
Thanks to Miguel Ribeiro for contributing this feature!
Cockpit 356 is available now:
I love the Ratta Supernote e-ink devices. They are a delight to use for writing, planning and all sorts of things. Writing things out by hand helps me connect with them on a deeper level because I need to choose the words more carefully than I would when typing, simply because of the speed difference. The Supernote’s onboard handwriting recognition works really well for my handwriting style.
At work I also love using Logseq, an open source personal knowledge management tool and outliner. I use it to write down things I learn and ideas I have, then connect things together in interesting ways in order to gain deeper understanding.
That’s the experience I wanted on the Supernote. Then I discovered Zettelkasten and knew it was what I wanted to use. ZK is a paper-based method and there have been adaptations to all sorts of environments, both physical and virtual. When I started I found it difficult to figure out how to make it feel native on a Supernote.
I also knew I wanted to be able to export my Supernote Zettelkasten as text in case I wanted to process it on a computer in different ways, or even run an AI assistant on it.
My first attempt was pretty messy! I’d create a new handwritten notebook file for each idea. Each filename had a timestamp (pre-filled by Supernote when you create a new file) along with a short title.
I found pysn-digest, a tool which is able to convert Supernote notebooks into Markdown files. So I worked out an elaborate system in which each notebook (each idea) had a level one heading for the title, then level two headings for metadata field names like “type” or “relates to”, then a level one heading again to start the content. The filenames were handwritten (timestamps and all) and I made them links to the idea files. It was really process-heavy and I didn’t stick with it for long before realising I needed to improve it.

Many months later I have evolved this system into a framework I’m really pleased with. It uses only level one headings and all the ideas can be in a single notebook (or split across notebooks if preferred). There are no complicated timestamps or numbering systems, only unique titles.
Idea notes can link together arbitrarily but also cluster together neatly, as well as nest as deeply as I need them to. I don’t have to keep to the Supernote’s native limit of four heading levels. And thanks to a system of templates and stickers I designed, I can easily see how deep in the tree I am, even from the pages overview. This means it is fast to insert a new idea note into the right place.
I’ve called it Slipstream. If you have a Supernote and want to skip the trial and error, I’ve packaged the entire framework (PDF user guide, templates, stickers, quick reference and notebooks) into a ready-to-use kit. Download the full Slipstream kit here.
The post Zettelkasten on a Supernote appeared first on PRINT HEAD.
Welcome to another GNOME Foundation weekly update! FOSDEM happened last week, and we had a lot of activity around the conference in Brussels. We are also extremely busy getting ready for our upcoming audit, so there’s lots to talk about. Let’s get started.
FOSDEM happened in Brussels, Belgium, last weekend, from 31st January to 1st February. There were lots of GNOME community members in attendance, and plenty of activities around the event, including talks and several hackfests. The Foundation was busy with our presence at the conference, plus our own fringe events.
Seven of our nine directors met for an afternoon and a morning prior to FOSDEM proper. Face-to-face hackfests are something that the Board has done at various times previously, and they have always been a very effective way to move forward on big-ticket items. This event was no exception, and I was really happy that we were able to make it happen.
During the event we took the time to review the Foundation’s financials, and to make some detailed plans in a number of key areas. It’s exciting to see some of the initiatives that we’ve been talking about starting to take more shape, and I’m looking forward to sharing more details soon.
The afternoon of Friday 30th January was occupied with a GNOME Foundation Advisory Board meeting. This is a regular occurrence on the day before FOSDEM, and is an important opportunity for the GNOME Foundation Board to meet with partner organizations and supporters.
Turnout for the meeting was excellent, with Canonical, Google, Red Hat, Endless and PostmarketOS all in attendance. I gave a presentation on how the Foundation is currently performing, which seemed to be well-received. We then had presentations and discussion amongst Advisory Board members.
I thought that the discussion was useful, and we identified a number of areas of shared interest. One of these was around how partners (companies, projects) can get clear points of contact for technical decision making in GNOME and beyond. Another positive theme was a shared interest in accessibility work, which was great to see.
We’re hoping to facilitate further conversations on these topics in future, and will be holding our next Advisory Board meeting in the summer prior to GUADEC. If there are any organizations out there would like to join the Advisory Board, we would love to hear from you.
GNOME had a stand during both FOSDEM days, which was really busy. I worked the stand on the Saturday and had great conversations with people who came to say hi. We also sold a lot of t-shirts and hats!
I’d like to give a huge thank you to Maria Majadas who organized and ran our stand this year. It is incredibly exhausting work and we are so lucky to have Maria in our community. Please say thank you to her!
We also had plenty of other notable volunteers, including Julian Sparber, Ignacy Kuchciński, and Sri Ramkrishna. Richard Litteaur, our previous Interim Executive Director, even took a shift on the stand.
On the Saturday night there was a GNOME social event, hosted at a local restaurant. As always it was fantastic to get together with fellow contributors, and we had a good turnout with 40-50 people there.
Moving on from FOSDEM, there has been plenty of other activity at the Foundation in recent weeks. The first of these is preparation for our upcoming audit. I have written a fair bit about this in these previous updates. The audit is a routine exercise, but this is also our first, so we are learning a lot.
The deadline for us to provide our documentation submission to the auditors is next Tuesday, so everyone on the finance side of the operation has been really busy getting all that ready. Huge thanks to everyone for their extra effort here.
Conference planning has been another theme in the past few weeks. For GUADEC, accommodation options have been announced, artwork has been produced, and local information is going up on the website.
Linux App Summit, which we co-organise with KDE, has been a bit delayed this year, but we have a venue now and are in the process of finalizing the budget. Announcements about the dates and location will hopefully be made quite soon.
A relatively small task, but a good one to highlight: this week we facilitated (i.e. paid for) the assessment process for GNOME's integration with Google services. This is an annual process we have to go through in order to keep Evolution Data Server working with Google.
Finally, Bart, along with Andrea, has been doing some work to optimize the resource usage of GNOME infrastructure. If you are using GNOME services you might have noticed some subtle changes as a result of this, like Anubis popping up more frequently.
That’s it for this week. Thanks for reading; I’ll see you next week!
For quite some time I’ve wanted to test how prone agentic tools are to prompt injection. Let’s go.
I’ll be using Claude Code 2.1.5, 4.5 Opus in various different sessions.
RPMs of PHPUnit version 13 are available in the remi repository for Fedora ≥ 42 and Enterprise Linux (CentOS, RHEL, Alma, Rocky...).
Documentation :
ℹ️ This new major version requires PHP ≥ 8.4 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10, 11, and 12.
Installation:
dnf --enablerepo=remi install phpunit13
Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 43 official repository (19 new packages).
Just to keep up some blogging content, I'll go over where I spent/wasted time over the last couple of weeks.
I was working on two nouveau kernel bugs in parallel (in between whatever else I was doing).
Bug 1: Two or three weeks ago, Lyude identified that the RTX 6000 Ada GPU wasn't resuming from suspend. I plugged mine in and indeed it wasn't. It turned out this has been broken since we moved to the 570 firmware. We started digging down various holes on what changed and sent NVIDIA debug traces to decode for us. NVIDIA identified that suspend was actually failing but the result wasn't getting propagated up. At least the opengpu driver was working properly.
I started writing patches for all the various differences between nouveau and opengpu in terms of what we send to the firmware, but none of them were making a difference.
I took a tangent, and decided to try and drop the latest 570.207 firmware into place instead of 570.144. NVIDIA have made attempts to keep the firmware in one stream more ABI stable. 570.207 failed to suspend, but for a different reason.
It turns out GSP RPC messages have two levels of sequence numbering, one on the command queue and one on the RPC. We weren't filling in the RPC one, and somewhere in the later 570s someone found a reason to care. Now, it turned out that whenever we boot on 570 firmware we get a bunch of async messages from GSP with the word ASSERT in them and no additional info. It looks like at least some of those messages were due to our missing sequence numbers, and fixing that stopped them.
And then? It still didn't suspend/resume. I dug into memory allocations and framebuffer suspend/resume allocations, until Milos on Discord asked "you did confirm the INTERNAL_FBSR_INIT packet is the same?", and indeed it wasn't. There is a flag, bEnteringGCOff, which you set if you are entering the graphics-off suspend state. However, for normal suspend/resume, as opposed to runtime suspend/resume, we shouldn't tell the firmware we are going into gcoff, for some reason. Fixing that fixed suspend/resume.
While I was head down on fixing this, the bug trickled up into a few other places and I had complaints from a laptop vendor and RH internal QA all lined up when I found the fix. The fix is now in drm-misc-fixes.
Bug 2: A while ago Mary, a nouveau developer, enabled larger pages support in the kernel/mesa for nouveau/nvk. This enables a number of cool things like compression and gives good speedups for games. However Mel, another nvk developer, reported random page faults running Vulkan CTS with large pages enabled. Mary produced a workaround which would have violated some locking rules, but it showed that there was some race in the page table reference counting.
NVIDIA GPUs post-Pascal have a concept of a dual page table. At the 64k level you can have two tables, one with 64K entries and one with 4K entries, and the addresses of both are put in the page directory. The hardware then uses the state of entries in the 64k table to decide what to do with the 4k entries. nouveau creates these 4k/64k tables dynamically and reference counts them. However, the nouveau code was written pre-VMBIND, and fully expected the operation ordering to be reference/map/unmap/unreference; we would always do a complete cycle on 4k before moving to 64k and vice versa. VMBIND, though, means we delay unrefs to a safe place, which might be after later refs happen. Fun sequences like ref 4k, map 4k, unmap 4k, ref 64k, map 64k, unref 4k, unmap 64k, unref 64k can happen, and the code just wasn't ready to handle them. An unref on 4k would sometimes overwrite the entry in the 64k table with invalid, even when it was still valid. This took a lot of thought and 5 or 6 iterations on ideas before we stopped seeing failures. In the end the main things were to reference count the 4k and 64k ref/unref separately, and to make the last caller to do a map operation own the 64k entry, which should conform to how userspace uses this interface.
The fixes for this are now in drm-misc-next-fixes.
Thanks to everyone who helped, Lyude/Milos on the suspend/resume, Mary/Mel on the page tables.
When the share price of Credit Suisse, one of Switzerland's two top banks, went into its death spiral, the Swiss authorities had some discussions among themselves and the inevitable result of that discussion was the sale of Credit Suisse to their main rival, UBS.
The public were told that UBS had offered three billion Swiss francs to compensate the shareholders of Credit Suisse for their shares. Some people were sceptical about the method used to reach this valuation. Nonetheless, it was important for Swiss national pride.
When writing the reports on the JuristGate web site, I've attempted to operate with the highest level of accuracy and integrity. Nonetheless, everybody has their price. If somebody offered me three billion Swiss francs for the domain then I would probably do like Urban Angehrn and Birgit Rutishauser and take a lengthy garden leave.
In the meantime, to make up for the wayward support from Swiss legal insurance I've started a crowdfunding campaign to resolve one of the disputes relating to Debian. This is vital because we all do a lot of work for Debian and we are all entitled to equal recognition.
In a previous blog, I looked at the prompt and efficient manner in which Bernice is undertaking enforcement action to protect the public from rogue health practitioners in the State of Victoria.
Bernice generated a lot of news stories when she banned a social media influencer, Emily Lal, who promoted herself as The Authentic Birthkeeper. Lal had promoted wild birthing. A number of mothers and babies have died around the world when following the advice of social media influencers.
I don't know how much Bernice gets paid but I found a job vacancy for her deputy. The salary range goes up to $290,000 per year. That is approximately 160,000 Swiss francs (CHF).
As Bernice is a nurse, and keyworkers never get paid what they are really worth, I'm guessing her salary is not too much bigger than that.
The FINMA annual reports reveal the salaries for the CEO and executive team. Urban Angehrn's salary was CHF 602,000. It looks like FINMA pays their CEO three or four times what the State of Victoria pays Bernice.
We can compare their performance and see how many banning orders each of them has published:
FINMA (Swiss financial regulator) banning orders: they published 24 bans between 2018 and 2025. Some bans are not reported publicly.
Health Complaints Commissioner (Victoria) banning orders: Bernice has personally signed over 100 interim or permanent bans since taking office in July 2022.
This was Urban Angehrn's payout, CHF 581,000, when he resigned at the same time that FINMA published their anonymous decision about Parreaux, Thiebaud & Partners. Clients/victims were offered nothing.
Read more of the JuristGate reports.
Recently a coworker posted that children born this year would be in Generation Beta, and I was like “What? That sounds like too soon…” but then thought “Oh, it's just that thing where you get older and time flies by.” I saw a couple of articles saying it again, so I decided to look at the Wikipedia article for generations and saw that yes, ‘beta’ was starting.. then I started looking at the lengths of the various generations and went “Hold on”.

Let us break this down in a table:
| Generation | Wikipedia | How Long |
|---|---|---|
| T (lost) | 1883-1900 | 17 |
| U (greatest) | 1901-1927 | 26 |
| V (silent) | 1928-1945 | 17 |
| W (boomer) | 1946-1964 | 18 |
| X | 1965-1980 | 15 |
| Y (millennial) | 1981-1996 | 15 |
| Z | 1997-2012 | 15 |
| alpha | 2013-2025 | 12 |
| beta | 2026-2039 | 13 |
| gamma | 2040-??? | ?? |
So it is bad enough that Generation X, Millennials, and Z got shortened from 18 years to 15.. but alpha and beta are now down to 12 and 13? I realize that all of this is a made-up construct to make some people born in one age group angry/sad/afraid of another, by editors who need to sell advertising for things which will solve those feelings of anger, sadness, or fear.. but could you at least be consistent?
I personally like some order to my starting and ending dates for generations, so I am going to update some lists I have put out in the past with newer titles and times. We will use the definition as outlined at https://en.wikipedia.org/wiki/Generation
A generation is all of the people born and living at about the same time, regarded collectively.[1] It also is “the average period, generally considered to be about 20–30 years, during which children are born and grow up, become adults, and begin to have children.”
For the purpose of trying to set eras, I think that the original 18 years for baby boomers makes sense, but the continual shrinkflation of generations after that is pathetic. So here is my proposal for generation starting and ending dates. Choose which one you like the best when asked what generation you belong to.
| Generation | Wikipedia | 18 Years |
|---|---|---|
| T (lost) | 1883-1900 | 1889-1907 |
| U (greatest) | 1901-1927 | 1908-1926 |
| V (silent) | 1928-1945 | 1927-1945 |
| W (boomer) | 1946-1964 | 1946-1964 |
| X | 1965-1980 | 1965-1983 |
| Y (millennial) | 1981-1996 | 1984-2002 |
| Z | 1997-2012 | 2003-2021 |
| alpha | 2013-2025 | 2022-2040 |
| beta | 2026-2039 | 2041-2059 |
| gamma | 2040-??? | 2060-2078 |
(*) I say Wikipedia here, but they are basically taking dates from various other sources and putting them together.. which should be seen more as a statement by social commentators who aren’t good at math.

Psiphon Conduit is an open-source project developed by Psiphon Inc and built on the Psiphon tunneling core (psiphon-tunnel-core). The project includes a cross-platform mobile and CLI client for creating tunnels and proxies, designed with the goal of circumventing internet censorship and increasing free access to the internet. Some […]
The post “Downloading and setting up Psiphon Conduit: a powerful anti-censorship tool” first appeared on طرفداران فدورا (Fedora Fans).
The Snowflake project is an anti-censorship technology from the Tor Project that helps users connect to the free internet and the Tor network even in countries or on networks where Tor is blocked. A few important Snowflake features: it is a Pluggable Transport for Tor that disguises traffic so that detecting and filtering it is harder. Communications […]
The post “An introduction and guide to Snowflake Tor for bypassing internet restrictions” first appeared on طرفداران فدورا (Fedora Fans).

I find myself writing a program in C that is supposed to handle multiple protocols. At its entry point, the protocol is Platform Communication Channel (PCC; extended memory, type 3 and type 4). Embedded in that is a Management Component Transport Protocol (MCTP) message, and embedded in that is one of many different protocols.
I might want to swap out the PCC layer in the future for… something else. MCTP can come over many different protocols, so there is a good bet that the tool will be more useful if it can assume that the protocol outside of the MCTP layer is something other than PCC.
One problem I have is that the MCTP header does not have a length field. We do not know how long the payload is; all the header has is version, source, destination, and flags. Thus, if we want to pass a buffer of type MCTP header along and we want the length, we need to pass it in a separate field. This goes both for incoming (how many bytes to read) and outgoing (how many bytes to write).
My initial thought on writing this layer is to have a request/response pair for each layer of the protocol. For PCC, I could just do this. For all the other internal ones, I would need to pass length in and length out for each handler. Length in will not change, but length out might, so it needs to be passed as a pointer. This leads to functions that look like this:
void handle_mctp_control_message(struct mctp_hdr * mctp_req, int req_len, struct mctp_hdr * mctp_resp, int *resp_len)
{
}
I will also need to be converting from outer protocols to inner protocols. So I will need code like this:
struct mctp_hdr * mctp_req = (struct mctp_hdr *)pcc_req->buffer_start;
int mctp_req_len = pcc_req->length - sizeof(MCTP_SIGNATURE);
struct mctp_hdr * mctp_rsp = (struct mctp_hdr *)pcc_rsp->buffer_start;
The outgoing length would be initialized to 0, and grow as each layer of the protocol stack adds its own data. However, I am planning on pre-allocating the buffer, and just passing a pointer to the location where the protocol is supposed to write its data. In order to confirm we don't write past the end of the buffer, we will have to pass the overall buffer length in, maybe shortened by the amount we need to reserve for the outer headers.
If each protocol header had a smart pointer, I could pass those around instead. Something like:
struct pcc_header_p {
struct pcc_header * header;
int buffer_length;
};
struct mctp_header_p {
struct mctp_header * header;
int length;
int buffer_length;
};
Then it would be fairly easy to write a function that, given a struct pcc_header_p, populates a struct mctp_header_p that points into it.
void pcc_2_mctp(struct pcc_header_p *pcc, struct mctp_header_p *mctp)
{
mctp->header = (struct mctp_header *)((char *)pcc->header + sizeof(struct pcc_header));
mctp->length = pcc->header->length - sizeof(MCTP_SIGNATURE);
mctp->buffer_length = pcc->buffer_length - sizeof(struct pcc_header);
}
This seems like it would benefit from a set of preprocessor macros. I know that QEMU does something like this. But for a first pass, I think I can just code it up like this.
While experiments remain the primary method by which we neuroscientists gather information on the brain, we still rely on theory and models to combine experimental observations into unified theories. Models allow us to modify and record from all components, and they allow us to simulate various conditions---all of which is quite hard to do in experiments.
Researchers model the brain at multiple levels of detail depending on what it is they are looking to study. Biologically detailed models, where we include all the biological mechanisms that we know of---detailed neuronal morphologies and ionic conductances---are important for us to understand the mechanisms underlying emergent behaviours.
These detailed models are complex and difficult to work with. NeuroML, a standard and software ecosystem for computational modelling in Neuroscience, aims to help by making models easier to work with. The standard provides ready-to-use model components and models can be validated before they are simulated. NeuroML is also simulator independent, which allows researchers to create a model and run it using a supported simulation engine of choice.
In spite of NeuroML and other community developed tools, a bottleneck remains. In addition to the biology and biophysics, to build and run models one also needs to know modelling/simulation and related software development practices. This is a lot: it presents quite a steep learning curve and makes modelling less accessible to researchers.
LLMs allow users to interact with complex systems using natural language by mapping user queries to relevant concepts and context. This makes it possible to use LLMs as an interface layer where researchers can continue to use their own terminology and domain-specific language, rather than first learning a new tool's vocabulary. They can ask general questions, interactively explore concepts through a chat interface, and slowly build up their knowledge.
We are currently leveraging LLMs in two ways.
The first way we are using LLMs is to make it easier for people to query information about NeuroML.
As a first implementation, we queried standard LLMs (ChatGPT/Gemini/Claude) for information. While this seemingly worked well and the responses sounded correct, given that LLMs have a tendency to hallucinate, there was no way to ensure that the generated responses were factually correct.
This is a well known issue with LLMs, and the current industry solution for building knowledge systems using LLMs with correctness in mind is retrieval-augmented generation (RAG). In a RAG system, instead of the LLM answering a user query using its own trained data, the LLM is provided with curated data from an information store and asked to generate a response strictly based on it. This helps to limit the response to known correct data, and greatly improves the quality of the responses. RAGs can still generate errors, though, since their responses are only as good as the underlying sources and prompts used, but they perform better than off-the-shelf LLMs.
For NeuroML we use the following sources of verified information:
I have spent the past couple of months creating a RAG for NeuroML. The code lives here on GitHub and a test deployment is here on HuggingFace. It works well, so we consider it stable and ready for use.
Here is a quick demo screen cast:
We haven't dedicated too many resources to the HuggingFace instance, though, as it's meant to be a demo only. If you do wish to use it extensively, a more robust way is to run it locally on your computer. If you have the hardware, you can use it completely offline by using locally installed models via Ollama (as I do on my Fedora Linux installation). If not, you can also use any of the standard models, either directly, or via other providers like HuggingFace.
The package can be installed using pip, and more instructions on installation and configuration are included in the package Readme.
Please do use it and provide feedback on how we can improve it.
The RAG system is implemented as a Python package using LangChain/LangGraph. The "LangGraph" for the system is shown below. We use the LLM to generate a search query for the retrieval step, and we also include an evaluator node that checks if the generated response is good enough---whether it uses the context, answers the query, and is complete. If not, we iterate to either get more data from the store, to regenerate a better response, or to generate a new query.
The RAG system exposes a REST API (using FastAPI) and can be used via any clients. A couple are provided---a command line interface and a Streamlit based web interface (shown in the demo video).
The RAG system is designed to be generic. Using configuration files, one can specify what domains the system is to answer questions about, and provide vector stores for each domain. So, you can also use it for your own, non-NeuroML, purposes.
The second way in which we are looking to accelerate modelling using LLMs is by using them to help researchers build and simulate models.
Unfortunately, off-the-shelf LLMs don't do well when generating NeuroML code, even though they are consistently getting better at generating standard programming language code. In my testing, they tended to write "correct Python", but mixed up lots of different libraries with NeuroML APIs. This is likely because there isn't so much NeuroML Python code out there for LLMs to "learn" from during their training.
One option is for us to fine-tune a model with NeuroML examples, but this is quite an undertaking. We currently don't have access to the infrastructure required to do this, and even if we did, we would still need to generate synthetic NeuroML examples for the fine-tuning. Finally, we would need to publish/host/deploy the model for the community to use.
An alternative, with function/tool calls becoming the norm in LLMs, is to set up a LLM based agentic code generation workflow.
Unlike a free-flowing general-purpose programming language like Python, NeuroML has a formally defined schema which models can be validated against. Each model component fits in at a particular place, and each parameter is clearly defined in terms of its units and significance. NeuroML provides multiple levels of validation that give the user specific, detailed feedback when a model component is found to be invalid. Further, the NeuroML libraries already include functions to validate models, read and write them, and to simulate them using different simulation engines.
These features lend themselves nicely to a workflow in which an LLM iteratively generates small NeuroML components, validates them, and refines them based on structured feedback. This is currently a work in progress in a separate package.
I plan to write a follow up post on this once I have a working prototype.
While being mindful of the hype around LLMs/AI, we do believe that these tools can accelerate science by removing/reducing some common accessibility barriers. They're certainly worth experimenting with, and I am hopeful that the modelling/simulation pipeline will help experimentalists that would like to integrate modelling in their work do so, completing the neuroscience research loop.
MIT's DEDP MicroMasters is about Data, Economics and Development Policy. It was recently renamed Data, Economics and Design of Policy, although the focus remains on developing countries, not rich countries like Switzerland. From time to time, I've seen people asking what a MicroMasters certificate is really worth. Ironically, that would be a great question for an MIT economist to answer.
In one of the modules, 14.750x Political Economy, Prof Ben Olken begins by asking the online learners to read the papers "Hit or Miss? The Effect of Assassinations on Institutions and War" and "Do leaders matter? National leadership and growth since World War II". The above-average rate of assassination for people in this job category is used to give us insights. There is nothing in the paper about leaders kidnapped by President Trump. Trump himself was nearly assassinated before his own return to office. The paper is available online.
In earlier blogs, we were able to prove the anonymous document on the FINMA web site relates to Parreaux, Thiébaud & Partners, Switzerland's "Law Firm X". From there, we were able to prove that FINMA, the regulatory authority knew about it for some years and the Geneva bar association also seemed to know for a long time.
In the world of abuse, it seems that priests knew some of their colleagues were paedophiles but they have a code of conduct, the Crimen Sollicitationis which prevented priests from warning the public about their own colleagues. The Swiss jurists from the bar association and officials from FINMA appeared to be operating from the same playbook. Those who knew about the scandal didn't tell the clients. They put the reputation of their profession and the privacy of rogue colleagues ahead of the interests of public safety and justice.
We then went the next step, showing that FINMA participated in a cover-up. Reports on this web site meticulously reverse-engineer their cover-up tactics.
From the moment the illegal legal insurance launched in 2018, Birgit Rutishauser had been head of FINMA's surveillance of the insurance industry.
Rutishauser is a graduate of ETH Zurich. Many of the attacks on my family revolve around the death of Adrian von Bidder-Senn on our wedding day. He was also an ETH Zurich graduate like Rutishauser. Just as the priests and the jurists have sought to maintain strict silence about the wrongdoing of colleagues, it seems the ETH Zurich alumni are maintaining silence. After all, Adrian von Bidder-Senn's wife obtained a PhD in cybersecurity from the same institution but looking at the last email she sent to the Debianists, we can see that Dr Diana von Bidder-Senn failed to realize the extent to which her husband was a victim of social engineering. Dr von Bidder-Senn is now the mayor of Basel.
When Rutishauser was appointed in 2018, FINMA published the following comments about her background:
The 46-year-old mathematician and actuary has managed the Risk Management section of the Insurance division since June 2016. Prior to joining FINMA, Birgit Rutishauser spent many years working in a variety of management roles within the insurance sector, most recently as Chief Underwriting & Risk Management Officer at Nationale Suisse and, as such, a member of the Group Executive Management Board. Birgit Rutishauser is a Swiss citizen.
Rutishauser had also worked for Zurich insurance, the same company where Urban Angehrn had worked prior to joining FINMA.
By comparison, I am a holder of the MIT MicroMasters diploma in Data, Economics & Development Policy (DEDP). Well, I also worked for a few banks too. UBS in Zurich provided a reference letter:
Competitors and cyberbullies have spent an enormous amount of effort trying to trick people to believe that I'm a mentally ill person who only pretends to be a developer:
Subject: Re: Open Letter to Debian election candidates about Debian vendettas
Date: Sun, 20 Mar 2022 21:00:02 +0900
From: Hideki Yamane <henrich@iijmio-mail.jp>
To: Daniel Pocock <daniel@pocock.pro>
CC: debian-devel@lists.debian.org

Hi,

[ ... snip gaslighting ... ]

Before talking, you should get counseling for a while since it seems that you have some cognitive troubles now. You'd be better to hear about your opinion and current your mind status from professional 3rd parties, not Debian. (If they say you're very healthy and good, then that's good. Don't you think so?) Without that, we cannot make a constructive conversation.

As I said in my platform, "Be calm, stay cool, stay safe" - Hope you stay "cool" a bit with help from professionals, and you would to be able to a "contributor" to floss again.

Life is short - to waste our time for fighting. Let's make more values for users.

--
Hideki Yamane <henrich@iijmio-mail.jp>
The rumours about mental illness were obviously falsified. Nonetheless, it is important to remember they started falsifying these things at a time when I lost two family members. Is it an example of cybertorture or is it simply an example of how rude these people are after spending too much time in social control media?
Is it possible that a mentally ill fake developer with a MicroMasters was able to write a blog post that brought down the Deputy CEO of the Swiss financial market regulator?
Rather than finding me to be insane, Switzerland granted me citizenship in the Canton of Vaud in November 2023:
Early 2024, I came across a photo of Switzerland's attorney general visiting his peers at the Parquet National Financier in Paris, France to talk about cross-border crime. The illegal legal insurance was an example of cross-border crime because they recruited people from France to work for them and they promoted the insurance to French residents.
The meeting and photo have since been removed from the PNF web site but can be found in the Wayback Machine.
Meeting with Mr Stefan Blättler, Attorney General of the Swiss Confederation
On 13 February, the four heads of jurisdiction of the Paris judicial court received Mr Stefan Blättler, Attorney General of the Swiss Confederation.
These strategic exchanges made it possible to address questions of international mutual assistance in criminal matters and the fight against organised crime, terrorism, crimes against humanity and financial crime.
They were also an opportunity to recall the shared determination of the French and Swiss judicial authorities to fight complex forms of criminality, and to reaffirm the essential character of a sustained and cordial dialogue between neighbouring countries that share a border and a long relationship.
I wrote to the PNF and sent them some of the documents. This was their reply:
The PNF, like me, suspects that a crime was committed under French law but for technical reasons, they require a French citizen to bring a formal complaint to a local prosecutor first. The law in France would see me as a witness to the crime.
Later in 2024, I acquired the domain name www.michaelmcgrath.ie, that is the former domain name of Michael McGrath, the EU Commissioner for Democracy, Justice, the Rule of Law and Consumer Protection.
I used the EU Justice Commissioner's former domain name to publish information about the cross-border crime from the Swiss jurists.
The blogs published between January and March 2025 continue to build the case that FINMA not only knew about the illegal legal insurance but they also had a role in the cover-up.
The blog published on 8 March 2025, International Women's Day, considered the case of a French woman who was tricked to quit the job she had for seven years and come to work for the Swiss jurists in Geneva selling illegal legal insurance.
On 27 March 2025, I was discussing the case with an expert in Paris at the same time the BBC was interviewing former Archbishop of Canterbury Justin Welby about the cover-up of their jurist John Smyth QC. In the meeting in Paris, I was asked if there was a link between the paedophiles and the jurists. The BBC's report only appeared after the meeting and it prompted me to write a new blog post about the culture of cover-ups.
When famed whistleblower Trevor Kitchen exposed the Forex scandal, he didn't go as far as comparing anybody to a paedophile; nonetheless, they had him arrested in Portugal and tried to have him extradited back to Switzerland for criminal speech.
Finance chief who exposed currency scandal fights Swiss extradition bid for criminal defamation
...
Mr Kitchen, who worked as a financial controller for companies including Shell, Castrol and Black and Decker, lost his 700,000 franc pension due to the rigged currency fluctuations.
...
“Because I had specialised in finance all my life understanding policies and procedures in companies, I went and reported this to all the regulators,” he said.
Toby Cadman, head and co-founder of Guernica 37 International Justice Chambers, who is working pro bono for Mr Kitchen, said: “If he was in the UK, this would not even get past the issuing of a warrant, let alone the extradition process.
...
He was arrested by Portuguese police on January 19 and taken for questioning. “It was all maximum security stuff. Six policemen were around me and took me into a small room and told me to take all my clothes off. They threw me in prison for 48 hours all because of the words I used,” he said.
The Swiss embassy has been contacted for comment.
Yet Swiss authorities have made no such move against me. Without admitting or denying any of the allegations on this web site, Birgit Rutishauser's resignation was announced on 1 April 2025, barely two days after the strongest blog post about the complicity of the authorities in the cover-up.
First of all, FINMA published a notice about Rutishauser's resignation.
The same day, FINMA published a longer notice about their restructuring. The notice included a paragraph emphasizing that Birgit Rutishauser's departure was not part of the restructuring. In other words, there is some other specific reason she has chosen to tender her resignation and they are not going to give us any more detail about it.
FINMA adapts its organisation to meet future challenges
...
Independently of the introduction of the new organisational structure, Birgit Rutishauser, member of the Executive Board, Deputy CEO and Head of the Insurance division, has decided to leave FINMA. Vera Carspecken will assume the leadership of the Insurance division on an interim basis from 1 May 2025 (see separate press release of 1 April 2025).
Somebody with so many years of service would normally be entitled to three months of notice period. The abrupt departure of Martin Senn from Zurich insurance was referred to as a factor in his suicide. However, it is also possible that Rutishauser was placed on an extended period of garden leave to prevent her from immediately being able to use confidential information she has acquired in the course of her duties.
Rutishauser's LinkedIn profile tells us she is now at StenFo, that is the fund for a Swiss nuclear dump.
All the waste from five decades of Swiss nuclear programs is currently stored in a temporary facility at Zwilag. If they ever try to build a permanent underground storage facility, there will be a referendum to stop it. For the time being it just sits in the temporary facility:
President Putin frequently reminds the world about nuclear punishments. What we see here is the power of nuclear teamwork.
Fifty percent of Swiss nuclear fuel comes from Russia, twenty five percent from Canada and twenty five percent was mined in Australia, where I was born.
After leaving her post as second in command at FINMA, head of the insurance division, Rutishauser enrolled at IMD, a well known Swiss business school, where she was awarded a certificate in blockchain.
While there are some companies doing completely legitimate projects with blockchain, it is ironic that the scammers who had been allowed to operate for years under FINMA have also pursued a new career path promoting cryptocurrency "investment" to victims in France.
I started the Software Freedom Institute in May 2021 in the middle of the pandemic. Many people were not working at all. I could have chosen not to work and claimed furlough payments from the government. Instead, by working, I was paying tax and contributing money to the social security system to support the rest of society.
I purchased and paid for various insurance services to protect the business, among them the legal services of Parreaux, Thiebaud & Partners, as the invoices show. When IBM Red Hat attacked my business, the Swiss legal protection did nothing to help. The ADR Forum legal panel eventually gave a ruling against IBM Red Hat, declaring that I was a victim of harassment from a much larger company/competitor.
Likewise, after the conviction of Cardinal George Pell, rogue Debianists attacked my family with rumours about abuse and they attacked us again after I founded the Software Freedom Institute. The Swiss jurists did nothing to help. I went to Italy by myself to speak to the Carabinieri and after years of inquiries, the Cardinal died four hours after I filed a report about exploitation. Despite Switzerland's reputation for privacy, the Swiss jurists provided no help whatsoever to protect my family and I from these intrusions.
Justin Welby had to resign as Archbishop of Canterbury due to his failure to take action against the paedophile jurist John Smyth QC. On 30 March 2025, we published the comparison of Church of England, Crimen Sollicitationis and the cover-up at FINMA and barely two days later, Rutishauser's resignation was announced.
As it is Switzerland, nobody is holding their breath waiting for authorities to confirm the Deputy CEO of FINMA resigned because of the cover-up disclosed in the JuristGate reports. However, we can now try to answer the question we started with, what is the real value of the DEDP MicroMasters from MIT? Priceless.
Read more of the JuristGate reports.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/116228691881195787