This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, the infographic is enough; if you are interested in more in-depth details, look below the infographic.
Week: 14 July – 18 July 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.). List of planned/in-progress issues
Debian's authorship database shows us
Shane Wegner was changed to the "Removed" status
on 25 June 2025. The snobby people in the Debian Account Managers
team didn't give us any "Statement on Shane Wegner" so we can only guess
what has happened.
Normally when someone resigns they are changed to the Emeritus status.
I resigned from some of my voluntary activities around the time my father
died and they spent seven years insulting my family. People don't care
about families any more but even if you don't care about my family,
if you care about your servers running Debian, it is time to ask these
little pricks what the hell is going on with the keyring.
The little pricks did this with Dr Jacob Appelbaum in 2016. They bamboozled
people with rumors about abuse. I researched the case and showed that
they were leading us down the garden path.
Now it has happened again, they removed somebody and it is radio silence.
Looking in debian-private we can see that Wegner lost his key over 15
years ago.
The little pricks attacked me when my father died but when somebody's
key is compromised they are asleep at the wheel.
Subject: [vac] Looking for keysigning Salt Lake City 10-jul - 15-jul
Date: Mon, 28 Jun 2010 17:05:42 -0700
From: Shane Wegner <shane@debian.org>
To: debian-private@lists.debian.org
Hi all,
I will be in Salt Lake on the above mentioned dates. If
anyone is available for key exchange, please contact me
privately for a meet-up. I am out of the keyring due to a
key compromize so looking to get recertified.
Cheers,
Shane
Shane Wegner sent that cry for help a few weeks before
Frans Pop chose Debian Day for his suicide plan. Boo hoo, the little pricks were
in denial about their role in that suicide and they failed to notice the keyring
was compromised.
Wegner reminded us again in 2011:
Subject: [vac] keysigning Salt Lake City 29-Apr - 8-May
Date: Tue, 26 Apr 2011 12:01:42 -0700
From: Shane Wegner <shane@debian.org>
To: debian-private@lists.debian.org
Hello,
Subject says it all. Still looking for a keysigner or two
to get back into the keyring. I'll be in downtown SLC if
anyone wants to get together for keysigning.
Shane
That email was sent right in the middle of the
Adrian von Bidder-Senn crisis. The cause of death is secret because Adrian
died in Basel, Switzerland and his wife, Diana von Bidder is a politician.
Adrian was 32 years old and he
died on the same day as our wedding. Why are they covering this
up and pretending to be a so-called community and a "family" when some of
these deaths appear to be suicides?
Remember the case of
Edward J Brocklesby? Why did they remove him from the keyring? He was
living on the road to GCHQ so they never made a "Statement on Edward Brocklesby".
Is this a family or a bunch of snobs?
When Wegner joined in 2000, people were still talking about
accepting scanned copies of passports as a proof of identity.
How many people on the keyring today can be traced back to a scanned,
possibly forged, copy of a passport?
At DebConf25, there was a secret meeting about the scandal but
they didn't give any report. The Debian Social Contract is dead.
I don’t think I ever posted about it, but nine months ago (exactly, which I just realized as I’m writing these words), I joined CIQ as a Senior Systems Engineer. One of my early tasks was to help one of our customers put together Rocky Linux images that their customers could use, and one of the requirements from their HPC customers was that the latest Intel irdma kernel module be available.
While packaging up the kernel module as an external kmod was easy enough, the question was asked, “What if the kernel ABI changes?” Their HPC customers wanted to use the upstream Rocky kernel, which, as a rebuild of RHEL, carries the same kABI guarantees that Red Hat provides. There is a list of symbols that are (mostly) guaranteed not to change during a point release, but the Intel irdma driver requires symbols that aren’t in that list.
I did some investigation and found that, in the lifespan of Rocky 8.10 (roughly 15 months), there have been just under 60 kernel releases, with only 3 or 4 breaking the symbols required by the Intel irdma driver. This meant that we could build the kmod when 8.10 came out, and, using weak-updates, the kernel module would automatically be available for newer kernels as they were released, until a release came out that broke one of the symbols the kmod depended on. At that point, we would need to bump the release and rebuild the kmod. The new kmod build would be compatible with the new kernel, and with any other new kernels until the kABI broke again.
When originally packaging the kernel, Red Hat had the wisdom to add a custom dependency generator that automatically generates a “Provides:” in the RPM for each symbol exported by the kernel, along with a hashed signature of its structure. This means that the kmod RPMs can be built to “Require:” each symbol they need, ensuring that the kmod can’t be installed on a system without also having a matching kernel installed.
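That symbol-by-symbol matching can be sketched in plain Python. The symbol names and hashes below are invented for the sketch; real kernel RPMs carry Provides that look roughly like `kernel(symbol) = 0xhash`, with the kmod RPM carrying matching Requires:

```python
# Sketch of the kABI dependency match (hypothetical symbols and hashes).
# A kernel RPM carries Provides like "kernel(symname) = 0xhash";
# a kmod RPM carries a matching Requires for each symbol it uses.

def parse_ksyms(entries):
    """Turn 'kernel(foo) = 0xabcd' strings into a {symbol: hash} dict."""
    syms = {}
    for entry in entries:
        name, _, checksum = entry.partition(" = ")
        if name.startswith("kernel(") and name.endswith(")"):
            syms[name[len("kernel("):-1]] = checksum
    return syms

def kernel_satisfies_kmod(kernel_provides, kmod_requires):
    """True if every symbol the kmod needs is exported with the same hash."""
    provided = parse_ksyms(kernel_provides)
    required = parse_ksyms(kmod_requires)
    return all(provided.get(sym) == h for sym, h in required.items())

# Hypothetical data: the new kernel changed one symbol's hash (a kABI break).
old_kernel = ["kernel(ib_register_device) = 0x11111111",
              "kernel(dma_alloc_coherent) = 0x22222222"]
new_kernel = ["kernel(ib_register_device) = 0x33333333",   # kABI break
              "kernel(dma_alloc_coherent) = 0x22222222"]
kmod_reqs  = ["kernel(ib_register_device) = 0x11111111",
              "kernel(dma_alloc_coherent) = 0x22222222"]

print(kernel_satisfies_kmod(old_kernel, kmod_reqs))  # True
print(kernel_satisfies_kmod(new_kernel, kmod_reqs))  # False
```

RPM enforces exactly this at install time: the kmod simply cannot be installed unless some installed kernel provides every required symbol at the expected hash.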
This last item would seem to solve the whole “make sure kmods and kernels match” problem, except for one minor detail: You can have more than one kernel installed on your system.
Picture this. You have a system, and you install a kernel on it, and then install the Intel irdma/idpf driver, which makes your fancy network card work. A little while later, you update to the latest kernel and reboot… only to find your network card won’t work anymore!
What’s happened is that the kernel update changed one of the symbols required by the Intel irdma kmod, breaking the kABI. The kmod RPM has a dependency on the symbols it needs, but, because the kernel is special (that’s for you, Maple!), it’s one of the few packages that can have multiple versions installed at the same time, and those symbols are provided by the previous kernel, which is still installed, even if it’s not the currently booted kernel. The fix is as easy as booting back into the previous kernel, and waiting for an updated Intel kmod, but this is most definitely not a good customer experience.
What we really need is a safety net, a way to temporarily block the kernel from being updated until a matching kmod is available in the repositories. This is where dnf-plugin-protected-kmods comes in. When configured to protect a kmod, this DNF plugin will exclude any kernel RPMs if that kernel doesn’t have all the symbols required by the kmod RPM.
This means that, in the example above, the updated kernel would not have appeared as an available update until the Intel irdma/idpf kmod was also available (a warning would appear, indicating that this kernel was being blocked).
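Reduced to plain Python, the decision the plugin makes looks roughly like this. The package names, symbols, and hashes are invented for the sketch, and the real plugin works through DNF's plugin hooks and excludes packages via the sack rather than returning a list:

```python
# Illustrative reduction of the plugin's decision: given candidate kernel
# updates and the symbols a protected kmod requires, exclude any kernel
# that no longer provides all of them. (The real plugin does this through
# DNF's hook API; the data here is made up.)

def kernels_to_exclude(candidate_kernels, protected_symbols):
    """candidate_kernels: {nevra: set of 'symbol=hash' strings it provides}.
    Returns the kernels that would strand the protected kmod."""
    excluded = []
    for nevra, provides in candidate_kernels.items():
        missing = protected_symbols - provides
        if missing:
            print(f"Excluding {nevra}: missing {sorted(missing)}")
            excluded.append(nevra)
    return excluded

# Hypothetical update set: the newer kernel broke one irdma symbol.
candidates = {
    "kernel-4.18.0-553.58.1": {"ib_register_device=0x1111", "ib_alloc_mr=0x2222"},
    "kernel-4.18.0-553.60.1": {"ib_register_device=0x9999", "ib_alloc_mr=0x2222"},
}
needed = {"ib_register_device=0x1111", "ib_alloc_mr=0x2222"}

print(kernels_to_exclude(candidates, needed))
```

The older kernel passes and remains installable; the newer one is held back (with a warning) until a rebuilt kmod that matches its symbols appears in the repositories.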
NVIDIA originally came up with the idea when they created yum-plugin-nvidia-driver, but it was very specifically designed with the NVIDIA kmods and their requirements in mind, so I forked it and made it more generic, updating it to filter based on the kernel’s “Provides:” and the kmod’s “Requires:”.
Our customer has been using this plugin for over six months, and it has functioned as expected. The kmods we’re building for CIQ SIG/Cloud Next (a story for another day) are also built to support it, and there’s a “Recommends:” dependency on it when the kmods are installed.
Since this plugin is useful not just to CIQ, but also to the wider Enterprise Linux community, I started working on packaging it up at this year’s Flock to Fedora conference (thanks for sending me, CIQ!), and, thanks to a review from Jonathan Wright (from AlmaLinux) with support from Neal Gompa, it’s now available in EPEL.
Note that there is no DNF 5 version available yet, and, given the lack of kABI guarantees in the Fedora kernel, there isn’t much point in having it in Fedora proper.
And I do want to emphasize that, out of the box, the plugin doesn’t actually do anything. For it to protect a kmod, a drop-in configuration file is required as described in the documentation.
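For illustration only, such a drop-in might look something like the following; the actual path, section, and key names come from the plugin's documentation, not from this sketch:

```ini
# Hypothetical drop-in — consult the dnf-plugin-protected-kmods
# documentation for the real file location and key names.
# The idea: name the kmod package whose kABI symbols must keep
# being satisfied before a kernel update is offered.
[main]
name = kmod-irdma
```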
In 2022, the Transparency International Corruption Perception Index (CPI)
ranked Switzerland at number seven on their list, meaning it is the
seventh least corrupt country based on the methodology used for ranking.
Did Switzerland achieve this favorable score due to genuine attempts to be
clean or due to the effectiveness with which Swiss laws and Swiss culture help
to hide the wrongdoing?
The favorable ranking from Transparency International was reported
widely in the media. At the same time, most media reports also noted
Transparency International's country report card had included caveats about
nepotism, lobbyists and vulnerability of whistleblowers.
When people do try to document the reality, they are sent to prison.
Many multinational companies operate a three hundred and sixty degree
review system whereby employees can give each other feedback. The
human rights activist
Gerhard Ulrich created a web site where Swiss citizens could
write three sixty degree reviews of decisions made by their local judges.
The web site was censored and a SWAT team, the elite TIGRIS unit
was sent to arrest Gerhard Ulrich and take him to prison.
Trevor Kitchen is another well known spokesperson for investors'
rights. In the 1990s Kitchen discovered Swiss people taking credit for his
work and not properly attributing his share. Some time later he
discovered the FX scandal. During Mr Kitchen's retirement in Portugal,
Swiss persecutors used the European Arrest Warrant (EAW) to punish
him from afar. Amnesty International published a report noting
he was subject to physical and sexual abuse by Swiss authorities in 1993
and then using the EAW they tricked the police in Portugal to repeat the
abuse 25 years later in 2018.
By publishing the facts below, I face the same risk of physical and
sexual abuse by corrupt police and lawyerists.
If the Swiss public were fully aware of these details, would
Switzerland still rate so highly on Transparency International's
scale of public perception?
If Transparency International's system can be fooled so easily
by states with criminal speech laws, why doesn't Transparency International
develop a better methodology for ranking corruption?
Every fact I am reporting here can be found using various sources
on the Internet, including the Wayback Machine and the social media
profiles of the various people named below. Yet when these facts are
assembled in the same article they reveal the inconvenient truth about
the Swiss legal system as a whole.
On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.
“We’re proposing to vote judge Yves Donzallaz out of office,” the leader of the party’s parliamentary group Thomas Aeschi has announced.
Loughnan's next chance to win freedom came a year later when
another young criminal, Mark Brandon Read, walked into a courtroom with
his shotgun and kidnapped a judge to have Loughnan released. Read went on to
become one of Australia's most notorious criminals, using the name
Chopper Read. The movie
Chopper helps us get to know him better.
Escape bid: police
28 January 1978
A man who menaced a County Court judge with a shotgun on Thursday
was a "comic character Charles Chaplin would have portrayed sympathetically",
a barrister told Melbourne magistrates court yesterday.
Ironically, Charlie Chaplin was accused of being a communist and fled
the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey
in the Canton of Vaud.
... Read had planned to hold the judge hostage while Loughnan was brought
to the court and given an automatic car and a magnum pistol.
Isn't it remarkable to find the Swiss fascist party (SVP / UDC) and
Chopper Read both using the same tactics, kidnapping and blackmailing judges,
to get their way?
Suter had anticipated that moment five years prior in the introduction
of his paper:
The author explains how,
in Switzerland, openly political and other considerations are weighed in
the course of electing judges and how the appointment of lay judges
is balanced with an active role of law clerks (Greffier). In contrast,
New Zealand has a proud tradition of apolitical judicial appointments that
are made solely based on merit. The author criticises that Swiss judges are
elected for a term of office, whereas New Zealand judges
enjoy the security of tenure and thus, a greater judicial independence.
Mr Suter asserts that the judges are effectively an extension
of the political parties and the law clerks (Greffier) take a more active role
to prevent the judges indulging themselves. In fact, the word judge
looks similar in English and French but it is not really the same thing
at all. The term law clerk is used for convenience in English
but it is not really a perfect translation either. The role performed
by a law clerk in an English-derived courtroom is very different
to the role performed by a Greffier in a Swiss courtroom.
Therefore, using the term law clerk is confusing and it is
better to simply refer to them by the native name, Greffier
in French or Gerichtsschreiber in German.
In section IV, appointment of judges, Suter tells us:
The formal requirements to be a federal court judge are scant:
any person eligible to vote, that is to say, anyone over the age of 18
who is not incapacitated, may be appointed as a federal court judge.
In other words, a judge does not need to have a law degree or any
experience working in a court.
Suter goes on
Typically, lay judges will only be part of a panel of judges, together
with judges holding a law degree. It may happen though that a lay judge
must act as a single judge as was the case in X v Canton of Thurgau,
where both the President and the Vice-President of the District Court
had recused themselves. The Federal Supreme Court held that to have
a case adjudicated by a lay judge is not in violation of the right to
a fair trial as long as a trained law clerk participates in the management
of the proceedings and the decision making. The court noted that in the
Canton of Thurgau – as in many other cantons – the law clerk may
actively participate in the deliberations on the judgment.
In Switzerland, it is intended that these lay judges, without
legal qualifications, bring some diversity to the system and avoid
the problem of career jurists ruling over society like royal princes.
In English-speaking countries, trials have a jury and the
people in the jury are non-lawyers.
The judges in Switzerland are appointed by a political party for
a period of four to ten years. Members of a jury in English-speaking
countries are selected randomly and replaced for each new trial.
Both lay judges and juries are alternative ways of bringing non-lawyers
into the decision making process of the tribunal.
The idea that lay judges make the tribunal more in touch with the
community is something of a myth. The judges, including lay judges,
are under some control from their political party. The political parties
are under control from their most significant donors. Look at
Elon Musk and his attempt to create the America Party.
Caroline Kuhnlein-Hofmann was the judge in charge of the civil
court in the Canton of Vaud. In another blog post, I demonstrated how
Kuhnlein-Hofmann is a member of the Green Party along with one
of my competitors,
Gerhard Andrey of the company Liip SA. Moreover, Mr Andrey is
also a politician for the
Green party in the federal parliament.
One of Mr Andrey's employees,
Didier Raboud is another Debian Developer. It is an incestuous web
of corruption indeed.
Look specifically at the
payments from the so-called judge's salary into the Green Party's Swiss
bank account. In Australia, when a politician is elected,
they have a similar obligation to give some of their salary back to
their political party. While this woman is using the title "judge",
she is more like a politician and a servant of her political party.
The payments to the
Green Party demonstrate that she has an obligation
to the party, she has to give them money and judgments. This is not
speculation, the
SVP / UDC party said the same thing very loudly in 2020.
Suter has reminded us again of the importance of the Greffier
to complement the work of the unqualified lay judges. But what if the
judges are not real lawyers and the Greffiers were not trustworthy
either?
Look out for the blind leading the blind.
Suter tells us that the Greffier participates in the deliberations
of the judge or judges. In cases where a single lay judge is hearing
a trial, the Federal Supreme Court requires the Greffier to be involved
in the deliberations. Therefore, the ability for rogue Greffiers to
participate in deliberations would bring the whole system and all
the judgements into disrepute. It all comes down like a house of cards.
In some cantons, law clerks are even allowed to act in place of
judges in some respects, for instance in matters of urgency.
In the Canton of Valais/Wallis, law clerks (Greffier) may
substitute district court judges.
A snapshot of Mathieu Parreaux's biography,
captured by the Wayback Machine, tells us that Parreaux was still
working as a Greffier at the same time that he was selling legal fees
insurance to the public.
Mathieu Parreaux began his career in 2010, training in accounting and tax law in a fiduciary capacity at the renowned Scheizerweg Finance. Following this experience, he held a KYC officer position at several private banks in Geneva, such as Safra Sarasin and Audi Bank.
That same year, Mathieu took up his duties as lead Greffier at the Tribunal of Monthey in Canton Valais, thus expanding the Municipality's conciliation authority.
He also began teaching law at the private Moser College in Geneva.
Mathieu practices primarily in corporate law, namely contract law, tax law, corporate law, and banking and finance law.
Mathieu also practices health law (medical law, pharmaceutical law, and forensic medicine).
Therefore, by giving Mr Parreaux payments of legal fees protection
insurance, people would feel they are gaining influence over somebody with the
power of a judge.
Notice in 2021, Mr Parreaux was putting his own name at the bottom
of the renewal invoices sent to clients. In 2022, he changed the
business name to Justicia SA and had one of his employees
put their name at the bottom of the invoice letters.
When thinking about the incredible conflict of interest, it is a
good moment to remember the
story of John Smyth QC, the British
barrister who achieved the role of Recorder, a low-ranking judge,
in the British courts while simultaneously being a Reader in the
Church of England and a prolific pedophile.
After gaining access to client records through the liquidation,
they had unreasonable advantages in using those records during
unrelated litigation.
When FINMA publicly banned Mathieu Parreaux from selling insurance
for two years, they did not make any public comment on his role
or disqualification as a Greffier. Does this mean he can continue
working as a Greffier as long as he does not sell insurance
at the same time?
In the
Lawyer X scandal in Australia, hundreds of judgments had to be
overturned due to a miscarriage of justice. If the Swiss public
were aware of the full circumstances then every judgment involving
Mathieu Parreaux or
Walder Wyss could also be invalidated.
This appears to be one of the reasons for the intense secrecy about
the JuristGate affair.
During my research, I found two other employees of the
legal fee insurance scheme who were also employed in a tribunal
as a Greffier. It looks like there was a revolving door between
the illegal legal insurance scheme and the tribunal.
Is it appropriate for somebody with the powers of a judge to try
and influence the deployment of police resources to suit their
personal circumstances or should they be concerned with distributing
police resources throughout the canton at large?
In the abstract of Benjamin Suter's report, he told us that the
Greffier is meant to help keep the politically-affiliated judges honest.
If the Greffiers are not honest either, the system described by Suter
is a sham.
Imagine for a moment that you are in the middle of a legal dispute
and your brother calls up the Greffier / cat whisperer and asks her
to take his cat for a walk. Hypothetically, he pays ten thousand
Swiss francs for her to interpret his cat and you cross your fingers
and hope that your company's trademark will be granted well-known
status like Coca-cola or Disney.
It needs to be emphasized that the book value of your trademark
increases by millions of francs with a declaration of well known status
under the Paris convention. Any fee that is hypothetically paid for
cat whispering is trivial in comparison to the profit for your
organization.
I will be happy to help you convey the messages you want to send to your
pet and to receive their messages for you, or to help you create an
interior that is conducive to your well-being and that of your loved ones.
In other countries, judges and senior employees of a tribunal
are prohibited from running businesses on the side. When a jury
is deliberating they are usually sequestered in a hotel to prevent
any messages being conveyed through family and the media.
They pretend to be a country and they act like a student union.
I graduated from the National Union of Students in Australia,
traveled half way around the world to Switzerland, I thought
student politics was behind me and I found a bona fide kangaroo court
system at work in the alps.
Here is the letter from the Swiss Intellectual Property Institute
(IGE / IPI) telling the judges, greffiers and cat whisperers that the
so-called Debian "judgment" can not be processed:
The judge
Richard Oulevey sent another letter acknowledging that their
so-called judgment is impossible to follow, in other words, it is on par
with witchcraft.
While in Pentridge Prison's H division in the late 1970s, Read launched a prison war. "The Overcoat Gang" wore long coats all year round to conceal their weapons, and were involved in several hundred acts of violence against a larger gang during this period. Around this time, Read had a fellow inmate cut both of his ears off to be able to leave H division temporarily. ...
In 1978, while Read was incarcerated, his associate Amos Atkinson held 30 people hostage at The Waiters Restaurant in Melbourne while demanding Read's release. After shots were fired, the siege was lifted when Atkinson's mother, in her dressing gown, arrived at the restaurant to act as go-between. Atkinson's mother hit him over the head with her handbag and told him to "stop being so stupid". Atkinson then surrendered.
In 2022, the Transparency International Corruption Perception Index (CPI)
ranked Switzerland at number seven on their list, meaning it is the
seventh least corrupt country based on the methodology used for ranking.
Did Switzerland achieve this favorable score due to genuine attempts to be
clean or due to the effectiveness with which Swiss laws and Swiss culture help
to hide the wrongdoing?
The favorable ranking from Transparency International was reported
widely in the media. At the same time, most media reports also noted
Transparency International's country report card had included caveats about
nepotism, lobbyists and vulnerability of whistleblowers.
When people do try to document the reality, they are sent to prison.
Many multinational companies operate a three hundred and sixty degree
review system whereby employees can give each other feedback. The
human rights activist
Gerhard Ulrich created a web site where Swiss citizens could
write three sixty degree reviews of decisions made by their local judges.
The web site was censored and a SWAT team, the elite TIGRIS unit
was sent to arrest Gerhard Ulrich and take him to prison.
Trevor Kitchen is another well known spokesperson for investors'
rights. In the 1990s Kitchen discovered Swiss people taking credit for his
work and not properly attributing his share. Some time later he
discovered the FX scandal. During Mr Kitchen's retirement in Portugal,
Swiss persecutors used the European Arrest Warrant (EAW) to punish
him from afar. Amnesty International published a report noting
he was subject to physical and sexual abuse by Swiss authorities in 1993
and then using the EAW they tricked the police in Portugal to repeat the
abuse 25 years later in 2018.
By publishing the facts below, I face the same risk of physical and
sexual abuse by corrupt police and lawyerists.
If the Swiss public were fully aware of these details, would
Switzerland still rate so highly on Transparency International's
scale of public perception?
If Transparency International's system can be fooled so easily
by states with criminal speech laws, why doesn't Transparency International
develop a better methodology for ranking corruption?
Every fact I am reporting here can be found using various sources
on the Internet, including the Wayback Machine and the social media
profiles of the various people named below. Yet when these facts are
assembled in the same article they reveal the inconvenient truth about
the Swiss legal system as a whole.
On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.
“We’re proposing to vote judge Yves Donzallaz out of office,� the leader of the party’s parliamentary group Thomas Aeschi has announced.
Loughnan's next chance to win freedom came a year later when
another young criminal, Mark Brandon Read, walked into a courtroom with
his shotgun and kidnapped a judge to have Loughnan released. Read went on to
become one of Australia's most notorious criminals, using the name
Chopper Read. The movie
Chopper helps us get to know him better.
Escape bid: police
28 January 1978
A man who menaced a County Court judge with a shotgun on Thursday
was a "comic character Charles Chaplin would have portrayed sympathetically",
a barrister told Melbourne magistrates court yesterday.
Ironically, Charlie Chaplin was accused of being a communist and fled
the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey
in the Canton of Vaud.
... Read had planned to hold the judge hostage while Loughnan was brought
to the court and given an automatic car and a magnum pistol.
Isn't it remarkable to find the Swiss fascist party
(
SVP / UDC) and
Chopper Read both using the same tactics, kidnapping and blackmailing judges,
to get their way?
Suter had anticipated that moment five years prior in the introduction
of his paper:
The author explains how,
in Switzerland, openly political and other considerations are weighed in
the course of electing judges and how the appointment of lay judges
is balanced with an active role of law clerks (Greffier). In contrast,
New Zealand has a proud tradition of apolitical judicial appointments that
are made solely based on merit. The author criticises that Swiss judges are
elected for a term of office, whereas New Zealand judges
enjoy the security of tenure and thus, a greater judicial independence.
Mr Suter asserts that the judges are effectively an extension
of the political parties and the law clerks (Greffier) take a more active role
to prevent the judges indulging themselves. In fact, the word judge
looks similar in English and French but it is not really the same thing
at all. The term law clerk is used for convenience in English
but it is not really a perfect translation either. The role performed
by a law clerk in an English-derived courtroom is very different
to the role performed by a Greffier in a Swiss courtroom.
Therefore, using the term law clerk is confusing and it is
better to simply refer to them by the native name, Greffier
in French or Gerichtsschreiber in German.
In section IV, appointment of judges, Suter tells us:
The formal requirements to be a federal court judge are scant:
any person eligible to vote, that is to say, anyone over the age of 18
who is not incapacitated, may be appointed as a federal court judge.
In other words, a judge does not need to have a law degree or any
experience working in a court.
Suter goes on
Typically, lay judges will only be part of a panel of judges, together
with judges holding a law degree. It may happen though that a lay judge
must act as a single judge as was the case in X v Canton of Thurgau,
where both the President and the Vice-President of the District Court
had recused themselves. The Federal Supreme Court held that to have
a case adjudicated by a lay judge is not in violation of the right to
a fair trial as long as a trained law clerk participates in the management
of the proceedings and the decision making. The court noted that in the
Canton of Thurgau – as in many other cantons – the law clerk may
actively participate in the deliberations on the judgment.
In Switzerland, it is intended that these lay judges, without
legal qualifications, bring some diversity to the system and avoid
the problem of career jurists ruling over society like royal princes.
In English-speaking countries, trials have a jury and the
people in the jury are non-lawyers.
The judges in Switzerland are appointed by a political party for
a period of four to ten years. Members of a jury in English-speaking
countries are selected randomly and replaced for each new trial.
Both lay judges and juries are alternative ways of bringing non-lawyers
into the decision making process of the tribunal.
The idea that lay judges make the tribunal more in touch with the
community is something of a myth. The judges, including lay judges,
are under some control from their political party. The political parties
are under control from their most significant donors. Look at
Elon Musk and his attempt to create the America Party.
Caroline Kuhnlein-Hofmann was the judge in charge of the civil
court in the Canton of Vaud. In another blog post, I demonstrated how
Kuhnlein-Hofmann is a member of the Green Party along with one
of my competitors,
Gerhard Andrey of the company Liip SA. Moreover, Mr Andrey is
also a politician for the
Green party in the federal parliament.
One of Mr Andrey's employees,
Didier Raboud is another Debian Developer. It is an incestuous web
of corruption indeed.
Look specifically at the
payments from the so-called judge's salary into the Green Party's Swiss
bank account. In Australia, when a politician is elected,
they have a similar obligation to give some of their salary back to
their political party. While this woman is using the title "judge",
she is more like a politician and a servant of her political party.
The payments to the
Green Party demonstrate that she has an obligation
to the party, she has to give them money and judgments. This is not
speculation, the
SVP / UDC party said the same thing very loudly in 2020.
Suter has reminded us again of the importance of the Greffier
to complement the work of the unqualified lay judges. But what if the
judges are not real lawyers and the Greffiers were not trustworthy
either?
Look out for the blind leading the blind.
Suter tells us that the Greffier participates in the deliberations
of the judge or judges. In cases where a single lay judge is hearing
a trial, the Federal Supreme Court requires the Greffier to be involved
in the deliberations. Therefore, the ability for rogue Greffiers to
participate in deliberations would bring the whole system and all
the judgements into disrepute. It all comes down like a house of cards.
In some cantons, law clerks are even allowed to act in place of
judges in some respects, for instance in matters of urgency.
In the Canton of Valais/Wallis, law clerks (Greffier) may
substitute district court judges.
A snapshot of Mathieu Parreaux's biography,
captured by the Wayback Machine, tells us that Parreaux was still
working as a Greffier at the same time that he was selling legal fees
insurance to the public.
Mathieu Parreaux began his career in 2010, training in accounting and tax law in a fiduciary capacity at the renowned Scheizerweg Finance. Following this experience, he held a KYC officer position at several private banks in Geneva, such as Safra Sarasin and Audi Bank.
That same year, Mathieu took up his duties as lead Greffier at the Tribunal of Monthey in Canton Valais, thus expanding the Municipality's conciliation authority.
He also began teaching law at the private Moser College in Geneva.
Mathieu practices primarily in corporate law, namely contract law, tax law, corporate law, and banking and finance law.
Mathieu also practices health law (medical law, pharmaceutical law, and forensic medicine).
Therefore, by paying Mr Parreaux premiums for legal fees protection
insurance, people would feel they were gaining influence over somebody with the
power of a judge.
Notice in 2021, Mr Parreaux was putting his own name at the bottom
of the renewal invoices sent to clients. In 2022, he changed the
business name to Justicia SA and had one of his employees
put their name at the bottom of the invoice letters.
When thinking about the incredible conflict of interest, it is a
good moment to remember the
story of John Smyth QC, the British
barrister who achieved the role of Recorder, a low-ranking judge,
in the British courts while simultaneously being a Reader in the
Church of England and a prolific pedophile.
After gaining access to client records through the liquidation,
they enjoyed an unreasonable advantage by using those records in
unrelated litigation.
When FINMA publicly banned Mathieu Parreaux from selling insurance
for two years, they did not make any public comment on his role
or disqualification as a Greffier. Does this mean he can continue
working as a Greffier as long as he does not sell insurance
at the same time?
In the
Lawyer X scandal in Australia, hundreds of judgments had to be
overturned due to a miscarriage of justice. If the Swiss public
were aware of the full circumstances then every judgment involving
Mathieu Parreaux or
Walder Wyss could also be invalidated.
This appears to be one of the reasons for the intense secrecy about
the JuristGate affair.
During my research, I found two other employees of the
legal fee insurance scheme who were also employed in a tribunal
as a Greffier. It looks like there was a revolving door between
the illegal legal insurance scheme and the tribunal.
Is it appropriate for somebody with the powers of a judge to try
and influence the deployment of police resources to suit their
personal circumstances or should they be concerned with distributing
police resources throughout the canton at large?
In the abstract of Benjamin Suter's report, he told us that the
Greffier is meant to help keep the politically-affiliated judges honest.
If the Greffiers are not honest either, the system described by Suter
is a sham.
Imagine for a moment that you are in the middle of a legal dispute
and your brother calls up the Greffier / cat whisperer and asks her
to take his cat for a walk. Hypothetically, he pays ten thousand
Swiss francs for her to interpret his cat and you cross your fingers
and hope that your company's trademark will be granted well-known
status like Coca-cola or Disney.
It needs to be emphasized that the book value of your trademark
increases by millions of francs with a declaration of well-known status
under the Paris Convention. Any fee that is hypothetically paid for
cat whispering is trivial compared to the profit for your
organization.
I will be happy to help you convey the messages you want to send to your
pet and to receive their messages for you, or to help you create an
interior that is conducive to your well-being and that of your loved ones.
In other countries, judges and senior employees of a tribunal
are prohibited from running businesses on the side. When a jury
is deliberating they are usually sequestered in a hotel to prevent
any messages being conveyed through family and the media.
They pretend to be a country and they act like a student union.
I graduated from the National Union of Students in Australia,
traveled half way around the world to Switzerland, I thought
student politics was behind me and I found a bona fide kangaroo court
system at work in the alps.
Here is the letter from the Swiss Intellectual Property Institute
(IGE / IPI) telling the judges, greffiers and cat whisperers that the
so-called Debian "judgment" can not be processed:
The judge
Richard Oulevey sent another letter acknowledging that their
so-called judgment is impossible to follow, in other words, it is on par
with witchcraft.
While in Pentridge Prison's H division in the late 1970s, Read launched a prison war. "The Overcoat Gang" wore long coats all year round to conceal their weapons, and were involved in several hundred acts of violence against a larger gang during this period. Around this time, Read had a fellow inmate cut both of his ears off to be able to leave H division temporarily. ...
In 1978, while Read was incarcerated, his associate Amos Atkinson held 30 people hostage at The Waiters Restaurant in Melbourne while demanding Read's release. After shots were fired, the siege was lifted when Atkinson's mother, in her dressing gown, arrived at the restaurant to act as go-between. Atkinson's mother hit him over the head with her handbag and told him to "stop being so stupid". Atkinson then surrendered.
My workstations and notebooks are normally set to English, which is not my native language. For currency and similar settings I use de_DE. The date, however, I want in ISO 8601 format with a 24-hour clock, and this is not supported by default. Here is how to get it:
First you need to install the glibc locale source package.
Fedora/RHEL: dnf install glibc-locale-source
SUSE: zypper install glibc-i18ndata
Now we need to create a completely new locale, even though we will only use its LC_TIME part. You do this the following way:
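The command itself is missing from the post; a common shortcut (a sketch, not necessarily the author's exact recipe) is to build the en_DK locale, whose LC_TIME definition already uses ISO 8601 dates and a 24-hour clock:

```shell
# Build en_DK from the locale sources installed above; its LC_TIME part
# uses ISO 8601 (%Y-%m-%d) dates and a 24-hour clock. Usually needs root.
localedef -i en_DK -f UTF-8 en_DK.UTF-8

# Then override only the time settings, e.g. in ~/.profile,
# keeping English messages and de_DE currency:
export LC_TIME=en_DK.UTF-8
```

After logging back in, `date` and other LC_TIME-aware programs use ISO 8601 formatting while every other locale category stays as before.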
Hi, I am Mayank Singh, and welcome back to this blog series on the progress of the new package submission prototype. If you aren't familiar with the project, feel free to check out the previous blog post here.
Event Handling, Forgejo Support, and Source Management (July 8 – July 15)
This week was focused on the service’s forge and tackling the challenge of source management.
Migrating to Forgejo and Handling Events
Based on community feedback, the advantages involved, and an assessment of our requirements, I moved the service's forge to Forgejo. This minimal, open-source alternative to GitHub and GitLab is simpler to self-host and has significantly smoothed out our testing process.
On the implementation front, I added support for parsing issue and push events in packit-service, which allows commands to be parsed from issue comments. With that done, adding support for pull_request events is trivial, and I now have a solid understanding of packit-service's event model for triggering task execution.
Package Source Handling
I hit a technical dilemma when considering how to handle packages that introduce new dependencies in a single Pull Request, and how to handle their sources. The workflow requires accessing the PR's diff, resolving it into individual files, and submitting those sources to be built in COPR.
My initial solution was to create a dedicated organization in Forgejo where every new package would get its own repository to store its sources. However, my mentor advised against this model; we discussed it and realized it would become too complex and unintuitive to work with. Instead, he clarified the path forward: focus on simple packages for now, and investigate how Packit already solves this by cloning the source repository.
What’s Next?
Enhancing Forgejo Integration: Implementing methods to allow the service to post comments and add reactions on Forgejo.
Implementing Source Fetching: Building the logic to fetch source files from Pull Requests for package builds.
Expanding Commands: Adding new commands and tasks to support this workflow.
As part of the 20th anniversary of Fedora-fr (and of the Fedora Project itself), Charles-Antoine Couret (Renault) and Nicolas Berrehouc (Nicosss) wanted to put questions to French-speaking contributors to the Fedora Project and to Fedora-fr.
Thanks to the diversity of the profiles, this shows how the Fedora Project works from several angles: the project beyond the distribution itself, but also how it is organized and designed. Note that on certain points, some remarks also apply to other distributions.
Let us not forget that the Fedora Project remains a worldwide project and a team effort, which these interviews cannot fully reflect. But the French-speaking community is lucky to have enough quality contributors to give an overview of many of the distribution's subprojects.
Today's interview is with Aurélien Bompard (nickname abompard), a developer within the Fedora Project and a Red Hat employee assigned to the Fedora Project, in particular on the infrastructure team.
Interview
Hello Aurélien, can you briefly introduce your background?
My name is Aurélien, I'm a computer scientist, and it was during engineering school that I discovered free software, through a student association. I was quickly hooked and decided to work in that field as much as possible after graduating (in 2003).
I started with Mandrake Linux at the time KDE 2.0 had just been released (and the kernel was at 2.4). Despite all of Mandrakesoft's efforts, running Linux was still not easy back then. It took me two weeks to get my sound card working (a SoundBlaster, no less!), you had to set the monitor refresh rates yourself in the XFree86 config file, and I only had one computer! (and no smartphone, lol). Dual boot and its partitioning gave me a few cold sweats, and all the time spent in Linux was time cut off from the student network. You had to be a little motivated!
But I held on, and I installed free applications on Windows too: Phoenix (now Firefox), StarOffice (now LibreOffice), etc. Because what attracted me was the philosophy of free software, not just the technical side of Linux.
In 2003 I did my final-year internship at Mandrakesoft, but the company was in receivership at the time, which did not offer attractive employment prospects.
After a few applications I was hired at the end of summer 2003 by a free-software services company as a system administrator, installing Linux servers for small businesses.
The contract that had led to my hiring ended prematurely, and since the company also did development around Zope/CPS, I was offered training in Python (version 2.2 at the time, if I remember correctly). I accepted and became a developer. That is when I left Mandrake Linux for Red Hat 9, reckoning it was more relevant to build skills on it for work. At the time there was a small community of packagers publishing additional RPMs for Red Hat 9, around the fedora.us domain.
In early 2004, Red Hat decided there was too much confusion between their commercial offerings for businesses and their free Linux distribution, and decided to split the distribution into Red Hat Enterprise Linux on one side and a community Linux distribution on the other. They hired the founder of fedora.us, gathered the contributors, and launched the Fedora Linux distribution.
The community took quite a while to take shape, and it was fascinating to watch it happen live. We had:
- contributors saying, "look, we've never been so free to do what we want, it's much better than before with RH9, which was pushed out from behind Red Hat's walls without us having any say";
- users of the "other distros" (cough cough Debian cough cough) saying, "you're being exploited by a company, it will never be truly community-driven, Red Hat is turning into Microsoft";
- Red Hat salespeople saying, "Fedora is a beta, it's not stable, whatever you do don't use it in business, buy RHEL instead";
- Red Hat communications saying, "No really, it is community-driven, I promise!"
If you haven't read it and you read English, this fake IRC conversation gave many people a wry laugh at the time.
Anyway, the community eventually grew significantly, Fedora Core and Fedora Extras merged, and so on. My availability to contribute to the project has varied over the years, but I have always used Fedora.
In 2012, a position opened at Red Hat on the team that looks after Fedora infrastructure; I applied and was hired.
Can you briefly describe your contributions to the Fedora Project?
At first I packaged the software I had available in Mandrake Linux but which did not exist in Fedora Extras, plus any nice software I saw passing by. I also did a lot of spec file reviews for inclusion in the distribution.
After being hired by Red Hat, I worked on HyperKitty, the archiving / viewing software for Mailman 3. Then I worked on plenty of other things within the Fedora Infra team, the latest being Fedora Messaging, Noggin/FASJSON and FMN. Today I am the technical lead on the Fedora side of the team (as opposed to the CentOS side), and I work mostly on the applications as a developer, much less on the sysadmin side.
What made you come to Fedora, and what made you stay?
I came because building skills on a Red Hat distribution seemed relevant to my job as a Linux sysadmin.
I stayed because Fedora is, for me, the perfect balance between novelty and stability, while remaining firmly committed to defending free software, even at the cost of a few complications (mp3, the Nvidia drivers, etc.).
Why contribute to Fedora in particular?
Because I use it. I think it is a constant in my life: I rarely remain a mere user/consumer; I usually end up contributing to whatever I use, or to the associations I belong to.
Do you contribute to other free software projects? If so, which ones and how?
In my free time I have developed some software for associations I take part in, and it is always free software. The most recent is Speaking List (AGPL license).
Are your contributions to Fedora made entirely as part of your job? If not, why?
Before, no, because I packaged tools I used personally (Grisbi, Amarok, etc.). Now, yes.
Does being a Red Hat employee give you additional rights or opportunities within the Fedora Project?
Yes. I would prefer it were not so, but it is certain that I am closer to where decisions are made, and I have more direct access to influential people and to upcoming changes. It is not a "right" in the strict sense, which would be granted to me on the basis of my employer rather than my contributions, thankfully. But let us say I am immersed in it all day long; I think that opens up more opportunities than when I was an "external" contributor.
You have been a member of Fedora Infrastructure; can you explain the importance of this team for the distribution? Which services did you maintain?
This team is absolutely indispensable. There are constantly problems cropping up in the distribution: things break down, new services need integrating, old applications no longer work on new distributions or new services and have to be ported, and so on.
I started on Mailman / HyperKitty but have diversified since; I would say that today I concentrate on the application side: maintaining our apps, porting, adapting, evolving them, etc. The latest applications I have worked on are Fedora Messaging, Noggin/IPA (authentication), Datanommer/Datagrepper, FMN (notifications), MirrorManager, and more recently Badges.
You contributed a great deal to Mailman and HyperKitty for the project's mailing lists. What did you do? Was the migration difficult? How important are mailing lists today within the Fedora Project?
It was my first assignment after being hired by Red Hat, yes. I developed HyperKitty, following the interface design work done by Mo Duffy. I also worked on Mailman 3 itself whenever its development was blocking me on HyperKitty or on deploying the whole stack. I wrote a migration script that worked fairly well, I think, considering the long history of the project's mailing lists. It is part of HyperKitty and will now be used to migrate the CentOS lists.
The topic of mailing lists has almost always been rather contentious in Fedora. Some fifteen years ago, before I was hired to work on HyperKitty, our community was already split between the fairly regular contributors who used the lists, and the users and occasional contributors who preferred web forums. Using a mailing list does demand more commitment than a forum: you have to subscribe, set up filters in your mail client, manage your mailbox space accordingly; it is impossible to reply to a message sent before you subscribed, you cannot edit your messages, and so on. When you just want to ask a quick question or give a quick answer, forums can be more practical and more intuitive. Mailing lists can be intimidating and counter-intuitive: how many people have sent "unsubscribe" to a list while trying to leave it?
The promise of HyperKitty was to offer a forum-style interface on top of the mailing lists, to bridge the two communities, making it easier to turn users into contributors while exposing contributors more directly to the problems users encounter. That did not work out well, but the topic remains current today with the integration of Discourse into the project (discussion.fedoraproject.org). I believe the project is trying to migrate more and more processes from the mailing lists to Discourse, so that they reach the maximum number of users and contributors.
And then there is the single account within the Fedora Project (called FAS): what is the importance of this project, and what did you do on it?
It is a project we kept in the drawer for a long time, maybe even too long. The idea was to replace FAS (the Fedora Account System), a user database with a homegrown API, with FreeIPA, an integration of LDAP and Kerberos for managing user accounts in an enterprise. We were actually already using IPA for the Kerberos part of the infrastructure, but the authoritative account database was FAS. However, FAS was no longer maintained, and its rewrite by a community member (a Frenchman! a little nod to Xavier in passing) was taking a bit too long. FAS ran on EL6, and end of life was approaching.
Migrating the account database to IPA was quite complex because many applications integrated with it, so everything had to be converted to the new system. Since IPA was designed for enterprises, it has no self-registration or advanced self-service account management. We therefore had to develop that interface ourselves, which is called Noggin. We also wrote a REST API for IPA, called FASJSON. Finally, we had to customize IPA so that it stores the data we need in the LDAP directory.
I was a developer and the technical lead on this project, so I mostly concentrated on the design and on the tricky implementation points.
You are one of the major contributors to the Bodhi component, and to the package build infrastructure in general. Again, what was your role there, and why are these components important for the project?
Bodhi really is at the heart of the life cycle of an RPM package in Fedora. It is also one of the only infrastructure applications significantly maintained by a community member who is not a Red Hat employee (Mattia). It lets packagers propose a package update, integrates with the package testing components, and allows commenting on an update.
I have only worked on it in a limited way, since the departure of Randy (bowlofeggs), who maintained it before. I converted the authentication system to OIDC, wrote the integration tests, and worked a little on continuous integration, but that is all.
Can you quickly explain the architecture behind this machinery?
Well, in short: when packagers want to propose an update, they update the spec file in their repository, launch a package build with fedpkg in Koji, and then have to declare the update and provide its details in Bodhi. That is when the package integration tests kick in, and after a certain time (or a certain number of positive comments) the update lands on the mirrors.
You also worked a lot on Fedora-Hubs. Can you recall the ambitions of that project? Why was it ultimately not adopted and carried through as planned?
The goal of Fedora Hubs was to centralize information from various Fedora applications on a single page, with an interface that clearly explains what it means and what the next steps are. A sort of dashboard for contributors, not just for packagers, somewhat in the spirit of what https://packager-dashboard.fedoraproject.org/ does today.
Unfortunately the proposal came at a time when there were other, more urgent priorities, and since it was quite a lot of work, we dropped it to take care of the rest.
Are there collaborations on infrastructure between the RHEL, CentOS and Fedora projects, or even other external entities?
Yes, we try to share as much as possible! The authentication system is shared between CentOS and Fedora, for example. We try to exchange on our Ansible roles, on infrastructure monitoring, and so on.
If you could change one thing in the Fedora distribution or in the way it works, what would it be?
I would love to see more contributors participating in the infrastructure as well, and especially in our applications. In fact, I am currently looking for ways to motivate people to get their hands into it. It is really interesting, and you can directly affect the lives of thousands of contributors to the project! I am even willing to put energy into it if needed, in the form of presentations, workshops, Q&A sessions, etc. So let me ask everyone: if Fedora interests you, if development interests you, what is holding you back from contributing to the infrastructure applications?
Conversely, is there something you would want to preserve at all costs in the distribution or in the project itself?
I think it is our capacity to innovate, to offer the latest developments in free software.
What do you think of the Fedora-fr community, its evolution and its current situation? What would you improve if you had the chance?
To be honest, I have not followed the evolution of the French-speaking community closely. My job leads me to communicate almost exclusively in English, so I interact more with the English-speaking community.
Anything to add?
No, nothing special, except to come back to my question: if you have ever wanted to improve the Fedora infrastructure and/or its applications, what held you back? What is holding you back today? Do not hesitate to contact me on Matrix (abompard@fedora.im) and on Discourse.
Thank you for your contribution!
Thank you, and happy birthday to Fedora-Fr!
Conclusion
We hope this interview helped you learn a little more about the Fedora-fr site.
If you have questions, or if you would like to participate in the Fedora Project or Fedora-fr, or simply use it and install it on your machine, do not hesitate to discuss it with us in the comments or on the Fedora-fr forum.
And so ends our series of interviews. We hope you enjoyed it, and perhaps we will see you again in a few years to find out what has changed.
Last year, I wrote a small configuration snippet for syslog-ng: a FreeBSD audit source. I published it in a previous blog post and, based on feedback, it is already used in production. Soon it will also be available as part of a syslog-ng release.
As an active FreeBSD user and co-maintainer of the sysutils/syslog-ng port for FreeBSD, I am always happy to share FreeBSD-related news. Last year, we improved directory monitoring and file reading on FreeBSD and macOS. Now, the FreeBSD audit source is already available in syslog-ng development snapshots.
If you already use the FreeBSD audit source, you only need one small change in your configuration: as the snippet is now part of SCL (the syslog-ng configuration library), you no longer need to carry your own copy of it.
Development snapshots of syslog-ng are not part of FreeBSD ports, but you can compile them yourself with a little effort; two of my earlier blog posts contain the necessary information.
Each commit to the syslog-ng git repository is tested on FreeBSD. I regularly test syslog-ng on FreeBSD when I update my ports repo. However, obviously I cannot test all possible combinations of the syslog-ng configuration, so any testing and feedback is very welcome!
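For reference, here is what usage looks like once the snippet ships in SCL. This is a minimal sketch, assuming the SCL block is named freebsd-audit(); check scl.conf in your snapshot for the exact name and parameters:

```
@version: current
@include "scl.conf"

# Assumed SCL source name; verify against your snapshot's scl.conf
source s_audit { freebsd-audit(); };
destination d_audit { file("/var/log/audit_trail.log"); };
log { source(s_audit); destination(d_audit); };
```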
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Shortly before I was born, Fr O'Connell had been convicted of
harboring an escaped prisoner. Fr O'Connell made an appeal and the conviction
was overturned.
Even more significant, the story shows us that the church explained
their philosophy of forgiveness and redemption to the judge and to the
media in 1977. Journalists traveled out to
Coburg to meet Fr O'Connell
and the story appeared in news reports all around Australia.
In Melbourne, the story and the philosophy was published on the
front page of the newspaper under the heading
"I'd do same again, says cleared priest".
The public and the court had the opportunity to ask Fr O'Connell
how the church would handle a more dangerous criminal, for example,
somebody convicted of abuse. Nobody asked these questions.
These observations don't exonerate institutions for
their failings.
However, if the wider public had this opportunity to examine the
philosophy in 1977 then society at large has to share some responsibility
for failing to scrutinize religious institutions.
The Age, 14 January 1977 gives us a view into Australian
society and prison life in 1977, with letters about police tactics
and abuse inside the prison.
Police behavior is the real scandal
Police go gay to lure homosexuals
The recent arrests of homosexuals by police in the Black Rock
- Sandringham area and the type of "poofter bashing" mentality underlying
them is cause for serious public concern.
...
Peeping-tommery
It was with some humor that I read the report of policemen
baiting homosexuals at Black Rock. ...
...
Deplorable crime
The Minister for Social Welfare, Mr Dixon, confesses he is unaware
whether or not a pack rape victim in J Division of Pentridge needed
medical attention ("The Age," 6/1).
The victim endured these rapes for three successive nights without
any restraint from those supposed to be in authority. Is Mr. Dixon
concerned enough about such serious lapses of supervision to
investigate the causes and reasons why such deplorable crime can
exist in Pentridge?
Such victimisation between prisoners reads like the worst excesses
of the early penal times of Tasmania.
Do we need another Elizabeth Fry to reform the prisons of the
Hamer Government?
The chief of the prison resigned to pursue a PhD. The same newspaper
interviewed him too. On 19 April 1977 they published it prominently
at the top of page 3 with the headline "Pentridge shouldn't be
escape-proof, says its former chief".
Pentridge prison is not escape-proof - and should not be
escape-proof - according to the prison's former superintendent,
Mr. John Van Groningen.
...
"If they knew the prison really was escape-proof, prisoners wouldn't
be able to lie in bed at night and dream of how they could get out,"
he said.
"I'm serious. The vast majority of prisoners have these sorts of
dreams and I believe they are healthy and therapeutic for them.
It looks like the prisoners had a lot of respect for the former
jailmaster. People threw copies of the newspaper over the walls for
prisoners to read the interview. Three weeks later, dreams came true
when somebody threw a rope over the wall and three prisoners escaped.
According to reports, one of the escapees, James Richard Loughnan,
broke both of his legs and hid behind the church beside the prison while
his fellow escapees, Allan Martin and Peter Dawson, made their getaway.
The reports don't tell us if the prisoners were wearing uniforms
or if the sirens were activated to warn the community about an escape.
Therefore, Fr O'Connell may not have had any hints that an escape
had transpired or that the man he was about to meet was one of the
suspects.
Fr O'Connell was leaving in his car and he came across Loughnan on
the ground. Loughnan claimed he had been injured in a traffic
accident and asked for transport to get medical assistance. Fr O'Connell
obliged.
During the ride in Fr O'Connell's car, Loughnan asked to make
confession. Fr O'Connell, like all Catholic priests, is unable to
tell anybody what was said during confession. Nonetheless, it seems
that he did come to realize he was transporting an escaped prisoner.
On 11 May 1977 The Age published a photo of Loughnan being
carried into court by police.
A few weeks later, the magistrate's court convicted Fr O'Connell
for harboring an escaped prisoner. Fr O'Connell's punishment was a six month
good behavior bond. The punishment seems bizarre as a priest is already
meant to be a model of good behavior for everybody else.
Catholic Church defends right of priests to keep confidences
15 June 1977
The Roman Catholic Church yesterday defended the right of its priests
to keep confidences after a Coburg priest was found guilty of harboring
a Pentridge escaper.
...
"I was simply acting as a priest helping a man who was injured with no
thought of harboring or breaking the law."
Nonetheless, Fr O'Connell lodged an appeal in the County Court.
The judge decided that Fr O'Connell had not offered the prisoner shelter,
therefore, the transport by car to a hospital
could not be enough to justify a conviction for harboring.
Fr O'Connell was liberated from the obligation of good behavior.
I'd do same again, says cleared priest
16 August 1977
A Roman Catholic priest cleared of harboring a Pentridge escaper
said last night he would act exactly the same way if the situation
arose again.
...
Father O'Connell, 36, of St. Paul's Church, Coburg, yesterday won
an appeal in the County Court against a conviction and six-month
good-behavior bond for harboring an escaper on May 9.
...
"People have to trust us. It's a trust which has been won over years
and years. I felt that principle was at stake," he said.
He said he had taken confession from Loughnan and was therefore
bound not to tell the police of his whereabouts.
A few weeks later, another news report appeared about escape
dreams.
SM told jail had a sex hideaway
13 September 1977
A secret cubby-hole discovered at Pentridge was used for private
homosexual acts and to hide contraband goods, Melbourne Magistrates
Court was told yesterday.
The court was told the hole was behind a false wall in a stationery
cupboard.
...
"We needed a place to go to where nobody would see us," ...
Meike Reichle is the next case in the
Debian pregnancy cluster. In most voluntary organizations, there is
some privacy for the family lives of volunteers. Meike chose to share
details on the mailing list with over a thousand strangers so
we can talk about it in a general sense here.
Under copyright law, the money raised for a work of joint authorship
is to be divided up equally between every co-author. The law is
very clear on this point. At Columbia University in New York,
the Kernochan Center for Law, Media and the Arts publishes
a page about joint works (Co-Authorship) with this advice:
"in the case of two co-authors, and absent an agreement to the contrary, a right to an accounting for 50 percent of the proceeds of the exploitation of a given work"
In 2023, when Abraham Raji went to DebConf23, he did a huge amount of
work as an unpaid volunteer, including the design for the DebConf23 logo.
When he arrived at the day trip, everybody was asked to contribute some
of their own money. Abraham Raji didn't put in any money, he was left
alone without a life jacket and he drowned.
If we followed the advice from the experts at Columbia University we
would divide the sum of $32,000 between all 1,720 joint authors.
Each person (or their estate if they are dead) would receive
$18.60. If some of the co-authors want to contribute
their money to a fund for diversity then they can do so. Each co-author
must make a personal decision whether they put their share of the money in
the diversity fund or whether they keep it for themselves.
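For concreteness, the split described above can be checked with a one-liner, using the $32,000 and 1,720 figures quoted in this post:

```shell
# An equal split of $32,000 among 1,720 joint authors:
awk 'BEGIN { printf "%.2f\n", 32000 / 1720 }'
# → 18.60
```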
When the GNOME foundation created
Outreach Program for Women in 2006,
a little bit less than two percent of Debian Developers were female.
When Debian decided to start contributing money to the program in 2013,
the percentage of women was still about two percent.
Today, in 2025, twelve years after Debian started contributing
authors' money to the Outreachy internships, we still have less than
two percent women.
Out of the entire history of the program, only one of the women,
Ulrike Uhlig, became a Debian Developer. She participated for a couple
of years and then she quit.
Therefore, what is Debian receiving in exchange for this money? Or
what are the men in charge hoping to receive in exchange
for $32,000 per year?
When we put the events in the correct order and look at the evidence
about the
Debian pregnancy cluster it becomes very clear.
The next pregnancy we look at is the email from Meike.
GNOME launched
Outreach Program for Women and in the same year
we saw Meike Reichle at DebConf6. Here is a snippet of the video:
Hi everyone, I am Mayank Singh, currently working on a new service for simplifying the Fedora Package Submission Process. If you’d like to know more, check my previous post here.
Diving Deep into Packit Service
(27 June – 8 July):
I began working on the packit-service codebase as the foundation for our project. The first goal was to prototype the user flow by creating new APIs and handlers for functionalities like detecting new packages and linting.
Pretty early on, I hit a roadblock during a test run. When the service was deployed to listen for GitHub events, it would reject any incoming events sent through the tunnel to the local deployment. After a lot of digging, I traced the issue to the Apache configuration in the mod_wsgi-express server. This server, responsible for serving the Flask-RESTx endpoints, was misbehaving and causing all the trouble.
Another hiccup was that the service was too heavy for my system to run locally in an OpenShift environment with GitHub. My mentor stepped in and suggested a helpful workaround: disable the services unnecessary for our use case and use GitLab instead in plain Docker containers, as it’s much easier to spin up and test locally. I also reported a few other problems in the development deployment process regarding Bitwarden for secrets.
With those issues resolved, I went ahead and trimmed the parts of the packit-service codebase that weren’t needed for onboarding new packages. This helped me better understand its event model and its use of Celery for task execution.
This week was mostly about reusing the existing packit-service codebase and resolving issues.
What’s Next?
With the hard parts of setup and architecture done, the next steps would be to:
Add new API endpoints and corresponding event types for task handling
Integrate the current setup with COPR for builds.
Begin work on testing and validation workflows
Stay tuned for more updates in the next blog post!
K9s is a powerful, user-friendly terminal (CLI-based) tool that lets you manage and monitor Kubernetes clusters through a simple, fast interactive interface. It is designed for developers, DevOps engineers and system administrators who want to view, filter and control Kubernetes resources easily, without needing complex kubectl commands. […]
Another week to recap, much of it fixing up things after the datacenter move.
Datacenter Move
The move is done, but there was a lot of fixing things up and sorting out
issues this last week. Mostly to be expected, I guess, and luckily none of them
were too bad. A short list (there were many more):
Fixed email sending from bodhi. Our new openshift cluster defaults to
a subdomain (ocp.fedoraproject.org) for dns search, so we needed to make sure
the SMTP host was a FQDN. (It was DNS!)
Some various fixes for riscv builders.
fedora.im matrix server stopped federating. It was working fine, but
only people on fedora.im could see messages. Turned out to be an endpoint
that was still pointing to the old datacenter (the .well-known/matrix/server
uri). (It was DNS!)
Various small firewall changes to allow things.
Updates compose failures (still not completely solved, but worked around).
I think it may well be a TCP timeout between VLANs.
Cleaned up our dns to remove old DC (and also the one before!).
Amazingly, nothing broke due to this that I can tell yet.
Fixed incoming @fedoraproject.org emails to flow again.
Fixed a db issue on src.fedoraproject.org that was causing forks to sometimes
not work.
pkgs.fedoraproject.org ssh host key changed and the sshfp records
were not entirely right at first. Hopefully this is sorted out now
and everyone is able to verify them (or just use https pushing).
Logins were not working on a few things (fedocal, etc). Should be fixed now.
eln composes were not syncing out. Easy fix (missing a mount).
A bunch of fixes to get nagios more green.
We also shut down all the hardware in the old datacenter; the stuff we were
saving has been deracked, packed, shipped, unpacked, racked and networked.
We still need to work some more with folks there to bring things all back
online, and then it's just a matter of reinstalling them and adding them in.
Most of this is destined to be buildhw builders or openqa worker hosts
(i.e., added capacity).
Overall things are getting back to normal. Hopefully everyone else feels that too.
Upcoming things
With the datacenter move finally behind us, we should hopefully be able to
start working on some backlog in the coming weeks. In particular I'd like us to
look at anubis or some other AI/scraper mitigation. So far we are handling
things, but they could be back soon... and in greater numbers.
Some other things I want to work on in the coming months (in no particular order):
revamp our backups. We are currently using rdiff-backup, which is fine, but
moving to restic or borg might give us some nice advantages.
replace our openvpn setup with wireguard
Update from using network_connections to network_state in linux-system-roles/network
Power10 reconfiguration with vHMC/lpars.
iscsi volume for power10 (this will be likely next week)
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 7th – 11th July 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
I’ve been using Fedora Linux for over a decade, and upgrading the OS is a routine process for me. I started an upgrade about 36 hours ago on my personal laptop, expecting it to be smooth—just like the past few years. However, this time, things went sideways. I encountered an error stating that there was no space left on the device. That was odd because I always double-check disk space before performing any upgrades on my laptop or servers. So, how big was this upgrade?
Unfortunately, my laptop was completely unbootable. To troubleshoot, I burned a live distro onto a USB drive, booted from it, and mounted my disk. Running df -h confirmed that the available space was indeed 0:
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 237G 190G 0 100% /
The available space was 0, but mathematically, 237G – 190G should leave about 47G free. So, where did my disk space disappear?
The Root Cause: Btrfs Metadata Exhaustion
A few years ago, Fedora recommended using Btrfs as the default filesystem. I accepted the recommendation, read a bit about it, and moved on. What I didn’t realize was that Btrfs stores filesystem metadata separately from data, and df -h only reports the data partition’s space usage. I had actually run out of metadata space.
To check Btrfs usage, I ran the following command, replacing / with the correct mount path from my live USB session:
$ sudo btrfs filesystem usage /
Btrfs reserves about 0.5GB for metadata changes, and when this space is exhausted, the system effectively runs out of space. The solution? I needed to balance Btrfs chunks to reclaim space.
Reclaiming Space with Btrfs Balance
Running the following command helped me reclaim space by consolidating underutilized chunks:
$ sudo btrfs balance start -dusage=5 /
This command finds all chunks that are less than 5% full and rewrites them into new chunks, freeing up space. I incrementally increased the dusage value up to 50%, which freed about 15-20 chunks.
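The incremental approach can be sketched as a loop. The btrfs invocations are echoed rather than executed here, since a real run needs root and a mounted btrfs filesystem (the `-dusage=N` filter is the standard balance option):

```shell
# Incrementally raise the dusage threshold; each pass rewrites data chunks
# that are under N% full. echo is a stand-in for actually running the
# command, which needs root and a mounted btrfs filesystem.
for pct in 5 10 20 30 40 50; do
  echo sudo btrfs balance start -dusage="$pct" /
done
```

Raising the threshold gradually keeps each balance pass cheap and lets you stop as soon as enough space is freed.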
However, if metadata is completely full, the balance operation won’t work because it needs some free space to operate. To fix this, I created temporary space by adding a loopback device, along these lines (with the backing file on the live USB’s filesystem, not the full disk):
$ truncate -s 2G /tmp/btrfs-temp.img
$ sudo losetup -f --show /tmp/btrfs-temp.img
$ sudo btrfs device add /dev/loop0 /
(with a matching btrfs device remove once the balance finished)
Once the loopback device was added, I retried the balance command, and this time, it worked perfectly.
Fixing the Broken Fedora Upgrade
With space freed up, I removed all USB drives and attempted to boot my laptop. As expected, it didn’t boot properly; instead, it showed a screen with a mouse pointer replaced by an “X.” Luckily, I was able to switch to a TTY session using Ctrl+Alt+F3.
I attempted to synchronize my system with the new release:
dnf distro-sync --releasever=41
However, this threw an error:
Problem: The operation would result in removing the following protected packages: sudo
Since my system was already broken, I decided to override the protected package list for this one transaction:
dnf distro-sync --releasever=41 --setopt=protected_packages=
This is the story of how I became a contributor on the Fedora Release Schedule Planner application hosted on Codeberg.
I started my open-source journey when I got my first laptop. It was old and slow, but I needed it for school, so I started looking into how to fix this.
My Introduction to Linux
While looking around for a solution I found Linux, a free operating system that runs on almost anything.
This led me to dip my toes into the Linux world, starting with Puppy Linux. This tiny distro was perfect for my old laptop.
The experience was amazing; I was blown away at how fast and efficient my laptop became. This initial success sparked my interest in learning more about Linux and open source software.
Discovering Open Source Software
Since I was a Windows user before, I had no idea about open source software. I was also broke, so all I knew was the endless cycle of looking for cracked software. Using pirated software honestly felt wrong to me.
Playing around on the small Linux distro had me discovering a wealth of free and open source software: everything from GIMP for image editing to LibreOffice for productivity and Blender for 3D modeling, all freely available. This was liberating; I no longer had to rely on shady sources for software.
Digging further made me realize that I could actually contribute to this free software. The notion that I could not only use but also help improve the software I loved was incredible.
Getting Involved in the Fedora Project
Thanks to how easy Linux makes learning programming, I learned a few languages. This eventually led to me being able to actually make contributions.
I also distro hopped a lot and found myself using Fedora as my daily driver. Been using it for a while now and love how stable and polished it is.
Anyway, once I saw that Outreachy was looking for Fedora contributors I decided to apply. There were a lot of skilled contributors to the project I had applied to, but I was determined.
I managed to scrape through and get in as an intern. I am now working on improving Fedora’s release schedule management by transitioning to something more functional and efficient.
The plan is to learn as much as I can from my experienced mentor, Tomas Hrcka. Along the way, engaging with all the other open source contributors will provide a great deal of knowledge and connections.
I hope to learn from this experience contributing to Fedora so I can get others involved with open source software.
Over the past few months I’ve spent some time on-and-off working on
Sigul and some related tools. In particular, I
implemented most of a new Sigul
client,
primarily to enable the sigul-pesign-bridge to run on recent Fedora releases
(since the sigul client relies on python-nss, which is not in Fedora anymore).
At this point, I have a reasonably good understanding of how Sigul works.
Originally, my plan was to completely re-implement the client, then the bridge,
and finally the server using the existing Sigul protocol, version 1.2, as
defined by the Python implementation. However, as I got more familiar with the
implementation, I felt that it would be better to use this opportunity to also
change the protocol. In this post I’m going to cover the issues I have with
the current protocol and how I’d like to address them.
In protocol version 1.2, the client and server start “outer” TLS sessions with
the bridge, and then the client starts a nested “inner” TLS session with the
server. Data is sent in chunks which indicate how big the chunk is and whether
it’s part of the “outer” session (and destined for the bridge) or the “inner”
session. While it’s perfectly doable to parse the two streams out, it’s a
complication. Maybe we can introduce some rules to make it easier?
After looking at the implementation, every command follows the same pattern. The client would:
Open a connection to the bridge and send the bridge the command to pass on to the server.
Open the inner TLS session and send some secrets to the server (a key to use
for HMAC and a key passphrase to unlock a signing key, typically).
Close the inner TLS session.
Send HMAC-signed messages to the bridge, which it relays to the server.
Receive HMAC-signed messages from the server, via the bridge.
Critically, the inner TLS session was only used to exchange secrets so the
bridge couldn’t see them, and was never used again. One option would be to only
allow the inner TLS session once, right at the beginning of the connection.
However, the whole point of the HMAC-signed messages is that the client and
server don’t seem to really “trust” the bridge won’t have tampered with the
messages. Why not just use the inner TLS session exclusively so that we get
confidentiality in addition to integrity?
Homegrown serialization
When Sigul was originally written things like JSON weren’t part of the Python
standard library. It implemented its own, limited format. Now, however, there
are a number of widely supported serialization options. JSON is the obvious
choice as it is fairly human-readable, ubiquitous, and fairly simple. The
downside is that for signing requests, the client needs to send some binary
data. However, we explicitly do not want to be sending huge files to be signed,
so base64-encoding small pieces of binary data should be acceptable.
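As a purely illustrative sketch, and not the actual wire format, a JSON-serialized signing request with a small base64-encoded binary field might look like this (all field names and the key name are hypothetical):

```shell
# Build a hypothetical JSON signing request; the field names and key name
# are illustrative only, not taken from the real Sigul protocol. The binary
# RPM header is stood in for by a short string and base64-encoded.
header_b64=$(printf 'fake-rpm-header-bytes' | base64)
cat <<EOF
{
  "command": "sign-rpm",
  "key": "fedora-signing-key",
  "rpm-header": "$header_b64"
}
EOF
```

The point is simply that a few kilobytes of binary header data survive base64-encoding into JSON without trouble; it is only whole-file payloads that would be wasteful.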
The bridge is too smart
The bridge includes features that are not used, and that complicate the
implementation.
Fedora Account System integration
The bridge supports configuring a set of required Fedora Account System groups
the user needs to be in to make requests. It checks the user against the
account system when it connects by using the Common Name in the client
certificate as the username.
However, this feature is not used by
Fedora,
and since Fedora is probably the only deployment of this service, we probably
don’t need this feature.
The bridge alters commands
For most commands, the bridge shovels bits between the client connection and
the server connection. However, before it shovels, it parses out the requests
and responses before forwarding them. There’s really only one good reason for
this. Two particular commands may alter the client request before sending it
to the server. Those two commands are “sign-rpm” and “sign-rpms”.
In the event that the client requests a signature for an RPM or multiple RPMs
and the request doesn’t include a payload, the bridge will download the RPM
from Koji. Now, the bridge doesn’t have the HMAC keys used in other commands to
sign the request, so the client also contacts Koji and includes the RPM’s
checksum in the request headers.
This particular design choice might have been done to save a hop when
transferring large RPMs, but these days you don’t need to send the whole RPM,
just the header to be signed.
It’s confusing to have the bridge take an active role in client requests.
What’s more, if we push this responsibility to the client, there’s no reason
the bridge needs to see the requests and responses at all.
Usernames
As noted in the Fedora Account System integration section, the bridge uses the
client certificate’s Common Name to determine the username. However, all
requests also include a “user” field.
The server checks that the request username matches the client
certificate’s Common Name, unless the lenient_username_check configuration
option is set (which disables the check entirely), or the Common Name is
listed in the proxy_usernames configuration option, which lets that Common
Name use whatever username it wants.
The Fedora
configuration
doesn’t define either of those configuration options, and it’s confusing to
have two places to define the username.
Passwords
Users have several types of passwords.
Users are given access to signing keys by setting a user-specific passphrase
for each key. This passphrase is used to encrypt a copy of the “real” key
password, so each user can access the key without ever knowing what the key
password is.
However, each user also has an “account password” which is only needed for
admin commands, and only works if the account is flagged as an admin. Given
that the client certificate can be password-protected it’s not clear to me that
this adds any value, but it is confusing.
Summary
To summarize, the major changes I’m considering in a new version of the Sigul protocol are:
All client-server communication happens over the nested TLS session. The
client and server no longer need to manually HMAC-sign requests and responses
as a result.
The bridge is a simple proxy that authenticates the client and server
connections via mutual TLS and then shovels bits between the two connections
without any knowledge of the content. Drop the Fedora Account System
integration, and push the responsibility of communicating with Koji to the
client.
Switch from the homegrown serialization format to JSON for requests and responses.
Rely exclusively on the client certificate’s Common Name for the username.
Remove the admin password from user accounts as they can password-protect
their client key and also use features like systemd-creds to encrypt the key
with the host’s TPM.
As syslog-ng 4.9.0 is not yet released, you need to run a development snapshot of syslog-ng to test this feature. You can compile syslog-ng from source, but luckily there are many other options available as well, especially if you want to run syslog-ng on Linux or FreeBSD. I collected these in a recent blog at https://www.syslog-ng.com/community/b/blog/posts/a-call-for-testing-the-upcoming-syslog-ng-releases. You also need Prometheus. I used the version available in openSUSE Leap as a package in the distribution.
Configuring syslog-ng
Append the following configuration snippet to your syslog-ng.conf or create a new .conf file under the /etc/syslog-ng/conf.d/ directory, if your syslog-ng deployment is configured to use it.
After reloading Prometheus and syslog-ng, I could see syslog-ng statistics on the Prometheus web interface and even create some basic graphs for its values.
What is next?
Please share your experiences with us! Reporting issues helps development, especially before a release. However, we are also very happy to hear positive feedback and to learn that one of our long-requested features is actually being used :-) You can do both at https://github.com/syslog-ng/syslog-ng/
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Now, with my Altra-based system, I decided to try it again.
Hardware used
Compared to the Pinkiepie (the Mustang), Wooster (the current system) looks beefy:
| component | Pinkiepie | Wooster |
|---|---|---|
| cpu model | X-Gene 1 | Altra Q80-30 |
| core model | X-Gene 1 | Neoverse-N1 |
| core arch | v8.0 | v8.2 |
| core count | 8 | 80 |
| memory speed | 1866 MHz | 3200 MHz |
| memory amount | 16 GB | 128 GB |
| storage | SATA SSD (500 MB/s) | PCIe 4.0 NVMe (6200 MB/s) |
| graphics card | Radeon HD5450 | Radeon RX6700XT |
| resolution | 1920x1080 | 3440x1440 |
Both systems ran the latest, stable release of Fedora with the KDE desktop.
Generic use
I started by rsyncing my home directory from Puchatek (my x86-64 desktop) to
Wooster (the AArch64 system) to have the same environment in both places. Of
course, I had to replace a few binaries in the ~/.local/bin directory with their
AArch64 equivalents, and I regenerated some Python virtual environments.
The desktop worked as before, Thunderbird fetched mail and sent it, files could
be edited in Neovim-qt as before etc.
Films from the local NAS share worked just fine using the same “mpv” as on Puchatek.
Multimedia online
But then you realise that it would be nice to listen to some music. For several
reasons, I am using Spotify for this. And their app is x86-64 only…
Firefox refused to play anything. So did Chromium. I installed the
“widevine-installer” package, ran one command and, thanks to binaries from
ChromeOS, both web browsers started playing. But Firefox kept stopping after each
song, so I had to revert to Chromium for it. Widgets on the KDE Plasma desktop
recognised it, and I had information and playback controls embedded in the top panel.
Films on streaming services
But what about films on streaming services? Well, let me create a table, as I was
surprised by the results:
| Streaming service | Firefox | Chromium |
|---|---|---|
| Amazon Prime Video | Works | Works |
| Disney Plus | Works | Works |
| Max | Works | Works |
| Netflix | Fails (E100) | Fails (E100) |
| YouTube | Works, up to 4320p | Works, up to 1440p* |
Chromium
For Chromium, it depends on which build you are using. I used the Fedora
package and then was pointed to the Flathub build of
Chromium as the better one.
Flathub’s Chromium plays 2160p videos on YouTube and does not offer an option to
choose higher resolutions, so I could not test 4320p ones.
Again, let me make a table of the “Graphics Feature Status” information:
| Entry | Fedora build | Flathub build |
|---|---|---|
| Canvas | Software only | Hardware accelerated |
| Direct Rendering Display Compositor | Disabled | Disabled |
| Compositing | Software only | Hardware accelerated |
| Multiple Raster Threads | Enabled | Enabled |
| OpenGL | Disabled | Enabled |
| Rasterization | Software only | Hardware accelerated |
| Raw Draw | Disabled | Disabled |
| Skia Graphite | Disabled | Disabled |
| TreesInViz | Disabled | Disabled |
| Video Decode | Software only | Hardware accelerated |
| Video Encode | Software only | Software only |
| Vulkan | Disabled | Disabled |
| WebGL | Software only | Hardware accelerated |
| WebGL2 | Software only | Hardware accelerated |
| WebGPU | Disabled | Disabled |
| WebNN | Software only | Disabled |
WebGL
How does the 3D hardware acceleration situation look? I tested it with
the WebGL Aquarium.
| Amount of fish | Firefox | Chromium/Fedora | Chromium/Flathub |
|---|---|---|---|
| 1 | 75 | 18 | 75 |
| 1000 | 75 | 9 | 75 |
| 5000 | 42-71 | 4 | 29-37 |
| 10000 | 33-39 | 1 | 15-17 |
I do not remember the numbers I got with the same graphics card in my x86-64 system.
To be continued…
I am planning to write a few more posts about using my Ampere Altra-based system
as a desktop. So stay tuned.
Hey everyone. Welcome to the far side of the 2025 Datacenter Move.
Everything is now moved over and (mostly) working from the new location.
There are of course some things to fix still. We have been tracking
the smaller items in: https://pagure.io/fedora-infrastructure/issue/12620
and larger ones in their own tickets. At this point if you see an issue
please check if it's been mentioned above or in another infra ticket
and if not, let us know.
Things did not go as smoothly as I was hoping they would.
I was hoping to have the build pipeline up and running on wed, but
it took us until thursday morning to finish bringing it up.
We are collecting items for a retrospective now and should hold
that in the next week or two, but a few I will be mentioning:
The good:
Other folks on my team (CLE - Community Linux Engineering) did tons
of great work. My co-workers in EU timezones shutting things down,
then storage folks on the US east coast switching storage, meant that
when I got in (US west coast) everything was ready to move.
The other Red Hat teams we worked with (networking, storage, DC
operations) were all great and very responsive in helping us.
The community was great in being patient and waiting for things
to come back up.
The new machines are super fast!
The bad:
Took longer than expected to bring the build pipeline up again.
This was due to a number of reasons, but the two big ones were:
a) We used MTU 9000 in the old datacenter and carried that over to
the new one. However, there were some cross-VLAN links that were not
working right with jumbo frames. This culminated in a networking
outage Wednesday afternoon that took us offline for a few hours. And
b) our pkgs server is not something we redeploy much. It's one of
our last RHEL8 instances. Because of this there were some issues
that were difficult to debug and work through to get things working.
I copied our wiki database several times, and it failed with a disk
space full error. Turns out mariadb has a large binary file, and
by default rsync copies to a temporary file and moves it into place
at the end. If your disk isn't more than 2x the size of the db,
boom. Using --inplace fixed that.
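A minimal sketch of the fix, using throwaway temp files rather than the real database path:

```shell
# Demonstration with throwaway temp files; in the real migration the source
# was the mariadb data directory. --inplace writes directly into the
# destination file instead of building a hidden temp copy and renaming it,
# so the target filesystem never needs room for two copies at once.
src=$(mktemp)
dst=$(mktemp)
printf 'pretend this is a huge ibdata1 file' > "$src"
rsync -a --inplace "$src" "$dst"
cmp "$src" "$dst" && echo "copied in place"
```

The trade-off is that an interrupted transfer leaves the destination file partially updated, which is fine for a one-shot migration copy but not for files being served live.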
Next week there's still some work to do. We need to power off
all the machines in our old datacenter. Some of the newer hardware
will be shipped to the new datacenter and we will use them to
augment capacity. Some will be more builders, more openqa workers,
etc.
Finally, I've been super focused on this move, now that we are done
after next week I hope to start in on the backlog of other things
that I put off: packaging work, emails to reply to, AI scraper
mitigation, and such.
Version 8.5.0alpha1 has been released. It's still in development and will soon enter the stabilization phase for developers, and the test phase for users (see the schedule).
RPMs of this upcoming version of PHP 8.5 are available in the remi repository for Fedora ≥ 41 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) in a fresh new Software Collection (php85), allowing its installation beside the system version.
As I (still) strongly believe in the SCL's potential to provide a simple way to allow installation of various versions simultaneously, and as I think it is useful to offer this feature to allow developers to test their applications, sysadmins to prepare a migration, or simply to use this version for some specific application, I decided to create this new SCL.
I also plan to propose this new version as a Fedora 44 change (as F43 should be released a few weeks before PHP 8.5.0).
Installation :
yum install php85
⚠️ To be noticed:
the SCL is independent from the system and doesn't alter it
this SCL is available in remi-safe repository (or remi for Fedora)
installation is under the /opt/remi/php85 tree, configuration under the /etc/opt/remi/php85 tree
the FPM service (php85-php-fpm) is available, listening on /var/opt/remi/php85/run/php-fpm/www.sock
the php85 command gives simple access to this new version, however, the module or scl command is still the recommended way.
for now, the collection provides 8.5.0-alpha1, and alpha/beta/RC versions will be released in the next weeks
some of the PECL extensions are already available, see the extensions status page
tracking issue#307 can be used to follow the work in progress on RPMS of PHP and extensions
the php85-syspaths package allows using it as the system's default version
ℹ️ Also, read other entries about SCL especially the description of My PHP workstation.
$ module load php85
$ php --version
PHP 8.5.0alpha1 (cli) (built: Jul 1 2025 21:58:05) (NTS gcc x86_64)
Copyright (c) The PHP Group
Built by Remi's RPM repository #StandWithUkraine
Zend Engine v4.5.0-dev, Copyright (c) Zend Technologies
with Zend OPcache v8.5.0alpha1, Copyright (c), by Zend Technologies
As always, your feedback is welcome on the tracking ticket.
Believe it or not, today is the fourteenth anniversary of the founding of the FedoraFans website! Over these fourteen eventful years, we have learned, experimented and grown together. From the early days, when only a few simple articles were published on the site, to today, when the Fedora Fans website has become a trusted reference for […]
Update Django from 5.1.8 to 5.1.11, addressing medium severity vulnerabilities,
CVE-2025-32873 and
CVE-2025-48432,
which do not appear to affect Kiwi TCMS
Improvements
Remove the django-uuslug dependency
Update django-colorfield from 0.13.0 to 0.14.0
Update django-grappelli from 4.0.1 to 4.0.2
Update django-guardian from 2.4.0 to 3.0.3
Update django-simple-history from 3.8.0 to 3.10.1
Update django-tree-queries from 0.19.0 to 0.20.0
Update markdown from 3.8 to 3.8.2
Update psycopg[binary] from 3.2.6 to 3.2.9
Update pygments from 2.19.1 to 2.19.2
Update python-gitlab from 5.6.0 to 6.1.0
Update uwsgi from 2.0.29 to 2.0.30
Update node_modules/pdfmake from 0.2.18 to 0.2.20
Display nested Test Plan(s) in select drop-down on New Test Run page
Implement Bugzilla.details() method to fetch more information about
reported bugs via the existing Bugzilla integration interface
Refactor URL /accounts/<username>/profile/ into
/accounts/<pk>/profile/
to prevent usernames being exposed in logs or anonymous analytics
Refactor URL /plan/<pk>/<slug> into /plan/<pk>/ to prevent test plan
summary being exposed in logs or anonymous analytics. Fixes
Issue #3994
Bug fixes
Make sure IssueTrackerType.details() method provides id and
status fields to prevent crashes when IssueTracker integration falls back
to this method
For the Bug.details() API method, always cast the internal result to dict
to avoid the situation where modernrpc/handlers/xmlhandler.py::dumps_result()
doesn't know how to serialize it! Fixes
Sentry KIWI-TCMS-VV
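The idea behind that cast can be sketched as follows (the function and variable names here are hypothetical, not Kiwi TCMS internals): an XML-RPC serializer only understands plain built-in types, so any Mapping-like object returned by a bug tracker library is normalized to a plain dict before it leaves the API method.

```python
from collections import OrderedDict


def bug_details(raw):
    """Return bug details as a plain dict so an XML-RPC handler can serialize it."""
    # raw may be any Mapping subclass (whatever a tracker library returns);
    # dict(raw) normalizes it to a type the serializer understands
    return dict(raw)


# An OrderedDict stands in for a tracker library's response object
print(bug_details(OrderedDict(id=42, status="open")))  # {'id': 42, 'status': 'open'}
```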
Don't send email notifications to inactive users
Refactoring and testing
Update node_modules/eslint-plugin-import from 2.31.0 to 2.32.0
Update node_modules/webpack from 5.99.6 to 5.99.9
Use the public interface tcms_api.TCMS().exec in tests
Add test for unauthenticated Bugzilla.details() which falls back to
OpenGraph
As part of the 20th anniversary of Fedora-fr (and of the Fedora Project itself), Charles-Antoine Couret (Renault) and Nicolas Berrehouc (Nicosss) wanted to put questions to French-speaking contributors to the Fedora Project and Fedora-fr.
Thanks to the diversity of their profiles, this lets us see how the Fedora Project works from different angles, looking beyond the distribution itself at how the project is organized and designed. Note that on some points, certain remarks also apply to other distributions.
Let's not forget that the Fedora Project remains a worldwide project and a team effort, which these interviews cannot fully reflect. But the French-speaking community is lucky to have enough quality contributors to give an overview of many of the distribution's subprojects.
Today's interview is with Kévin Raymond (nickname shaiton), a former contributor to Fedora and Fedora-fr.org.
Interview
Hello Kévin, can you briefly describe your background?
Personally? Within the community? That's a bit too broad to be brief. Curious and creative, I discovered computing, electronics and programming in high school. I was lucky that my father chose Linux for me and handled my first installations, including the hardware configuration. In the early days you needed internet access to configure internet access (recompiling device drivers); it was quite a puzzle with nobody to guide you. Well, to be fair, at the very beginning the Wi-Fi problem didn't exist yet. I continued my studies with a DUT in GEII (Electrical Engineering and Industrial Computing), followed by a bachelor's degree and then a work-study electronics/computer engineering school. The more the years went by, the more personal and professional computing and electronics projects I had to carry out, the further I drifted from the Windows world, and the more I felt at home on Linux.
Can you briefly describe your contributions to the Fedora Project?
Initially a French translator, I ended up becoming the main coordinator of the French-speaking team. I also contributed to the internationalization of the Fedora Project's websites, which led me to be one of their main maintainers for a while and to handle deploying the new versions at each Fedora release. From there I joined the infrastructure team so I could follow or trigger website updates. As one of the main translators, I joined the documentation team for the same reason as the websites team: to improve the deployment of translations. It was also for translations that I became the Transifex coordinator, one of the translation platforms used by the project for a time. I had to support developers through the migration and follow up on deployment. The main problem is that a developer doesn't realize a translation is available, or even that it isn't included in their latest release. With a foot in quite a few doors, I ended up helping coordinate each Fedora release. That's the productivity side, but there is also the social side, which is an important part of a contributor's life. I became a Fedora Project ambassador and eventually a mentor. I also joined the board of the Fedora-Fr association, which became Borsalinux-Fr during my term. For a while I led the production of goodies for the French team and the weekly meetings, and I co-organized an international gathering in Paris, the FUDCon, after having been an active participant in it several times (in the United States, Italy, Switzerland).
What made you come to Fedora, and what made you stay?
As for the product, it was "Fedora Core" at the time: for the novelty. GNU/Linux distributions were in full development, and many new features arrived in one before the other. I tried several and really liked the choices Fedora Core made; I have stayed on it since 2005 (at least personally, when my employer didn't leave me the choice).
Why contribute to Fedora in particular?
As for the Project, well, it was because I liked the product that I quite naturally contributed there. You might as well contribute to what you use every day. That said, I must admit it was the ambassador program that got me involved; I saw the help forums more as a necessary evil. When I arrived in Paris, I was lucky enough to meet enthusiasts in person who were ready to listen and guide you toward your aspirations. Thanks to Mathieu Bridon (known as "bochecha"), without whom I would have stayed on the other side of the window.
Do you contribute to other Free Software projects? If so, which ones and how?
Far too little. I maintain the translation of GNU Make; in fact I need to refresh it since the last update. I became its maintainer because the previous one was no longer responding and the existing translation didn't suit me. If somebody wants to take it over, I will gladly hand it off! Beyond that, I make occasional contributions through my professional work, mainly fixes; I don't play an active role despite wanting to. I try as much as possible to test new Fedora releases starting from the Alpha. Thanks to my knowledge of Fedora, I joined the OLPC France project, where I brought my "Fedora" expertise to the backup and update tools for the XO (the name of the OLPC Project laptops, based on the Fedora GNU/Linux distribution). I even went to Madagascar to manage the deployment of a distro upgrade across an entire fleet. A very rewarding experience.
Do you use Fedora in a professional context? And why?
Yes, as much as possible. It's my universe; I master the environment and don't need to look up how to do this or that, which is much faster. I also greatly appreciate the default choices. GNOME and its unintrusive mode let me stay focused on what matters (hoping the proposal to make KDE the default will not be accepted…). But it's also because Fedora is part of me; I built myself up with the product, with the project and with the community. It's like leaving your native country: you can do it, but you're no longer at home. I feel very much at home on Fedora, and if I want to help, fix or contribute, I already know how to get back into it.
Are your contributions to Fedora a direct or indirect asset in your professional life? If so, in what way?
A direct asset, obviously. As a Linux embedded systems engineer, it pays to master the environment that meets the company's needs. In a company you have to meet a need, and for that humans invent tools. If you don't master your tools, you are less productive and lose part of your capacity to adapt to your environment. On the other hand, I love the Fedora product (perhaps now mostly for the community), and working on Fedora means wanting to get up in the morning and being greeted by something that makes you happy. It has become important for my well-being.
You were active on many Fedora projects for several years while not being a Red Hat employee; did that hold back your participation in any way?
Absolutely not. I was busy enough with all my contributions not to go looking to play politics. There were already enough gurus on the French team without me taking that on. Through Red Hat I found support, advice and professionalism. But also friends.
What exactly did you do for the Fedora Project's infrastructure and websites?
For the websites, I worked to make sure my translations were actually used and deployed. It's all very well spending your nights translating instead of sleeping or studying, but if on release day the translation isn't used, what's the point? And if you're told "ah, if it had been published 12 hours earlier it would have appeared; now you have to wait 6 days for the next deployment", that's frustrating. And sometimes it's not just a matter of dates; it's also a code problem. The developer doesn't know a translation is available, so doesn't use it. So I took charge of synchronizing the translation teams with the generation of the various sites. I created tools and changed the site deployment processes so that translators knew the dates and the websites team deployed translations automatically, without unnecessary manual steps. On the infra side I was there to back up the team on website deployment. I could deploy the test version of the fedoraproject.org site myself so that translators could proofread their translations and report problems or fixes before the release-day deployment (as a reminder, a new release every 6 months). As the main coordinator of the French-speaking translation team, I had a front-row seat to fix problems and explain the testing procedure to the other teams.
You also managed translation for a few years, between Thomas Canniot and Jean-Baptiste Holcroft; what attracted you to that activity and what did you do?
That was between my year of study in Scotland, where I greatly improved my English, and my first year of engineering school in Paris. If I dare say it out loud, I had a lot of personal time during my engineering school years; it's thanks to all that free time that I was able to dive into the Fedora project. And Mathieu, my mentor, guided me well to where I would be most useful: the translation team, where Thomas had ended up almost alone. He was managing a team of 1.5 people, counting himself. I arrived, and the two of us got through an enormous amount of work to catch up on the backlog. I translated and he corrected me. I had come a long way; I had to wait until I was 24 to discover French grammar and the direct/indirect object agreement rules… That wasn't much fun, so I specialized in syntax. Together we recruited other translators; then, once he saw we were a team, he handed translation over to me. I lived through two transitions of translation management tools. I put a lot of effort into the Transifex project, until that project turned to a commercial model and the Fedora Project changed tools. At that point I told myself I didn't want to start over; I had plans to overhaul the entire website deployment tooling, and the translation team was no longer my priority. I don't even remember how Jean-Baptiste took over, but at some point the projects migrated to the new tool and I lost everything I had put in place: my tools and scripts for fetching the latest translations and getting the translation completion percentage for each language on each project. I never reopened that door; I handed things over and focused solely on proofreading and training newcomers: translation habits for consistency with the history, use of the right syntax.
Until people no longer saw me contributing on the mailing list. In short, website development had priority, I had less and less time to give, there was a team of newcomers (not in age, but in when they joined the French translation team), and a change of tool and process quite naturally moved me away from my responsibilities.
In 2012, FUDCon (since renamed Flock) was held in Paris, and you were one of its main organizers. Can you explain the purpose of these gatherings and their importance? What were the difficulties of organizing such an event? What memories do you take away from it?
FUDCon was an annual event (per region) that gave contributors the chance to meet, to get to know each other better, but also to move faster on specific topics and do away with time zones. It was also the opportunity to meet the "Redhatters". It's at these events, too, that all the community's bodies come together. Physical meetings are very important. Many exchanges in the community happen in English, the mother tongue of a large part of it. It is sometimes hard to gauge the tone someone uses in writing (yes, all our meetings were on chat/IRC); it's at gatherings like these that you can get a feel for someone's character and understand when they are serious, ironic or mischievous. It's also a chance to talk about life and subjects beyond the everyday ones. It broadens our horizons. At my first FUDCon, in Zurich, I had only just arrived in the project. I didn't have much to say but a lot to learn. And the closer I got to the France team, the more I heard that the community dreamed of an event in France, in Paris. So one day, with a bit of a push, we put together a small team for this event. We had to pick a date, find a venue, propose accommodation and handle all the logistics: create goodies as small gifts so contributors could leave with a nice souvenir (an "Eiffel Tower" t-shirt and reusable coasters); manage contributor subsidies (if they proposed a talk, they could get a subsidy from Red Hat); find a caterer; coordinate the organizing team… I ran into quite a few new difficulties. It was the first time I managed (in part) a budget other than my own. But it was also the first time people counted on me, in person, at such a scale.
The organizing team was mainly drawn from the members of the Borsalinux-Fr association, but of course we had people on the "Red Hat" side to rely on, since this event was sponsored every year across the different regions (America, Asia, EMEA). Don't forget that EMEA includes Europe, but also Africa. Ahhh, Africa. It's far away, both in space and in culture. I remember a call on the morning of the event: "Hi Kévin, I missed my flight, can you find me another one?" Yes, it was a contributor whose plane ticket had been paid entirely from the event's subsidy budget. And it seemed natural to him that I would pay for a new ticket for the flight he had missed by arriving late at the airport... In the end he chose to buy the new ticket himself, he came, and we had a great time! He was one of the most active contributors in his region at the time. Another problem, more annoying in the long run: I used my personal email address to book the caterer. I don't know what they did, but they registered my email address on some list somewhere, and for 12 years I have been receiving emails as the manager of that business. I get unsolicited applications from accountants, promotions for buying sardines in bulk, news from the Paris city hall… That part is painful. I had the @fedoraproject.org email alias at the time and should have written from my "pro" contact rather than my personal one. You learn a lot by contributing to community projects! In the end, having organized this event gave me the experience to organize the same kind of event within the OLPC project (One Laptop Per Child).
You also wrote a lot for and managed the French-language magazine Muffin; can you explain what it involved and what you did? What do you think of the format and of the work accomplished?
Muffin was incredible. Since my student years, when I had to write reports and present content in a very formal way, I had learned to write in LaTeX. I thought I knew and understood it. When I saw what melmorabity (Mohamed El Morabity) was putting in place for the rendering, I had to stay to learn more! I mainly took part in writing issue 3. It was an opportunity to highlight new features, to anticipate the questions that would come up in the forums, and to present quality content to our users. Every first Saturday of the month I was at the PSL meetings at the Cité des Sciences in Paris, a monthly gathering (maybe it still takes place?) where contributors from many different communities came to meet their users. This magazine gave those meetings one more audience. It was also something we were proud to show off at the various trade shows where we represented the community (FOSDEM, Solutions Linux…). Since Fedora bets on new features, it's important to use different channels to announce changes to users. This was one more.
Around the same period there was, I believe, a _Linux Pratique Essentiel_ issue dedicated to Fedora 13, released around 2011-2012, which involved many writers from the French-speaking community, including you. Can you look back on that experience? What was the added value of working with a publisher to create this paid magazine?
Hmm, it rings a bell, but I have no memory of it anymore. Maybe I contributed very little to that magazine? I remember rather my first meeting with my mentor, at number 42 on some street in Paris whose name escapes me. It was for a live show on Radio Libertaire, followed by a rebroadcast of an RMS conference in a place full of new ideas… There's no denying it, a lot happens in Paris!
You then stepped back from the French-speaking community after 2013; for what reasons?
The more I did within the Fedora community, the more I was able to take on. I kept interacting with the different teams to improve productivity, reduce the friction encountered by various contributors, and improve collaboration. Except that at some point you hear the birds singing outside the window and think "darn, is it morning already?"; come on, time to grab an hour or two of sleep before starting the job. The one you're paid for. I spent weeks with more than 30 hours of contribution to free software projects, on top of a full-time job. I wasn't the only one, but if on top of that you spread yourself across too many subjects, you can't follow them all completely. And what if, on top of that, you're in Madagascar with a limited internet connection, it coincides with the deployment of a new release, and you're usually the one who pushes the button? Well, you find an eager "youngster", someone who spends as much time with you on the internet as with his family, you train him, and you entrust him with pushing the button and using in production the whole deployment process you just changed and activated after 2 months of complete overhaul. You realize it goes fine without you, that it runs, that he is reliable. You're reassured and you tell yourself "I've freed myself of one load; what's my next priority?" Thanks to Robert Mayr (robyduck) for that fine succession! At that point I also left Paris to return to my mountains (Haute-Savoie). I started a new job into which I put all my time and then some. I no longer had the chance to meet Fedora Project collaborators as often, and I drifted away.
Yes, I gave too much to my company at the time for what it gave me back, but having let go of the reins of the French translation and the websites, I redirected myself toward mountain pastimes, which let me spend less time on the computer. I also no longer had regular contact with computing gurus: the people who pull you upward. I had a lot of knowledge about collaboration, acquired within the project, to put in place at my company. In the end, I have continued my "free contributions", but as a volunteer: I am now a mountaineering instructor. I spend much of my free time in another association that no longer has any link to Fedora other than volunteering.
If you could change one thing in the Fedora distribution or in how it works, what would it be?
I'm no longer up to date on the Fedora/Red Hat politics; I don't know to what extent the acquisition of Red Hat has influenced the Fedora Project, even though I have seen a few messages go by about it. I find that some aspects are too scattered. I witnessed (from the sidelines, without being a contributor) the development of the Pagure forge (hello pingou!), but also the choice of some projects to move to GitLab or GitHub. There are advantages and drawbacks. Personally, I discovered Gerrit and everything it enables. I set it up myself at my new job in 2014. From that moment on I could no longer manage to contribute to the Fedora Project; the tools used were a barrier for me, and I could no longer follow developments as easily. So if I had to change one thing in the project, we would move everything to Gerrit. OK, I haven't answered the real question, which was about the product... Wayland, systemd? No, no, politics isn't for me; I like to move forward, and my first and last change, other than translation, was enabling the 256-color console by default. That's enough for me, and I live with it very well!
Conversely, is there something you would want to keep at all costs in the distribution or in the Project itself?
I like to think it is a product that evolves with its community and not with a company. What I want to keep are the four foundations: Freedom, Friends, Features, First. New features are what I appreciate most. I'm the type to disable automatic updates and prefer to trigger updates myself so I can keep an eye on everything that comes in.
What do you think of the Fedora-fr community, its evolution and its current situation? What would you improve if you had the chance?
Unfortunately I'm no longer part of it; I would like to be. But I wouldn't be able to recapture the same feeling from a distance; I miss the regular, in-person contact with contributors. But in the same way that I miss diving or paragliding. We only have one life, and it is filled with choices.
Anything to add?
Thank you all very much for still being active, to the newcomers for taking over, and to everyone for continuing to contribute to this product. Every day I think of the thousands of Fedora contributors, and of the hundreds of contributors I have known personally.
Thank you Kévin for your contribution!
Conclusion
We hope this interview let you learn a bit more about the Fedora-fr site.
If you have questions, or would like to take part in the Fedora Project or Fedora-fr, or simply use it and install it on your machine, feel free to discuss it with us in the comments or on the Fedora-fr forum.
See you in 10 days for an interview with Aurélien Bompard, a developer within the Fedora Project and a Red Hat employee assigned to the Fedora Project, in particular on the infrastructure team.
Some of our most active users chose syslog-ng because of its detailed and accurate documentation (https://syslog-ng.github.io/). Later I received complaints that it is too detailed, and we need a tutorial: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/. This time, I was asked for something even shorter. Here you are!
Before you begin
If you want to configure syslog-ng, you have to install it first. There are many ways to install syslog-ng, too many to cover in detail here. Possibilities range from installing syslog-ng from a package included in your Linux distribution, through installing it from a 3rd party repository like ours, to building syslog-ng from source. Whichever route you take, you end up with syslog-ng installed in your environment and a default syslog-ng configuration.
Configuring syslog-ng
The default configuration usually collects local log messages into one or more text files. That is perfect for standalone workstations, but the main strength of syslog-ng is central log collection. Here we will learn about the main building blocks of a syslog-ng configuration and build a minimal configuration to collect log messages centrally.
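The minimal configuration discussed in the next paragraphs appears to have been lost from this extract. Reconstructed from the extended example shown later (the block names and file path are taken from it), it would have looked roughly like this:

```
@version: 4.8
source s_sys { system(); internal(); };
destination d_mesg { file("/var/log/messages"); };
log { source(s_sys); destination(d_mesg); };
```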
This is the shortest config I can think of. Of course, shorter identifiers and file names could make it even shorter, as would removing any white space. There is also an alternative, more compact syntax; however, it is confusing for new users, and sometimes even for seasoned professionals.
So, what do you see in this configuration? It starts with declaring the version number. This ensures that you get appropriate warnings if you use an old config with a new syslog-ng.
Next, you will see three configuration blocks. Each one states the kind of building block first, like source, destination or log; all of them, except for log, also have a unique name. Other kinds of blocks, like filters, are optional. By tradition, each name starts with a letter referring to the type of the block. This is not mandatory, and many Linux distros use a different naming scheme.
Each source and destination might include multiple drivers. The system() source collects platform-specific local logs. The internal() source collects syslog-ng’s own logs. The file() destination writes logs to a file.
The log statement (or log path) is a bit special: it connects the various building blocks together. In this case, it makes sure that logs from the s_sys source are written to the d_mesg destination.
As you can see, the formatting of the configuration is quite flexible. One line, multiple lines, white space, no white space: it is up to you. Formatting can make the config easier to read, but syslog-ng does not need it.
Adding a filter and more
I extended the previous configuration a bit. The changes are marked with bold:
@version:4.8
@include "scl.conf"
source s_sys {
system();
internal();
};
# this is a comment
destination d_mesg { file("/var/log/messages.${MONTH}.${DAY}"); };
filter f_default { level(info..emerg) and not (facility(mail)); };
log {
source(s_sys);
filter(f_default); destination(d_mesg);
};
You can include other configuration files. “scl.conf” is a special one, as this name stands for the syslog-ng configuration library, which includes many useful configuration snippets. For example, parsers for Apache access logs or an Elasticsearch destination.
You can use comments to explain more complex parts of your configuration.
You can use macros in file names. In this example, instead of a single log file, a new one is created each day, and named after the current month and day. If you do additional message parsing, you could also use values parsed from log messages: for example, user names.
You can use blank lines to separate building blocks. Unfortunately, I could not mark them with bold :-)
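As a further sketch of the macro idea above (hypothetical, assuming a parser has already set a ${username} name-value pair from the messages), file names can even be built from parsed values:

```
destination d_peruser { file("/var/log/users/${username}.log"); };
```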
I also added a filter, which means that I defined the filter and included it in the log path. There is no need to add new lines to the configuration, so I placed the filter reference on the same line as the reference to the destination.
Adding a network source and a few more destinations
This configuration adds a network source and a few destinations to show that you can use a configuration block multiple times in your configuration.
Here, I added a source and two more destinations, and, of course, also connected them in two log statements:
s_syslog collects RFC5424-compliant log messages at port 601 over non-encrypted TCP connections.
d_fromnet writes log messages to two different files. The first one is a regular syslog-formatted destination. The second one writes logs with a JSON template function, with all name-value pairs parsed from log messages included.
d_elasticsearch stores log data to an Elasticsearch database. It uses the JSON template function for message formatting, and it formats dates to be accepted by Elasticsearch.
The first log statement sends both local and network log messages to the Elasticsearch destination. Here you can see that the various building blocks can be used multiple times: logs from s_sys are written to a file, and also sent to the Elasticsearch destination.
The second log statement stores logs from the network source into files.
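The third configuration itself is missing from this extract. Based on the description above, it might have looked something like the following sketch (the file names, the JSON template arguments and the Elasticsearch URL/index are assumptions, not the author's exact values):

```
source s_syslog { syslog(port(601) transport("tcp")); };

destination d_fromnet {
    file("/var/log/fromnet");
    file("/var/log/fromnet.json"
         template("$(format-json --scope rfc5424 --scope nv-pairs)\n"));
};

destination d_elasticsearch {
    elasticsearch-http(
        url("http://localhost:9200/_bulk")
        index("syslog-ng")
        type("")
        template("$(format-json --scope rfc5424 --scope nv-pairs --key ISODATE)")
    );
};

log { source(s_sys); source(s_syslog); destination(d_elasticsearch); };
log { source(s_syslog); destination(d_fromnet); };
```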
What is next?
Of course, while this blog post is enough to understand the basic concepts of syslog-ng configuration, it did not cover all the possibilities. There are parsers and other configuration elements, you can make things conditional with an if statement within a log path, and you can define sources and other blocks in-line within a log statement, to mention just a few.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I've spent time over the last month enabling Blackwell support in NVK, the Mesa Vulkan driver for NVIDIA GPUs. Faith from Collabora, the NVK maintainer, has cleaned up and merged all the major pieces of this work and landed them in Mesa this week. Mesa 25.2 should ship with a functioning NVK on Blackwell. The code currently in Mesa main passes all tests in the Vulkan CTS.
Quick summary of the major fun points:
Ben @ NVIDIA had done the initial kernel bringup to the r570 firmware in the nouveau driver. I worked with Ben on solidifying that work and ironing out a bunch of memory leaks and regressions that had snuck in.
Once the kernel was stable, there were a number of differences between Ada and Blackwell that needed to be resolved. Thanks to Faith, Mel and Mohamed for their help, and NVIDIA for providing headers and other info.
I did most of the work on a GB203 laptop and a desktop 5080.
1. Instruction encoding: a bunch of instructions changed how they were encoded. Mel helped sort out most of those early on.
2. Compute/QMD: the QMD, which is used to launch compute shaders, has a new encoding. NVIDIA released the official QMD headers, which made this easier in the end.
3. Texture headers: texture headers are encoded differently from Hopper onward, so we had to use new NVIDIA headers to encode them properly.
4. Depth/Stencil: NVIDIA added support for separate d/s planes, and this also has some knock-on effects on surface layouts.
5. Surface layout changes: NVIDIA attaches a memory kind to memory allocations; due to changes in Blackwell, they now use a generic kind for all allocations. You no longer know the internal bpp-dependent layout of the surfaces. This means changes to the dma-copy engine to provide that info, and it means we have some modifier changes to work out with NVIDIA over the next few weeks, at least for 8/16 bpp surfaces. Mohamed helped get this work and host image copy support done.
6. One thing we haven't merged is bound texture support. Currently Blackwell is using bindless textures, which might be a little slower. Due to changes in the texture instruction encoding, you have to load texture handles into intermediate uniform registers before using them as bound handles. This causes a lot of fun with flow control and with deciding when you can spill uniform registers. I've made a few attempts at using bound textures, so we understand how to use them; there are just some compiler issues to sort out before we can get it across the line.
7. Proper instruction scheduling isn't landed yet. I have a spreadsheet with all the figures, and I've started typing, so I will try to get that into an MR before I take some holidays.
I should have mentioned this here a week ago. The Vulkan AV1 encode extension has been out for a while, and I'd done the initial work on enabling it with radv on AMD GPUs. I then left it in a branch, which Benjamin from AMD picked up and fixed a bunch of bugs in, and then we both got distracted. I realised when doing VP9 that it hadn't landed, so I did a bit of cleanup. Then David from AMD picked it up, carried it over the last mile, and it got merged last week.
So radv on supported hardware now supports all Vulkan decode/encode formats currently available.
Just when you thought it was safe to go to court, think again.
Your lawyer might not have your best interests at heart and even
worse, they may be working for the other side.
In 2014, journalists discovered Victoria Police had a secret
informer, a mole snitching on the underworld, identified by the
code name Lawyer X.
It was beyond embarrassing: not only did police have the burden
of protecting their secret informer, they might also have to
protect her relatives who share the same name. The most
notable among them, the informer's uncle,
James Gobbo,
a supreme court judge who subsequently served as Governor
for the State of Victoria.
There is absolutely no suggestion that Lawyer X's
relatives had anything to do with her misdeeds. Nonetheless,
the clients she betrayed were the biggest crooks in town,
until, of course, her unethical behavior gave them the opportunity
to have those convictions overturned and present themselves as
model citizens once again. Any relatives or
former business associates of Lawyer X, including
the former governor, would be in danger for the rest of their
lives.
James Gobbo and his son
James Gobbo junior are both Old Xaverians,
graduates of Melbourne's elite Jesuit school for boys, like my
father and me.
Lawyer X was eventually revealed to be
Nicola Gobbo,
a graduate of the elite girls school Genazzano FCJ College.
My aunt, that is my father's sister, also went to Genazzano.
Alumni communications typically refer to Old Xaverians with
the symbols "OX" and the year of graduation, for example,
"OX96" for somebody who graduated in 1996.
Whenever a scandal like this arises, if the suspect is a
graduate of one of these elite schools, the newspapers will be
very quick to dramatize the upper class background.
The case of Lawyer X was head and shoulders above
any other scandal: a former prefect and class captain who
made a career out of partying with drug lords, having their children
and simultaneously bugging their conversations for the police.
Stories like this are inconvenient for those elite schools
but in reality, I don't feel the schools are responsible when
one of these unlucky outcomes arises. The majority of students
are getting a head start in life but there is simply nothing that any
school can do to prevent one or two alumni going off the rails
like this.
Having been through this environment myself, I couldn't
believe what I was seeing in 2023 when the Swiss financial regulator (FINMA)
voluntarily published a few paragraphs from a secret judgment, using the
code name "X" to refer to a whole law office (cabinet juridique in
French) of jurists in Geneva who had ripped off their clients.
The Gobbo family, Genazzano FCJ College and alumni have finally been
vindicated. The misdeeds of Lawyer X pale in comparison to the
crimes of the Swiss law firm X.
Lawyer X was a former member of a political party.
One of the jurists from Law firm X was working for the rogue law office
at the same time that he was a member of Geneva city council.
He is a member of the same political party as the Swiss president from
that era.
In 1993, Lawyer X was an editor of Farrago, Australia's leading
student newspaper. Law firm X used the
Swiss media to write positive stories about their company.
When the same company was outlawed, nanny-state laws prevented the media
from reporting anything at all about its downfall. Ironically,
one of my former clients was also an editor of Farrago before he became
Australia's Minister for Finance. The word Farrago gives a fascinating
insight into the life of Lawyer X.
Here is a sample sentence using the word Farrago in the
Cambridge dictionary:
... told us a farrago of lies
When FINMA revealed the secret judgment shuttering Law Firm X,
Urban Angehrn, the FINMA director, resigned citing health reasons.
His dramatic resignation helped bury news stories about the Law firm X judgment.
In Australia, a number of chief commissioners have resigned. In fact, Victoria
Police have been through three leaders in the last year.
Who predicted Elon Musk would acquire Twitter?
In 2018, I attended the UN Forum on Business and Human Rights,
where I made this brief intervention predicting the future of Facebook and
Twitter. When
Elon Musk purchased Twitter in 2022, he called it X.
Go figure.
Just a day or two more until the big datacenter move!
I'm hopeful that it will go pretty well, but you never know.
Datacenter move
Early last week we were still deploying things in the new datacenter,
and installing machines. I ran into all kinds of trouble installing
our staging openshift cluster. Much of it around versions of images
or installer binaries or kernels. Openshift seems fond of 'latest'
as a version, but that's not really helpful all the time, especially
when we wanted to install 4.18 instead of the just-released 4.19.
I did manage to finally fix all my mistakes and get it going in the end though.
We got our new ipa clusters set up and replicating from the old DC to the new one.
We got new rabbitmq clusters (rhel9 instead of rhel8, and newer rabbitmq) set up
and ready.
With that, almost everything is installed (except for a few 'hot spare' type things
that we can do after the move, and buildvms, which I will be deploying
this weekend).
On Thursday we moved our staging env, and it mostly went pretty well, I think.
There's still some applications that need to be deployed or fixed up, but
overall it should mostly be functional. We can fix things up as time
permits.
We still have an outstanding issue with how our power10's are configured.
Turns out we do need a hardware management console to set things up as
we had planned. We have ordered this and will be reconfiguring things
post move. For normal ppc64le builds this shouldn't have any impact.
For composes that need nested virt, they will just fail until the week
following the move (when we have some power9's on hand to handle this case).
So, ppc64le users, expect a few failed rawhide composes;
sorry about that.
Just a reminder about next week:
mirrorlists (dnf updates), docs/websites, downloads, discourse, matrix should all be unaffected
YOU SHOULD PLAN TO NOT TRY AND USE ANY OTHER SERVICES until the go-ahead (Wed).
Monday:
Around 10:00 UTC services will start going down.
We will be moving storage and databases for a while.
Once databases and storage are set we will bring services back up
On Monday koji will be up and you can probably even do builds (but I strongly
advise you not to). However, bodhi will be down, so no updates will move forward
from builds done in this period.
Tuesday:
koji/build pipeline goes down.
We will be moving its storage and databases for a while.
We will bring things up once those are moved.
Wed:
Start fixing outstanding issues, deploy missing/lower pri services
At this point we can start taking problem reports to fix things (hopefully)
Thursday:
More fixing outstanding items.
Will be shutting down machines in old DC
Friday:
Holiday in the US
Hopefully things will be in a stable state by this time.
To take a short break from datacenter work: I have been meaning
to look into Ansible Lightspeed for a long time, so I finally
sat down, took an introductory course, and have some thoughts
about how we might use it in Fedora's Ansible setup.
The official name of the product is: "Red Hat Ansible Lightspeed
with IBM watsonx Code Assistant", which is a bit... verbose, so I will
just use 'Lightspeed' here.
This is one of the very first AI products Red Hat produced, so
it's been around for a few years. Some of that history is probably why
it's specifically using watsonx instead of some other LLM on the
backend.
First a list of things I really like about it:
It's actually trained on real, correct, good ansible content.
It's not a 'general' LLM trained on the internet; it's using some
Ansible Galaxy content (you can opt out if you prefer) as well
as a bunch of curated content from real Ansible use. This always
struck me as one of the very best ways to leverage LLMs, instead
of generally hoovering in any data and using it. In this case it really
helps make the suggestions and content more trustworthy and less
hallucinated.
Depending on the watsonx subscription you have, you may train
it on _your_ Ansible content. Perhaps you have different standards
than others, or particular ways you do things. You can train it
on those and actually get output that follows them.
Having something that can generate boilerplate for you that
you can review and fix up is also a really great use for LLMs, IMHO.
And some things I'm not crazy about:
It requires AAP (ansible automation platform) and watsonx licenses.
(mostly, see below). It would be cool if it could leverage a local
model or Red Hat AI in openshift instead of watsonx, but as noted above
it's likely tied to that for historical reasons.
It uses a vscode plugin. I'm much more a vim type old sysadmin,
and for making a small ansible playbook that's just a text
file, vscode seems like... overkill. I can of course see why they
chose to implement things this way.
And something I sure didn't know: There's an ansible code bot on github.
It can scan your ansible git repo and file a PR to bring it in line with
best practices. Pretty cool. We have a mirror of our pagure ansible repo
on GitHub; however, it seems not to be mirroring currently. I want to sort that out
and then enable the bot to see how it does. :)
Image mode, aka “bootable containers”, aka “bootc”, is an exciting new way to
build and deploy operating systems. A bootable container image can be used to
install or upgrade a real or virtual machine, similar to container images for
applications. This is currently supported for
Red Hat Enterprise Linux 9/10
and Fedora/CentOS, but also in
other projects like universal-blue.
With system roles being the supported high-level API to set up
Fedora/RHEL/CentOS systems, we want to make them compatible with image mode
builds. In particular, we need to make them detect the “non-booted” environment
and adjust their behaviour to not e.g. try to start systemd units or talk to
network services, and defer all of that to the first boot. We also need to add
full bootc end-to-end integration tests to ensure this keeps working in the
future on all supported platforms.
Build process
This can work in two ways. Both ought to work, and which one you choose depends
on your available infrastructure and preferences.
Treat a container build as an Ansible host
Start a container build with e.g.
buildah from --name buildc quay.io/centos-bootc/centos-bootc:stream10
Install Ansible and the system roles into the container
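The steps above can be sketched as follows. This is a rough illustration, not the post's exact commands: the working-container name, playbook name, and final image tag are assumptions, and the last two steps assume the buildah connection plugin from the containers.podman collection.

```shell
# start a working container from the bootc base image
buildah from --name buildc quay.io/centos-bootc/centos-bootc:stream10

# install Ansible and the system roles into the container
buildah run buildc -- dnf -y install ansible-core rhel-system-roles

# run the playbook from the host against the working container,
# using the containers.podman.buildah connection plugin
ansible-playbook -c containers.podman.buildah -i buildc, setup.yml

# commit the configured container as a new bootable image
buildah commit buildc localhost/my-configured-bootc
```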
The Containerfile looks roughly like this:
FROM quay.io/centos-bootc/centos-bootc:stream10
RUN dnf -y install ansible-core rhel-system-roles
COPY ./setup.yml .
RUN ansible-playbook setup.yml
Everything happens inside of the image build, and the playbooks run against
localhost. This could use a multi-stage
build to avoid having
Ansible and the roles in the final image. This is entirely self-contained and
thus works well in automatic container build pipelines.
⚠️ Warning: Unfortunately this is currently broken for many/most roles because
of an Ansible bug: service: fails in a container build environment.
Once that is fixed, this approach will work well and might often be the
preferred choice.
Status
This effort is tracked in the RHEL-78157 epic.
At the time of writing, 15 roles are already supported, the other 22 still need to be updated.
Roles which support image mode builds have the containerbuild tag, which you
can see in the Ansible Galaxy view (expand the tag list at the top), or in the source code in meta/main.yml.
Note that some roles also have a container tag, which means that they are
tested and supported in a running system container (i.e. a docker/podman
container with the /sbin/init entry point, or LXC/nspawn etc.), but not
during a non-booted container build.
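For reference, these tags live in the role metadata. A minimal sketch of the relevant part of meta/main.yml (the surrounding fields are elided):

```yaml
galaxy_info:
  # ...
  galaxy_tags:
    - containerbuild  # supported during a non-booted container build
    - container       # supported in a booted system container
```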
Steps for converting a role
Helping out with that effort is very much appreciated! If you are interested in
making a particular role compatible with image mode builds, please follow these steps:
Clone the role’s upstream git repository. Make sure that its meta/main.yml
file does not yet have a containerbuild tag – if it does, the role was
already converted. In that case, please update the status in the epic.
Familiarize yourself with the purpose of the role, have a look at README.md,
and think about whether running the role in a container generally makes
sense. That should be the case for most of them, but e.g. storage is
hardware specific and for the most part does not make sense in a container
build environment.
Make sure your developer machine can run tests in general. Do the
integration test setup and also read the following sections about running QEMU and container tests.
E.g. running a QEMU test should work:
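As a hedged illustration only (the tox environment name, image name, and test playbook here are assumptions; check the tox-lsr documentation for the exact invocation on your setup):

```shell
tox -e qemu-ansible-core-2.16 -- --image-name centos-10 tests/tests_default.yml
```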
The most common causes of failures are service_facts:, which simply
doesn’t work in a container, and trying to set the state: of a unit in
service:. The existing PRs linked from RHEL-78157
have plenty of examples what to do with these.
The logging role PR
is a good example of the standard approach: add a
__rolename_is_booted flag to the role variables, and use it to
conditionalize operations and tests which can’t work in a container,
e.g. only setting state: started in service: on a booted system.
service_facts: can be replaced with systemctl is-enabled or similar, see e.g. the corresponding
mssql fix or
firewall fix.
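The standard recipe can be sketched in role tasks like this. The role and service names are illustrative, not taken from any specific role:

```yaml
# tasks/main.yml of a hypothetical role "myrole"
- name: Enable myservice, and start it only on a booted system
  service:
    name: myservice
    enabled: true
    # during a non-booted container build there is no running systemd to
    # talk to, so omit state: here and defer starting to the first boot
    state: "{{ 'started' if __myrole_is_booted else omit }}"
```

Using Ansible's special `omit` variable this way drops the state: parameter entirely when the flag is false, so the unit is only enabled in the image and started on first boot.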
Do these “standard recipe” fixes to clear away the easy noise.
As described above, the container tag means that the role is supported and
works in (booted) system containers. In most cases this is fairly easy to
fix, and nice to have, as running tests and iterating is faster, and
debugging is also a bit easier. In some cases running in system containers
is hard (like in the selinux or podman roles), in that case don’t bother and
remove that tag again.
Go through the other failures. You can download the log archive and/or run
the individual tests locally. The following command helps with debugging – it
keeps the container running for inspection after a failure, and removes
containers and temp files from the previous run:
Add an end-to-end integration test which ensures that running the role
during a container build actually works as intended in a QEMU deployment.
If there is an existing integration test which has representative complexity
and calls the role just once (i.e. tests one scenario), you can convert it
like
sudo’s bootc e2e test.
If there is no existing test, you can also add a specific bootc e2e test
like in
this demo PR
or the
postgresql role.
The late Pope Francis asked a group of approximately four hundred bishops to work together from 2021 to 2024 to analyze the way the Catholic
faithful interact and develop as a movement. Formally, this committee of bishops was given the title
Synod on Synodality. The term Synod is widely used across Christian religions to refer to committees, councils, or meetings of such groups at any level of the church hierarchy. The term
synodality is specific to the Catholic Church. The Synod has an official web page where it
attempts to explain synodality.
Various working groups were created on a wide range of topics. In this analysis, I will limit myself to working group number three, which examined the topic
of mission in the digital environment. Afterwards, I will provide some of my own evidence on the topics the working group is considering.
Even
packet repeaters
for amateur radio fall within the scope, although amateur radio licenses do not permit the explicit transmission of religious material.
The Vatican was an early adopter of shortwave radio. Pope Leo XIV and Monsignor Lucio Adrian Ruiz, secretary of the Dicastery for Communication, visited the headquarters of Vatican Radio this week:
Reading the output of both the working group and the Synod as a whole, I feel that the Church overall has decided neither to embrace nor to reject
social control media. They are acknowledging that it is part of the digital landscape and trying to decide how the Church relates to it.
How the synodal process evolved at a high level
Before going into the details, here is an overview of the process and the reports published at various times, with direct links to the translated editions.
The Synod's main website is
www.Synod.va and it is available in multiple languages. The content appears to have been created in Italian and translated into English and the other languages. This makes it slightly harder to read.
In October 2023, a long meeting took place in Rome during which an initial draft of the report was produced.
Key points from the final report concerning the digital environment
In point 58, the report observes that Christians may attempt to proclaim the Gospel through their participation in a digital environment.
58. ... Christians, each according to their diverse roles - within the family and other states of life; in the workplace and in their professions; engaged civilly, politically, socially or ecologically; in the development of a culture inspired by the Gospel, including the evangelisation of the digital environment - walk the streets of the world and proclaim the Gospel where they live, sustained by the gifts of the Spirit.
59. In doing so, they ask the Church not to abandon them, but rather to enable them to feel that they are sent and sustained in mission.
This point seems to encourage the Church to reflect on the situation faced by those under the influence of a digital environment, but it does not necessarily imply that the digital environment is either good or bad.
In point 112, concerning mobility, which includes people at all levels of society, the report observes:
Some maintain strong bonds with their country of origin, especially with the help of digital media, and for this reason may find it difficult to establish connections in the new country; others find themselves living without roots.
This is an excellent observation. In Europe, I have met couples whose relationships depend entirely on the devices they use for machine translation. When newcomers arrive in a town, WhatsApp culture encourages the neighbors to spend weeks or months talking behind their backs without ever looking them in the eye.
113. The spread of digital culture, particularly evident among young people, is profoundly changing their experience of space and time, influencing their daily activities, communication, and interpersonal relationships, including faith. The opportunities that the internet provides are reshaping relationships, bonds and boundaries. Nowadays we often experience loneliness and marginalization, even though we are more connected than ever. Moreover, those with their own economic and political interests can exploit
social media to spread ideologies and generate aggressive and manipulative forms of polarization. We are not well prepared for this and ought to dedicate resources to ensure that the digital environment becomes a prophetic space for mission and proclamation. Local Churches should encourage, sustain and accompany those who commit themselves to mission in the digital environment. Christian digital communities and groups, particularly young people, are also called to reflect on the way they create bonds of belonging, promoting encounter and dialogue. They need to offer formation among their peers, developing a synodal way of being Church. The internet, constituted as a web of connections, offers new opportunities to better live the synodal dimension of the Church.
This paragraph acknowledges the dangers of digital technology, in particular
social control media, and the key words are "We are not well prepared for this". Nonetheless, it suggests that local churches should "encourage" reducing these risks online. I don't feel that "encourage" is the right word to use, but nor do I feel they should discourage.
149. The synodal process has insistently drawn attention to some specific areas of formation of the People of God for synodality. The first of these concerns the impact of the digital environment on learning processes, concentration, the perception of self and the world, and the building of interpersonal relationships. Digital culture constitutes a crucial dimension of the Church's witness in contemporary culture and an emerging missionary field. This requires ensuring that the Christian message is present online in reliable ways that do not ideologically distort its content. Although digital media has great potential to improve our lives, it can also cause harm and injury through bullying, misinformation, sexual exploitation and addiction. The Church's educational institutions must help children and adults develop critical skills to safely navigate the web.
These comments are very pertinent and very consistent with my own testimony, part of which is reproduced later in this report.
150. Another area of great importance is the promotion in all ecclesial contexts of a culture of safeguarding, making communities ever safer places for minors and vulnerable persons.
When I raised this topic in free software communities, my family was mercilessly attacked. See the
emails I sent at the end of 2017 and the comments about IBM
Red Hat later in this report.
Sources related to working group three, mission in a digital environment
The Synod.va website has published the list of
all the working groups. The website includes a short video about each group and a link to their most recent reports.
The video for working group three runs for a little under two minutes. Here are some of the key quotes and my observations:
"Today people, especially the young, have learnt to live simultaneously and seamlessly in both digital and physical spaces."
The claims made in the video are not the same as those presented in the final report. We will get to that. Nonetheless, whenever
social control media is discussed, there is a tendency to generalize about the impossibility of living without it. Whenever we see a claim like this, it is important to challenge it.
"How does the Church use and appropriate digital culture?"
The rhetorical question is interesting. In reality, the Silicon Valley superpowers use and appropriate whatever content we provide to them. The church doesn't use them, they use us. How do you think they became so rich?
A more appropriate question might be: "How does the Church
compensate for the deficiencies of digital cultures?"
"This environment is now "indistinguishable from the sphere of everyday life"."
Pope Francis was an intelligent man and had intelligent people around him, including the late Cardinal Pell. We can trace this quote back to the thinking of Alan Turing. Turing is regarded as the father of computer science and a martyr. Turing conveyed exactly the same concept to us in the legendary Turing test, which Turing himself called the imitation game in 1949.
Another way to interpret this phenomenon is to say that the masses have been brainwashed by the overlords of Silicon Valley.
The choices made by Facebook's leadership are a huge problem - for children, for public safety, for democracy - and that is why I came forward. And let me be clear: it doesn't have to be this way. We are here today because of Facebook's deliberate choices.
The working group's summary continues...
To effectively proclaim the Gospel in our contemporary culture, we must discern the opportunities and challenges presented by this new dimension of "place"
Nonetheless, the report includes the phrase "greater immersion", and I feel the Church should not assume that this is a default course of action.
The summary also addresses the concept of jurisdiction. The Catholic Church has traditionally organized itself on a geographic basis. The internet allows people to connect and form virtual communities without any geographic connection.
Among other things, before the internet, the Church could move high-risk priests from one parish to another without worrying about connections being made. I meticulously examined the documents of the Australian Royal Commission and found this note from the legendary Father X___:
This means that if anyone in Australia were to hear that Father Z___ is receiving treatment because of something that happened in Boston and went there to find out more, they would reach a dead end.
The letter in question was written shortly before the internet became a public phenomenon. Reading those words today, they are a stark reminder of how the internet is turning our lives upside down.
The working group goes on to state that it is seeking "practical recommendations or proposals" from the whole community on any topic related to the Church's mission in the digital environment.
People engaged in the free software movement, whether
Catholic or not, can contact their local diocese to find out who is coordinating the local response to these challenges.
Another phrase that caught my eye:
"today we live in a digital culture"
Not exactly. Some would say that a digital culture is being imposed upon us. Institutions such as politics and the media are dependent on it and put it on a pedestal. Therefore, it is all the more vital that other institutions, such as the Church, take up the task of questioning every aspect of digital culture and promoting viable alternatives.
Life without mobile phones, life without apps
Mobile phones and apps are closely related. Some people choose to live without a smartphone; in other words, they only have half the problems of a full mobile phone. Some people also choose to have smartphones without the Google or Apple app store, for example, those who install
Replicant or
LineageOS and use the
F-Droid app store to limit their phone to ethical apps.
In practical terms, there are people who cannot find their way around their home town without using their phone. An interesting question for the church to contemplate: what percentage of the faithful are unable to identify the most direct route from their home to the nearest church without using an app? It would be interesting to analyze the responses by various factors, such as age and years of residence in the parish.
Another key question, closely related to the previous one: how many parishioners can remember the mass times and the key events of the parish calendar without looking at their phone? It is great to have this information visible on the parish website; nonetheless, when people are truly engaged in the parish and the community, this information is memorized. The more widely this knowledge is held throughout a community, the more resilient the community is.
Authentication systems undermine human dignity
Today we frequently see companies insisting that they need our mobile phone numbers to "authenticate" us or to "sign" documents by SMS.
This sort of thing is particularly disturbing. Many people are familiar with the Nazi practice of branding identification numbers onto the skin of Jewish prisoners. Mobile phone numbers serve a similar function. Even though the numbers are not branded onto our skin, it is often inconvenient for people to change their number.
There are many closely related phenomena, including websites that require users to authenticate with a Gmail or Facebook account.
At the level of church, state, education, healthcare and financial services, it is vital to ensure that everybody can participate in the manner they see fit without giving up their dignity.
The Church needs to speak up about these topics with the same voice it uses on topics like abortion.
È necessario sottolineare il consenso
Le preoccupazioni relative al consenso e alla coercizione sono diventate un tema di grande attualità nel mondo di oggi. Ironicamente, le
piattaforme
di controllo sociale che fingono di aiutare le donne a trovare una piattaforma violano il principio del consenso in molti altri modi.
Si consideri, ad esempio, chi ha dedicato tempo alla creazione di un profilo su Facebook o Twitter, a volte per molti anni, connettendosi con centinaia o migliaia di follower, per poi ritrovarsi a dover aggiungere il proprio numero di cellulare al proprio account. Se non lo fanno, l'account viene bloccato. Non esiste una vera e propria ragione tecnica per avere un numero di cellulare nell'account, poiché molti di questi servizi hanno funzionato esattamente allo stesso modo per molti anni prima che tali richieste diventassero comuni.
Le persone non acconsentono liberamente a condividere i propri numeri di telefono con Mark Zuckerberg ed Elon Musk. I servizi sono stati imbastarditi per tendere un'imboscata ai loro utenti con queste richieste.
It is telling that this culture of ambush and coercion creeps into wider society. In Australia, Chanel Contos launched a widely publicized petition/journal with stories from women at elite private schools who felt ambushed, bullied and coerced into unwanted physical encounters.
Ironically, Ms Contos publicized her concerns through the very same platforms that are undermining our understanding of consent and privacy.
The Church itself has had to do much soul-searching on the topics of consent and abuses of power. This puts it in an interesting position, where we can argue that, even considering some of the most shocking revelations of abuse, those responsible are the lesser evil compared to the overlords of Silicon Valley.
It is remarkable how quickly the institutions of Silicon Valley have abandoned every check and balance, seeing fit to do whatever pleases them. The Catholic Church and other religious institutions can now draw on what they have learned from critically analyzing their own failures and warn society how foolish it would be to go down the same path with these digital gangsters.
Digital technology is much more than social control media
The church is no stranger to technology. The first printing presses were installed on church premises. Caxton installed the first English printing press in Westminster Abbey. Other sites included Oxford and St Albans Abbey. Before printing, reading and writing were activities reserved for clerics, and many of their works existed only in Latin. Printing enabled the mass production of Bibles in German and English. This, in turn, had an enormous impact on standardizing language, just as it helped standardize the moral attitudes that Silicon Valley is now tearing out from under us. The King James Bible is widely recognized for its impact on the English language.
The standardization of language was only one side effect of this invention. The Reformation was another. As people acquired books and the ability to read, they became less dependent on the clergy.
Likewise, social control media is having an impact on our culture, for better or worse. Just as the printing press enabled the Reformation, social control media may bring further changes to the way human beings organize themselves around religious structures and beliefs. The overlords of Silicon Valley are actively contemplating these roles. Elon Musk has even dressed up as Satan. If the Catholic Church does not offer a compelling alternative to these shifts in power, that power will be taken away from it.
Frances Haugen (Facebook whistleblower): almost nobody outside of Facebook knows what happens inside of Facebook. The company's leadership keeps vital information from the public, the US government, its shareholders and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and divisive messages, and much more.
Whereas earlier generations turned to their clergy for advice, and later read the Bible for themselves, young people today turn to a search engine and tomorrow they may be relying on artificial intelligence. We can already observe how search engines, social control media and AI bots push people into ever greater levels of conflict with their neighbors, or lead them down dark paths of isolation, self-harm and suicide.
Catholic Church assets relevant to the digital environment
The Catholic Church has a significant role in education and schools, so it can see the impact of social control media, and it can enforce bans for children and provide training for staff and parents.
Teachers, whether employed by church or state, have reported a rise in bullying by parents who gang up on messaging apps. In a recent case,
British police sent six officers to humiliate a parent who had used WhatsApp to complain about the local school. The conflict, the adversarial nature of this environment and the sheer waste of police resources are all consequences of the manner in which the technology is designed and deployed in society. Every incident like this offers a window into the opportunities for the Catholic Church to ask "is there a better way?".
The words of Frances Haugen help explain the six-policemen siege to parents of young children:
I saw Facebook repeatedly encounter conflicts between its own profits and our safety. Facebook consistently resolved these conflicts in favor of its own profits. The result has been a system that amplifies division, extremism and polarization, undermining societies around the world.
The Catholic Church is a significant employer in many countries. This gives it the power to make decisions about the use of mobile phones and messaging apps in the employer/employee relationship. An employer cannot prohibit employees from using these devices in their own time, but it can decide to eliminate their official use for work purposes. The employer/employee relationship is another opportunity to provide training about the primacy of human dignity over the demands of our devices.
The public agenda in the digital environment, the abortion of our species
With many politicians and journalists now living their lives under the control of social media, their ability to judge which topics merit public debate is heavily influenced by whatever appears to be trending online. Topics are assumed to trend online as a consequence of public interest, when in reality the operators of online platforms exert their influence to ensure certain issues appear to grow organically, while significant but inconvenient topics are conveniently buried in the news feed.
In this context, the Catholic Church offers an alternative route to put issues on the agenda for public debate, regardless of whether a particular issue appears to be "trending" or not. This power is frequently used for matters close to Church teaching, such as lobbying on abortion, but there is no reason the Church cannot use the same assets to lobby against the abortion of the human race by artificial intelligence.
Helping victims of discrimination by the Silicon Valley overlords and online mobs
The origins of the Catholic Church go back to the persecution of Jesus and the martyrs Saint Peter and Saint Paul.
But let us leave ancient examples and come to those who, in the times closest to us, contended for the faith. Let us take the noble examples of our own generation. Through jealousy and envy, the greatest and most righteous pillars of the Church were persecuted and contended even unto death. Let us set before our eyes the good Apostles. Peter, through unrighteous envy, endured not one or two but many labours, and having thus borne his testimony, went to his appointed place of glory. Paul, too, through envy, showed by example the prize given to patience: seven times was he in chains; he was banished; he was stoned; having become a herald both in the East and in the West, he won the noble renown due to his faith; and having preached righteousness to the whole world, and having reached the farthest bounds of the West, and having borne witness before rulers, he at last departed from the world and went to the holy place, having become the greatest example of patience. (First letter of Clement to the Corinthians, 5:1 - 5:7)
These words describe the persecution of Peter and Paul under the emperor Nero, almost two thousand years ago.
Eight hundred years ago the Magna Carta was promulgated; over time it inspired the United States Bill of Rights, the Universal Declaration of Human Rights and the abolition of the death penalty.
Yet today we see the Silicon Valley overlords wanting to throw all of that out the window and take us back to the days of Nero.
Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.
When we visit the websites of well-known free software projects such as Debian and Fedora, we see them openly declaring their desire to censor certain people. Anybody who speaks up about ethical issues in our industry has been subjected to these extreme reprisals from time to time.
The similarities between these cases, and the growing list of victims, are striking proof that these are not random events. There is a coordinated effort to diminish or circumvent civil rights. If a digital space or digital world exists, then it is eerily similar to the world in which Roman emperors resorted to gruesome executions to perpetuate control through fear.
The Catholic Church can go looking for the victims who have been cancelled, the victims who have been deplatformed and those who have something to say about human dignity in the era of artificial intelligence. Whether these people are Catholic or not, the concerns that independent experts have tried to investigate and publicize must be elevated above the noise produced by public relations departments.
At the same time, the horrific impact inflicted on our families is often hidden from public view.
Children in the digital environment
It is significant that we found very similar tactics being used by Harvey Weinstein and Chris Lamb, a former leader of the Debian project.
This is significant because Lamb was trained through the Google Summer of Code and funded by Google, which also made a large payment of $300,000 shortly before three victims exposed the scandal. Despite Debian's promise of transparency, the money was only revealed
more than six months later, and Google's name was never publicly connected to those numbers.
When Weinstein had concerns about the behavior of certain women, he would send nasty gossip about "behavior" to other people in the industry. There is something snobbish about these attitudes to human behavior.
"I remember Miramax telling us they were a nightmare to work with and we should avoid them at all costs. This was probably in 1998," Jackson said.
"At the time, we had no reason to question what these people were telling us, but in hindsight I realize that this was very likely the Miramax smear campaign in full swing."
Several people have come forward demonstrating that Chris Lamb was doing exactly the same thing in his role at Debian. Under copyright law, co-authors have no obligations whatsoever to the person elected from time to time to the role of Debian Project Leader. We are all equals.
Subject: Re: Debian Developer status
Date: Tue, 18 Dec 2018 10:36:09 +0900
From: Norbert Preining <norbert@preining.info>
To: Daniel Pocock <daniel@pocock.pro>
Hi Daniel,
even though fighting a case like this in the UK is beyond my
capabilities and financial means,
I am afraid that Lamb has indeed torpedoed an application with a company
in New York, a Debian-related job. If that happened, and I can
reasonably document it, I would consider a defamation case.
> Lamb is a resident of the UK and sends emails from the UK
> https://regainyourname.com/news/cyberbullying-cyberstalking-and-online-harassment-a-uk-study/
Thanks for the links, I will keep them in mind.
Norbert
--
PREINING Norbert http://www.preining.info
Accelia Inc. + JAIST + TeX Live + Debian Developer
GPG: 0x860CDC13 fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
Even more disturbing is the fact that Lamb began attacking my family at the very time Cardinal George Pell was convicted in 2018. One of my second cousins was a member of Cardinal George Pell's former choir in Melbourne. Lamb and his accomplices, funded by Google, spread anonymous rumors of abuse.
Several people came forward with evidence that Lamb was behaving like Weinstein, spreading rumors behind our backs. When Dr Preining and I spoke up, a third victim saw the scandal and identified himself publicly on Christmas Day:
Subject: Re: Censorship in Debian
Date: Tue, 25 Dec 2018 23:44:38 +0100
From: martin f krafft
Organisation: The Debian project
To: debian-project@lists.debian.org
Hello project,
It is very sad to read about what is going on.
I know that there has been at least one other case in which DAM and AH
acted outside their mandate, threatening expulsion from the
project and choosing very selectively with whom they communicated.
I know, because I was being targeted.
Neither DAM nor AH (the same people still active today) made
a single attempt to hear me. None of my e-mails to either DAM or AH
were ever answered.
Instead, DAM issued a verdict and influenced other people to the
point where "because DAM ruled" was given as a reason for further
measures. This was an unconstitutional abuse of DAM's powers, and in
the case of AH, the whole affair also bordered on defamation. Among others,
the current DPL Chris Lamb promised a review in due course, but
nothing ever happened.
... [ snip ] ...
But if it is not safe for the engineers who develop this technology, it is certainly not safe for children.
On 5 October 2021, I raised the concerns about children in this culture with the report
Google, FSFE & child labor.
Red Hat, a subsidiary of IBM since 2019, started legal action to censor and discredit my concerns. They accused me of bad faith for publishing that article. Nonetheless, the legal panel found that
Red Hat was harassing me and committing an abuse of the administrative procedure.
The irony, of course, is that Cardinals wear red hats, like the name of the company
Red Hat that was caught mistreating me. Chris Lamb of Debian had spread rumors about my family when Cardinal Pell was convicted.
The manner in which all of this has intersected with our lives and our faith, the rumors of abuse after the conviction of the late Cardinal Pell, my visit to the Carabinieri on the day of the Cardinal's death, the wedding day, Palm Sunday, a simulated suicide (unconfirmed), the crucifixion of Dr Stallman at Easter and the Debian Christmas lynchings, is bewildering. As they say in the detective movies, follow the money.
The digital environment subjects parishioners to third-party surveillance
The Catholic Church was born out of persecution, and it must be remembered that surveillance is a pillar of persecution.
The fact that the biggest services, such as Google, Facebook and Twitter, are all apparently free is proof that they derive all their profits from their ability to conduct effective surveillance and manipulation of the population.
At one time, the Church performed similar roles. The faithful submitted to a form of surveillance through the sacrament of confession, where they received counsel from their priest. Priests sought to exert a certain influence from the pulpit, backed by the threat of excommunication and, from time to time, the inquisition or persecution of somebody who was ahead of his time, like Galileo.
If technology companies can approximate all of these functions so effectively through algorithms, we run the risk that religion becomes redundant.
Therefore, trying to perform the role of the Church through a medium that is a substitute for religion is very much like digging one's own grave.
Through a series of public inquiries and whistleblower disclosures, we have learned the extent to which these overlords are stripping us of our dignity. Their goal is to anticipate our every decision, to influence whom we talk to, how we vote and every single cent in our budget.
If each of these decisions is controlled, and even micromanaged for us with scientific precision, down to the last cent in our bank account each month, by the influence of algorithms, what space remains in our conscience for the influence of the Gospel?
Mission: remaining relevant
Therefore, the question put to the working group on
mission in the digital environment
could be rephrased as follows: how does religion, of any kind, remain relevant at all?
Today, by tradition, in many families of the wealthier cultures the church is a place for weddings, funerals and sometimes the education of their children.
For the church to equip its parishioners with technology, rather than losing them to technology, we need to ask questions about some of the topics raised by the free software movement.
How to ensure that every person has full control over their own devices, including the right to repair and the right to change the operating system.
Developing strategies to protect people from the dangers of technology. For example,
social control media allows small but very noisy groups to inflict serious harm on their victims through the deliberate and repeated spreading of gossip and defamation. It is becoming ever harder to ensure that no person or minority is subjected to online vendettas. How do we provide support to people targeted by these toxic individuals? How do we ensure every person and group can speak in turn?
Mission: protecting society from repeating the same mistakes
Australia went through the process of establishing a Royal Commission into abuses by a wide range of institutions, including the Church. Yet for many people who died, or who lost family members, their health or their careers, it was too late. Wouldn't it be better to deploy such intensive measures before catastrophic failures occur? The time has come to direct the same level of scrutiny at the
executives of
social control media and the exploitation and manipulation of public opinion at multiple levels.
Conclusion
Social control media is rapidly becoming a front for artificial intelligence. As the Turing test (the imitation game) has suggested to us since 1950, it is inevitable that each new iteration of this phenomenon becomes ever more indistinguishable from reality. As such, it may present itself not only as a substitute for our fellow human beings, but also as an alternative to the Church. People may be induced to accept it as their God. In other words,
social control media could make the Church irrelevant and, having done so, it could go on to make humanity irrelevant.
Just look at the way people make faces at me after the death of my father. The rudeness I experience almost daily began at a moment of grief. People are being brainwashed to put aside even the most basic respect for human dignity, respect for a family at a time of grief, and it becomes just another opportunity to exploit others for amusement. This aspect of my life was created entirely by
social control media
and the people who are defining that space in my profession.
In her testimony to Congress, Frances Haugen told us:
I believe what I did was right and necessary for the common good, but I know Facebook has infinite resources, which it could use to destroy me.
In 2018, I attended the UN Forum on Business and Human Rights in Geneva, where I made some brief comments about Facebook and Twitter falling into the wrong hands. The UN Forum took place at the same time as the jury was considering the charges against Cardinal George Pell. Pell was convicted, and these
social control media
platforms filled up with rumors about me and my family, precisely the phenomena that Haugen herself appears to be afraid of.
Here is the video with the comments I made at the UN Forum. I spoke for barely forty-three seconds and they spent $120,000 attacking my family.
The late Pope Francis asked a group of around four hundred bishops to work together, from 2021 to 2024, on a reflection about the way the Catholic
faithful interact and advance as a movement. Officially, this committee of bishops was given the title
Synod on Synodality. The term "Synod" is widely used across all Christian religions for the committees, councils or meetings of these groups at every level of the church hierarchy. The term
"Synodality" is specific to the Catholic Church. The Synod has an official web page where it
attempts to explain what synodality is.
Several working groups were created on a wide range of topics. In this analysis, I focus only on the third group, which looked at
mission in the digital environment. I then present my own observations on the topics covered by this group.
In a recent news report, unrelated to the Synod, the diocese of Paderborn (north-central Germany)
announced that it would try to use TikTok to engage with young people. The scope of working group three is very broad and is not limited to
social media
platforms. It appears to me that it covers all forms of digital technology.
Even
amateur radio packet repeaters
are in scope, although amateur radio licenses do not permit the explicit transmission of religious material.
The Vatican was an early adopter of shortwave radio. Pope Leo XIV and Monsignor Lucio Adrian Ruiz, secretary of the Dicastero per la Comunicazione, visited the Vatican Radio facilities this week:
Reading the output of the working group, and of the Synod as a whole, I get the feeling that the Church at large has not decided to either embrace or reject
social control media. It acknowledges that it is part of the digital landscape and tries to define how the Church relates to it.
How the synodal process evolved at a high level
Before going into the details, here is an overview of the process and the reports that appeared at various moments, with direct links to the translated editions.
The main website of the Synod is
www.Synod.va and it is available in multiple languages. It appears that the content was written in Italian and then translated into English and other languages. This makes it a little harder to read.
A wider meeting took place in Rome in October 2023, during which a first draft of the report was produced.
Key points from the final report concerning the digital environment
In point 58, the report notes that Christians may attempt to proclaim the Gospel through their participation in a digital environment.
58. ... Christians, each according to their diverse roles - within the family and other states of life; in the workplace and in their professions; engaged civilly, politically, socially or ecologically; in the development of a culture inspired by the Gospel, including the evangelization of the digital environment - walk the paths of the world and proclaim the Gospel where they live, sustained by the gifts of the Spirit.
59. In doing so, they ask the Church not to abandon them but rather to enable them to feel that they are sent and sustained in mission.
This point appears to encourage the Church to contemplate the situation faced by those under the influence of a digital environment, but it does not necessarily imply that the digital environment is either good or bad.
In point 112, concerning mobility, which includes people from all levels of society, the report notes:
Some maintain strong bonds with their country of origin, especially through digital media, and may therefore find it difficult to form connections in their new country; others find themselves living without roots.
This is an excellent observation. In Europe, I have met couples whose relationship depends entirely on automated translation devices. When newcomers arrive in town, the WhatsApp culture encourages the neighbors to spend weeks or even months gossiping behind their backs without ever looking them in the eye.
113. The spread of digital culture, particularly evident among young people, is profoundly changing their perception of space and time; it influences their daily activities, communication and interpersonal relationships, including faith. The opportunities that the internet provides are reshaping relationships, bonds and boundaries. Nowadays, we often experience loneliness and marginalization, even though we are more connected than ever. Moreover, those with their own economic and political interests can exploit
social media to spread ideologies and generate aggressive and manipulative forms of polarization. We are not well prepared for this and ought to dedicate resources to ensure that the digital environment becomes a prophetic space for mission and proclamation. Local Churches should encourage, sustain and accompany those who are engaged in mission in the digital environment. Christian digital communities and groups, particularly young people, are also called to reflect on the way they create bonds of belonging, promoting encounter and dialogue. They need to offer formation among their peers, developing a synodal way of being Church. The internet, constituted as a web of connections, offers new opportunities to better live the synodal dimension of the Church.
This paragraph acknowledges the dangers of the digital world, in particular
social control media, and the key phrase is: "We are not well prepared for this." It nonetheless suggests that local churches should give further "encouragement" to these online risks. I don't think "encourage" is the right word to use, but I don't feel they should discourage either.
149. The synodal process has insisted on some specific aspects of the formation of the People of God for synodality. The first concerns the impact of the digital environment on learning processes, concentration, the perception of self and the world, and the building of interpersonal relationships. Digital culture constitutes a crucial dimension of the Church's witness in contemporary culture and an emerging missionary field. This requires ensuring that the Christian message is present online in reliable ways that do not ideologically distort its content. Although digital media has great potential to improve our lives, it can also cause harm and injury through harassment, misinformation, sexual exploitation and addiction. The Church's educational institutions must help children and adults develop critical skills to safely navigate the web.
These comments are very pertinent and very consistent with my own testimony, part of which is reproduced later in this report.
150. Another area of great importance is the promotion in all ecclesial contexts of a culture of protection, making communities ever safer places for minors and vulnerable persons.
When I raised this topic in free software communities, my family was attacked mercilessly. See the
emails I sent at the end of 2017 and the comments about IBM
Red Hat later in this report.
Sources related to working group three, mission in a digital environment
The Synod.va website published the list of
all the working groups. It includes a short video about each group and a link to their most recent reports.
The video for working group three lasts a little under two minutes. Here are some key quotes and my own observations:
"Today, people, especially the young, have learned to live simultaneously and seamlessly in both digital and physical spaces."
I feel this assertion is totally wrong. People have learned to use digital spaces. A recent study suggests that
nearly seventy percent of young people feel bad after using social media. In other words, they feel compelled to use it. Consequently, their lives are being disrupted. People are suffering.
The statements made in the video are not the statements presented in the final report. We come back to that later. Nonetheless, whenever
social control media
comes up, there is a tendency to make generalizations about the impossibility of living without it. Whenever we see an assertion like that, it is important to challenge it.
"How does the Church use and appropriate digital culture?"
The rhetorical question is interesting. In reality, the extremists of Silicon Valley use and appropriate all the content we give them. The Church doesn't use them, they use us. How do you think they became so rich?
A better question might be: "How does the Church
fill the gaps left by digital cultures?".
"This environment is now 'indistinguishable from the sphere of everyday life'.",
Pope Francis was an intelligent man surrounded by brilliant people, including the late Cardinal Pell. This quote has its origins in the thinking of Alan Turing. Turing is considered the father of computer science and a martyr. Turing gave us exactly the same concept with the legendary Turing test, which Turing himself called the "imitation game" in 1950.
Another way of interpreting this phenomenon is that the masses have been brainwashed by the overlords of Silicon Valley.
The choices being made by Facebook's leadership are a huge problem - for children, for public safety, for democracy - and that is why I came forward. And let's be clear: it doesn't have to be this way. We are here today because of deliberate choices Facebook has made.
The working group summary continues...
"In order to proclaim the Gospel effectively in our contemporary culture, we must discern the opportunities and challenges presented by this new dimension of 'place'"
This particular quote acknowledges that there are both opportunities and challenges. The jubilee year is dedicated to hope, and I sincerely hope the members of the working group are reading the reports from the whistleblowers, the child psychologists and
even the coroners who are warning us about the impact of Facebook and its ilk.
Nonetheless, the report includes the phrase "greater immersion" and I feel the Church should not assume that "greater immersion" is a default course of action.
The summary also touches on the notion of jurisdiction. The Catholic Church has traditionally organized itself on a geographical basis. The Internet allows people to connect and form virtual communities with no geographical link.
Moreover, before the advent of the Internet, the Church could move high-risk priests from a parish on one side of town to the other without any fear of somebody connecting them. I went through the documents of the Australian Royal Commission with a fine-tooth comb and came across this note from the legendary Father X___:
This means that if somebody in Australia, hearing that Father Z___ had undergone treatment because of something that happened in Boston, were to go there to find out more, they would hit a dead end.
The letter in question was written just before the Internet became a reality for the general public. Reading those words today is a stark reminder of just how much the Internet has upended our daily lives.
The working group goes on to state that it is seeking "practical recommendations or proposals" from the whole community, on any subject related to the Church's mission in the digital environment.
People involved in the free software movement, whether Catholic or not, can contact their local diocese to find out who is coordinating the local response to these challenges.
Another sentence that caught my attention:
"Today, we live in a digital culture"
Not exactly. Some would say a digital culture is being imposed on us. Institutions such as politics and the media cling to it and put it on a pedestal. That makes it all the more crucial for other institutions, such as the Church, to take on the mission of questioning the whole of digital culture and proposing viable alternatives.
Life without a mobile phone, life without apps
Mobile phones and apps are closely linked. Some people choose to live without a smartphone, which means they only have half the problems of a typical mobile phone. Others choose to use a smartphone without Google's or Apple's app store, for example those who install Replicant or LineageOS and use the F-Droid app store to limit themselves to ethical apps.
In practical terms, some people are unable to get around their own town without using their phone. An interesting question arises for the Church: what proportion of the faithful could not identify the most direct route between their home and the nearest church without consulting an app? It would be interesting to analyze the answers against various factors such as age and the number of years spent living in the parish.
Another key question, closely related to the previous one, is how many parishioners can remember the mass times and the key events of the parish calendar without consulting their phone. It is great that this information is visible on the parish website, but when people are genuinely involved in the parish and the community, it stays engraved in their memory. The more widely this knowledge is spread through a community, the more resilient that community becomes.
Authentication systems are an affront to human dignity
Today we frequently see companies insisting that they need our mobile phone numbers to "authenticate" us or to "sign" documents by SMS.
This kind of thing is particularly worrying. Many people are familiar with the Nazi practice of engraving identification numbers into the skin of Jewish prisoners. Mobile phone numbers serve a similar function. Even if they are not physically engraved into our skin, it is often inconvenient to change them.
There are many closely related phenomena, including websites that require users to authenticate with a Gmail or Facebook account.
At the level of the Church, the state, education, healthcare and financial services, it is vital to ensure that everybody can participate as they wish without giving up their dignity.
The Church needs to speak out on these topics with the same intensity it brings to themes such as abortion.
The emphasis needs to be on consent
Concerns about consent and coercion have become a major topic in today's world. Ironically, the social control media platforms that claim to give women a voice violate the principle of consent in many other ways.
Consider people who built up a profile on Facebook or Twitter, sometimes over a period of years, connecting with hundreds or even thousands of followers, who are then asked to add their mobile phone number to their account. If they decline, their account is blocked. There is no valid technical reason to have a mobile phone number on an account: many of these services worked in exactly the same way for years before these demands became commonplace.
People do not freely consent to sharing their phone numbers with Mark Zuckerberg and Elon Musk. The services were altered to ambush their users with these demands.
It is significant that this culture of entrapment and coercion is spreading through society. In Australia, Chanel Contos launched a high-profile petition/journal gathering the testimonies of women from prestigious private schools who felt they had been subjected to entrapment, harassment and unwanted physical contact.
Ironically, Ms Contos publicized her concerns on the very platforms that undermine our understanding of consent and privacy.
The Church itself has had to do some profound soul-searching on questions of consent and abuse of power. This puts it in an interesting position: even in light of the most shocking revelations of abuse, those responsible are a lesser evil compared with the bosses of Silicon Valley.
It is remarkable how quickly the institutions of Silicon Valley abandoned all checks and balances so they could act exactly as they please. The Catholic Church and other religious institutions can now draw on the lessons from critically analyzing their own mistakes and warn society how foolish it would be to go down the same path again with these digital gangsters.
Digital technology is much more than just a means of social control
The Church is no stranger to technology. The first printing presses were installed on Church premises. Caxton set up England's first press at Westminster Abbey. Other sites included Oxford and St Albans Abbey. Before the printing press, reading and writing were reserved to the clergy, and many of their works existed only in Latin. The printing press made possible the mass production of bibles in German and English. This had a considerable impact on the standardization of language, just as it helped standardize the moral attitudes that Silicon Valley is now destroying. The King James Version of the Bible is widely recognized for its influence on the English language.
The standardization of language was only one side effect of this invention. The Reformation was another. As people acquired books and the ability to read, they became less dependent on the clergy.
Likewise, social control media shape our culture today, for better and for worse. Just as the printing press made the Reformation possible, these media may bring new changes in the way humans organize themselves around religious structures and beliefs. The bosses of Silicon Valley are actively contemplating such roles. Elon Musk has even dressed up as Satan. If the Catholic Church does not offer a convincing alternative to these shifts in power, it will be stripped of its own powers.
Frances Haugen (Facebook whistleblower): Almost no one outside Facebook knows what happens inside Facebook. The company's leadership keeps vital information from the public, the US government, its shareholders and governments around the world. The documents I have provided prove that Facebook has repeatedly misled us about what its own research reveals about the safety of children, its role in spreading hateful and divisive messages, and much more.
While previous generations consulted the clergy for guidance and then read the Bible, young people today turn to search engines, and tomorrow they may put their trust in artificial intelligence. We can already see search engines, social media and AI bots prompting people into ever more conflicts with their neighbors, or leading them down the dark paths of isolation, self-harm and suicide.
Catholic Church resources relevant to the digital environment
The Catholic Church plays a significant role in education and schools. Consequently, the Church can see the impact of social control media, and the Church can impose bans for children and provide training for staff and parents.
Teachers, whether employed by the Church or the state, have reported a rise in harassment from parents who gang up on messaging apps. Recently, British police sent six officers to humiliate a parent who had used WhatsApp to take issue with the local school. The conflict, the adversarial nature of this environment and the enormous waste of police resources are all consequences of the way this technology is designed and used in society. Every incident of this kind gives the Catholic Church a glimpse of the opportunity to ask, "is there a better way?".
Frances Haugen's words help explain the six police officers laying siege to the parents of young children:
I saw Facebook repeatedly encounter conflicts between its own profits and our safety. Facebook consistently resolved these conflicts in favor of its own profits. The result is a system that amplifies division, extremism and polarization, and undermines societies around the world.
The Catholic Church is a significant employer in many countries. This gives it the power to make decisions about the use of mobile phones and messaging apps within the employer-employee relationship. An employer cannot prohibit employees from using these devices in their own time, but it can decide to eliminate all official use of them for work purposes. The employer-employee relationship offers yet another opportunity to raise awareness of the importance of human dignity over and above the demands of our devices.
The public agenda in the digital environment, the abortion of our species
With many politicians and journalists now living inside social media, their judgment of which topics merit public debate is heavily influenced by whatever is supposedly trending online. These topics are assumed to trend because of genuine public interest, when in reality the operators of the online platforms exert influence so that certain topics appear to grow organically, while important but inconvenient topics are conveniently drowned in the flood of news.
In this context, the Catholic Church offers an alternative route for putting questions on the agenda for public debate, whether they are trending or not. This power is most often used for questions close to Church teaching, such as lobbying on abortion. However, nothing prevents the Church from using the same resources to fight against the abortion of humanity by AI.
Helping the victims of discrimination by the Silicon Valley overlords and the online mobs
The Catholic Church has its origins in the persecution of Jesus and of the martyrs Saint Peter and Saint Paul.
"But, to pass from the examples of ancient days, let us come to those champions who lived nearest to our time. Let us set before us the noble examples of our own generation. Through jealousy and envy the greatest and most righteous pillars of the Church were persecuted, and contended even unto death. Let us set before our eyes the good apostles. Peter, through unjust envy, endured not one or two but many labours, and having thus given his testimony went to the place of glory that was his due. Through envy Paul, too, showed by example the prize that is given to patience: seven times he was in bonds; he was exiled; he was stoned; having become a herald both in the East and in the West, he won noble renown for his faith; and having preached righteousness to the whole world, and having come to the farthest bounds of the West, and having borne witness before rulers, he departed at last from the world and went to the holy place, having become the greatest example of patience." (First Epistle of Clement to the Corinthians, 5:1 - 5:7)
These words describe the persecution of Peter and Paul under the Emperor Nero almost two thousand years ago.
Eight hundred years ago came the Magna Carta, which over time inspired the United States Bill of Rights, the Universal Declaration of Human Rights and the abolition of capital punishment.
And yet today we see the Silicon Valley overlords wanting to throw it all out the window and take us back to the days of Nero.
Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.
When we look at the websites of well-known free software projects such as Debian and Fedora, we see them openly proclaiming their willingness to censor certain people. Anybody who speaks up about ethical questions in our industry is sometimes subjected to extreme retaliation.
The similarities between these cases, and the growing list of victims, make it clear that none of this is random. There is a coordinated effort to restrict or circumvent civil rights. If a digital space or world exists, it looks eerily like the one in which Roman emperors used gruesome executions to perpetuate their grip through fear.
The Catholic Church can seek out the victims of cancellation, those who have been deplatformed and those who have something to say about human dignity in the age of AI. Whether these people are Catholic or not, the concerns that independent experts are trying to research and publicize need to be put front and center, above the noise of the public relations departments.
At the same time, the horrible impact inflicted on our families is often hidden from the public.
Children in the digital environment
It is telling that we found very similar tactics being used by Harvey Weinstein and Chris Lamb, a former leader of the Debian project.
This matters because Lamb was trained through the Google Summer of Code and funded by Google, including a large payment of $300,000 shortly before three victims exposed the scandal. Despite Debian's promise of transparency, the money was only disclosed more than six months later, and Google's name is never publicly associated with those figures.
When Weinstein was worried about the behavior of certain women, he would send nasty rumors to other players in the industry. There is something snobbish about these attitudes to human behavior.
When women complained to the police, the director Peter Jackson spoke up and confirmed that Weinstein had used these dirty tricks, spreading rumors about the behavior of women who were not submissive enough for his taste.
"I recall Miramax telling us they were a nightmare to work with and we should avoid them at all costs. This was probably in 1998," Jackson said.
"At the time, we had no reason to question what these people were telling us, but in hindsight I realize this was very likely the Miramax smear campaign in full swing."
Several people have come forward demonstrating that Chris Lamb was doing exactly the same thing in his role at Debian. Under copyright law, co-authors have no obligations to the person elected to serve a stint as Debian Project Leader. We are all equals.
Subject: Re: Debian Developer status
Date: Tue, 18 Dec 2018 10:36:09 +0900
From: Norbert Preining <norbert@preining.info>
To: Daniel Pocock <daniel@pocock.pro>
Hi Daniel,
even though fighting a lawsuit like this in the UK is beyond
my abilities and my financial means,
I am afraid that Lamb has also torpedoed an application at a
company in New York, a Debian-related job. If that happened, and I can
reasonably prove it, I would consider a defamation lawsuit.
> Lamb resides in the UK and sends emails from the UK
> https://regainyourname.com/news/cyberbullying-cyberstalking-and-online-harassment-a-uk-study/
Thanks for the links, I will keep them in mind.
Norbert
--
PREINING Norbert http://www.preining.info
Accelia Inc. + JAIST + TeX Live + Debian Developer
GPG: 0x860CDC13 fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
More disturbing still, Lamb began his attacks on my family at the very moment Cardinal George Pell was convicted in 2018. My second cousin was a member of Cardinal George Pell's former choir in Melbourne. Lamb and his associates, funded by Google, started anonymous rumors of abuse.
Several people have provided evidence showing that Lamb behaved like Weinstein, spreading rumors behind our backs. When Dr Preining and I spoke up, a third victim got wind of the scandal and identified himself publicly on Christmas Day:
Subject: Re: Censorship in Debian
Date: Tue, 25 Dec 2018 23:44:38 +0100
From: martin f krafft
Organization: The Debian project
To: debian-project@lists.debian.org
Hello project,
It is very sad to read what is going on.
I know there has been at least one other case in which DAM and AH
overstepped their mandate, threatening
expulsion from the project and being very selective about whom they talked to.
I know this because I was the target.
Neither DAM nor AH (the same people still active today) made
a single attempt to hear my side. None of my e-mails to DAM or AH
were ever answered.
Instead, DAM handed down a verdict and influenced other people to
the point that "because DAM ruled so" was given as the justification for
further measures. This was an unconstitutional abuse of DAM's power, and
in the case of AH, the whole mess bordered on defamation. Among other
things, the current DPL, Chris Lamb, promised a review in due course, but
nothing ever happened.
… [snip] …
But if this technology is not safe for the engineers who develop it, it is certainly not safe for children.
On 5 October 2021, I raised concerns about children in this culture with the report Google, FSFE & Child Labor.
Red Hat, a subsidiary of IBM since 2019, took legal action to censor and discredit my words. They accused me of bad faith for publishing that article. Yet the panel found that Red Hat was harassing me and abusing the administrative procedure.
The irony, of course, is that cardinals wear red hats, like the name of the company Red Hat that was found to have abused me. Chris Lamb of Debian had started the rumors about my family when Cardinal Pell was convicted.
The way all this interfered with our lives and our faith is staggering: the rumors of abuse after the conviction of the late Cardinal Pell, my visit to the carabinieri on the day the cardinal died, the wedding-day suicide on Palm Sunday that may have been a copycat (unconfirmed), the crucifixion of Dr Stallman at Easter and Debian's Christmas lynchings. As they say in the crime movies, follow the money.
The digital environment subjects parishioners to third-party surveillance
The Catholic Church was born out of persecution, and it must be remembered that surveillance is the cornerstone of persecution.
The fact that the biggest services, such as Google, Facebook and Twitter, are all ostensibly free is proof that they all derive their profits from their ability to efficiently survey and manipulate the population.
In earlier times, the Church fulfilled similar roles. The faithful submitted to a form of surveillance through the sacrament of confession, where they received the counsel of their priest. Priests sought to exert influence from the pulpit, threatening excommunication and, from time to time, the inquisition or persecution of people who were ahead of their time, such as Galileo.
If the technology companies can approximate all of these functions so effectively with algorithms, we run the risk that religion becomes redundant.
Consequently, trying to play the role of the Church through a medium that is itself a substitute for religion amounts to digging one's own grave.
Thanks to a series of public inquiries and whistleblowers, we have seen just how far these overlords go in stripping us of our dignity. Their goal is to anticipate our every decision, to influence whom we talk to, how we vote and every last cent of our budget.
If every one of these decisions is controlled and even micromanaged for us, with scientific precision, down to the last cent in our bank account each month, by the influence of algorithms, what space remains in our conscience for the influence of the Gospel?
Mission: staying relevant
Consequently, the question assigned to the working group on mission in the digital environment could be rephrased as: how does religion of any kind stay relevant?
For many families in today's wealthy cultures, the Church is engaged out of tradition for weddings and funerals, and sometimes for the education of children.
For the Church to empower its parishioners through technology, rather than losing them to technology, we need to ask ourselves some of the questions raised by the free software movement.
How do we guarantee that every person has full control over their devices, including the right to repair and the right to change the operating system?
How do we devise strategies to protect people from the risks of technology? For example, social control media allow small but very noisy groups to do serious harm to their victims through the deliberate and repeated spreading of rumors and defamation. It is becoming ever harder to guarantee that no individual or minority is shut out by online vendettas. How do we support the people targeted by these toxic individuals? How do we guarantee that every person and every group can have their turn to speak?
Mission: protecting society from the same mistakes
Australia set up a Royal Commission into abuses committed by various institutions, including the Church. Yet it came too late for the many people who had died, or who had lost loved ones, their health or their careers. Would it not make sense to intervene just as vigorously before, rather than after, catastrophic failures? It is high time the same scrutiny was applied to the bosses of the social control media, and to the exploitation and manipulation of the public on multiple levels.
Conclusion
Social control media are quickly becoming a front for artificial intelligence. As the Turing test (the imitation game) has suggested since 1950, it is inevitable that each new iteration of this phenomenon becomes ever harder to distinguish from reality. They may therefore present themselves not only as a substitute for our fellow human beings, but also as an alternative to the Church. People could be duped into accepting it as their God. In other words, social control media could make the Church irrelevant, and subsequently make humanity irrelevant.
One only has to see the looks on people's faces since my father died. The rudeness I experience almost daily began in a period of mourning. People were once taught the most basic respect for human dignity, respect for a family in a time of grief, and even that has become just another opportunity to use one another for sport. This aspect of my life was created entirely by social media and by those who define that space in my own profession.
In her testimony before Congress, Frances Haugen told us:
I believe what I did was right and necessary for the common good, but I know Facebook has infinite resources, which it could use to destroy me.
In 2018, I attended the UN Forum on Business and Human Rights in Geneva, where I commented briefly on the situation of Facebook and Twitter falling into the wrong hands. The UN Forum took place at the very moment the jury was considering the charges against Cardinal George Pell. Pell was convicted, and those social control platforms ran wild with rumors about my family and me, the very phenomenon Haugen herself seems to fear.
Here is the video with the comments I made at the UN Forum. I spoke for barely forty-three seconds, and they spent $120,000 attacking my family.
Single sign-on is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) to a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users to present whatever set of authentication mechanisms you've configured.
This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.
But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
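The device code flow mentioned above is standardised as the OAuth 2.0 Device Authorization Grant (RFC 8628). Here is a minimal sketch of the client side, with the HTTP POST injected as a callable so the polling logic stands on its own; the endpoint URLs and `client_id` are placeholders, not any real provider's:

```python
import time

# Minimal sketch of the OAuth 2.0 Device Authorization Grant (RFC 8628).
# `post` is any callable mimicking an HTTP POST that returns parsed JSON;
# injecting it keeps the flow logic separate from the transport.

def device_flow(post, client_id, token_url, device_url):
    # Step 1: ask the IdP for a device code plus a short code for the user.
    grant = post(device_url, {"client_id": client_id})
    print("Visit %s and enter code %s" %
          (grant["verification_uri"], grant["user_code"]))
    # Step 2: poll the token endpoint until the user approves elsewhere.
    interval = grant.get("interval", 5)
    while True:
        resp = post(token_url, {
            "grant_type": "urn:ietf:params:oauth:grant-type:device_code",
            "device_code": grant["device_code"],
            "client_id": client_id,
        })
        if "access_token" in resp:
            return resp["access_token"]
        if resp.get("error") != "authorization_pending":
            raise RuntimeError(resp.get("error", "unknown error"))
        time.sleep(interval)
```

The phishing problem described above is visible right in the structure: nothing ties the machine that displays `user_code` to the machine that approves it, so whoever convinces a user to enter the code gets the token.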
There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.
Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.
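To make the fragility concrete, here is a sketch of the kind of scraping involved: pulling a JSON blob out of a Javascript assignment embedded in a response. The `var authConfig = {...};` pattern is an invented example, not any identity provider's actual markup, and real pages often embed object literals that aren't even valid JSON; any upstream rename silently breaks the regex:

```python
import json
import re

def scrape_config(html):
    """Extract a JSON config object assigned to a JS variable in a page.

    Fragile by design: this matches the page's *implementation*, not any
    documented contract, so a renamed variable or a reformatted
    assignment breaks it (and JS object literals that use single quotes
    or unquoted keys would need more than json.loads).
    """
    m = re.search(r"var\s+authConfig\s*=\s*(\{.*?\});", html, re.DOTALL)
    if not m:
        raise ValueError("auth config blob not found; page layout changed?")
    return json.loads(m.group(1))
```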
I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.
There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.
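The client side of such a spec could be very small indeed. To be clear, no such standard exists today; the `mechanisms` field and the message shape below are invented purely to illustrate how little would be needed:

```python
import json
import uuid

# Hypothetical wire format for a provider-neutral CLI auth flow: the
# client POSTs credentials, the server answers with the MFA mechanisms
# it will accept, each identified by a stable UUID, a type string, and
# type-specific parameters (e.g. a webauthn challenge).

def parse_mfa_offer(body):
    """Parse the (invented) JSON response to the initial credential POST."""
    offer = json.loads(body)
    return [
        {"id": uuid.UUID(m["id"]), "type": m["type"],
         "params": m.get("params", {})}
        for m in offer["mechanisms"]
    ]
```

A client written against this shape would never need to care which identity provider is on the other end, which is exactly the property the scraping approach lacks.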
Someone, please, write a spec for this. Please don't make it be me.
As part of the 20th anniversary of Fedora-fr (and of the Fedora Project itself), Charles-Antoine Couret (Renault) and Nicolas Berrehouc (Nicosss) wanted to put questions to French-speaking contributors to the Fedora Project and Fedora-fr.
Thanks to the diversity of profiles, this shows how the Fedora Project works from different angles: the project beyond the distribution, but also how it is organized and designed. Note that on some points, certain remarks also apply to other distributions.
Let us not forget that the Fedora Project remains a global project and a team effort, which these interviews cannot necessarily reflect. But the French-speaking community is lucky to have enough quality contributors to give an overview of many of the distribution's subprojects.
Today's interview is with Nicolas Berrehouc (nickname Nicosss), a contributor to Fedora-fr and maintainer of its documentation. He became president of the Borsalinux-fr association in April 2025.
Interview
Hello Nicolas, can you briefly introduce yourself?
My name is Nicolas. I am not an IT professional by trade, despite what some people might think, and I don't work for Red Hat either. My background is in automation, microcontrollers and electronics, so still a technical world. My current job is not related to IT, nor strictly to my training. I am a self-taught jack of all trades who loves learning and sharing.
Can you briefly describe your contributions to the Fedora Project?
My direct contributions to the Fedora Project are limited to test days and everything related to validating fixes that go through testing following bug reports, whether I filed them or not, before they are pushed to stable. I report bugs either on the Red Hat Bugzilla or directly upstream.
There is also Anitya, which I filled in quite a lot when it came out, because I really thought it was a great project.
As a result, my workstation always runs testing, and I generally switch to the Beta as soon as it's available. That's the bulk of my contributions to the Fedora Project.
I have also chosen not to join any groups within the Fedora Project itself, for lack of time.
What brought you to Fedora, and why did you stay?
I discovered the GNU/Linux world with Slackware in the late 90s (yes, the other century) but it was a cold shower at the time, so I put it on hold before trying Mandrake, which offered a more approachable experience. Then I discovered Fedora Core when it came out and started using it alongside Windows, because it was still hard to break old habits with certain software. But the switch ultimately happened very quickly, and for quite a few years now I have used nothing but Fedora Linux, on workstations and servers alike.
I also saw it as a way to bring Windows users over to the GNU/Linux world simply by giving their computers a second life, since users very often only make basic use of their machines.
The Fedora Project offered a vision that suited me well: being innovative, genuinely oriented toward Free Software and, above all, community-driven, so it was possible not to remain a mere consumer in your corner. At the time Ubuntu was well established and was THE distribution, but despite a large and enthusiastic French-speaking community back then, its way of working didn't really appeal to me.
Why contribute to Fedora in particular?
In fact, almost every area is open for contribution. There are also now lots of tools available to make contributing easier through your FAS account. Moreover, there is good momentum and passionate people, which makes you want to take part all the more.
Fedora integrates many technologies and innovations that later show up in other distributions, so why wait? That remains my personal philosophy, so it fits with Fedora.
Do you contribute to other Free Software projects? If so, which ones and how?
Not being a developer, I still occasionally report bugs upstream on some of the software I use, when it hasn't been done already. The various projects are generally quite responsive, and it always helps move things forward. Otherwise, no real contributions to a particular project as such.
Do you use Fedora in a professional context? And why?
Not at all; my company only offers fairly locked-down Windows 10 workstations, so I often have moments of loneliness with reflexes from my daily use of GNOME :D
In a "semi-professional" context, shall we say, I also have self-hosted servers running Fedora Linux, providing services for a small circle, such as Web tools, Cloud, XMPP and Mail. This came about fairly quickly too, as it was part of something I wanted to learn in order to understand and be independent in managing my personal data. A big thank-you to the Free Software world for making that possible!
Are your contributions to Fedora a direct or indirect asset in your professional life? If so, in what way?
I would say it's more of an indirect one, like being able to talk shop with people who are more on the IT or industrial-IT side, which makes solving problems easier.
You mainly take part in the French-speaking community: maintaining the website, documentation, answering on the forum, running the association. Why do you contribute there? Why focus on the French-speaking community in your case?
Yes, I decided fairly early on to focus more on the French-speaking community because, despite what one might think, there is huge demand, and I believe that for people in France to take the plunge, they need accessible support in French to guide them as well as possible.
I had looked at the Fedora Project side, but I hadn't really found anything with reach for end users, and I saw potential in using Fedora Linux as a replacement for Windows for everyday use. I know the Project's priority is to have contributors, but it needs users too, counting on the fact that a small percentage will make the leap to contributing.
A lot has been done on French-language documentation by the Fedora Project, but I think we still very much have our place, as the pioneers of Fedora-fr had already addressed that gap long before.
As a result, I am fairly active in all the areas mentioned, trying to revive some momentum, because I know the same people have been in place for a long time now. I also hope it will encourage others to join the adventure through the Association. Don't hesitate to make yourself known at the meetings on the first Monday of each month!
A few years ago we made a big effort to modernize the documentation. Can you look back on that episode and why such an effort was needed?
Yes indeed, we did a very, very big job, one that we should manage to continue by the way, and I call on anyone willing to come forward.
What can I say about all those evenings of work? We settled on an organization to identify outdated articles (nearly all of them :D) as well as priorities based on demand. Then we divided up the articles for updating and cross-reviewed each other's work. Not sure it is still up to date, but that's what served as our working document. Every week we held a check-in on an IRC channel to discuss questions and clear up doubts in our work.
Today the idea is to provide or improve articles around the recurring questions on the Forum, to ease the transition to Fedora Linux and avoid endless repetition on the Forum.
What gaps do you see in the documentation?
The biggest gap is keeping everything that is available up to date.
Today some articles are in high demand (like the proprietary Nvidia drivers; what an idea to have a GPU from that brand, too) and would deserve better follow-up, but otherwise the focus today is really on providing documentation for the most recurrent questions on the Forum, to avoid repetition or searches there and having to point people to similar discussions.
Of course, any other contribution for a new article is welcome, because we really believe in sharing knowledge, and it's also the best way to discover new things.
How important is it to have a French-language forum about Fedora? Is it a good medium for solving people's problems in general?
English is not really a strong skill in France, and it's hard to get people to leave Windows while announcing that the package also includes the surprise of having to learn English. Online translators exist nowadays, but French-speaking Fedora Linux users show us that this Forum matters: questions come in regularly and it is heavily consulted.
There is obviously no competition with the Fedora Project's official English-language forum. In fact, before migrating the whole Fedora-fr infrastructure, we also considered joining the Project's official forum via its non-English section, but in the end we wanted to keep offering a fully French-speaking ecosystem, in line with the goals of the Borsalinux-fr Association.
Why do you think the site's traffic has declined since 2011, its historic peak of activity?
For me, people have increasingly become mere consumers, chasing fads or looking only for entertainment. They are often hyper-connected people who understand absolutely nothing about how their tools and applications work, which is a real shame. Many people have lost interest in computers and focus solely on smartphones or tablets running Google's Android or Apple's OS, which reduces the need to turn to a GNU/Linux distribution.
On the other hand, the Fedora Linux distribution, despite integrating new features, has really refined its whole process and quality assurance, yielding a genuinely stable and high-performing distribution. And stability means fewer problems to solve.
On the Free Software side there has also been a lot of groundwork to offer high-level competition to proprietary software. That is really worth highlighting, because many people often don't even know they are using Free Software.
This whole ecosystem gaining stability and performance inevitably generates fewer requests for help, too.
We would actually need real statistics on the number of users. There are currently articles about GNU/Linux distributions gaining market share, but that remains to be seen.
You worked with Guillaume on the latest site update even though you are not a webmaster by trade. What did you bring to the process?
It actually went well beyond the site itself. Guillaume was somewhat alone facing the necessary migration of the whole infrastructure, which had become a major obstacle to maintaining and evolving all the deployed tools. It was an opportunity to give him back some motivation, and then to join the project and set in motion everything we know today.
The migration was nevertheless very rushed, because there was everything to do and very little time.
I am indeed not a webmaster by trade, and I had to get to grips with WordPress very quickly to offer at least an equivalent of what we had before with fedora-fr.org, so it was a string of very busy days. And as you can see, we are not designers at heart either :D So any help is welcome here too, even if the Forum is the main landing spot.
In parallel, the idea was to take the opportunity to write shared documentation on our Nextcloud covering our whole infrastructure as well as the Association's internal workings. That work is still in progress.
What gaps do you see in the French-speaking community in general?
I think it's what you find a bit everywhere: a lack of a sense of belonging. Today consumption comes first, and users see themselves only as consumers, whereas they could find personal fulfillment by taking part in a community and thereby engaging in exchange and sharing.
You represented us for many years at the JDLL in Lyon. What do you like or dislike about that event? What do you get out of going?
Indeed, I started going to the JDLL in 2005, since I was by then living near Lyon and the Free Software world had started making its way into my head, so I attended quite a few talks and toured the booths during breaks. That's when I met members of the Fedora-Fr community (shaiton among others, who was a good recruiter) who ran the booth at the La Doua university campus at the time.
Then over the years I spent more and more time at the Fedora/Borsalinux-Fr booth, until I got roped into the adventure of running it (a little nod to number80, who had a lot to do with it). For a few years now I have inherited the relationship with the JDLL organizers for the Fedora/Borsalinux-Fr booth at the event.
It's a time of year when you can meet all kinds of people, and I find it really interesting to talk with so many. It also makes you question yourself, because it's not like gathering within a community where everyone agrees on the same ideas. In short, it's very enriching on a human level!
What are your tasks within the association? What should be improved, in your opinion? And what works well?
We are a very small team at the Association, so I multitask, though I admit I lack time.
Going back to my beginnings in the Association, I mainly tried to bring back some energy and life to the weekly meetings historically held on IRC. I think the situation was also due to years without renewal of the board members and a lack of volunteers to spread the workload.
It was then decided to move these meetings to a monthly schedule, held by videoconference, to try to reach more people. A few people drop in, but it's not quite there yet. At the same time, we have ensured transparency about what is discussed at these meetings by posting the minutes on the Forum, which is the tool most frequented by the Fedora-fr community.
A Nextcloud instance was deployed on our infrastructure; it was an opportunity to build documentation on the tools we use, procedures for maintenance or for managing the Association, and so on, so that everyone can find their way around, or even picture themselves more in one of the Association's activities. The idea is to ensure continuity in how the Association operates, even after someone leaves.
We are lucky to still have pillars of the Association among us who remain committed.
For the moment there is still quite a bit of work, but after that things should calm down so we can focus on, I hope, more concrete things.
If you could change something in the Fedora distribution or in the way it works, what would it be?
On the distribution side I can't really complain. There is admittedly a heavy release rhythm that struggles to meet its release dates, but the amount of work is enormous and, above all, the quality of the process has reached an impressive level of maturity. Gone are the days when you had to light candles and summon the great spirits before attempting an upgrade :D
The installation can still raise some questions for newcomers, but after that everything is so reliable that anyone can use it without trouble.
Conversely, is there something you would keep at all costs in the distribution or in the project itself?
The lively, passionate spirit of everyone around the world who takes part in this project. It's something you find every time in the interviews during Fedora elections, and it's a pleasure to see.
As for the distribution, may it always keep moving forward and keep bringing in new technologies.
What do you think of the Fedora-fr community, both its evolution and its current situation? What would you improve if you could?
I knew the era of very active Fedora-fr communities in various large cities; then it faded away, and I don't think it will come back. The same goes for other major distributions that had their French communities.
Today I think the Fedora-Fr community (the active people) is a good thing, because it truly answers a need among French speakers, since English remains a bit of a bugbear.
It makes it easier to get started with Fedora Linux, with a quality Forum and documentation aimed at beginners, while maintaining a collaborative knowledge base. And it opens the door to contributing directly to the Fedora Project afterward.
Unfortunately all of this is running out of steam, and I don't know how long this great adventure will last if new people don't come and breathe some life into the team.
For my part I think it's a societal issue, so perhaps the whole model needs rethinking, but in any case we will need volunteers.
Anything to add?
A call for volunteers! If you want to get involved in the Association, whether for events, documentation, web design, marketing for goodies or other ideas, don't hesitate to contact us via the Association's website, at a monthly meeting or through any other channel. We will welcome you with pleasure!
Thank you Nicolas for your contribution!
Conclusion
We hope this interview helped you learn a bit more about the Fedora-fr site.
If you have questions, or if you would like to take part in the Fedora Project or Fedora-fr, or simply use it and install it on your machine, feel free to discuss it with us in the comments or on the Fedora-fr forum.
See you in 10 days for an interview with Kévin Raymond, former contributor to Fedora and Fedora-fr.org.
Super busy recently, focused on the datacenter move that's happening
in just 10 days! (I hope).
datacenter move
Just 10 days left. We are not really where I was hoping to be at this
point, but hopefully we can still make things work.
We got our power10 boxes installed and set up and... we have an issue.
Some of our compose process uses vm's in builder guests, but the way
we have the power10s set up, with one big Linux hypervisor and guests on
that, doesn't allow those guests to have working nested virt. Only two
levels are supported. So, we are looking at options for early next week
and hopefully we can get something working in time for the move. Options
include getting a vHMC to carve out lpars, moving an existing power9
machine into place, at least for the move, to cover those needs, and a
few more. I'm hoping we can get something working in time.
We are having problems with our arm boxes too. First there were
strange errors on the addon 25G cards. That turned out to be a transceiver
problem and was fixed Thursday. Then the addon network
cards in them don't seem to be able to network boot, which makes installing
them annoying. We have plans for workarounds there too for early next week:
either connecting the onboard 1G nics, or some reprogramming of the cards
to get them working, or some installs with virtual media. I'm pretty sure
we can get this working one way or another.
On the plus side, tons of things are deployed in the new datacenter already
and should be ready. Early next week we should have IPA clusters replicating.
Also, soon we should have a staging OpenShift cluster in place.
Monday, networking is going to do a resilience test on the networking setup
there. This will have them take down one 'side' of the switches and confirm
all our machines are correctly balancing over their two network cards.
Tuesday we have a 'go/no-go' meeting with IT folks. Hopefully we can be go
and get this move done.
Next Wednesday, I am planning to move all of our staging env over to the new
datacenter. This will allow us to have a good 'dry run' at the production
move and also reduce the number of things that we need to move the following
week. If you are one of the very small number of folks that uses our
staging env to test things, make a note that things will be down on Wednesday.
Then more prep work and last minute issues and on into switcharoo week.
Early Monday of that week, things will be shut down so we can move storage,
then storage moves, we sync other data over and bring things up. Tuesday
will be the same for the build system side. I strongly advise contributors
to just go do other things Monday and Tuesday. Lots of things will be in
a state of flux. Starting Wednesday morning we can start looking at issues
and fixing them up.
Thanks for everyone's patience during this busy time!
misc other stuff
I've been of course doing other regular things, but my focus has been on the
datacenter move. Just one other thing to call out:
Finally we have our updated openh264 packages released for updates in stable
fedora releases. It was a long sad road, but hopefully now we can get things
done much much quicker. The entire thing wasn't just one thing going wrong or
blocking stuff, it was a long series of things, one after another. We are
in a much better state now moving forward though.
23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.
The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.
This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.
But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.
I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.
But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.
That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.
When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.
The Linux Kernel source is too big to generate tags for all files. I want only a subset of C files and the corresponding headers. Here is my first take at it. Yes, it is in Python. The program is designed to be run from the root of the Linux Kernel tree.
#!/usr/bin/python3
import sys
from plumbum import local, FG

print("Hello, Tristate Area.")
sources = []
for file in sys.argv[1:]:
    sources.append(file)
    print("file = " + file)
    with open(file) as cfile:
        for line in cfile.readlines():
            if line.startswith("#include"):
                for part in line.split():
                    if part.startswith("<"):
                        header = part.replace("<", "").replace(">", "")
                        header = "include/" + header
                        sources.append(header)
sources = sorted(set(sources))
ctags = local['ctags']
print(ctags.bound_command(sources).formulate())
ctags.bound_command(sources) & FG
I get a file of size 155502. Running ctaginator I get a file of size 491157. Feels about right.
However, this does not include headers that are only included from other headers. To handle those, we would need something recursive, and that something would need cycle-breaking ability.
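A recursive version could break include cycles with a visited set. Here is a minimal sketch of that idea (not the script above; it only follows `<...>` includes that resolve under a given include root, and ignores `-I` paths, `#ifdef` guards and quoted includes):

```python
import os
import re

INCLUDE_RE = re.compile(r'^#include\s+<([^>]+)>')

def collect_sources(paths, include_root="include"):
    """Recursively collect C files plus the headers they include.

    The 'seen' set both deduplicates and breaks include cycles
    (a.h -> b.h -> a.h would otherwise recurse forever)."""
    seen = set()

    def visit(path):
        if path in seen or not os.path.exists(path):
            return
        seen.add(path)
        with open(path) as f:
            for line in f:
                m = INCLUDE_RE.match(line)
                if m:
                    visit(os.path.join(include_root, m.group(1)))

    for p in paths:
        visit(p)
    return sorted(seen)
```

The resulting list could then be handed to ctags exactly as in the script above.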
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests, and as base packages.
RPMs of PHP version 8.4.9RC1 are available
as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.3.23RC1 are available
as base packages in the remi-modular-test for Fedora 40-42 and Enterprise Linux ≥ 8
as SCL in remi-test repository
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.
It’s been a while since I’ve posted here on the good old blog. I’ve been busy with life and work; however, that may change soon, as the big 5 banks in Canada are now mandating RTO back into downtown Toronto. I had to move out of the city some years back due to the cost-of-living crisis here, so I may be out of a job come September.
Anyway, I started a new macOS app in Swift called TurnTable, written from scratch to try to capture the old spirit and simplicity of the original iTunes application. It doesn’t have anything fancy implemented yet, but I just wrote it all today and am posting the source code up on my GitHub. I will try to add more features to it over time when I get a free chance to do so!
This is the 132nd issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
NEWS
Installing nightly syslog-ng arm64 packages on a Raspberry Pi
Last week, I posted about running nightly syslog-ng container images on arm64. However, you can also install syslog-ng directly on the host (in my case, a Raspberry Pi 3), running the latest Raspberry OS.
Working with One Identity Cloud PAM Linux agent logs in syslog-ng
One Identity Cloud PAM is one of the latest security products by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. Last year, I showed you how to collect One Identity Cloud PAM Network Agent log messages on Windows and create alerts when somebody connects to a host on your local network using PAM Essentials. This time, I will show you how to work with the Linux version of the Network Agent.
Testing the new syslog-ng wildcard-file() source options on Linux
Last year, syslog-ng 4.8.0 improved the wildcard-file() source on FreeBSD and macOS. Version 4.9.0 will do the same for Linux by using inotify for file and directory monitoring, resulting in faster performance while using significantly fewer resources. This blog is a call for testing the new wildcard-file() source options before release.
I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.
What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.
By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:
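The config block from the original post didn't survive extraction. A minimal point-to-point sketch, with placeholder keys and tunnel addresses of my own choosing rather than the post's actual values, would look like:

```ini
# /etc/wireguard/wg0.conf on the local machine (placeholder values)
[Interface]
PrivateKey = <local-private-key>
Address = 10.10.10.2/32

[Peer]
PublicKey = <vps-public-key>
Endpoint = vps.example.com:51820
# For now, only accept traffic from the VPS's tunnel address
AllowedIPs = 10.10.10.1/32
PersistentKeepalive = 25
```

The VPS side mirrors this, with a ListenPort in its [Interface] stanza and the local machine's tunnel address in AllowedIPs.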
The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:
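The command itself was lost in extraction; one plausible rendering using iptables (the original may equally have used nftables) is a simple DNAT rule, keeping the post's deliberately invalid example addresses:

```
# On the VPS: rewrite inbound packets addressed to the public IP
# so they head to the Wireguard peer instead
iptables -t nat -A PREROUTING -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005
```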
Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.
What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer section, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
That's half the battle. The problem is that the packets are going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard, and nothing is going to work. Thanks, Linux.
But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:
1 wireguard
where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard
and now your local system is effectively on the internet.
You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
As of today, all Copr builder virtual machines are now being spawned from
bootc images, which is no small feat because the builder infrastructure
involves multiple architectures (x86_64, aarch64, ppc64le, s390x), multiple
clouds (Amazon AWS, IBM Cloud), and on-premise hypervisors. It scales up to
400 builders running simultaneously and peaking at
30k builds a day.
Before bootc
You can find some interesting history and previous numbers in Pavel’s article -
Fedora Copr farm of builders - status of July 2021. The part it
leaves out is how we used to generate the Copr builder images.
The process is documented in the
official Copr documentation. In a nutshell, it
involved manually spawning a VM from a fresh Fedora Cloud image,
running Ansible playbooks to provision it, and then using custom scripts to
upload the image to the right place. Because we need to build the images
natively, we had to follow this process for every architecture.
The easiest workflow was for x86_64 builders running on our own hypervisors. It
meant connecting to the hypervisor using SSH and running a custom
copr-image script from the praiskup/helpers
repository. While its usage looks innocent, internally it had to execute many
virt-sysprep commands. It also required some
guestfish hacks to modify cloud-init configuration inside
of the image so that it works outside of an actual cloud. Then, finally, using
the upload-qcow2-images script to upload the image into
libvirt.
The exact same workflow applied to the ppc64le builders. However, internally it had a special case that also uploaded the image to the OSU OSL OpenStack.
For s390x builders, we don’t have a hypervisor where we could natively build the
image. Thus we needed to spawn a new VM in IBM Cloud and run
the previously mentioned copr-image script inside of it. Once
finished, we needed to upload the image to IBM Cloud. This is supposed to be
done using the ibmcloud tool, but the problem is that
it is not FOSS, and as such, it cannot be packaged for
Fedora. We don’t want to run random binaries from the internet, so we
containerized it.
At this point, only x86_64 and aarch64 images for Amazon AWS remain.
While it's not straightforward to create a new AMI from a local qcow2 image, it's
quite easy to create an AMI from a running EC2 instance. That
was our strategy. Spawn a new instance from a fresh Fedora Cloud
image, provision it, and then create an AMI from it.
Current situation
I disliked exactly three aspects of the previous solution. It required a
lot of manual work, the process was different for every cloud and architecture,
and the bus factor was less than one.
Even though at this moment generating a fresh set of builder images still
requires about the same amount of manual work as before, there is a potential
for future automation. By switching to bootc and
Image Builder, we were able to offload some dirty work to them
while also unifying the process to follow the same steps for all architectures
and clouds (with minor caveats).
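As an illustration of the new approach (this is a hypothetical sketch, not Copr's actual Containerfile), a bootc builder image starts from a bootc base and layers the provisioning that the Ansible playbooks used to perform:

```dockerfile
# Hypothetical sketch of a bootc builder image, not the real Copr Containerfile
FROM quay.io/fedora/fedora-bootc:42

# Layer in the builder toolchain instead of provisioning a running VM
RUN dnf -y install mock copr-rpmbuild && dnf clean all

# Bake site configuration into the image rather than applying it post-boot
COPY mock-site-defaults.cfg /etc/mock/site-defaults.cfg
```

The same Containerfile then feeds image-builder natively on each architecture, which is what unifies the per-cloud workflows described above.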
For Amazon AWS, we can utilize the image-builder upload feature which is
amazing. But for other clouds and hypervisors, we still need our custom
upload-qcow2-images and
quay.io/praiskup/ibmcloud-cli. If image-builder could
implement the missing support and enable uploading to all of them, that would be
a major win for us.
Future plans
My goal is simple: I want one-button deployment. Well, almost.
When a change is made to our Containerfile, when triggered manually, or
periodically after a period of inactivity, I want the images to be
automatically built for all architectures and uploaded to all the necessary
places. Then I want to see a list of image names and AMIs that I can either
choose to use or ignore.
The bootc-image-builder-action seems like the
perfect candidate, but the problem is that it cannot natively build images for
ppc64le and s390x.
SNThrailkill recommended GitLab Runners but
that would require us to maintain the runner VMs, which is annoying. Moreover,
there is a potential chicken-and-egg problem, meaning that if we break our
image, we might not be able to spawn a VM to build a new working image. We also
wouldn’t be able to use the existing GitHub action and would have to port it for
GitLab.
At this moment, our team is leaning towards Konflux and a Tekton pipeline for
building images. The Fedora Konflux instance is limited to x86_64 and aarch64, so we
would temporarily have to use an internal Red Hat instance, which provides all
the architectures we need.
Many questions are yet to be answered. Is Konflux ready? Does the pipeline for
building images already exist? Does it support everything we need? Is it built
on top of image-builder so that we can use its upload feature?
Pitfalls along the way
Hopefully, this can help Image Builder and bootc
developers better understand their blind spots in the onboarding process, and
also prevent new users from repeating the same mistakes.
Before discovering that bootc exists, our original approach was to use
just Image Builder and its blueprints, and
automate the process using Packit. There were several
problems. It was easy to build the image locally from our blueprint, but it
wasn’t possible to upload the same blueprint to be built in
a hosted Image Builder service. Additionally, I had several issues with the
Blueprint TOML format. The order of operations is pre-defined
(e.g. all users are always created before any packages are installed). There is
no escape hatch to run a custom command. And finally, it’s yet another
specification to learn. My recommendation? Just go with bootc.
Our main problem with bootc is the immutability of the filesystem. Can somebody
please help me understand whether the immutable filesystem is a fundamental
building block, a key piece of technology that enables bootable containers, or
whether it is an unrelated feature? If it is technologically possible, our team
would love to see officially supported mutable bootc base images. Currently, we
are going forward with a
hack to make the root filesystem transient.
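For context, ostree does document a knob for a transient root; whether the linked hack uses exactly this mechanism is my assumption, but it conveys the idea:

```ini
# /usr/lib/ostree/prepare-root.conf, baked into the image
[root]
# Mount the root filesystem as a writable overlay
# whose changes are discarded on reboot
transient = true
```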
One of the issues that probably stems from the immutable filesystem is the
necessity to change the default location of the RPM database.
This hack is baked into the bootc base images and we needed to revert it because
it causes Mock to fail under some specific circumstances. This unfortunately
cost us many, many hours of debugging.
The process of building system images is quite storage intensive in
/var/lib/containers and /run. To avoid running out of disk space on our
virtual machines, we had to
turn our swap partition into a data volume and mount the
problematic directories there. I'm not sure if there is something that
image-builder can do to make this less of a problem.
We build the system images natively on VMs of the same architecture that they
are targeted for, but then we fetch all of them to an x86_64 machine and upload
the images to the respective clouds from there. We discovered a
bug in cross-arch upload to AWS, which was promptly
confirmed and fixed by the image-builder team. Big
customer satisfaction right here.
We also struggled with setting up AWS permissions for the image-builder upload
command to work correctly. We tried running it, fixing the insufficient
permissions it complained about, running it again, and again, and so on. I
don’t recommend this approach. It turns out there is a
documentation page with instructions.
I hope this chapter doesn’t come across as too discouraging. In fact, we found
workarounds for all of our problems, and we are now happily using this in
production. So you probably can, too.
LibreWolf is a secure, privacy-focused, open-source browser built on Firefox. Its main goal is to remove features that could harm users' privacy or send their data to external servers. LibreWolf is designed for users who want a safer, tracking-free experience on the web […]
When the noise of the DotCom boom became too loud for me to ignore, I finally left my role enabling mainframes and joined a consultancy focusing on Electronic Commerce with a skillset based on Java. I was pretty well prepared for this, as the Object Oriented approach I had learned in the Visual C++ world translated over fairly cleanly, and I had taken a Java transition course at UC Berkeley Extension. Thus, I moved two streets up, two streets over, to the overheated, noisy, open-bay office of Fort Point Partners.
Fort Point was still such a young company that it had not even gotten the fortpoint.com domain name yet…that would happen in a few months. They already had a couple projects under their belts, but the big ones were just starting, and they were staffing up for them. I was hired to work on a project for the Culinary Institute of America (yes, the CIA) that was about recipe management as a way of selling high end ingredients for these recipes. The site was called “Tavolo,” after the Italian word for table, but with the accent on the second syllable.
Our Project Manager was Roger Kibbe, and our technical lead was Paul Duey. While these two guys are real characters, and it would be fun to try to describe the ups and downs of the project in dramatic terms, they are very real people who I bonded with during a fairly intense project, and they have become lifelong friends. Suffice it to say that we all learned and grew a lot during the year that the project developed.
OK, I will add that I was the person that exposed Roger to UML diagrams. His initial response was “What the hell is that?” But I think he got it pretty quickly.
I was actually a bit above average age for the team. Most of the engineers were right out of college, with maybe a year or two of professional experience. I didn’t have much more technical experience, just my two years at Walker Interactive, but I had spent 3 years in the Army after graduation. On the other hand, a few of the coders on the project had really learned their craft, and were well beyond me in productivity and thoughtfulness…it made for a really great learning environment.
We were Java based, but this was before Java Enterprise Edition, and we built on the most powerful platform of the day: ATG Dynamo. ATG had built a data-mapping technique based on the mapping of Java objects to database tables using the properties of those objects. This very much heralded what we would see with J2EE’s second go round with data mapping, and very much mapped how Hibernate would work in the future. However, ATG was learning that they needed to make tools for analysts, and they had changed their approach, using a dynamic mapping based on XML and HashMaps. These were called Repositories and their first foray into using them was in the personalization of the user experience. This part of the site fell to me.
I, of course, was arrogant and thought I could do everything, so I also horned in on Bill Noto’s piece, which was the back end integration with the AS400 machines…this very much looked like the kind of work I was doing back at Walker, and I thought I had something to offer. In retrospect, the leadership should have told me to stick to my lane, or switched me off the personalization work, as it turned out that the fast timeline of the project meant I was seriously behind.
I will also say that I messed up in the backend integration, in that I did not use the data mapping mechanism from ATG. For some reason, I ended up building my own, that probably looked like the early iterations of ATGs. I thought it was going to run outside of the app server, on its own machine. We ended up just running it alongside the existing one, and it was fine.
Getting back to the Repository API and the personalization, I found myself frustrated by the lack of type safety in building that code. It really was not as neat a fit as the data transfer object design from the rest of the site. I do remember writing some really bad code to work around that. In solving the problem of dynamic data query, it stopped supporting its basic use case.
But by far the biggest frustration, and I think I speak for all engineers on the project, was the amount of time you had to wait for the Dynamo app server to restart every time you made a change. I later learned that the biggest time chunk in the restart process was the re-indexing of the objects, and we should have turned that off, but that solution didn't help us on Tavolo.
The restart problem continued to haunt us on follow-on projects until we found a hack-workaround. The pages were in a format called JHTML, a precursor to Java Server Pages. JHTML was dynamically converted to Java when the page was first loaded. It turns out you could write full Java objects within these pages, and they would be dynamically re-created when the page was reloaded. This is a technique I have used on a few occasions on JSPs after my time at Fort Point.
Possibly the biggest lesson learned from Tavolo is that everyone follows the pattern set by the leadership. The technical lead was dating a night nurse, and thus had little motivation to go home at the end of the day. He tended to start late and work until after midnight. This was the pattern for the whole team. Most of us were young and didn't have families, so it was fine, although it did mess with my Rock Climbing schedule. I think we all enjoyed (to some degree) the craziness of working until 4 in the morning, going home to sleep until noon, and then coming back the next day.
Until that point, I had been completely removed from the customer side of the business. With Fort Point's approach, I found myself in front of the customer a few times, both early on for the offsite planning and as we progressed. This pattern would continue with other projects, as one aspect of consulting is that you often find yourself integrated into the customer's workforce, either building something to hand over to them, expanding something they did, or figuring out how to get things to work together. I also got to work with partners on the integration effort. All of this helped me understand the process of software development and deployment much better than I had in the past.
Many years later, Fort Point was bought by Adobe, which is why www.fortpoint.com redirects to Adobe's site.
In an era where software testing, cross-platform development, and cyber hygiene are increasingly vital, virtual machines (VMs) have become indispensable. They offer a way to run entire operating systems—Windows, Linux, or even macOS—inside another OS environment, with no need to repartition disks, dual boot, or invest in additional hardware. Whether you’re a developer, a cybersecurity enthusiast, or just a curious power user, VMs are a sandbox of opportunity.
In a surprising move following its acquisition by Broadcom, VMware made Workstation Pro free for personal use in May 2024, turning the tides in a market dominated by open-source solutions like VirtualBox. This article explores VMware Workstation Pro in-depth, its use cases, performance, macOS compatibility, and viable alternatives.
Why Choose VMware Workstation Pro?
VMware Workstation Pro has long been a professional favorite. Its robust performance, feature-rich environment, and hardware-accelerated virtualization make it ideal for everything from software testing to enterprise development environments.
Historically priced at over $200/€200, it’s now available free of charge for personal use, while businesses still require a $120/year (€119/year) commercial license.
Key Advantages
Superior Performance: Unlike VirtualBox, VMware harnesses native virtualization features like Intel VT-x or AMD-V with greater efficiency.
Snapshots & Clones: Save and revert VM states easily for testing and rollback scenarios.
Hardware Compatibility: Better handling of USB passthrough, GPU acceleration, and networking modes (NAT, Bridge, Host-Only).
Seamless Integration: Drag and drop, shared clipboard, and folder sharing enhance productivity.
Cross-Platform Support: Native support on Windows and Linux; with some workarounds, you can also use VMs on macOS.
System Requirements
To run VMware Workstation Pro efficiently, make sure your host machine meets or exceeds the following specs:
| Component  | Minimum                              | Recommended                          |
|------------|--------------------------------------|--------------------------------------|
| CPU        | 64-bit with VT-x or AMD-V            | Intel Core i5 / Ryzen 5 or better    |
| RAM        | 4 GB                                 | 8 GB or more                         |
| Disk Space | 1.5 GB (software) + 10–50 GB per VM  | SSD for optimal performance          |
| Host OS    | Windows 10/11 or modern Linux distros| Windows 11 Pro, Ubuntu 24.04         |
| Graphics   | Optional 3D acceleration             | Dedicated GPU for development/testing|
Tip: To enable virtualization, access your BIOS/UEFI (usually F2 or DEL during boot) and activate Intel VT-x or AMD-V.
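On Linux hosts, you can check whether the CPU virtualization flags are already visible before rebooting into firmware; `vmx` indicates Intel VT-x and `svm` indicates AMD-V:

```shell
# Count vmx/svm flag occurrences in /proc/cpuinfo;
# 0 means the feature is absent or disabled in BIOS/UEFI.
grep -E -c '(vmx|svm)' /proc/cpuinfo || true
```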
Installing VMware Workstation Pro (Windows & Linux)
Launching a VM with VMware is straightforward yet powerful:
Quick Setup (Typical Mode)
Launch VMware and click “Create a New Virtual Machine.”
Choose “Typical (recommended)” to skip advanced configuration.
Attach an ISO file—download Windows from Microsoft or Linux from Ubuntu.
Optionally enter a product key for Windows or set up a username.
Assign a name and storage path for the VM.
Allocate resources:
2–4 GB RAM for Windows
1 GB RAM for Linux
20–40 GB disk space
Click Finish and start your virtual journey.
Optimizing VM Performance
Even on modest systems, VMware lets you squeeze more out of your virtual machines. Here’s how:
1. Tune Resource Allocation
Increase RAM and CPU cores—without starving the host.
Enable hardware-assisted virtualization in VM settings.
2. Enable VMware Tools
This set of drivers improves:
Video performance
Clipboard sharing
Time sync
File dragging and dropping
To install:
Go to VM > Install VMware Tools (the ISO is mounted automatically).
3. Configure Network Modes
NAT: Default and safe for most users.
Bridged: Lets the VM appear as a real machine on the local network.
Host-Only: Isolated network for safe testing.
4. Use Snapshots & Clones
Snapshots allow point-in-time backups. If something breaks, just roll back.
VM > Snapshot > Take Snapshot
You can even clone VMs for parallel testing.
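The same snapshot and clone operations are scriptable through VMware's `vmrun` utility; the paths and snapshot names below are examples of my own, not values from this article:

```shell
# Take and later revert a named snapshot from the command line
vmrun -T ws snapshot "C:\VMs\test\test.vmx" "before-update"
vmrun -T ws revertToSnapshot "C:\VMs\test\test.vmx" "before-update"

# Create a linked clone for parallel testing
vmrun -T ws clone "C:\VMs\test\test.vmx" "C:\VMs\test-clone\clone.vmx" linked
```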
macOS and Virtualization: The Legal and Technical Landscape
Running macOS as a Guest
By default, VMware does not support macOS guests unless you’re on a Mac using VMware Fusion. However, unofficial patches like the macOS Unlocker (used at your own risk) enable macOS installation on VMware Workstation for educational use.
You will need:
VMware Workstation Pro (on Windows or Linux)
macOS Unlocker (e.g., via GitHub: paolo-projects/auto-unlocker)
macOS installation ISO (legally obtained from Apple’s site)
Disclaimer: Installing macOS on non-Apple hardware violates Apple’s EULA and is not supported by VMware.
Running VMs on macOS
For Mac users, the counterpart is VMware Fusion—a polished, professional virtualization suite with native M1/M2 chip support as of version 13.
Top Use Cases
1. Software Testing
Test apps across different OSes without maintaining physical test devices.
2. Web Development
Run LAMP or MEAN stacks in isolated environments.
3. Cybersecurity
Use Kali Linux or Parrot OS to run penetration tests in a sandbox.
4. OS Experimentation
Try new Linux distros like Fedora or Arch without touching your main setup.
5. Legacy Software Support
Run Windows XP or 7 for apps that don’t support modern Windows.
6. Gaming Mods
Install niche game mods or utilities that could harm your primary OS.
Free Alternatives to VMware Workstation Pro
Though VMware is now free for personal use, it’s not the only option. Here are notable alternatives:
| Tool                      | Pros                                        | Cons                                   |
|---------------------------|---------------------------------------------|----------------------------------------|
| VirtualBox                | Free, cross-platform, good community support| Slower performance, weaker 3D support  |
| Hyper-V (Windows Pro)     | Native, low overhead                        | Not user-friendly for beginners        |
| GNOME Boxes (Linux)       | Simple, great for quick tests               | Limited features                       |
| UTM (macOS)               | Native on M1/M2, runs VMs and emulators     | UI not as advanced                     |
| Parallels Desktop (macOS) | Optimized for Mac                           | Paid only                              |
Best Practices for Virtualization in 2025
Keep Host OS Updated
Virtualization exploits often target host kernels. Stay patched.
Use SSDs or NVMe Drives
VMs benefit enormously from fast read/write speeds.
Encrypt VMs
Use full-disk encryption inside the VM, especially for sensitive data.
Isolate Networks
Use Host-Only or NAT to prevent VMs from exposing your entire LAN.
Backup Snapshots
Keep copies of VM states for recovery. Use version control for code.
Final Thoughts
Virtualization has transitioned from a niche capability to a mainstream necessity. With Broadcom’s decision to make VMware Workstation Pro free for personal use, more users can now enjoy a professional-grade hypervisor without the cost barrier.
Whether you’re experimenting with Linux, testing risky software, or building a sandboxed dev environment, VMware offers a reliable, feature-rich, and now accessible platform. And with tools like Fusion for macOS, VirtualBox for the open-source crowd, or UTM for M-series Macs, there’s a virtual solution for everyone.
In a world trending toward cloud everything, the power to run isolated, fully offline virtual operating systems is a liberating option—especially when it doesn’t come with a monthly fee.
Another year, another Fedora contributor conference! This year, Flock to Fedora returned to Prague, Czechia. It’s a beautiful city and always worth taking a long walk around, which is what many of the conference attendees did the day before the conference started officially. Unfortunately, my flight didn’t get in until far too late to attend, but I’m told it was a good time.
Day One: The Dawn of a New Era
After going through the usual conference details, including reminders of the Code of Conduct and the ritual Sharing of the Wi-Fi Password, Flock got into full swing. To start things off, we had the FPL Exchange. Once a frequent occurrence, sometimes only a few short years apart, this year it saw the passing of the torch from Matthew Miller, who had held the position for over eleven years (also known as “roughly as long as all of his predecessors, combined”), to his successor Jef Spaleta.
In a deeply solemn ceremony… okay, I can’t say that with a straight face. Our new Fedora Project Leader made his entrance wearing a large hotdog costume, eliciting laughter and applause. Matthew then proceeded to anoint the new FPL by dubbing him with a large, Fedora Logo-shaped scepter. Our new JefPL gave a brief overview of his career and credentials and we got to know him a bit.
After that, the other members of FESCo and myself (except Michel Lind, who was unable to make it this year) settled in for a Q&A panel with the Fedora community as we do every year. Some years in the past, we’ve had difficulty filling an hour with questions, but this time was an exception. There were quite a few important topics on peoples’ minds this time around and so it was a lively discussion. In particular, the attendees wanted to know our stances on the use of generative AI in Fedora. I’ll briefly reiterate what I said in person and during my FESCo election interview this year: My stance is that AI should be used to help create choices. It should never be used to make decisions. I’ll go into that in greater detail in a future blog post.
After a brief refreshment break, the conference launched into a presentation on Forgejo (pronounced For-jay-oh, I discovered). The talk was given by a combination of Fedora and upstream developers, which was fantastic to see. That alone tells me that the right choice was made in selecting Forgejo as our Pagure replacement in Fedora. We got a bit of history around the early development and the fork from Gitea.
Next up was a talk I had been very excited for. The developers of Bazzite, a downstream Fedora Remix focused on video gaming, gave an excellent talk about the Bootc tools underpinning it and how Fedora provided them with a great platform to work with. Bazzite takes a lot of design cues from Valve Software’s SteamOS and is an excellent replacement OS for the sub-par Windows experience on some of the SteamDeck’s competitors, like the Asus Rog Ally series. It also works great on a desktop for gamers and I’ve recommended it to several friends and colleagues.
After lunch, I attended the Log Detective presentation, given by Tomas Tomecek and Jiri Podivin. (Full disclosure: this is the project I’m currently working on.) They talked about how we are developing a tool to help package maintainers quickly process the logs of build failures to save time and get fixes implemented rapidly. They made sure to note that Log Detective is available as part of the contribution pipeline for CentOS Stream now and support for Fedora is coming in the near future.
After that, I spent most of the remainder of the day involved in the “Hallway Track”. I sat down with quite a few Fedora Friends and colleagues to discuss Log Detective, AI in general and various other FESCo topics. I’ll freely admit that, after a long journey from the US that had only gotten in at 1am that day, I was quite jet-lagged and have only my notes to remember this part of the day. I went back to my room to grab a quick nap before heading out to dinner at a nearby Ukrainian restaurant with a few old friends.
That evening, Flock held a small social event at an unusual nearby pub. GEEKÁRNA was quite entertaining, with some impressive murals of science fiction, fantasy and videogame characters around the walls. Flock had its annual International Candy Swap event there, and I engaged in my annual tradition of exchanging book recommendations with Kevin Fenzi.
Day Two: To Serve Man
Despite my increasing exhaustion from jet lag, I found the second day of the conference to be exceedingly useful, though I again did not attend a high number of talks. One talk that I made a particular effort to attend was the Fedora Server Edition talk. I was quite interested to hear from Peter Boy and Emmanuel Seyman about the results of the Fedora Server user survey that they conducted over the past year. The big takeaway there was that a large percentage of Fedorans use Fedora Server as a “home lab server” and that this is a constituency that we are under-serving today.
After the session, I sat down with Peter, Emmanuel and Aleksandra Fedorova and we spent a long while discussing some things that we would like to see in this space. In particular, we suggested that we want to see more Cockpit extensions for installing and managing common services. In particular, what I pitched would be something like an “App Store” for server applications running in containers/quadlets, with Cockpit providing a simple configuration interface for it. In some ways, this was a resurrection of an old idea. Simplifying the install experience for popular home lab applications could be a good way to differentiate Fedora Server from the other Editions and bring some fresh interest to the project.
After lunch, I spent most of the early afternoon drafting a speech that I would be giving at the evening event, with some help from Aoife Moloney and a few others. As a result, I didn’t see many of the talks, though I did make sure to attend the Fedora Council AMA (Ask Me Anything) session.
The social event that evening was a boat cruise along the Vltava River, which offered some stunning views of the architecture of Prague. As part of this cruise, I also gave a speech to honor Matthew Miller’s time as Fedora Project Leader and wish him well on his next endeavors at Red Hat. Unfortunately, due to technical issues with the A/V system, the audio did not broadcast throughout the ship. We provided Matthew with a graduation cap and gown and Aoife bestowed upon him a rubber duck in lieu of a diploma.
Day Three: Work It!
The final day of the conference was filled with workshops and hacking sessions. I participated in three of these, all of which were extremely valuable.
The first workshop of the day was for Log Detective. Several of the attendees were interested in working with the project and we spent most of the session discussing the API, as well as collecting some feedback around recommendations to improve and secure it.
After lunch, I attended the Forgejo workshop. We had a lengthy (and at times, heated) discussion on how to replace our current Pagure implementation of dist-git with a Forgejo implementation. I spent a fair bit of the workshop advocating for using the migration to Forgejo as an opportunity to modernize our build pipeline, with a process built around merge requests, draft builds and CI pipelines. Not everyone was convinced, with a fair number of people arguing that we should just reimplement what we have today with Forgejo. We’ll see how things go a little further down the line, I suppose.
The last workshop of the day was a session that Zbigniew Jędrzejewski-Szmek and I ran on eliminating RPM scriptlets from packages. In an effort to simplify life for Image Mode and virtualization (as well as keep updates more deterministic), Zbigniew and I have been on a multi-year campaign to remove all scriptlets from Fedora’s shipped RPMs. Our efforts have borne fruit and we are now finally nearing the end of our journey. Zbigniew presented on how systemd and RPM now have native support for creating users and groups, which was one of the last big usages of scriptlets. In this workshop, we solicited help and suggestions on how to clean up the remaining ones, such as the use of the alternatives system and updates for SELinux policies. Hopefully by next Flock, we’ll be able to announce that we’re finished!
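As a concrete illustration of the scriptlet-free approach described above: instead of a `%post` scriptlet calling `useradd`, a package can ship a declarative sysusers.d file that systemd (and now RPM) processes natively. The file below is a hypothetical example for an imaginary service, not taken from any actual Fedora package.

```
# /usr/lib/sysusers.d/myservice.conf (hypothetical package)
# Type  Name       ID  GECOS              Home
u       myservice  -   "My service user"  /var/lib/myservice
g       mygroup    -
```

The `u` line creates a system user (with a matching group) and the `g` line a standalone group; `-` lets the system pick the UID/GID, so the package never needs an imperative script at install time.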
With the end of that session came the end of Flock. We packed up our things and I headed off to dinner with several of the Fedora QA folks, then headed back to my room to sleep and depart for the US in the morning. I’d call it time well spent, though in the future I think I’ll plan to arrive a day earlier so I’m not so tired on the first day of sessions.
The Rocky Linux team was happy to announce that Rocky Linux 10.0 has reached GA (General Availability). New install, container, cloud and live images are ready for download. Important changes: support for new architectures — version 10 now only supports the x86-64-v3, ARM (aarch64), RISC-V (riscv64), IBM POWER (ppc64le) and IBM […]
The All Systems Go! 2025 Call for Participation Closes Tomorrow!
The Call for Participation (CFP) for All Systems Go!
2025 will close tomorrow, on 13th of
June! We’d like to invite you to submit your proposals for
consideration to the CFP submission
site quickly!
As part of the 20th anniversary of Fedora-fr (and of the Fedora Project itself), Charles-Antoine Couret (Renault) and Nicolas Berrehouc (Nicosss) wanted to put questions to French-speaking contributors to the Fedora Project and Fedora-fr.
Thanks to the diversity of profiles, this lets us see how the Fedora Project works from different angles, looking at the project beyond the distribution itself, as well as how it is organized and designed. Note that on some points, certain remarks also apply to other distributions.
Let's not forget that the Fedora Project remains a worldwide project and a team effort, which these interviews cannot fully reflect. But the French-speaking community is lucky to have enough quality contributors to give us an overview of many of the distribution's subprojects.
Today's interview is with Jean-Baptiste Holcroft, one of the maintainers of Fedora's French translation.
Interview
Hello Jean-Baptiste, can you briefly introduce your contributions to the Fedora Project?
Bothered by partial translations of software I find great, I first helped by reporting problems, then by translating, and then, not seeing the translations arrive, by streamlining the translation process.
Having understood how things work, thanks to the community, I wanted to help that community become more effective: by migrating to the excellent Weblate translation platform, and by making the whole of Fedora's documentation translatable (we are talking about 3.5 million words, thousands of pages).
Transifex, the previous platform, did not allow effective collective work (either among translators, or between translators and development projects).
With experience, I found that the free software community offers a disastrous experience for translators: the cost of translating an entire operating system versus the effort required is monstrous. I now want to make that visible and accessible to everyone (this site is ugly, but its value lies in the cross-project translation measurement).
What brought you to Fedora, and what made you stay?
Fedora welcomes contributors, lets them take on responsibility, fund initiatives and grow as people. If my involvement varies over time, it is only a matter of available time.
Why contribute to Fedora in particular?
The line is clear: as close as possible to the creators of free software, in collaboration, nothing but free software, and very reliable.
It is a mindset I find excellent and in which I feel at ease.
Do you contribute to other free software projects? If so, which ones and how?
I contributed for a while to the YunoHost project, on translation, internationalization and software packaging.
That project is now mature and self-sufficient on both subjects, and having less time, I stopped contributing to it.
I still use it daily, as I consider it as stable as Fedora for running my personal server with my email, files, contacts, and so on.
Today I am more interested in our collective effectiveness than in any particular project.
Are your contributions to Fedora a direct or indirect asset in your professional life? If so, in what way?
All the technical culture gained from reading project news and contributing bug reports, translations and code helped me get my current job, and helps me in my daily work.
Free software, and contributing to it even modestly, is a real, concrete, tangible link, very far from the fantasized version of computing that only serves wallets and the power of the powerful.
In work, whether paid, friendly or activist, I want concrete things that help us move forward, and that is a very strong value of free software.
You maintained Fedora's French translation for years; can you explain the importance of translation, and even internationalization, in this kind of project?
Free software is a tool in the fight against the appropriation of the commons by a minority.
If we want it to be a tool for emancipating the masses, we want to lower the barriers to use while respecting the singularities of its users.
A software user should not have to learn a new language to use an emancipating, respectful tool; hence the value of these activities.
Translating software is a complex activity; what difficulties do you encounter?
Translating is the easy part and takes very little time. What is complicated is:
knowing where to translate: finding which piece of software displays the string, where it is hosted, which version should be translated, etc.
asking for a piece of software to be made translatable: not everything is translatable, and our power as translators to change that is weak
understanding how to translate: the ideal is Weblate directly linked to the software's source repository; the worst is having to open pull requests
maintaining translations over time, for every project
You took part in migrating the translation platform from Zanata to Weblate; can you go back over that task and the motivations behind the decision?
Weblate is a powerful translation tool that makes life easier for software creators and translators. It stays close to the source code repository and gives translators a lot of autonomy to organize as they wish, track changes, receive notifications, etc.
Zanata, well, it was an acceptable tool for translating, but that's all; everything else was deficient.
As an illustration, to find out whether a translation had been modified, I had to look at the modification history of every single sentence.
On Weblate, the history is transparent and efficient, and can be filtered by language, project, component and type of change. Here, for example, is the history of French translation changes across all projects.
When Weblate arrived, I actively demonstrated the project's relevance and pushed the subject so that we could be more effective.
You also helped obtain translation statistics within the Fedora Project; what is the point of that, and how was it implemented?
It is a great subject, but slightly complicated; here is a simplification:
A Linux distribution is an assembly of thousands of pieces of software, lines of code contained in packages.
Each package is available for download from mirrors, where you can even find packages from many years ago (I can exploit data going back to Fedora 7, released in May 2007).
By following Weblate's development closely, I realized that Weblate's creator had built small tools for listing all known language codes and for auto-detecting translation files.
The mechanism therefore goes:
download every package that exists in Fedora
extract its source code
run auto-detection of the translation files
compute the completion percentage of each file
aggregate the results by language using the known codes
then generate a website to display the results
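The per-file percentage step above can be sketched roughly as follows. This is only an illustration of the idea: the real pipeline presumably uses Weblate's translation-finder and a proper gettext parser, whereas this minimal sketch counts non-empty msgstr entries in a .po file with a naive regex.

```python
import re

def po_completion(po_text: str) -> float:
    """Rough completion percentage for a gettext .po file:
    the share of msgid/msgstr pairs whose msgstr is non-empty.
    (Naive: ignores multi-line strings, plurals and obsolete entries.)"""
    entries = re.findall(r'msgid\s+"(.*)"\nmsgstr\s+"(.*)"', po_text)
    # Skip the header entry, which has an empty msgid by convention.
    entries = [(i, s) for i, s in entries if i]
    if not entries:
        return 0.0
    translated = sum(1 for _, s in entries if s)
    return 100.0 * translated / len(entries)

sample = '''msgid ""
msgstr "Project-Id-Version: demo"

msgid "Hello"
msgstr "Bonjour"

msgid "World"
msgstr ""
'''
print(po_completion(sample))  # one of two strings translated: 50.0
```

Aggregating these percentages per language code across every package and release is what produces the site's statistics.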
With my computer, this took more than ten days of continuous computation and 2 TB of downloads to get a view spanning more than 15 years of the Fedora distribution. Unfortunately I have not yet had time to turn this into a proper retrospective for a conference, for lack of time to analyze the data. For now, the only visible part is the site https://languages.fedoraproject.org. I hope to make progress on this for the Fedora Project's 2025 annual gathering and FOSDEM 2026.
Translation is specific to each language, but everyone shares common problems with tooling or complex situations; is there collaboration between the different translation teams within Fedora?
Generally speaking, solving a problem for one language systematically solves a problem for another language.
Translators support each other a great deal, not least for these reasons; please support them too!
The lack of centralization in this activity makes translation consistency across free software as a whole very complex. Can you explain these difficulties? Is there a desire, particularly among French speakers, to try to solve the problem by collaborating in some way on these issues?
A piece of software is a creation; its community can be more or less inclusive and particular about certain translations.
Consistency comes with usage and evolves, like language itself, gradually and in a decentralized way.
We could imagine offering tools, but while this is a very important subject, it is not my fight for now.
I see it as a privileged person's problem, since it is specific to languages that already have enough translations, whereas almost all languages have very few and cannot keep up with the pace demanded by the evolution of our free software.
I would first like to demonstrate, and have the free software community acknowledge, that there is an urgent need to improve our efficiency through changes in processes and tooling. That tooling could surely help improve consistency as well.
Fedora is probably not the most advanced project on the question of internationalization, despite its progress over the years. What could the Fedora Project do to improve the situation?
If we want to make translators' lives easier, we should consider allowing translation at the Fedora level, separately from each project's own translations, as Ubuntu does.
The problem is that Ubuntu uses mediocre tools (Launchpad) and has no automated way to send that work back to the software's creators.
Fedora could innovate here and manage to do both, with a good translation platform (Weblate) and plenty of tooling to share this work with the various communities. Users would gain in comfort, translators in efficiency, and projects in contributions.
Anything to add?
A big thank you to the French-speaking Fedora community, to the Fedora community, and to all the communities that collaborate every day to give us emancipating tools that respect us. The work accomplished daily is exceptionally useful and precious: thank you, thank you, thank you.
Let's keep in mind that software is only a tool in the service of other struggles, in which we must play our part.
Thank you, Jean-Baptiste, for your contribution!
Conclusion
We hope this interview helped you discover a bit more about the Fedora-fr site.
If you have questions, or would like to take part in the Fedora Project or Fedora-fr, or simply use it and install it on your machine, feel free to discuss it with us in the comments or on the Fedora-fr forum.
See you in 10 days for an interview with Nicolas Berrehouc, Fedora-fr contributor and maintainer of its documentation.
The Vulkan WG has released VK_KHR_video_decode_vp9. I did initial work on a Mesa extensions for this a good while back, and I've updated the radv code with help from AMD and Igalia to the final specification.
There is an open MR[1] for radv to add support for vp9 decoding on navi10+ with the latest firmware images in linux-firmware. It is currently passing all VK-GL-CTS tests for VP9 decode.
Adding this decode extension is a big milestone for me, as I think it now covers all the reasons I originally got involved in Vulkan Video. With it signed off, there is still lots to do and I'll stay involved, but it's been great to see the contributions from others and how a bit of a Vulkan Video community has formed upstream in Mesa.
Flock to Fedora is my favorite conference and this year was no
exception.
Too many good presentations and workshops to name them all. But I want to
mention at least the most surprising (in a good way) ones. It takes some courage
to be the first person to go for a lightning talk, especially when lightning
talks aren’t even scheduled and the organizers open the floor on the
spot. Smera, I tip my hat to you. Also, I was meaning to ask,
how do graphic designers choose the FOSS project they want to work on? As an
engineer, I typically get involved in software that I use but that is broken somehow,
or missing some features. I am curious what it is like for you. Another
pleasant surprise was Marta and her efforts to
replace grub with nmbl. I will definitely try having no more
boot loader. In a VM though, I’d still like to boot my workstation :D.
Something happened to me repeatedly during this conference and amused me every
time. I introduced myself to a person, we talked for five minutes, and then the
person asked “so what do you do in Fedora?”. I introduced myself once more, by
my nickname. To which the immediate reaction was “Ahaaa, now I know exactly what
you do!”. I am still laughing about this. Organizers, please bring back FAS
usernames on badges.
It was nice to hear Copr casually mentioned in every other
presentation. It makes the work that much more rewarding.
My favorite presentation was
Bootable Containers: Moving From Concept to Implementation.
I’ve spent all my free time over the last couple of months trying to create a
bootc image for Copr builders, and seeing
Sean falling into and crawling out of all the same traps as
myself was just cathartic. We later talked in the hallway and I appreciated how
quickly he matched my enthusiasm about the project. He gave me some valuable
advice regarding CI/CD for the system images. Man, now I am even more hyped.
I learned about Fedora Ready, an amazing initiative to partner
with laptop vendors and provide a list of devices that
officially support Fedora. Slimbook loves Fedora so much that they
even offer a laptop with Fedora engravings. How amazing would
it be if my employer provided this option for a company laptop? What surprised
me was not seeing System76 on the list. I am a fan of theirs, so I
am considering reaching out.
Feeling a tap on your shoulder 30 seconds after you push a commit is never a
good sign. When you turn around, Karolina is looking into your eyes
and saying that you f’d up, you immediately know that push was a bad idea. For a
petite lady, she can be quite terrifying :D. I am exaggerating for effect. We
had a nice chat afterward and I pitched an idea for an RPM macro that would
remove capped versions from Poetry dependencies. That should make our
lives easier, no?
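The uncapping idea could look something like the sketch below. To be clear, this is not Karolina's macro nor any existing RPM macro: it is only a hypothetical illustration of stripping upper-bound constraints from a PEP 440 specifier list while keeping lower bounds.

```python
import re

def uncap(requirement: str) -> str:
    """Drop upper-bound constraints (<, <=, ~=) from a
    'name SPEC,SPEC,...' requirement string, keeping lower bounds.
    Illustrative only; a real macro would operate on dist metadata."""
    name, _, spec = requirement.partition(" ")
    parts = [p.strip() for p in spec.split(",") if p.strip()]
    # Keep only specifiers that do not impose an upper bound.
    kept = [p for p in parts if not re.match(r"(<=|<|~=)", p)]
    return name if not kept else f"{name} {','.join(kept)}"

print(uncap("requests >=2.25,<3.0"))  # upper bound removed
print(uncap("flask ~=2.0"))           # compatible-release cap dropped entirely
```

An RPM macro wrapping something like this at build time would stop Poetry's default caps from turning every minor bump of a dependency into a packaging failure.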
One of my favorite moments this year was chilling out with Zbigniew
on a boat deck, cruising the Vltava River, and watching the sunset over the
beautiful city of Prague. Kinda romantic if you ask me. Just joking, but indeed,
it was my pleasure to get to know you Zbigniew.
The JefFPL exchange
The conference began with a bittersweet moment - the passing of the Fedora
Project Leadership mantle from Matthew Miller to
Jeff Spaleta.
I didn’t know Jeff before, probably because he was busy doing really effin cool
stuff in Alaska, but we had an opportunity to chat in the hallway after the
session. He is friendly, well-spoken, and not afraid to state his
opinions. Good qualities for a leader. That being said, Matthew left giant shoes
to fill, so I think it is reasonable not to be overly enthusiastic about the
change just yet.
Matthew, best wishes in your next position, but at the same time, we are sad to
see you go.
FESCo and Fedora Council
The FESCo Q&A and the Fedora Council AMA were
two different sessions on two different days, but I am lumping them together
here. Both of them dealt with an
unspecified Proven Packager incident, the lack of communication
surrounding it, and the inevitable loss of trust as a consequence.
I respectfully disagree with this sentiment.
Let’s assume FESCo actions were wrong. So what? I mean,
really. Everybody makes mistakes. I wrote bugfixes that introduced twice as many
new bugs, I accidentally removed data in production, and I am regularly wrong in
my PR comments. Yet I wasn’t fired, demoted, or lost any trust from the
community. Everybody makes mistakes; it’s par for the course. Even if FESCo made a mistake (I am not in a position to judge whether they
did or not), it would not overshadow the majority of decisions they made
right. They didn’t lose any of my trust.
As for the policies governing Proven Packagers, one incident
in a decade does not necessarily imply that new rules are needed. It’s possible
to just make a gentlemen’s agreement, shake hands, and move on.
That being said, I wanted to propose the same thing as
Alexandra Fedorova. Proven Packagers are valuable in emergencies,
and I think it is a bad idea to disband them. But requiring +1 from at least
one other person before pushing changes makes sense to me. Alexandra proposed
+1 from at least one other Proven Packager, but I would broaden the eligible
reviewers to also include Packager Sponsors and
FESCo members. I would also suggest requiring the name of the reviewer
to be clearly mentioned in the commit description.
As I wrote in my last post, Twitter's new encrypted DM infrastructure is pretty awful. But the amount of work required to make it somewhat better isn't large.
When Juicebox is used with HSMs, it supports encrypting the communication between the client and the backend. This is handled by generating a unique keypair for each HSM. The public key is provided to the client, while the private key remains within the HSM. Even if you can see the traffic sent to the HSM, it's encrypted using the Noise protocol and so the user's encrypted secret data can't be retrieved.
But this is only useful if you know that the public key corresponds to a private key in the HSM! Right now there's no way to know this, but there's worse - the client doesn't have the public key built into it, it's supplied as a response to an API request made to Twitter's servers. Even if the current keys are associated with the HSMs, Twitter could swap them out with ones that aren't, terminate the encrypted connection at their endpoint, and then fake your query to the HSM and get the encrypted data that way. Worse, this could be done for specific targeted users, without any indication to the user that this has happened, making it almost impossible to detect in general.
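The missing piece here is key pinning. The sketch below (all names and key bytes are made up, and a real client would pin the full key or a certificate chain, not just a bare hash) shows the shape of the check a client could perform if the HSM's public key fingerprint were baked into the client at build time instead of fetched from the server:

```python
import hashlib

# Hypothetical fingerprint of the HSM's public key, compiled into the
# client binary at build time rather than fetched over the network.
PINNED_FINGERPRINT = hashlib.sha256(b"hsm-public-key-bytes").hexdigest()

def accept_server_key(server_supplied_key: bytes) -> bool:
    """Only proceed with the Noise handshake if the key the server hands
    us matches the pinned fingerprint. Without this check, the server can
    substitute its own key and terminate the 'encrypted' channel itself."""
    return hashlib.sha256(server_supplied_key).hexdigest() == PINNED_FINGERPRINT

print(accept_server_key(b"hsm-public-key-bytes"))      # genuine key: accepted
print(accept_server_key(b"attacker-controlled-key"))   # swapped key: rejected
```

The point is structural rather than cryptographic: once the fingerprint ships inside the client, swapping keys requires shipping backdoored client code, which is far more detectable than swapping an API response.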
This is at least partially fixable. Twitter could prove to a third party that their Juicebox keys were generated in an HSM, and the key material could be moved into clients. This makes attacking individual users more difficult (the backdoor code would need to be shipped in the public client), but can't easily help with the website version[1] even if a framework exists to analyse the clients and verify that the correct public keys are in use.
It's still worse than Signal. Use Signal.
[1] Since they could still just serve backdoored Javascript to specific users. This is, unfortunately, kind of an inherent problem when it comes to web-based clients - we don't have good frameworks to detect whether the site itself is malicious.
If you use a seedbox for your torrent downloads, you know how painful it can be to transfer the files to your NAS by hand. That is exactly why I created SeedboxSync: a simple, lightweight tool that automates this step. SeedboxSync connects to your seedbox over SFTP and copies […]
(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)
When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted - technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted message platform built on Rust with (Bitcoin style) encryption, whole new architecture. Maybe this time they've got it right?
tl;dr - no. Use Signal. Twitter can probably obtain your private keys, and admit that they can MITM you and have full access to your metadata.
The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].
That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.
Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.
But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts to I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. You aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.
Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does seem to be making use of this: it uses three backends and requires data from at least two, but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)
On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.
But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party and there's no out of band mechanism to do that or verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who and when you're messaging even if they haven't bothered to break your private key, so there's a lot of metadata leakage.
Signal doesn't have these shortcomings. Use Signal.
[1] I'll respect their name change once Elon respects his daughter
[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings
[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys
This is an independent, censorship-resistant site run by volunteers. This site and the blogs of individual volunteers are not officially affiliated with or endorsed by the Fedora Project.
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114841380033994886