Hello testers,
starting today we are introducing several important changes to our subscription backend.
New Quay.io accounts
Subscribers can access extra container images via private Docker repositories.
What is changing: the quay.io account username will no longer be based on your email address;
instead it is based on the subscription ID, which is more stable over time.
Discovering your credentials is explained on the page above.
New accounts have been created automatically for those eligible!
Old accounts will continue to be active until December 31st 2024,
after which they will be removed! Make sure to update your
workflows with the new credentials before December 31st!
Automatic account creation for new subscriptions
Previously, subscribers were required to create
an account on https://public.tenant.kiwitcms.org using the same email address
used during their purchase.
What is changing: user accounts for new subscriptions will be created automatically
if they do not exist, and a random password will be assigned to them. Customers will
be able to reset passwords for these accounts via
https://public.tenant.kiwitcms.org/accounts/passwordreset/! The account username
is sent as a reminder in the password reset email!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
With the transition of the applications from Fedmsg to Fedora Messaging inching towards completion, today we want to introduce a new service, Webhook To Fedora Messaging. Webhook To Fedora Messaging has been researched and developed by Fedora Infrastructure team members, together with an Outreachy mentee, over the last quarter to communicate with services using webhooks.
Webhook To Fedora Messaging takes webhook events from services and translates them into semantic messages sent over the Fedora Messaging bus, to which every Fedora Project application can listen and act for automation. Currently, the project supports services like GitHub, but going forward we plan to implement support for services like Discourse, GitLab and Forgejo.
As this service was designed to be the successor to the existing Github2Fedmsg service, we are also announcing that Github2Fedmsg is now deprecated and users are encouraged to migrate to the newer service. If you are an existing user of Github2Fedmsg, please open a private ticket in the fedora-infra/w2fm-registration repository using the template named "Github2Fedmsg Migration Request".
Additionally, as GitHub allows webhooks to be managed at the organizational level, users migrating from the Github2Fedmsg service can explore this functionality by visiting https://github.com/organizations//settings/hooks. Once the changes have been made by the owner of the GitHub organization, the activities from all repositories can be conveniently relayed on the Fedora Messaging bus.
Recently I got myself a Thinkpad X1 Tablet Gen 3. However, unlike other Thinkpads, this model does
not seem to use the usual Thinkpad kernel module that allows sensitivity to be set in sysfs.
Thanks to some inspiration from ChatGPT, I found out that it is possible to apply a multiplier
to the event stream by building a daemon that intercepts the input events and modifies them.
So here’s how you can do it:
Dependency
On Fedora, you will need python3-evdev installed
sudo dnf install python3-evdev
Event modification script
Save this in /usr/local/bin/alter-sensitivity.py
from evdev import InputDevice, UInput, ecodes
import time
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--device', required=True)
parser.add_argument('-m', '--multiplier', type=int, default=2)
parser.add_argument('-t', '--trigger-threshold', type=int, default=2)
args = parser.parse_args()

# Open the input device
device = InputDevice(args.device)

# Create a virtual input device to emit the modified events
ui = UInput.from_device(device, name="ModifiedDevice")

# Loop to process incoming events
for event in device.read_loop():
    # Modify only relative movement events above the threshold
    if event.type == ecodes.EV_REL and abs(event.value) > args.trigger_threshold:
        for i in range(args.multiplier):
            print(event.value)  # debug output
            ui.write_event(event)
            ui.syn()
            time.sleep(0.0005)  # for smoothing out the movement
    else:
        # Emit the event unchanged
        ui.write_event(event)
        ui.syn()
Find your device using sudo libinput list-devices and evtest [device].
Test it out. In this example I’m altering events from the /dev/input/event5 input device, which is
my trackpoint. The -m option lets you configure how much you want to multiply
the events (default is 2).
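To see the effect of the multiplier without any hardware, the core transformation can be sketched as a pure function. This is an illustration only; the function name and list-based shape are my own, not part of the daemon above:

```python
def amplify(values, multiplier=2, threshold=2):
    """Repeat each relative-movement delta `multiplier` times when its
    magnitude exceeds `threshold`; pass small deltas through unchanged.

    In the real daemon each repetition is emitted as a separate input
    event, so the deltas accumulate and the pointer moves further.
    """
    out = []
    for v in values:
        if abs(v) > threshold:
            out.extend([v] * multiplier)
        else:
            out.append(v)
    return out

# A delta of 5 exceeds the threshold and is doubled; a delta of 1 is not.
print(amplify([5, 1, -4]))  # [5, 5, 1, -4, -4]
```

Assuming you saved the script as above, it would be launched with something like sudo python3 /usr/local/bin/alter-sensitivity.py -d /dev/input/event5 -m 3 (root access is needed to read the event device and create the uinput device).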
It’s spooky season, and also release season! Here’s a roundup of some stuff’n’things happening in the project.
Fedora Linux 41
The F41 Go/No-Go meeting will happen on Thursday 24th October @ 1700 UTC. To join, you can find the information on the fedocal calendar entry. This meeting will determine whether or not we can meet the early release date of Tuesday 29th October.
There are still some proposed blockers for F41 too, so if you can help resolve some of these bugs, please check out our blocker bugs app, or for a more condensed summary, please read our blocker bug report email.
Fedora Linux 42
Fedora Linux 42 is currently in development, and for the most recent set of changes planned in this release, please refer to our change set page. Our release schedule is also live, and a reminder of some key dates is below:
December 18th – Changes requiring infrastructure changes
December 24th – Changes requiring mass rebuild
December 24th – System Wide changes
January 14th – Self Contained changes
February 4th – Changes need to be Testable
February 4th – Branching
February 18th – Changes need to be Complete
The changes that are currently in our community feedback period are:
For all the latest on bootc, check out the bootc post on discourse!
Our Git Forge evaluation is continuing, with an instance of both Forgejo and GitLab CE available to try out in the Communishift app. Details of how to get access can be found on this discussion thread, and we are encouraging folks to try out each instance and report their feedback, preferably against the user stories collected, on this discussion thread.
FOSDEM 2025 returns on Saturday 1st and Sunday 2nd February! The call for devrooms has passed, but the call for stands is still open until November 7th.
Do you have an idea for an episode of the Fedora Podcast, or want to see what some of the upcoming episodes will be? Bookmark The IT Guy’s discussion post on planning for the podcast!
Did you know there are EPEL Office Hours? If not, check out the details on how to join and when they happen on the announcement post!
Josh and Kurt talk to Seth Larson from the Python Software Foundation about securing the Python ecosystem. Seth is an employee of the PSF and is doing some amazing work. Seth is showing what can be accomplished when we pay open source developers to do some of the tasks a volunteer might consider boring, but which are super important work.
The config.assume_ssl setting adds the ActionDispatch::AssumeSSL middleware, which sets the following request values:
HTTPS to on
HTTP_X_FORWARDED_PORT to 443
HTTP_X_FORWARDED_PROTO to https
rack.url_scheme to https
This is useful when running Rails with force_ssl enabled behind a load balancer or proxy that terminates the SSL connection. It prevents ActionDispatch::SSL from auto-redirecting to HTTPS again.
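As a sketch, enabling this in an environment file would look like the following (config.assume_ssl is available from Rails 7.1; whether you also keep force_ssl on depends on your proxy setup):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # The proxy (e.g. an ALB or nginx) terminates TLS, so tell Rails
  # to treat every incoming request as if it arrived over HTTPS.
  config.assume_ssl = true

  # Still enforce HTTPS semantics (secure cookies, HSTS, redirects)
  # at the application layer.
  config.force_ssl = true
end
```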
Cybersecurity is a vital topic for Switzerland and
social engineering attacks are a significant issue in the realm
of cybersecurity.
Organizations like Google, Facebook and LinkedIn could be seen as
very effective social engineering attacks against Swiss culture and
privacy.
Frans Pop, the
Debian Day Volunteer Suicide Victim, had sent at least one of
his suicide notes on debian-private gossip network the
night before Debian Day. If an organization can get into somebody's
head like that, such that decisions about life and death revolve
around this software, we could contemplate the possibility that
Frans Pop died under the influence of a social engineering culture.
Adrian von Bidder died on the same day that Carla and I got married.
Why can't we ask questions about that?
Switzerland reportedly has
a higher per-capita ratio of Debian Developers than any other country
except perhaps Ireland. Yet according to Shuttleworth's
email, many of these people have a loyalty to Debian culture that is above
their loyalty to Swiss employers and Swiss law. This dual
allegiance appears to be a sign that they are under the sway of
social engineering or at risk of external influence.
By way of background, in 2006, Adrian and Diana got
married. In 2007,
the suicide petition to Basel Stadt authorities
was signed by A. von Bidder.
In August 2010, we had the confirmed suicide of Frans Pop, the
warning from Mark Shuttleworth and a sustained period of stress
among volunteers in the Debian Developer world.
In April 2011, Adrian von Bidder died. It was discussed as if it were a suicide,
but we were told casually that it could have been a heart attack.
There was no comment about whether the couple had any children
during the five years of their marriage.
On 28 April 2011, very soon after von Bidder died, Diana modified his blog,
adding a new post:
Sadly, I have to make an end to this blog. Adrian - my husband - died on april 17th of a heart attack.
Adrian von Bidder had made various blog posts with critical commentary
about the risks of social media and other devious enterprises.
Many of his concerns have been proven correct by the passage of time.
Yet I feel the manner in which Diana writes "I have to make an end to this
blog" has an air of disapproval for Adrian's work. Then again,
this must have been a very disturbing time for Diana and on top
of that, English may not be her native language so the tone
of her comments may not reflect her real thoughts and feelings about
the subject.
Some time later, Diana completely erased the blog, removed the DNS
entry for blog.fortytwo.ch and placed a picture on the main page,
fortytwo.ch.
The picture's metadata tells us
it was taken on 20 January 2011 with a Canon EOS 40D, possibly
the camera Adrian discussed in some of his blog posts.
We know that other Debian Developers in Switzerland were subject
to social engineering attacks involving blackmail and public
humiliation. One of those cases was the blackmail of
Daniel Baumann. Did Adrian von Bidder receive similar messages
in the days before his heart attack?
Did Adrian von Bidder communicate with anybody before his
heart attack, for example, leaving a note? In English-speaking countries,
all these things are published by the coroner's office. In
Switzerland, it is the opposite, evidence is only given to those
in close proximity to the deceased. At the time, Diana may not
have known about the earlier suicide of Frans Pop. She may not
have realized there was the risk of a connection between deaths
in a single community. Now
the suicide cluster is public knowledge, is it time for a fresh
discussion about that?
Most cybersecurity experts around the world believe that
transparency is important for education and mitigating risks.
Here is a photo of Diana and Adrian on their wedding day:
Hitler and the Nazis were obsessed with the idea that Jews
could be identified by a distinctive smell. While America was
building the A-bomb, Hitler
diverted science funding to research the Jewish smell.
The smell was rumored to resemble sulfur.
It makes the case that there was a shift in the way that smell, beginning in the late nineteenth century, was used to not simply demarcate groups but, in addition, to supposedly detect ‘race’ and ethnicity.
Prominent Debian Developer Daniel Pocock has recently released
details of the
Swiss harassment judgment. His former landlady, an organizer of
the SVP senioren (far right Swiss seniors group) had started rumors
about a smell coming from Pocock's cats. Even the judge asked
if it could be acceptable to pose questions about this imaginary smell.
Obviously the judge was not familiar with this awkward similarity
to the persecution of Jewish and African people throughout
history.
For about six years now, people have been creating gossip about
harassment and abuse against various Debian co-authors. Nobody ever
provided any evidence.
Earlier this year, when I nominated in the European elections, the
misfits were desperate to attack me but they didn't have any grounds to do so.
They waited until the last minute before voting began and on
6 June 2024, the day before voting, they
published a document that appears to be invalid, full of forgeries, racism
and nonsense.
But wait, there really was a harassment case and a judgment.
With the Irish General Election approaching, I am considering whether
to nominate again and it is really important that people can see the
truth about who really harassed who.
Swiss racism, cats of colour, women harassing women and a 10,000 Swiss franc settlement
The only mistake I made was taking black cats to Zurich.
The real Debian harassment story is about women harassing women
and occasionally, a woman harassing our cats and women harassing men.
In Switzerland, both in the law and in the culture, when you have
a harassment problem like this the matter is usually settled privately
and everybody moves on with their life as quickly as possible.
Carla and our black cats, who are also female victims, were subject
to racism from a white Swiss woman. We received a payment of CHF 10,000.
Surely I would have rushed to publish that on my blog the same day.
But I didn't publish it before. When the WeMakeFedora case was resolved,
I immediately put it on my blog. But in the case of the harassment
in Zurich, I wanted to respect all the parties involved, I wanted to respect
the Swiss cultural approach to such disputes in Switzerland and just
put it out of my mind and get on with serious problems.
Nonetheless, Debianists, including people like Axel Beckert at
ETH Zurich and at the Google office in Zurich have been stirring
up rumors about the harassment and paw behavior for six years.
Ironically, the Google engineering headquarters for Europe
is located in Zurich and Google's role in spreading rumors about
the harassment case had actually undermined the privacy that
people used to take for granted in Switzerland.
Women harassing women: a common problem
In the case of serious violent crime against women, the majority
of perpetrators appear to be male.
In the case of less tangible crimes, like harassment, stalking,
racism and even sexism, we can find many cases where women are
either protagonists or associates of an offender.
The recent Netflix series
Baby Reindeer
cast a spotlight on the story of a woman harassing a male
employee at a bar.
In 2021, we saw a female volunteer, Molly de Blanc, start an online
petition
harassing her former boss, Dr Richard Stallman at the FSF. Approximately
three thousand people joined the petition, but a petition about a person
is not a real petition at all; it is harassment. de Blanc made the petition
more than two years after leaving her job at the FSF.
In a previous blog, I looked at the case of another non-developing
Debian volunteer, Laura Arjona,
harassing one of my female interns in the Outreachy program.
After learning that this goes on behind mentors' backs, I didn't
volunteer to be a mentor again.
Then there was Amaya Rodrigo Sastre who helped spread the rumors
that Ted Walther's partner at the DebConf6 dinner was
alleged to be a prostitute. In fact, the woman was a dentist and
these rumors were disastrous for her reputation.
Ariadne Conill from the Alpine Linux project, which has no
relationship to Switzerland as far as I can tell, was spreading
the rumor that my intern in Google Summer of Code was my girlfriend.
The rumor was offensive to me but even more offensive to the intern
because
that was the year she got married.
Shortly before DebConf15, we received
nasty messages from Margarita (Marga) Manterola of Google telling us
that Carla is not welcome to eat the food at DebConf, despite the
fact that other women like Marga go there with their husbands every year.
While waiting for the train to go down the
Uetliberg one day, Carla and I were talking to a British woman
in the playground beside the railway station. The woman told
us about her Swiss landlady, a little old lady, who had been
whinging and whining about the behavior of her small children.
The Swiss landlady had become quite obsessed and had even been
caught at the window taking pictures of the way the children played
inside their home.
Looking at the
invalid and falsified legal documents distributed
by rogue members of Debian, we can find various references to
my Irish heritage. Everybody seems to know that I was born and
raised in Australia. I acquired Irish citizenship because my mother
is from Ireland. We find that the racist women in Switzerland, and
we'll see more of them in this blog, are not classifying people based
on our skills and talents, they are obsessed about little things
like my mother's Irish heritage. In fact, some of these
documents were prepared by two women in Zurich,
Pascale Koster and Albane die Ziegler. The documents don't mention
that I am a citizen of three countries, they emphasize my Irish
heritage as some kind of a hint to their racist colleagues
that my mother and I should be treated badly in Zurich.
What we see here is another example of women being offensive to
other women.
One of the most well known examples of women exhibiting poor
behavior to other women in Zurich was the infamous Oprah Winfrey
handbag incident. A woman in the handbag shop refused to let
Oprah look at a particular handbag. Oprah gives a testimony about
her experience with the Swiss saleswoman (Kauffrau) in this
video:
This brings us to the point where we will consider the paw behavior
of a Swiss landlady towards Carla and our black cats, who are both female
cats, so there was a female offender and three female victims.
I don't wish to make the generalization that all women are like
this. I've worked with many professional women who act with
integrity in everything they do. But when we see gossipmongers
making up stories about harassment in groups like Debian, we need
to remember the risk of listening to attention seekers
and their paid lawyers/liars. Gossip and
social engineering attacks go hand in hand and if we care
about cybersecurity, we need to call out gossip behavior.
Harassment and racism are not only Swiss problems
Before rushing to any conclusion about racism in Switzerland,
we need to remember that there is racism in every country.
When we look at the concerns about Brexit in the United Kingdom,
there was a lot of racism during the campaign period before the
referendum. Some of the practical changes in the UK, like
canceling the driving licenses of foreigners, actually happened
before the Brexit referendum. Likewise, whenever there is a Swiss
referendum about the relationship with the EU, some people
may voice racist opinions about the subject but there may be
some valid political or economic discussions that take place
at the same time.
We can also ask the question: are there times when Swiss citizens
are subject to extreme acts of bullying or extreme injustice by
employers, landladies or the public authorities? In fact,
some examples do exist.
Looking at the JuristGate
affair, we can see that the rogue legal protection scheme, which
smells like a ponzi scheme, had both Swiss customers and foreign
customers. All the customers lost their money at the same time.
When FINMA shut down the rogue insurance, they hid the details
from everybody, both Swiss and foreign clients were kept in the dark
to an equal extent. Therefore, there was extraordinary injustice,
there were some foreign clients but racism wasn't the main theme
in JuristGate.
When I look at
the case of Adrian von Bidder (avbb / cmot),
the Debian Developer who died on our wedding day,
I wonder if he had one of the same bad experiences that
foreigners often complain about in Switzerland. For example, did one
of the health insurance companies bungle a treatment for his wife
or did an employer fail to make contributions to his pension scheme
and then go into liquidation?
Here is a photo of Diana and Adrian on their wedding day:
In Swiss culture, sensitivity about the cause of death is
an important cultural consideration. After blogging the
initial evidence about how the death was discussed in
the debian-private gossip channel, I came to realize that Adrian's
widow, Diana, was listed as a member of the Basel City
parliament. In such cases, there is obviously even more opportunity
to ask questions about the interaction between the death and any
environmental or cultural factors, whether in Debian or in his community,
but at the same time, the cultural aversion to asking those questions
is a very steep obstacle.
Real harassment, real evidence ordered chronologically
Some time in 2017 or 2018, Chris Lamb, former leader of the
Debian project, started making mischievous references to harassment.
He didn't provide any facts, dates, victims or evidence.
Most of the larger property management companies in Zurich and
Switzerland are somewhat consistent in their application of tenancy
regulations.
When people find a nice apartment with a responsible landlord, they
usually keep the apartment for a very long time.
Some smaller buildings, usually sized between five and ten apartments,
are owned by a resident landlord/landlady. This gives rise to the
phenomenon where the landlady and tenant may cross paths almost every day.
It goes without saying that the turnover of tenants in some of these
owner-occupier buildings is much higher than in the buildings owned by
a silent investor.
Web sites advertising the apartments sometimes have a checkbox and
filter option for potential tenants to exclude apartments with a resident
landlady (Vermieter wohnt im Mehrfamilienhaus). Most people
who have had a bad experience with one of these will go out of
their way to avoid them in future.
Due to the very high turnover in buildings with a resident landlady,
the number of advertisements for such apartments is disproportionate
to the number of buildings that don't have a resident landlady.
Laundry duties & the status of women
Very new buildings in Switzerland have a washing machine and
clothes dryer in every apartment. Most traditional buildings and
some new buildings have a laundry room or drying room shared by all the
tenants. Most buildings have a handwritten roster where the tenants
can reserve the machines for a particular day.
You may only have one reservation to use the laundry every two weeks.
If that reservation falls on a work day and you have multiple loads
of washing to do then it can be very inconvenient. Nonetheless,
nobody sees any urgency to change this system. There is a prevailing
attitude that the wife or girlfriend will stay home on the laundry day
and ensure that all the clothes are nicely washed, dried and folded
and the laundry room is left in a proper state for the tenant
who will use it on the following day.
Switzerland is notable for its neutral status and hosting diplomats
from around the world at the United Nations in Geneva. But if
the washing machine breaks down and one tenant's drying time
runs over into the next day, there is anything but diplomacy
and tenants regress to communicating with each other through handwritten
notes written in
one of the four official Swiss languages.
The application process, religious harassment and cats
When tenants arrive to visit a prospective apartment, they are
given an application form that must be completed for the landlord
or letting manager.
They tend to ask more questions than necessary. It is not unusual
to find questions about your religious affiliations on the form.
We can quickly find examples of these forms in a search engine
by searching for words like Anmeldungformular and
Konfession (
Example 1,
Example 2,
Example 3).
In effect, if your religion has been
persecuted in Switzerland,
you may well feel that filling out the application form
is an experience of harassment.
News articles appear from time to time about whether or not
you should declare your religion. (
Example 1,
Example 2,
Example 3).
Not every Anmeldungformular asks about religion but
it is almost certain they will ask about your pets and musical
instruments. It is a good idea to answer those questions honestly
in any country. While some landlords and letting agents will decline
certain requests, others will be quite
happy to direct you to the most suitable apartments for your
lifestyle.
Whenever we applied for any apartment in Switzerland, we did
so with total honesty and integrity. We declared our cats
(Katzen):
Specifically, we have written Hauskatzen, which literally
translates to house cats. In other words, we are not
talking about something exotic like a tiger or panther.
No room for undocumented aliens
The confession of cat ownership led to a flurry of paperwork
mediated by the letting agent. Everybody who rents an apartment
in Switzerland is expected to purchase a civil liability insurance
and pay three months of rent as a security deposit.
In our case, that simply wasn't enough. The landlady insisted
that we sign a guarantee against any paw behavior by our cats:
Costs anticipated by this document were already anticipated
by the security deposit and our civil liability insurance. Therefore,
I feel this additional cat contract was superfluous. Can we call
it harassment or bullying?
Fair wear and tear
Switzerland has high standards for construction and due to
the level of wealth, even the most mundane apartments typically
have very high quality components in their bathrooms and kitchens.
It is typical to have mixer taps on the showers and sinks, good water
pressure and wall mounted toilets.
When tenancies are concluded in Switzerland, the apartment or
house is subject to a forensic examination that may last several
hours.
It is expected that the tenant leaving an apartment will arrange
to have it cleaned back to the original state before the inspection
day.
Even if the bathroom is 30 or 40 years old, the high quality
components still look like new after each cleaning.
Nonetheless, internal components like washers and gaskets don't
last forever, no matter how beautiful the sinks and toilet bowls
appear on the outside.
In this particular apartment we experienced the failure of both
the shower mixer and the gasket joining the cistern to the toilet
bowl. Both of these things failed within a short span of time.
The plumber came promptly to make the necessary repairs.
Nonetheless, after the drama about whether our cats were a national
security risk, we were never on a good footing with this
particular landlady. She was 76 years old and the far right party,
of which she was a member, was constantly warning her to be
on the lookout for mischievous foreigners.
If you look at the far right propaganda circulated in advance of
referendums and elections in Switzerland, the foreigners are
typically depicted in black, like our cats.
At Kaltbad on the Rigi, we found a white cat in the snow:
A large professional landlord company with thousands of
apartments probably wouldn't worry about the cost of repairing
these washers and gaskets. On the other hand, for these owner-occupier
landladies who like to micro-manage their tenancies, some of them
stay up all night worrying about whether
tenants (or cats) do something like this as a prank.
Here is the report about the shower defect about two weeks
after we moved in. There is no way that tenants or cats could
have put rust into the pipes. These are simply the problems of
an old building.
Subject: bath / shower water problems
Date: Thu, 1 Dec 2016 09:39:29 +0100
From: Daniel Pocock <daniel@pocock.pro>
To: Letting agent
Hi [redacted],
The plumber visited today, he replaced the dishwasher door and the
shower hose.
He also looked at the flow from the hot water tap in the shower. He
found a lot of rust inside the tap.
He removed the hot and cold taps, cleaned out the taps and ran the water
directly from the pipes in the wall. A lot of rust came out of both hot
and cold pipes.
- the hot water pipe is now flowing better, but it is still less than normal
- water from both hot and cold pipes still has a slight red colour
He said he will contact you to explain and discuss how it can be fixed.
Regards,
Daniel
Cat smell letter
While I was on a trip to the UK, Carla received this ugly letter:
It says there is an unknown smell in the common areas and
it asks if the smell could come from our cats or deficiencies
in cleanliness.
Carla and the cats were really sad.
We contacted our legal insurance and had a lawyer draft a
response. We hoped that would be the end of the matter.
The window nazi
Then came the windows. There are 11 apartments in the
building and somebody would sometimes open one of the windows
in the stairwell and leave it open.
The landlady became obsessed with closing the windows and
leaving handwritten notes on them.
Mediation requested
After some months of receiving insults in the post and in the
common areas, it reached a point
where we had to take legal action. We demanded a mediation
session at the tribunal of Zurich.
Our cats were members of our family. Everybody loved our
cats. My Italian cousin came to stay with them on several occasions:
Remarkably, the landlady sent an expensive lawyer to repeat
the accusations about a cat smell, the window in the stairs and
a dirty towel that another tenant found in the washing room.
There were no fingerprints, no paw-prints, no video evidence,
no DNA evidence, not even a whisker to link any
of these problems to us. It was just a witch-hunt; as we had
black cats, were the most recent arrivals in the building,
and were foreigners, we felt we had been victimized.
Here is the accusation about a disobedient tenant who opens the window
in the stairs:
Swiss lawyer tried to deceive Swiss judge about far right membership
Early in the mediation session, the lawyer for the landlady claimed
that it wasn't clear whether or not she was really a member of the
far right political party.
We were able to show the judge that the landlady had a web site
promoting the party. Here is one of the photos, she is chairing a meeting
and the poster attached to the table has her name and face on it.
The filename tells us it is a meeting of the far right seniors committee
(SVP senioren):
In this photo, she is standing beside the then-president of the
Kanton parliament, Dr. Christian Huber:
Shortly after the photo
was taken, Dr Huber resigned from the parliament and resigned
from the SVP in mysterious circumstances.
Dr Huber and his spouse spent the next ten years traveling around the
European Union by houseboat. This is ironic of course, a leader
from an anti-immigration/anti-EU party living like a refugee in a boat
in the EU. In Australia the far right uses the term
boat people as a derogatory term for immigrants who travel by boat.
Mystery smell: who is defaming who?
Here is the accusation about a mysterious smell. The lawyer
is saying it is not clear where it comes from because he doesn't
want to be caught defaming foreigners directly. He doesn't provide any
expert evidence or witnesses, he basically says the landlady has a
hunch about this smell and the judge should trust the landlady.
The letting agent is also in the room and if the rumor was credible
he would have surely commented on it. I don't think he wanted
to comment about the smell at all so it came down to the expensive
lawyer to talk this imaginary smell into existence.
When I hear references to these mysterious smells, I feel it is
a way for the jurists to give each other a wink and a nod and ask
for the foreigners to be punished.
Every time Carla went down to the laundry in the basement,
the little old lady would appear. We don't know if she had
video surveillance cameras or if she spent all her day going
up and down the steps to check on the laundry.
Nonetheless, Carla had become quite upset about the cat letter
and the intrusions in the laundry and at some point I had to
start doing the laundry because it was impossible for Carla to
go down there alone.
The landlady was taken aback by the sight of a man in the laundry.
She started calling Carla's employer. We don't know what she was
hoping to achieve. Was she trying to determine if Carla had absconded?
Or was she trying to find out why the employer expected Carla to work
on laundry day?
The lawyer sent a stern letter demanding that these phone calls to
Carla's employer must cease immediately.
Frau [----] hat letzte Woche beim Arbeitsort meiner Mandantin
angerufen und unter Vorwand, sie wolle mit ihr sprechen,
gegenüber der Chefin meiner Mandnatin während ca. 30 Minuten
meine Mandanten im Zusammenhang mit dem vorliegenden Verfahren
angeschwärzt, resp. diese in ihrer Ehre verletzt.
Ich fordere Sie auf, Ihre Klientin über die Tragweite der
Bestimmungen über strafbare Handlungen gegen üble Nachrede
und Verleumdung zu informieren.
Es gab und gibt keinen Grund der direkten Kontaktaufnahme und
insbesondere keinen Grund für Ihr Klientin, beim Arbeitsort
meiner Mandantin anzurufen.
Sollte es noch einmal vorkommen, dass Ihre Klientin gegenüber
meinen Mandanten oder Dritten ausfällig wird und sich sonst
rassistisch äussert, so wird dies entsprechende Konsequenzen haben.
Ich denke auch nicht, dass das Verhalten Ihrer Klientin die
Verhandlungsbereitschaft meiner Mandanten bezüglich des
vorliegenden Verfahrens erhöht.
and translated into English:
Last week, [----] called my client's place of work and,
under the pretext that she wanted to speak to her,
spent around 30 minutes denigrating my client in connection
with the current proceedings to my client's boss, and insulted her honor.
I request that you inform your client of the scope of the
provisions on criminal offenses against slander and defamation.
There was and is no reason to make direct contact and in particular
no reason for your client to call my client's place of work.
Should your client become abusive towards my client or third parties
or otherwise make racist comments, this will have the appropriate
consequences.
I also do not think that your client's behavior increases my
client's willingness to negotiate with regard to the current proceedings.
Would a female judge in Zurich be any more sympathetic than a
female landlady? Maybe not. Here, the landlady's lawyer is explaining that
if the man (me) is busy with my job, the woman (Carla) can look for
another flat. The judge and the translator are both female.
Nobody calls out the sexism.
The search for a flat in Zurich is not a trivial task. In
German, the press refer to it as the Wohnungslotterie. When
a new building is about to be completed, hundreds of prospective
tenants line up outside to submit copies of their Anmeldungformular
in person.
What we see here is Swiss feminism, that is feminism for Swiss women.
I don't think it's up to a man to give the definition of feminism.
But I feel it is safe to say that Swiss feminism or Australian feminism
are contradictions because it is basically privileged women from
rich countries who go to university and become jurists and meddle
in the lives of women from other countries.
One of the reasons we are in court in the first place is because
Carla didn't feel comfortable being that woman from Latin America
who does laundry with the Swiss landlady looking over her shoulder.
When Swiss families want to apply for apartments,
they send their foreign nannies to stand in those queues and submit
the forms.
"In the early days ... every client meeting I would be asked to get the coffee. The other male graduates
were never asked to do such things,"
She's right: in more than twenty years since I graduated, nobody ever
asked me to make coffee in the workplace. And when I tried to share
responsibility for doing the laundry in Zurich, the landlady was
opposed to the idea. She seemed to feel that women like Carla were
easier to control.
In Renens, Canton Vaud, a white cat can sleep on the steps at
the railway station and nobody complains about the risk that
somebody might trip over the cat. Every ten minutes, the metro arrives
at the top of the steps and hundreds of people come down the steps to
search for their trains. There is a serious risk that somebody could
trip over the cat and suffer an injury. If it was a black cat, would
the police come with dogs to remove it?
Everybody in west Lausanne seems to know this cat but nobody
knows who it belongs to.
Here is the part of the trial where they talk about the landlady
calling Carla's workplace about the laundry:
Who owns that towel?
Given the lack of evidence about the imaginary cat smell, the
landlady had tried to diversify her legal strategy by introducing
a dirty towel that somebody found in the washing room.
Most landlords would simply provide a basket for
lost property. Even at Swiss prices, the cost of a basket for
these elusive towels and socks would be far less than the cost
of the lawyers.
The cat smell trial consisted of four jurists, an interpreter, the
letting manager and an engineer, myself. The combined cost of our
time was over CHF 2,000 per hour for three hours in court
debating the anxieties of a landlady who didn't show up.
In comparison, many Swiss residents drive over to Germany or
France each weekend for shopping. At
Action in France, you can buy another towel and a lost property
basket for a combined cost of less than ten Swiss francs.
The fact they tried to bring this towelgate affair into the courtroom
only proves that they had no serious case in the first place.
They were clutching at straws.
Speaking English in a Zurich courtroom
I think the judge realized that the landlady had a very weak case
and on top of that, the landlady's lawyer had been somewhat deceptive
about the political connection. The judge decided to continue the
mediation session using the English language.
The far right Swiss landlady was unable to sleep due to the
imaginary smell, the sight of a man doing laundry and our stubborn refusal
to take phone calls during our working hours about every little drama
in the missing towels department. Yet my family had far
more serious concerns due to my father's health. I tried to explain
that in the court but were they listening?
Switzerland is a very small country and many people live in the same
valley where they grew up with their parents. Even if they move from
their valley to a city like Zurich, they can always reach most of
their extended family with a short journey by train.
In the most hostile company where I worked in Switzerland, a line
manager's mother had developed a terminal illness and had less than
six months to live. The manager went back to his country for a number of
months and the company's strategy, organization and culture were totally unable
to cope with this situation.
Nonetheless, in our case, Carla's aunt was getting very old and
my father was very ill. The financial cost of the mediation session
where we spoke about missing towels and the imaginary cat smell was
greater than the financial cost of a trip to Australia to see my father.
The judge and I seem to agree there are cultural differences but
the extent to which some people react to small differences is
extraordinary:
Defending the honor of black cats before a Swiss judge
Vous promettez d’être fidèle à la Constitution fédérale et à la Constitution du Canton de Vaud.
Vous promettez de maintenir et de défendre en toute occasion et de tout votre pouvoir les droits, les libertés et
l’indépendance de votre nouvelle patrie, de procurer et d’avancer son honneur et profit, comme aussi d’éviter tout ce qui
pourrait lui porter perte ou dommage
and translated into English:
You promise to be true to the federal constitution and the constitution
of the Canton of Vaud.
You promise to maintain and defend on every occasion and with all your
powers the rights, freedoms and independence of your new country,
to develop and advance her reputation and wealth and equally to
avoid all that could cause her loss or damage.
What does an oath like this mean in practice? In the Zurich courthouse,
I defended the honor and reputation of our black cats before a
Swiss tribunal:
Remarkably, the judge repeats the question about whether there
could be a smell. This was so offensive to us as a family.
In fact, these rumors about smells have Holocaust origins.
Hitler commissioned significant scientific research to determine
if the Jews have a distinctive smell. When the judge tried
to legitimize these black-cat-smell comments in Zurich, I couldn't believe
what I was hearing.
When the
Albanian whistleblowers came to Zurich, they slept with the
cats. Here is Anisa Kuci from OpenStreetMap, Wikimedia and
GNOME Foundation on our sofa bed with Buffy the black kitten
sleeping beside her:
If people want to confirm the cat smell was a lie, just ask Anisa.
Switzerland vs Australia, which country is more beautiful
I feel that honesty is always important in any relationship.
When we see courtrooms on television, the witnesses promise to
tell the whole truth, the complete truth and nothing but the truth.
I guess that mantra stuck in my head. I simply told the tribunal
that I didn't really want that apartment anyway because Australia
is more beautiful. At that very moment, the jurists stopped speaking English
and reverted to German.
In fact, both Switzerland and Australia have some amazing geographic
and cultural features and I think we were just unlucky with this
particular landlady from the SVP senioren (far right seniors) cabal.
Far right dictator or eccentric old lady?
While this landlady was definitely a member of the far right party,
her behavior was rather foolish and I don't think every member of the
far right party behaves like this. Many of the people in the far right
party own small businesses and they don't want to start silly disputes
with their customers and tourists over things like a missing towel.
In this case, I suspect the propaganda of the far right party
has become mixed up with the aging process and contributed to
behavior that is erratic.
Most political parties and religions try to exploit the insecurities
of little old ladies like this in the hope little old ladies
will leave bequests to the party or the religion in question.
With that in mind, I don't blame the landlady alone for the pain
my family experienced in Zurich.
Google and Debian forcing the harassment verdict into the spotlight
While we had to collect a lot of evidence at the time of the dispute,
I never imagined publishing this case on my blog.
The only reason I am publishing this is because of vague rumors
about a harassment case being distributed on the web sites of Debian,
the World Intellectual Property Organization (WIPO) in Geneva and
some other web sites.
I don't want to encourage cat enthusiasts to seek revenge against
this little old lady. If she is still alive today, and I haven't
even bothered to check, she would be well into her eighties and there
would be no benefit whatsoever from harassing her.
The case was resolved with a cash settlement of CHF 10,000, equivalent
to EUR 10,500 or USD 10,000.
The cats were transported in a box to a new home:
Here is the judgment in German. We've redacted parts of it
to avoid identifying anybody. Ultimately, this was another case
of a woman instigating harassment, a lot like Baby Reindeer:
Chris Lamb and Molly de Blanc violated Swiss privacy
Soon after the harassment case was finished, it was Chris Lamb
and Molly de Blanc who started a gossip campaign.
Some of these women spreading rumors in the free software
community are particularly vicious.
One of the cats, Floe, died shortly after the relocation.
de Blanc then showed up at FOSDEM in Brussels with her
infamous speech about
putting cats behind bars:
de Blanc's behavior was a horrible act of trolling after the
death of our beloved cat.
Carla and I did not choose to make the harassment verdict
public. We didn't have any vendetta with that little old lady. We just
wanted to get on with our lives.
The far right landlady paid the compensation money on time. She
has a right to get on with her life too. She is well into her
eighties now and Google is violating her privacy
with the ongoing gossip about harassment.
The ten thousand Swiss francs we received is less than half
the cost of the handbag that Oprah Winfrey wanted to see in
Bahnhofstrasse, Zurich.
What we see is a range of women, both the landlady and Molly de Blanc,
meddling in people's lives. Women and female cats
are victims of these stalkers but the stalkers are women too.
Role Based Access Control (RBAC) as defined by NIST is based on the concept of global roles. Global, in this case, means the scope of the application. So if you have the role of ADMIN, and you are in a globally scoped RBAC based application, that role applies to all APIs and resources within the program.
OpenStack was written assuming that the ADMIN role was a global role. But then it was implemented as a non-global role. It was implemented as a role scoped to a tenant. The term tenant was the original (and I would argue, better) term for what was later called Project, and then again expanded to Domains as well.
A project, or a domain, or a tenant, or a namespace, is a way of sub-setting the resources in a system. Each resource is explicitly labeled with exactly one such scope. When a user attempts to interact with that resource, the system checks the roles associated with the user's account, and the rules will deny or allow the requested access.
However, the Nova project explicitly continued to use ADMIN in a global manner, and typically reserved it for sensitive operations. Keystone, on the other hand, assumed that ADMIN was a scoped role, and required that role to perform operations that were limited to a tenant. It is this disconnect that led to the longevity of bug 968696.
So… is there any problem with continuing to keep ADMIN as a global role, even though it is assigned locally? Yes and no. The "no" comes from the fact that you can lock down all scoped operations to roles other than ADMIN; OpenStack is going to use the term Manager instead. For example, a user with the manager role on a group will be able to add and remove people from that group. Thus, if you make it so no one actually needs the ADMIN role to perform day-to-day operations, and thus ADMIN is never required to be tied to a specific scope, you can make a functioning system.
To continue the "no" from above, you have to make sure that role assignments cannot lead to an elevation of privileges. Thus, if the manager can assign roles to a user, they CANNOT be allowed to assign the ADMIN role to that user… unless they themselves have ADMIN. Put more strictly, a user cannot be allowed to delegate any role that they themselves do not have. However, this means that a user needs to be explicitly assigned all roles that they can assign, or you need to be able to infer roles from the assigned roles. Keystone has role inference rules, and these are sufficient, if set up, to enforce this scheme. We have also hard-coded a check that no role can imply the ADMIN role, which is essential to preventing elevation of privileges. Keystone also performs inference cycle checks, so that a lower rule cannot point to a higher rule that points back to itself. Thus, if ADMIN directly or indirectly infers all other roles, you could never make a rule that then infers ADMIN.
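The two delegation rules above can be sketched in a few lines of Python. This is a hypothetical model, not Keystone's actual code; the role names and the inference table are illustrative only:

```python
# Sketch of the two delegation rules described above:
#  1. a user may only delegate roles they themselves (effectively) hold
#  2. no role may imply ADMIN, so inference can never escalate to it

# role -> set of roles it directly implies (hypothetical inference rules;
# note that nothing implies "admin", mirroring the hard-coded check)
IMPLIES = {
    "admin": {"manager"},
    "manager": {"member"},
    "member": set(),
}

def expand(roles):
    """Return the transitive closure of a set of roles under IMPLIES."""
    seen = set()
    stack = list(roles)
    while stack:
        role = stack.pop()
        if role not in seen:
            seen.add(role)
            stack.extend(IMPLIES.get(role, ()))
    return seen

def can_assign(assigner_roles, target_role):
    """Decide whether a holder of assigner_roles may delegate target_role."""
    # Rule 2: nothing may grant admin except admin itself.
    if target_role == "admin" and "admin" not in assigner_roles:
        return False
    # Rule 1: you can only hand out roles you effectively hold.
    return target_role in expand(assigner_roles)

# A manager can hand out member (implied), but never admin:
print(can_assign({"manager"}, "member"))  # True
print(can_assign({"manager"}, "admin"))   # False
```

Because the inference table never points back at "admin", the closure of any non-admin role set can never contain it, which is exactly the escalation guarantee described above.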
With those two pre-conditions in place, you could make a system that allows ADMIN to continue to function globally even when assigned to a scope. However, those rules need to be enforced continually in the future. The problem would then come if some service creates or modifies an API that requires a scoped ADMIN to perform it. And, since RBAC enforcement is local to each server, this is not only possible, but likely based on history. If this happens, people will require a scoped ADMIN role, and in doing so, will get the unscoped power that comes with it. And, in the cases where there is automation and delegation involved (Heat, Trusts, etc.), there are going to be cases where these delegated credentials are stored on disk.
The current OpenStack approach is called Secure RBAC. That very term should raise alarms: RBAC by itself needs to be secure, and the need to put an additional qualifier on it indicates a problem. Changing all the APIs that required ADMIN to now require manager, where Keystone treated it as a scoped role, opens up a huge security issue in all of the installations where ADMIN is assigned out. Perhaps this is not a real problem, as the operators were already forced to limit it due to Nova. But there is a potential for abuse there. People that could not do role assignments on other domains and projects can do so now. As Keystone core, I would not have accepted this change.
Why does OpenStack insist on using ADMIN as a global role? Momentum. Inertia. It has always been done that way, and the users have built their workflows around the existing implementation. The previous attempt to replace project-scoped roles with a different scoping mechanism (system-scoped roles) met with a thunderous NO from the operators. That was after a quieter 'no' to my own approach based on admin projects. They found admin projects confusing, and system scopes seemed more intuitive. I agree they were, but they ignored the inertia that the change needed to overcome, and system scope comes with integration issues of its own.
I'll write up why I think the admin-project approach is better in a follow-on article. However, I think the current SRBAC effort to make ADMIN act as a global-only role is a necessary step anyway. Distinguishing between global and scoped privileges is an essential hardening step. Making more fine-grained roles that have limited scopes, and using them consistently across the OpenStack services, is proper hardening. Locking down the role-assignment delegation rules in Keystone is essential. If all that is done, the addition of is_admin_project becomes an additional safety check, part of a defense in depth, and not the make-or-break it was when I originally wrote up the proposal.
The damage is already done. ADMIN is global, and has proliferated. That is why most people do not expose their OpenStack APIs to their end users, and instead use things like CloudForms as a front end to it. Or they write custom policy.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick overview of what we did, look at the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 14th October – 18th October 2024
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Version 4.8.1 was released recently. As you could guess from the version number change, it is primarily a bug fix release, but some minor features also slipped in. From this blog, you can learn what changed in syslog-ng 4.8.1 and where you can get its latest stable version.
there is a new destination for Elasticsearch data stream
the key fingerprint of the peer is now available as a syslog-ng macro
you can now control what happens if there is a parsing error in the syslog-parser
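As an illustration, the new Elasticsearch data stream destination can be configured roughly like this. This is only a sketch: the URL, credentials and source name are placeholders, and the exact option set should be verified against the syslog-ng documentation:

```
destination d_elastic {
  # elasticsearch-datastream() is the destination added in the 4.8 series;
  # url(), user() and password() values below are placeholders
  elasticsearch-datastream(
    url("https://elastic.example.com:9200")
    user("elastic")
    password("secret")
  );
};

log {
  source(s_local);   # s_local is an assumed, previously defined source
  destination(d_elastic);
};
```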
However, there were many more changes under the hood. The list of open issues on GitHub became a lot shorter. Many bugs were fixed, including some hard-to-debug crashes that occurred if you used a space in the wrong place in the syslog-ng configuration.
One of the original goals of syslog-ng was to support many different platforms. Over the years, many commercial UNIX variants have disappeared. Recently, the main target of syslog-ng development became Linux on x86_64, as the majority of our users install syslog-ng on this platform. Some of the new features could only be compiled on the very latest Linux versions.
Version 4.8.1 brought not only stability fixes: going back to the roots, it also improved platform support a lot. Many syslog-ng features are now also available on older OS releases, and compile not just on Linux, but on macOS and FreeBSD as well. We also received help from external contributors in this: syslog-ng is now available in MacPorts again with a much more extended feature set. You can read more details on this topic at https://www.syslog-ng.com/community/b/blog/posts/huge-improvements-for-syslog-ng-in-macports.
For many years, the main development platform for syslog-ng was Debian Testing, as only this OS included all the possible dependencies of syslog-ng. Using a rolling Linux distro is good for testing but can cause unexpected problems at the worst moments (like the middle of a release process). We switched to a stable OS as a base for our development and release containers: https://www.syslog-ng.com/community/b/blog/posts/we-are-switching-syslog-ng-containers-from-debian-testing-to-stable The nightly containers were based on Debian Stable for the past couple of weeks, and with version 4.8.1 of syslog-ng, the release container is also based on Debian Stable.
Before going back to regular feature development, we keep working a bit more on stabilizing the syslog-ng code base and enhancing platform support.
Where to get syslog-ng 4.8.1
There are many ways to get syslog-ng. Even if the chances are slim, you might want to check whether the latest syslog-ng is already available for your OS: https://repology.org/project/syslog-ng/versions. While most larger Linux distributions lag behind, Alpine Linux, Homebrew, and a number of smaller Linux distributions already have syslog-ng 4.8.1.
Version 4.8.1 of syslog-ng is supposed to be our most stable release with the best platform support in many years. Still, if you run into any problems, let us know: https://github.com/syslog-ng/syslog-ng/issues
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Hello testers,
our team has worked with our largest customers in order to better define what the
Managed Hosting subscription is and how it can bring more value beyond the
Kiwi TCMS application itself. You can read about the details below.
What is Managed Hosting by Kiwi TCMS
This is our top-tier of support services, where the Kiwi TCMS team leverages
our existing experience of hosting and running the Kiwi TCMS application
in production. It is more about the batteries included which make
overall operations easier, rather than specific software features.
Who is this subscription for
Managed Hosting is suitable for large organizations which consider
their Kiwi TCMS instance to be a mission critical piece of infrastructure.
These are typically organizations with hundreds or thousands of testers
which have more requirements towards security and performance.
What do you get
Everything from lower tier subscription plans!
Most notably you get access to all versions of
community & enterprise releases and a SaaS namespace which could be useful for in-house
development and experimentation with Kiwi TCMS. The SaaS version
(or a self-deployed enterprise version) can also be used as a sandbox instance to exercise
backwards compatibility testing against the latest version of Kiwi TCMS before
we upgrade your designated production instance!
What do all of the individual items mean
1x Kiwi TCMS hosted in AWS: we've been running Kiwi TCMS in production since 2017 without
major incidents so far! This represents a lot of application- and operations-specific experience
which allows us to run a Kiwi TCMS instance securely and efficiently for you.
Managed Hosting frees up
your DevOps team from figuring it all out and lets them work on higher priority items.
Under this subscription you may choose one of the
Amazon Web Services regions
if you have team members concentrated in a specific geographic area. There is no guarantee about the
actual underlying technology, e.g. EC2, ECS, Lightsail or other - this is up to us!
IMPORTANT: Kiwi TCMS would adjust application size in order to meet your performance requirements
within reason. The total cost of all consumed cloud resources should still be covered by your
monthly payment. In extreme scenarios we would ask you to purchase a higher quantity
of the same subscription.
Fully isolated instance: means exactly that - your database, web application and any additional
services (e.g. a Redis cache) will be completely isolated from resources provisioned for other customers
due to security concerns.
As part of the Managed Hosting subscription you get an unlimited storage quota for
uploaded files and attachments. We may work with you to establish a data retention period if necessary.
Email delivery via Amazon SES: depending on the amount of testing you perform, Kiwi TCMS may be
sending lots of emails. Throughout the years we've observed that SMTP connections sometimes fail,
resulting in unreliable service, or email messages get marked as spam, hurting the sender's reputation.
This is also an area which requires prior configuration and does not work out of the box.
Managed Hosting deployments use Amazon SES for email delivery which has the added benefit of
automatically managing blacklists when a delivery fails or is marked as SPAM. As part of
this subscription you will have to authorize one of your email addresses to be used with SES!
Alternatively we may use an email address on one of our own domains.
DNS & SSL management: correct DNS and SSL configuration is vital for the
so called multi-tenant feature in Kiwi TCMS. This is the feature where you can create
unlimited namespaces of the type team-1.tcms.example.com and product-2.tcms.example.com
via the Kiwi TCMS web interface and have them available immediately.
Misconfigured DNS and/or misconfigured or expired SSL certificates are something that
happens regularly and leads to sub-optimal performance. With Managed Hosting we're going
to be managing all of this in the background.
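For the curious, a multi-tenant setup like the one described above usually boils down to a wildcard DNS record (plus a wildcard or per-tenant SSL certificate), so that new namespaces resolve immediately. The zone fragment below is purely illustrative, not our actual configuration:

```
; hypothetical zone-file fragment
; every tenant subdomain, e.g. team-1.tcms.example.com or
; product-2.tcms.example.com, resolves to the same instance
; without any per-tenant DNS changes
*.tcms.example.com.   300   IN   A   203.0.113.10
```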
Full application admin via web: as a customer you get the super-user account defined
in the Kiwi TCMS application, and we promise that Kiwi TCMS staff will not log in, to minimize
the possibility of incidents. In any case we treat all of the data stored in a
Managed Hosting instance as confidential.
Can override app/host settings: most of Kiwi TCMS' settings are not exposed via the web
interface and some customers find it cumbersome to override them. This is unfortunately
how the underlying application framework is designed to work. With a Managed Hosting
subscription our team will be taking care of this for you. This also extends to settings
of the underlying host system when possible.
Kiwi TCMS upgrade upon request: currently an application upgrade involves human
interaction and is not something that can be automated as an unattended process.
Upgrading may also have implications towards backwards compatibility with 3rd
party systems and in-house software. That's why we would upgrade your instances
only after an explicit request and strive to keep downtime to a minimum.
Regular security updates: relates to the underlying host OS (where applicable)
and web server configuration for Kiwi TCMS. With a Managed Hosting subscription
our team will be keeping track of this for you. The underlying Postgres database
is not exposed to the Internet; however, we try to keep it up-to-date too!
Your InfoSec team is welcome to submit suggestions for improvements and we would
happily implement them where possible.
Access to encrypted backups: our team performs regular backups for every Kiwi TCMS
instance within our care. All backups are encrypted using the popular open source tool
restic. This includes database and file uploads.
As part of the Managed Hosting subscription we will work with your IT department and
provide you with shared access to these files in case you would like to keep your own
disaster recovery copy and/or to provision staging Kiwi TCMS instances with production like
data. Our team is open to further collaboration in this area!
Extended support: as part of the Managed Hosting subscription you get more support coverage:
07:00-22:00 UTC, Monday-Sunday. That is 3 extra hours per day plus coverage over the weekends,
and a video conference option when necessary in order to resolve requests faster.
Mid-term plans
Hosting on Red Hat Enterprise Linux: in some cases we are running containers directly
on a bare-metal or virtualized machine. We are exploring the possibility of using
Red Hat Enterprise Linux throughout the entire hosting stack and tying this with
Red Hat's existing management infrastructure without adding extra charges for our customers.
Red Hat Enterprise Linux package upgrades: will apply to both host OS (where applicable)
and the kiwitcms/enterprise container application which is already built on top of
Red Hat UBI 9
container image.
Load balanced deployment: running the Kiwi TCMS application in a load balanced
environment for customers with high performance requirements.
Access to monitoring tools: we're exploring how to securely provide access
for each customer to our existing monitoring tools and/or implement new ones where needed.
This will allow your DevOps team to scrutinize how well we are doing and
provide us with valuable feedback. Let us know what you would be interested to have access to!
Long-term plans
Security certifications via NDA: our vision is to be able to securely share existing and
future security related certifications with our Managed Hosting customers under a
non-disclosure agreement. Ideally we would enroll provisioned instances into a penetration testing
program which will be provided by a 3rd party vendor. Let us know if you have specific suggestions here.
24/7 support: the first step here will be to refactor our existing support program
and migrate to a ticket management system. Ideally such a system will be open source too.
At a later date our goal is to have 24x7 coverage in order to minimize response times!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
Here’s the simplest example to deploy a containerized Next application with Kamal.
What’s Kamal
Kamal is a new tool from 37signals for deploying web applications to bare metal and cloud VMs. It comes with zero-downtime deploys, rolling restarts, asset bridging, remote builds, and more.
Kamal needs SSH configured and Docker installed to run. You also need to create a cloud VM somewhere like Hetzner or Digital Ocean.
SSH configuration
Linux and macOS should come with SSH installed, but you'll need a new key pair for the server:
$ ssh-keygen -t ecdsa -b 521 -a 100 -C "admin@example.com"
This will prompt you for the key name and will save the keys to ~/.ssh/ by default.
Add the private key to the SSH agent:
$ ssh-add ~/.ssh/[KEY]
Host provisioning
Create a cloud VM on your favourite provider, choose your preferred Linux operating system (e.g. Ubuntu 24 LTS), and provision the server with your public key from the previous step.
Note the host public IP address.
Once the cloud VM is ready, you can recheck if SSH works by running:
$ ssh root@[IP_ADDRESS]
Docker configuration
Install Docker locally if you don’t have it. If you are on macOS you can install Docker Desktop or OrbStack.
Then sign up for a Docker registry such as Docker Hub and create a private repository named after your application, like my-next-app.
Create an access token and note it down.
Dockerfile
You’ll need to write a Dockerfile for your Next application. A very basic one can look like the following:
FROM node:20 AS base
WORKDIR /app
# RUN npm i -g pnpm
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:20-alpine3.19 AS release
WORKDIR /app
# RUN npm i -g pnpm
COPY --from=base /app/node_modules ./node_modules
COPY --from=base /app/package.json ./package.json
COPY --from=base /app/.next ./.next
EXPOSE 80
CMD ["npm", "start"]
If you use pnpm you’ll need to install it (the commented-out lines) and swap in the pnpm lockfile. Also note that we are exposing port 80. Commit the Dockerfile to your source control.
To start Next we’ll provide the -p 80 argument to next start in package.json:
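A scripts entry along these lines would do it (a sketch assuming a standard Next.js package.json, not necessarily the post’s exact file):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start -p 80"
  }
}
```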
You likely don’t have Ruby around, so install Kamal as a Docker image by creating a command alias.
On macOS for $HOME/.zshrc:
$ alias kamal='docker run -it --rm -v "${PWD}:/workdir" -v "/run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock" -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/basecamp/kamal:v2.2.1'
On Linux for $HOME/.bashrc:
$ alias kamal='docker run -it --rm -v "${PWD}:/workdir" -v "${SSH_AUTH_SOCK}:/ssh-agent" -v /var/run/docker.sock:/var/run/docker.sock -e "SSH_AUTH_SOCK=/ssh-agent" ghcr.io/basecamp/kamal:v2.2.1'
Kamal configuration
Now open a new terminal window and generate the basic Kamal files with kamal init:
$ kamal init
Open .kamal/secrets and provide the registry token as password:
KAMAL_REGISTRY_PASSWORD="dckr_pat..."
And open config/deploy.yml and provide a starting configuration:
# Name of your application. Used to uniquely configure containers.
service: my-next-app

# Name of the container image.
image: [REGISTRY_USER]/my-next-app

# Deploy to these servers.
servers:
  web:
    - [PUBLIC_IP_ADDRESS]

# Enable SSL auto certification via Let's Encrypt (and allow for multiple apps on one server).
# If using something like Cloudflare, it is recommended to set encryption mode
# in Cloudflare's SSL/TLS setting to "Full" to enable end-to-end encryption.
proxy:
  ssl: true
  host: [DOMAIN_NAME]

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: [REGISTRY_USER]

  # Always use an access token rather than real password (pulled from .kamal/secrets).
  password:
    - KAMAL_REGISTRY_PASSWORD

# Configure builder setup.
builder:
  arch: amd64
If you have a domain name, provide it as host and set ssl to true; otherwise keep ssl false.
Deploy
Now you can deploy the app by running:
$ kamal setup
Kamal should now be able to do its thing: log in to the Docker registry, build the application, run Kamal Proxy on the server, and perform all of the other steps required to run your application.
Any subsequent deploys can be then done using kamal deploy.
It’s once again time for the outrage generators on social media to ask if SBOMs have any value. This seems to happen a few times a year. Probably lines up with the pent up excitement while we wait for the McRib to return. I could dig up a few examples of these articles but I can’t be bothered, and it doesn’t matter. I’d rather spend my time searching for a McRib … I mean, writing this blog post.
I wanted to write down some thoughts, I’m sure it won’t change the constant complaining about how SBOMs are completely useless, and how important it is to tell everyone that, because it’s very normal to remind people about useless things and how useless they are. Nothing really enforces true uselessness like having to spend time explaining how useless something is.
Let’s answer the big question first. Are SBOMs useless? It depends really. If I’m a farmer waiting for my big McRib payoff, they might be. Although with all the right to repair and tractor hacking and all that, it’s possible they’re not totally useless. But like anything, SBOMs are good at certain things, but they’re not good at everything. It’s always good to temper expectations and be reasonable. And most importantly, if there’s something you don’t really like, you can just not use it. If something does suck, just ignore it. Your time has value you know.
OK, so that’s a bad start I suppose. Here’s maybe a better start. Much like rats in New York City. You’re never more than 6 feet from an SBOM. Wait, is that actually better? We started with McRib and now we’re talking about rats … the topic is SBOMs. But really, they drive a lot of modern software, but not quite in ways you realize. That package manager you just used to install a JSON library? That’s an SBOM. You run a vulnerability scanner? Also SBOM! Your car’s entertainment system, it probably lists all the open source it uses in a well hidden menu. Let’s call it a bill of materials, for software.
SBOMs are really just lists of software. I bet you could make a list of things that use lists of software. Package managers, app stores, top ten lists. It’s a pretty common idea.
“HOLD THE #%$& ON!” we hear from the back row, filled with manufactured outrage: “THAT’S NOT WHAT WE MEAN AT ALL! We mean systems that manage SBOMs, in SPDX and/or CycloneDX format, can you show me that? That’s what I want to see. Let’s see it! Oh you can’t, WE KNEW IT, sweet sweet victory, we have proven the uselessness beyond a reasonable doubt!” Right about here I’d like to remind you your time has value, and it’s OK to ask for a hug sometimes. I mean, not from me though.
It’s also probably worth explaining such a system will never exist because SBOMs aren’t useful by themselves. SBOMs are tools used to solve other problems where having a list of software is the solution, or at least part of the solution.
You should compare this line of reasoning to going phone shopping and loudly complaining that a company making screws is failing at making devices that can look at memes on the internet. I mean, there are screws in the phones, but if you are measuring screws against their ability to look up memes on the internet, you will be very disappointed. Much like an SBOM, a screw is but a gear in a much larger machine (see what I did there).
The thing is, SBOMs aren’t something that should exist by themselves. They’re a tool, a part, in a larger system. The two SBOM formats that get much of the attention and complaints are SPDX and CycloneDX. But those are just two standards. There are formal definitions of SBOMs that tell us what sort of information they should contain; SPDX and CycloneDX meet those requirements, but it’s foolish to use such a narrow definition by itself. Remember, it’s a tool. Standards are important for interoperability, but they aren’t necessarily important for solving problems.
Demanding to be shown a successful SBOM tool is the fallacy in this whole argument. Lots of tools use and create lists of software. Some of those can import and export SPDX or CycloneDX. But those formats aren’t necessarily useful by themselves. It’s time for an example.
Let’s say I have a vulnerability scanner. There are many vulnerability scanners in the world, commercial and open source. They have various levels of quality, some are great, some are horrid. Now, in order to scan something for vulnerabilities, I need to know what the software on the system is. This seems like an obvious first step. So the tool has some sort of software identification system built into it. You know what else that identification system can do? It can output the list of software it found, maybe in SPDX or CycloneDX format. More than one vulnerability scanner figured this out, and now they output SBOMs.
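To make that concrete, here’s a toy sketch of that last step: turning a scanner’s package inventory into a minimal CycloneDX-style list. This is not any real scanner’s code, the package names are made up, and a real CycloneDX document has more required fields.

```python
import json

# Hypothetical inventory a scanner might have built while identifying software.
found_packages = [("openssl", "3.0.7"), ("zlib", "1.2.13")]

def to_cyclonedx(packages):
    """Export a (name, version) list as a minimal CycloneDX-style dict."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.5",
        "components": [
            {"type": "library", "name": name, "version": version}
            for name, version in packages
        ],
    }

print(json.dumps(to_cyclonedx(found_packages), indent=2))
```

The point is the same one the paragraph makes: the SBOM output is a nearly free byproduct of work the scanner already does.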
At this point the cynical of us would proclaim that while this is technically correct, you can only call it an SBOM if it’s generated in the correct region of France, otherwise it’s just a sparkling list. The reality of it all is complaining about SBOMs gets some attention, but it doesn’t really matter. The ideas behind SBOMs are used everywhere, and formats like SPDX and CycloneDX are just nice ways to communicate a list of software. If you can solve your problem with a list of software, that’s great, do that. Work in this space is progressing wonderfully and there are more tools to help than we’ve ever had before.
And the next time you see someone who seems unreasonably angry about SBOMs, maybe ask them if they need a hug. If you’re not a hugger, ask if they need a McRib.
Josh and Kurt talk about the current WordPress / WP Engine mess, including the forking of Advanced Custom Fields, which is certainly a supply chain attack. This whole saga is weird and filled with chaos and stupidity. We have no idea how it will end, but we do know that the blog platform you use shouldn’t be this exciting. The bad sort of exciting.
Happy October folks! In this post you’ll find some information on our F41 and F42 releases, plus a few lines on a couple of topics happening around the project lately. Read on to find out more!
Fedora Linux 41
We are nearing the end of the Fedora Linux 41 release cycle! Our Go/No-Go meeting will happen next Thursday 17th October @ 1700 UTC. To join, you can find the information on the fedocal calendar entry.
There are still some proposed blockers for F41 too, so if you can help resolve some of these bugs, please check out our blocker bugs app, or for a more condensed summary, please read our blocker bug report email.
Fedora Linux 42
Fedora Linux 42 is currently in development, and for the most recent set of changes planned in this release, please refer to our change set page. Our release schedule is also live, and a reminder of some key dates is below:
December 18th – Changes requiring infrastructure changes
December 24th – Changes requiring mass rebuild
December 24th – System Wide changes
January 14th – Self Contained changes
February 4th – Changes need to be Testable
February 4th – Branching
February 18th – Changes need to be Complete
The changes that are currently in our community feedback period are:
For all the latest on bootc, check out the bootc post on discourse!
Our Git Forge evaluation is taking shape. We have an instance of both Forgejo and GitLab CE available to try out in the Communishift app. Details of how to get access can be found on this discussion thread, and we are encouraging folks to try out each instance and report their feedback, preferably against the user stories collected, on this discussion thread. Directly linked to the git forge evaluation, following Wednesday’s council meeting, we have decided to extend the deadline for the report comparing Forgejo and GitLab CE to December 5th, to give our QA team, and other teams impacted by the F41 release, time to properly validate their use cases against each forge option. This means the council decision may not happen until early January, but we do hope to decide as early as possible so the CPE team can create a migration plan for affected workflows that can be shared and agreed in good time, keeping disruption to the project as a whole to a minimum.
There is still time to share feedback on the proposal to amend the Editions Promotion policy. You can read the current policy, and what the council would like to change, in this discussion thread.
FOSDEM 2025 returns on Saturday 1st and Sunday 2nd February! The call for devrooms has passed, but the call for stands is still open until November 7th.
A new episode of The Fedora Podcast is now available! Details of episode 38 and how to listen can be found on the discussion post and by visiting the audio post for the latest, and all episodes.
Update Django from 5.0.8 to 5.0.9, addressing multiple potential security
vulnerabilities. These do not appear to affect Kiwi TCMS directly; however,
this is not 100% guaranteed
Improvements
Update markdown from 3.6 to 3.7
Update psycopg from 3.2.1 to 3.2.3
Update pygithub from 2.3.0 to 2.4.0
Update python-bugzilla from 3.2.0 to 3.3.0
Update python-gitlab from 4.9.0 to 4.13.0
Update tzdata from 2024.1 to 2024.2
Update uwsgi from 2.0.26 to 2.0.27
Update node_modules/pdfmake from 0.2.10 to 0.2.14
Specify large_client_header_buffers for NGINX proxy configuration example
to match the configuration of Kiwi TCMS
Set uWSGI configuration max-requests to 1024
Settings
Explicitly set DATA_UPLOAD_MAX_NUMBER_FIELDS to 1024, default is 1000
Bug fixes
Increase uWSGI configuration buffer-size to 20k to allow the creation of
a TestRun with 1000 test cases! Fixes
Issue #3387,
Issue #3800
Refactoring and testing
Update black from 24.8.0 to 24.10.0
Update pylint-django from 2.5.5 to 2.6.1
Update selenium from 4.23.1 to 4.25.0
Update sphinx from 8.0.2 to 8.1.1
Update node_modules/webpack from 5.93.0 to 5.95.0
Update node_modules/eslint from 8.57.0 to 8.57.1
Update node_modules/eslint-plugin-import from 2.29.1 to 2.31.0
Assert that password reset email contains username reminder
Update translation source strings
Kiwi TCMS Enterprise v13.6-mt
Based on Kiwi TCMS v13.6
Update django-ses from 4.1.0 to 4.2.0
Update kiwitcms-tenants from 3.1.0 to 3.2.1
Update sentry-sdk from 2.12.0 to 2.16.0
Update value for Content-Security-Policy header to match
upstream Kiwi TCMS
Private container images
quay.io/kiwitcms/version 13.6 (aarch64) 14f4599db480 12 Oct 2024 705MB
quay.io/kiwitcms/version 13.6 (x86_64) 2d925723ab4e 12 Oct 2024 693MB
quay.io/kiwitcms/enterprise 13.6-mt (aarch64) 27a5de45d8dc 12 Oct 2024 1.07GB
quay.io/kiwitcms/enterprise 13.6-mt (x86_64) f2ba176b5e0f 12 Oct 2024 1.05GB
Y and X are variables, but 5 and 19 are numbers. The numbers don’t change.
In Algebra, we use letters as variables, and think of digit-based numbers as the actual numbers. Here are the first four chords of a standard jazz tune:
Fm7 | Bb-7 | Eb7 | Abmaj7 |
Suppose your singer can’t reach the low notes, and wants to transpose it up a full step. You end up with:
Gm7 | C-7 | F7 | Bbmaj7 |
This works because the relationship between the chords is the same in both keys. To represent this relationship, we represent the chord using roman numerals. If we assume the first version is in the key of Ab, and the second in the key of Bb, we say the last chord of the sequence is the root, and refer to it using the roman numeral I, and will call it “the One”. We use lowercase letters to refer to minor chords. So the sequence becomes:
vi-7 | ii-7 | V7 | Imaj7
Thus, the letter version is the “actual” chords, whereas the numeric version is the variable.
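The transposition above is mechanical enough to sketch in code. This toy example (flat spellings only, to keep it short) shifts each chord’s root up by a number of semitones while leaving the chord quality alone:

```python
# Chromatic scale using flat spellings only (a simplification).
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(chord, semitones):
    # Split the chord into its root (letter plus optional flat) and quality.
    root, quality = chord[0], chord[1:]
    if quality.startswith("b"):
        root, quality = root + "b", quality[1:]
    new_root = NOTES[(NOTES.index(root) + semitones) % 12]
    return new_root + quality

# A whole step is two semitones.
print([transpose(c, 2) for c in ["Fm7", "Bb-7", "Eb7", "Abmaj7"]])
```

Running it reproduces the singer-friendly version: Gm7, C-7, F7, Bbmaj7. The chord qualities (the “variables” in the Roman numeral sense) are untouched; only the concrete roots change.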
We’re trying to increase the fwupd coverage score, so we can mercilessly refactor and improve code upstream without risks of regressions. To do this we run thousands of unit tests for each part of the libfwupd public API and libfwupdplugin private API. This gets us a long way, but what we really want to do is emulate the end-to-end firmware update of every real device we support.
It’s not trivial (or quick) connecting hundreds of devices to a specific CI machine, and so for some time we’ve supported recording USB device enumeration, re-plug, firmware write, re-re-plug and re-enumeration. For fwupd 2.0.0 we added support for all sysfs-based devices too, which allows us to emulate a real-world NVMe disk doing actual ioctl()s and read()s in every submitted CI job. We’re now going to ask vendors to record emulations of the firmware update for existing plugins so we can run those in CI too.
The device emulation docs are complicated and there are lots of things that the user can do wrong. What I really wanted was a “click, click, save-as, click” user experience that doesn’t need the command line. The tl;dr is that we’ve now added the needed async API in fwupd 2.0.1 (probably going to be released on Monday) and added the click, click UI to gnome-firmware:
There’s a slight niggle when the user starts recording the first “internal” device (e.g. a NVMe disk) that we need to ask the user to restart the daemon or the computer. This is because we can’t just hotplug the internal non-removable device, and need to “start recording” then “enumerate device(s)” rather than the other way around. Recording all the device enumeration isn’t free in CPU or RAM (and is possibly a security problem too), and so we don’t turn it on by default. All the emulation is also all controlled using polkit now, so you need the root password to do anything remotely interesting.
Some of the strings are a bit unhelpful, and some a bit clunky, so if you see anything that doesn’t look awesome or is hard to translate please tell us and we can fix it up. Of course, even better would be a merge request with a better string.
If you want to try it out there’s a COPR with all the right bits for Fedora 41. It might also work on Fedora 40 if you remove gnome-software. I’ll probably switch the Flathub build to 48.alpha when fwupd 2.0.1 is released too. Feedback welcome.
We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 7th October – 10th October 2024
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.
NOTE: There are currently internal changes happening in the CPE Team (we will see if the name even remains), which meant that last week’s update didn’t come out (there was also a login issue on the community blog, but that was just a coincidence) and changed the content (some sections are currently missing). We apologize for that and will see how the format looks in the future.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.
RPMs of PHP version 8.3.13RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.2.25RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
The packages are available for x86_64 and aarch64.
PHP version 8.1 is now in security mode only, so no more RC will be released.
This is the 124th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
NEWS
Version 4.8.0 of syslog-ng improves FreeBSD and MacOS support
Recently One Identity released version 4.8.0 of its open-source log management application. Learn about some of the new features and bug fixes: why upgrade to the latest syslog-ng version, not only on FreeBSD :-)
Why it is useful to set the version number in the syslog-ng configuration
The syslog-ng configuration starts with a version number declaration. Up until recently, if it was missing, syslog-ng did not start. With syslog-ng 4.8, this is changing. From this blog, you can learn why version information is useful, what workaround you can use if you do not want to edit your syslog-ng configuration on each update, and what changed in version 4.8.
We are switching syslog-ng containers from Debian Testing to Stable
For many years, the official syslog-ng container and development containers were based on Debian Testing. We are switching to Debian Stable now. Learn about the history and the reasons for the change now.
The Linux Kernel git repo has a spec file that builds the Kernel RPM. However, it does not build perf or any of the other userland tools. I want to build a perf RPM using the same code as is used to build the Kernel RPM.
Here are my debugging notes.
When building perf from the command line, you change to the Linux kernel directory and, after having configured and built the kernel, you can “simply” type make -C tools/perf. It will compile in place. There are a few variations of this, such as changing to the directory first, or calling the wrapped makefile in the tools/perf directory, Makefile.perf, to get more control over the build process.
However, merely attempting to do this from a stripped down RPM spec file does not work. Instead, it fails with:
LINK /root/rpmbuild/BUILD/linux/tools/perf/util/bpf_skel/.tmp/bootstrap/bpftool
/usr/bin/ld: /root/rpmbuild/BUILD/linux/tools/perf/util/bpf_skel/.tmp/bootstrap/main.o: relocation R_AARCH64_ADR_PREL_PG_HI21 against symbol `stderr@@GLIBC_2.17' which may bind externally can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: /root/rpmbuild/BUILD/linux/tools/perf/util/bpf_skel/.tmp/bootstrap/main.o(.text+0x18): unresolvable R_AARCH64_ADR_PREL_PG_HI21 relocation against symbol `stderr@@GLIBC_2.17'
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
I have tried various things with setting/remove CFLAGS and, while it changes the error, it is not in a way that moves me closer to a solution. For example, if I set the CFLAGS on the make command line like this:
%{make} CFLAGS="" -C tools/perf -f Makefile.perf
I get this error
fs/fs.c:20:10: fatal error: debug-internal.h: No such file or directory
20 | #include "debug-internal.h"
| ^~~~~~~~~~~~~~~~~~
The same error occurs if I populate the CFLAGS with the value used by the RPM spec file (reported higher up).
Going back to the original error, I notice that it looks like generated code: /root/rpmbuild/BUILD/linux/tools/perf/util/bpf_skel/.tmp/bootstrap/main.o
The skel and .tmp make me think this is generated from a script. The main.o aspect is also strange, as the main for the whole perf binary should be in linux/tools/perf/perf.c.
There is a whole chain to build the Berkeley Packet Filter (BPF) tools and link them into perf. It appears that this is the part that is failing. The main.o file linked above is built from the main.d file, which is just a list of other files.
Looking inside the log of the build from the Fedora spec file (which does produce a binary), I see that we end up with the same errors shown above. I do not know how that version of perf is built, since it appears to fail in the build.
After a lot more trial and error, I came up with the following line, which builds perf successfully. It uses the --static version, which should minimize the number of shared objects it requires.
With that building successfully, I can start removing the flags that disable libraries. However, if the library is not installed, it will report the absence and just skip it in the build. In order to have a reproducible build, I need to identify the proper set of packages to include and add them as BuildRequires. I think that, in order to document the decisions, I will also keep any NO_* flags for libraries that we explicitly are not including.
Here are the release notes from Cockpit 326, cockpit-podman 96, and cockpit-files 9:
cockpit/ws container: Connect to any Linux server
The quay.io/cockpit/ws container provides Cockpit’s web server in environments such as Kubernetes and Fedora CoreOS, connecting to a remote machine via ssh.
Now, the cockpit/ws container gained the ability to connect to any Linux server, even those without Cockpit installed. This provides a convenient way to perform basic administration tasks remotely. This is similar to the Cockpit Client flatpak (available since release 295), but can run anywhere an OCI container can run.
Currently, the following pages are available when connecting to a server without Cockpit: Overview, Metrics, Terminal, Accounts, and Networking. More pages will be added soon, after vetting them for this use.
cockpit/ws container: Support host-specific SSH keys
The cockpit/ws container now supports adding multiple SSH private keys.
In addition to the existing $COCKPIT_SSH_KEY_PATH environment variable, you can now set host-specific $COCKPIT_SSH_KEY_PATH_{HOSTNAME} variables, where {HOSTNAME} is the host name (in capital letters) used in the Connect to: field of the login page.
Thanks to benniekiss for contributing this feature!
Bridge supports PCP; cockpit-pcp package is now obsolete
The Python cockpit-bridge, introduced in Cockpit 294, now implements PCP functionality. As a result, the separate cockpit-pcp package is now obsolete.
Storage: Manage Stratis virtual filesystem sizes
Cockpit supports setting a default and maximum size limit for Stratis virtual filesystems, both during creation and when making adjustments.
Podman: pull images from registries without search API
Container creation now supports registries lacking a search API, including GitHub’s container registry (ghcr.io). You can also select a specific tag for images.
Files: basic keyboard shortcuts
Try it out
Cockpit 326, cockpit-podman 96, and cockpit-files 9 are available now:
One of the tasks when standardizing Terraform modules is having a README.md file for each module. Writing such a file for a Terraform module is time-consuming, especially when there are many modules. In this post we will use terraform-docs to automate generating the README.md file for Terraform modules […]
Josh and Kurt talk about the recent CUPS issue. The vulnerability itself wasn’t all that exciting, but the whole disclosure process was wild. There’s a lot to talk about, many things didn’t quite go as planned and it all leaked early. Let’s talk about why and what it all means.
Kamal 2 finally brings the most requested feature to reality and allows people to run multiple applications simultaneously on a single server. Here’s how.
The Kamal way
Kamal is an application-centric deploy tool rather than a small PaaS. And this hasn’t changed with the new version 2. But what does it even mean?
Let’s look at a typical config/deploy.yml to run a generic application:
# config/deploy.yml
service: [APP_NAME]
image: [DOCKER_REGISTRY]/[APP_NAME]

servers:
  web:
    - 165.22.71.211
  job:
    hosts:
      - 165.22.71.211
    cmd: bin/jobs

proxy:
  ssl: true
  host: [APP_DOMAIN]

registry:
  username: [DOCKER_REGISTRY]
  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  ...
As you can see, the configuration describes only one particular service. And this hasn’t changed. Applications still have their own configuration. The only thing that changed is the possibility to share their servers.
Kamal Proxy
Kamal 2 adds support for multiple apps with the new Kamal Proxy. The new proxy registers new deployments for services and handles their gapless switchover.
The only thing Kamal Proxy needs to know is the host (domain) of the service so it can route the traffic to the service web containers. This is done with the following Kamal configuration:
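A minimal sketch of that proxy section (the domain is a placeholder):

```yaml
proxy:
  ssl: true
  host: [APP_DOMAIN]
```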
The extra ssl option lets us also request automatic SSL/TLS certificates via Let’s Encrypt.
There will only be one instance of Kamal Proxy running on a given server, installed by the first deployed application.
Multiple apps
Let’s say we want to deploy three different applications on the same server for a local dealership: the main app, the API server, and their marketing website. Then we need three different configurations, one for each app.
Kamal cannot do redirects right now, so the automatic redirect from the www to the non-www variant has to be done on the application side. Also, any accessory that needs to expose an HTTP endpoint should actually be another app like these three.
To deploy them we would run kamal setup for each. The first one will install Docker, set up the kamal network, and make sure kamal-proxy is running. The other two would safely skip these steps and deploy to existing proxy.
dealership$ kamal setup
dealership-api$ kamal setup
marketing$ kamal setup
Debugging
If we want to check which apps Kamal Proxy should be running, we can do so on the server with kamal-proxy list:
$ ssh [USER]@[SERVER]
# docker exec kamal-proxy kamal-proxy list
Service Host Target State TLS
dealership dealership.com cde2433e86d6:80 running yes
dealership-api api.dealership.com 82361b53174f:80 running yes
...
We’ll get what hosts and revisions are running for each deployed service as well as their status.
This is not exposed directly to Kamal but we can fix it with an alias:
# config/deploy.yml
...
aliases:
  ...
  apps: server exec docker exec kamal-proxy kamal-proxy list
Now running kamal apps gives us a nice rundown of what’s running which is pretty sweet.
I tried other gaming-oriented Linux distributions and found them all to be… well, to be frank, bloated. I don’t need
Lutris preinstalled; I don’t use it. Just like I don’t need 20 other tools or launchers or managers installed, with
services running and draining my battery. Furthermore, I love the ability to simply “tab out” of my Steam gamescope session and
enjoy a full desktop environment.
And so I’ve come to the conclusion that the best thing I can do is simply start from scratch. I’ve got a little video showcasing what my install looks like; below is the minimum you’d need to take care of to get something similar.
Note that this is about standard rpm based Fedora - NOT Atomic Fedora (NOT Silverblue/Kinoite/etc)
fsync-kernel
Most of the tools necessary to achieve this are already there. Some devices, including the GPD Win 4, do benefit from enabling the fsync kernel copr repository. In order to use it you need to:
$ sudo dnf copr enable sentry/kernel-fsync
Edit /etc/yum.repos.d/fedora.repo and /etc/yum.repos.d/fedora-updates.repo
Add exclude=kernel* to the main “[fedora]” and “[fedora-updates]” categories respectively
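For illustration, each repo file would end up with a section along these lines (abbreviated; the real files contain more keys):

```ini
# /etc/yum.repos.d/fedora.repo (abbreviated illustration)
[fedora]
name=Fedora $releasever - $basearch
exclude=kernel*
```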
You should now be able to simply $ sudo dnf update --refresh to grab the fsync kernel and reboot. Verify with uname -a or rpm -qa | grep kernel to see what you’ve got.
Should the update above fail, for instance because Fedora has got a higher version, you can force it with $ sudo dnf distro-sync kernel*
hhd
HandHeld Daemon provides various functionality necessary to run your handheld seamlessly, such as TDP limits, both through the Decky integration as well as the desktop UI. Installing it is quite simple: enable the hhd-dev copr repository.
$ sudo dnf copr enable hhd-dev/hhd
Install all the things
Obviously we need steam and gamescope, both of which are already available, and we enabled the hhd repository above.
$ sudo dnf install steam gamescope hhd hhd-ui
Executing steam in gamescope
I’ve got a little .desktop file in my taskbar that I use to run it. Shove it wherever you like; tl;dr this is what I run:
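The exact file isn’t shown here, but a minimal sketch of such a launcher could look like the following; the gamescope flags and Steam arguments are assumptions you’d tune for your device:

```ini
# ~/.local/share/applications/steam-gamescope.desktop (hypothetical sketch)
[Desktop Entry]
Type=Application
Name=Steam (gamescope)
# Resolution and flags are examples only; -e enables gamescope's Steam integration.
Exec=gamescope -W 1280 -H 800 -f -e -- steam -steamos3 -gamepadui
```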
We don’t want login screens for a deck-like experience. Note that you still need to use your password for sudo, etc. Again, I’m on KDE so your configuration might vary a little bit if you’re not.
Settings -> Login Screen (SDDM) -> clicky the “behavior” button on top -> clicky the check box to automatically log in as your user, and your session.
Create the nopasswdlogin group and add yourself to it, and configure PAM appropriately - follow the Arch wiki for Passwordless login
Autostart gamescope
I use KDE, your configuration might differ. I simply first created an autostart application in the KDE settings (search for autostart) and then replaced it with a link to my existing .desktop file:
I replaced the default deck startup video with a single black frame .webm here, as I haven’t figured out a better way to get rid of it heh (-nointro cli argument doesn’t seem to work)
Today I tagged fwupd 2.0.0, which includes lots of new hardware support, a ton of bugfixes and more importantly a redesigned device prober and firmware loader that allows it to do some cool tricks. As this is a bigger-than-usual release I’ve written some more verbose releases notes below.
The first notable thing is that we’ve removed the requirement of GUsb in the daemon, and now use libusb directly. This allowed us to move the device emulation support from libgusb up into libfwupdplugin, which now means we can emulate devices created from sysfs too. This means that we can emulate end-to-end firmware updates on fake hidraw and nvme devices in CI just like we’ve been able to emulate using fake USB devices for some time. This increases the coverage of testing for every pull request, and makes sure that none of our “improvements” actually end up breaking firmware updates on some existing device.
The emulation code is actually pretty cool; every USB control request, ioctl(), read() (and everything in between) is recorded from a target device and saved to a JSON file with a unique per-request key for each stage of the update process. This is saved to a zip archive and is usually uploaded to the LVFS mirror and used in the device-tests in fwupd. It’s much easier than having a desk full of hardware, and because each emulation is just that, emulated, we don’t need to do the tens of thousands of 5ms sleeps in between device writes — which means most emulations take a few ms to load, decompress, write and verify. This means you can test [nearly] “every device we support” in just a few seconds of CI time.
Another nice change is the removal of GUdev as a dependency. GUdev is a nice GObject abstraction over libudev and then sd_device from systemd, but when you’re dealing with thousands of devices (that you’re poking in weird ways), and tens of thousands of device children and parents the “immutable device state” objects drift from reality and the abstraction layers really start to hurt. So instead of using GUdev we now listen to the netlink socket and parse those events into fwupd FuDevice objects, rather than having an abstract device with another abstract device being used as a data source. It has also allowed us to remove at least one layer of caching (that we had to work around in weird ways), and also reduce the memory requirement both at startup and at runtime at the expense of re-implementing the netlink parsing code. It also means we can easily start using ueventd, which makes it possible to run fwupd on Android. More on that another day!
The biggest change, and the feature that’s been requested the most by enterprise customers is the ability to “stream” firmware from archives into devices. What fwupdmgr used to do (and what 1_9_X still does) is:
Send the cabinet archive to the daemon as a file descriptor
The daemon then loads the input stream into memory (copy 1)
The memory blob is parsed as a cabinet archive, and the blocks-with-header are re-assembled into whole files (copy 2)
The payload is then typically chunked into pieces, with each chunk being allocated as a new blob (copy 3)
Each chunk is sent to the device being updated
This worked fine for a 32MB firmware payload — we allocate ~100MB of memory and then free it, no bother at all.
Where this fails is for one of two cases: huge firmware or underpowered machine — or in the pathological case, huge video conferencing camera firmware with an inexpensive Google Chromebook. In that example we might have a 1.5GB firmware file (it’s probably a custom Android image…) on a 4GB-of-RAM budget Chromebook. The running machine has a measly 1GB free system memory, and then fwupd immediately OOMs when just trying to parse the archive, let alone deploy the firmware.
So what can we do to reduce the number of in memory copies, or maybe even remove them all completely? There are two tricks that fwupd 2.0.x uses to load firmware now, and those two primitives we now use all over the source tree:
Partial Input Stream:
This models an input stream (which you can think of like a file descriptor) that is made up of a part of a different input stream at a specific offset. So if you have a base input stream of [123456789] you can build two partial input streams of, say, [234] and [789]. If you try and read() 5 bytes from the first partial stream you just get 3 bytes back. If you seek to offset 0x1 on the second partial input stream you get the two bytes of [89].
Composite Input Stream:
This models a different kind of input stream, which is made up of one or more partial input streams. In some cases there can be hundreds of partial streams making up one composite stream. So if you take the first two partial input streams defined a few lines before, and then add them to a composite input stream you get [234789] — and reading 8 bytes at offset 0x0 from that would give you what you expect.
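A minimal Python sketch of these two primitives — a model of the semantics only, since fwupd’s actual implementation is in C:

```python
import io

class PartialStream:
    """A window of `size` bytes into a base stream, starting at `offset`."""
    def __init__(self, base, offset, size):
        self.base, self.offset, self.size = base, offset, size

    def read_at(self, pos, count):
        # Reads are clamped to the window, so asking for 5 bytes of a
        # 3-byte partial stream returns only 3 bytes.
        count = max(0, min(count, self.size - pos))
        self.base.seek(self.offset + pos)
        return self.base.read(count)

class CompositeStream:
    """One logical stream made up of several partial streams, in order."""
    def __init__(self, parts):
        self.parts = parts

    def read_at(self, pos, count):
        out = b""
        for part in self.parts:
            if pos < part.size and count > 0:
                chunk = part.read_at(pos, count)
                out += chunk
                count -= len(chunk)
                pos = 0
            else:
                pos -= part.size
        return out

base = io.BytesIO(b"123456789")
p1 = PartialStream(base, 1, 3)         # [234]
p2 = PartialStream(base, 6, 3)         # [789]
assert p1.read_at(0, 5) == b"234"      # only 3 bytes come back
assert p2.read_at(1, 2) == b"89"       # seek to offset 0x1
composite = CompositeStream([p1, p2])  # [234789]
assert composite.read_at(0, 8) == b"234789"
```

No payload bytes are copied until read_at() is called, which mirrors how the daemon can defer reading the firmware until a chunk is actually sent to the device.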
This means the new way of processing firmware archives can be:
Send the cabinet archive to the daemon as a file descriptor
The daemon parses it as a cab archive header, and adds the data section of each block to a partial stream that references the base stream at a specific offset
The daemon “collects” all the partial streams into a composite stream for each file in the archive that spans multiple blocks
The payload is split into chunks, with each chunk actually being a partial stream of the composite file stream
Each chunk is read from the stream, and sent to the device being updated
Sooo…. We never actually read the firmware payload from the cabinet file descriptor until we actually send the chunk of payload to the hardware. This means we have to seek() all over the place, possibly many times for each chunk, but in the kernel a seek() is really just doing some pointer maths to a memory buffer and so it’s super quick — even faster in real time than the “simple” process we used in 1_9_X. The only caveat is that you have to use uncompressed cabinet archives (the default for the LVFS) — as using MSZIP decompression currently does need a single copy fallback.
This means we can deploy a 1.5GB firmware payload using an amazingly low 8MB of RSS, and using less CPU than copying 1.5GB of data around a few times. Which means, you can now deploy that huge firmware to that $3,000 meeting room camera from a $200 Chromebook — but also means we can do the same in RHEL for 5G mobile broadband radios on low-power, low-cost IoT hardware.
Making such huge changes to fwupd meant we could justify branching a new release, and because we bumped the major version it also made sense to remove all the deprecated API in libfwupd. All the changes are documented in the README file, but I’ve already sent patches for gnome-firmware, gnome-software and kde-discover to make the tiny changes needed for the library bump.
My plan for 2.0.x is to ship it in Flathub, and in Fedora 42 — but NOT Fedora 41, RHEL 9 or RHEL 10 just yet. There is a lot of new code that’s only had a little testing, and I fully expect to do a brown paperbag 2.0.1 release in a few days because we’ve managed to break some hardware for some vendor that I don’t own, or we don’t have emulations for. If you do see anything that’s weird, or have hardware that used to be detected, and now isn’t — please let us know.
A long over-due release which has accumulated a bunch of bugfixes but also some
fancy new features…read on!
As always, big thanks to everyone who reported issues and contributed to QCoro.
Your help is much appreciated!
QCoro::LazyTask<T>
The biggest new feature in this release is the brand-new QCoro::LazyTask<T>.
It’s a new return type that you can use for your coroutines. It differs from QCoro::Task<T>
in that, as the name suggests, the coroutine is evaluated lazily. What that means is when
you call a coroutine that returns LazyTask, it will return immediately without executing
the body of the coroutine. The body will be executed only once you co_await on the returned
LazyTask object.
This is different from the behavior of QCoro::Task<T>, which is eager, meaning that it will
start executing the body immediately when called (like a regular function call).
QCoro::LazyTask<int> myWorker() {
    qDebug() << "Starting worker";
    co_return 42;
}

QCoro::Task<> mainCoroutine() {
    qDebug() << "Creating worker";
    const auto task = myWorker();
    qDebug() << "Awaiting on worker";
    const auto result = co_await task;
    // do something with the result
}
This will result in the following output:
mainCoroutine(): Creating worker
mainCoroutine(): Awaiting on worker
myWorker(): Starting worker
If myWorker() were a QCoro::Task<T> as we know it, the output would look like this:
mainCoroutine(): Creating worker
myWorker(): Starting worker
mainCoroutine(): Awaiting on worker
The fact that the body of a QCoro::LazyTask<T> coroutine is only executed when co_awaited has one
very important implication: it must not be used for Qt slots, Q_INVOKABLEs or, in general, for any
coroutine that may be executed directly by the Qt event loop. The reason is that the Qt event loop
is not aware of coroutines (or QCoro), so it will never co_await on the returned QCoro::LazyTask
object - which means that the code inside the coroutine would never get executed. This is the
reason why the good old QCoro::Task<T> is an eager coroutine - to ensure the body of the coroutine
gets executed even when called from the Qt event loop and not co_awaited.
Defined Semantics for Awaiting Default-Constructed and Moved-From Tasks
This is something that wasn’t clearly defined until now (both in the docs and in the code), which is
what happens when you try to co_await on a default-constructed QCoro::Task<T> (or QCoro::LazyTask<T>):
co_await QCoro::Task<>(); // will hang indefinitely!
Previously this would trigger a Q_ASSERT in debug build and most likely a crash in production build.
Starting with QCoro 0.11, awaiting such task will print a qWarning() and will hang indefinitely.
The same applies to awaiting a moved-from task, which is identical to a default-constructed task:
QCoro::LazyTask<int> task = myTask();
handleTask(std::move(task));
co_await task; // will hang indefinitely!
Compiler Support
We have dropped official support for older compilers. Since QCoro 0.11, the officially supported compilers are:
GCC >= 11
Clang >= 15
MSVC >= 19.40 (Visual Studio 17 2022)
AppleClang >= 15 (Xcode 15.2)
QCoro might still compile or work with older versions of those compilers, but we no longer test it and
do not guarantee that it will work correctly.
The reason is that the coroutine implementations in older versions of GCC and Clang were buggy and behaved differently
than they do in newer versions, so making sure that QCoro behaves correctly across a wide range of compilers was
getting more difficult as we implemented more and more complex and advanced features.
If you enjoy using QCoro, consider supporting its development on GitHub Sponsors or buy me a coffee
on Ko-fi (after all, more coffee means more code, right?).
A few years ago Mike and I discussed adding video support to zink, so that we could provide vaapi on top of vulkan video implementations.
This of course got onto a long TODO list and we nerdsniped each other into moving it along, this past couple of weeks we finally dragged it over the line.
This MR adds initial support for zink video decode on top of Vulkan Video. It provides vaapi support. Currently it only supports H264 decode, but I've implemented AV1 decode and I've played around a bit with H264 encode. I think adding H265 decode shouldn't be too horrible.
I've tested this mainly on radv, and a bit on anv (but there are some problems I should dig into).
TLDR: if you know what EVIOCREVOKE does, the same now works for hidraw devices via HIDIOCREVOKE.
The HID standard is the most common hardware protocol for input devices. In the Linux kernel HID is typically translated to the evdev protocol which is what libinput and all Xorg input drivers use. evdev is the kernel's input API and used for all devices, not just HID ones.
evdev is mostly compatible with HID but there are quite a few niche cases where they differ a fair bit. And some cases where evdev doesn't work well because of different assumptions, e.g. it's near-impossible to correctly express a device with 40 generic buttons (as opposed to named buttons like "left", "right", ...[0]). In particular for gaming devices it's quite common to access the HID device directly via the /dev/hidraw nodes. And of course for configuration of devices accessing the hidraw node is a must too (see Solaar, openrazer, libratbag, etc.). Alas, /dev/hidraw nodes are only accessible as root - right now applications work around this by either "run as root" or shipping udev rules tagging the device with uaccess.
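Such a uaccess udev rule typically looks like the following sketch (the vendor and product IDs are placeholders, to be replaced with those of a real device):

```
# Hypothetical rule: tag one specific hidraw device for user access
KERNEL=="hidraw*", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", TAG+="uaccess"
```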
evdev too can only be accessed as root (or the input group) but many many moons ago when dinosaurs still roamed the earth (version 3.12 to be precise), David Rheinsberg merged the EVIOCREVOKE ioctl. When called the file descriptor immediately becomes invalid, any further reads/writes will fail with ENODEV. This is a cornerstone for systemd-logind: it hands out a file descriptor via DBus to Xorg or the Wayland compositor but keeps a copy. On VT switch it calls the ioctl, thus preventing any events from reaching said X server/compositor. In turn this means that a) X no longer needs to run as root[1] since it can get input devices from logind and b) X loses access to those input devices at logind's leisure so we don't have to worry about leaking passwords.
Fast-forward to 2024, and kernel 6.12 now gained the HIDIOCREVOKE for /dev/hidraw nodes. The corresponding logind support has also been merged. The principle is the same: logind can hand out an fd to a hidraw node and can revoke it at will, so we don't have to worry about data leakage to processes that should no longer receive events. This is the first of many steps towards more general HID support in userspace. It's not immediately usable since logind will only hand out those fds to the session leader (read: compositor or Xorg), so if you as an application want that fd you need to convince your display server to give it to you. For that we may have something like the inputfd Wayland protocol (or maybe a portal, but right now it seems a Wayland protocol is more likely).
But that aside, let's hooray nonetheless. One step down, many more to go.
One of the other side-effects of this is that logind now has an fd to any device opened by a user-space process. With HID-BPF this means we can eventually "firewall" these devices from malicious applications: we could e.g. allow libratbag to configure your mouse's buttons but block any attempts to upload a new firmware. This is very much an idea for now, there's a lot of code that needs to be written to get there. But getting there we can now, so full of optimism we go[2].
[0] to illustrate: the button that goes back in your browser is actually evdev's BTN_SIDE and BTN_BACK is ... just another button assigned to nothing particular by default.
[1] and c) I have to care less about X server CVEs.
[2] mind you, optimism is just another word for naïveté
Two weeks ago, I was at EuroBSDcon and received a feature request for syslog-ng. The user wanted to collect FreeBSD audit logs together with other logs using syslog-ng. Writing a native driver in C is time consuming. However, creating an integration based on the program() source of syslog-ng is not that difficult.
This blog shows you the current state of the FreeBSD audit source, how it works, and its limitations. It is also a request for feedback. Please share your experiences at https://github.com/syslog-ng/syslog-ng/discussions/5150!
Before you begin
First, you must enable audit logging on your FreeBSD box. If you do not want to enable it permanently, enable it only for testing:
service auditd onestart
The syslog-ng package does not have XML parsing enabled by default. The sample configuration I show in the blog uses XML parsing. You should compile the sysutils/syslog-ng port with XML parsing enabled. If you do not enable XML parsing, you can still forward XML logs without parsing, or change the praudit command line to switch to plain text output.
Configuring syslog-ng
The actual “driver” is very simple, just a few lines. Once it is ready, it will be added to the syslog-ng configuration library (SCL), and you will not have to copy it into your configuration. For now, append it to syslog-ng.conf, or create a new configuration file if your configuration uses an include directory.
It is a configuration block, typically used in the SCL. It uses the tail command to follow the FreeBSD audit log (with -F to detect file rotations), and praudit is configured to keep running, and print single lines in XML format.
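Based on that description, such a block might look like the following sketch (the block name, command line, and options here are my reconstruction, not the final SCL version):

```
# Hypothetical sketch of the freebsd-audit() source block
block source freebsd-audit() {
    program("/usr/bin/tail -F /var/audit/current | /usr/sbin/praudit -x -l"
        flags(no-parse));
};
```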
And here comes the rest of the configuration, utilizing the new source in multiple setups. Once the above configuration block is part of SCL, you only need to include this part in your configuration.
This configuration contains two test setups. The first one calls the freebsd-audit() source without any additional parameters, thus having an XML formatted output. It parses the incoming message using the XML parser, and saves it to a JSON formatted file, so you can see the name-value pairs parsed from the XML.
The second one sets the output to single line plain text format and writes it as a regular syslog message.
Note that in both cases we use the no-parse flag, so the original message is stored in the $MESSAGE macro of syslog-ng, and syslog-ng creates the syslog message header.
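A minimal sketch of the first test setup described above (the file name and the JSON template are my assumptions, not the final configuration):

```
# Hypothetical sketch of the XML-parsing test setup
log {
    source { freebsd-audit(); };
    parser { xml(); };
    destination {
        file("/var/log/freebsd-audit.json"
            template("$(format-json --scope dot-nv-pairs)\n"));
    };
};
```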
Bugs and limitations
While testing the freebsd-audit() source, I ran into a few bugs and limitations.
If a field has a space in it, syslog-ng adds double quotation marks.
The XML parser adds a leading underscore to tag names, which cannot be changed / removed.
As we read the logs using the tail command, there might be some missing or duplicate audit logs when syslog-ng is restarted (depending on the rate of audit logs and the downtime of syslog-ng).
The XML parser throws an error message while reading the beginning of the XML formatted audit log, as it cannot properly parse the header lines.
Testing
Add the above lines to your syslog-ng configuration (disable the XML parser if you do not have it compiled into syslog-ng), and restart syslog-ng. The easiest way to generate some audit log messages is to log in on the FreeBSD console or via ssh. Once you have generated some audit logs, you should see two new log files under the /var/log/ directory.
The above configuration collects FreeBSD audit logs to some local files. In most cases you want to collect log messages centrally.
Also, do not forget to provide us with feedback at https://github.com/syslog-ng/syslog-ng/discussions/5150. We are happy to hear problem reports, as it also means that you are actually testing what we are working on. It helps us to make syslog-ng even better. And of course we are also very happy to hear success stories!
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I am using this page as the officially documented support page for a new iOS app called “Widgets Factory”.
It was recently approved and released on the iOS App Store!
Known Issues
The background task and refresh processing capabilities on iOS are severely limited due to some rather hostile protection mechanisms built in. It would be nice if Apple could improve the background system to guarantee some additional service capabilities:
Don’t sleep or stop the apps non-main thread if it is expecting a background refresh
Ensure that the requested caller gets a background refresh process time after waiting a maximum of 5-15 minutes
Provide a specific API method call that is background approved to perform the basic needed tasks on behalf of the calling application with a minimal guaranteed processing time:
Location Coordinates
Map Snapshot
Web Fetch
Widget Views
Push Notifications
It would also be nice for apps that are free without ads to be able to have a one-time (or even recurring) tip jar through which users could donate whatever amount they feel comfortable with to the developer.
As I upload photos to various services, I generally resize them as required
based on portrait or landscape mode. I used to do that for all the photos in a
directory and then pick which ones to use. But I wanted to do it selectively:
open the photos in the GNOME Nautilus (Files) application, right click, and
resize only the ones I want.
This week I noticed that I can do that with scripts. They can be in any
language; the selected files are passed as command line arguments, and their
full paths are also available in the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS
environment variable, joined by newline characters.
To add any script to the right click menu, you just need to place them in
~/.local/share/nautilus/scripts/ directory. They will show up in the right click menu for scripts.
Below is the script I am using to reduce image sizes:
#!/usr/bin/env python3
import os
import subprocess
import sys

from PIL import Image

# Alternative: Nautilus also passes the selection via an environment variable.
# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")
paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith((".jpg", ".jpeg")):
        # Assume that it is a photo
        try:
            with Image.open(fpath) as img:
                w, h = img.size
            name, extension = os.path.splitext(fpath)
            new_name = f"{name}_ac{extension}"
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", fpath, "-resize", "1024x686", new_name])
            else:  # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", fpath, "-resize", "686x1024", new_name])
        except Exception:
            # Skip files that fail to open or convert
            continue
You can see it in action (I selected the photos and right clicked, but the recording missed that part):
You may have found yourself with access to a Linux system, wanting to determine whether its disk is an SSD or an HDD. There are various methods and tools for this, and in this post we will cover some of them. Note that running some of these commands requires […]
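One common method is the kernel's per-device "rotational" flag in sysfs; a minimal Python sketch (the helper name is mine):

```python
from pathlib import Path

def classify_disk(rotational_file):
    # /sys/block/<dev>/queue/rotational holds "1" for spinning
    # disks (HDD) and "0" for non-rotational devices (SSD/NVMe).
    return "HDD" if Path(rotational_file).read_text().strip() == "1" else "SSD"

# Example: classify_disk("/sys/block/sda/queue/rotational")
```

The same flag is what `lsblk -d -o NAME,ROTA` shows in its ROTA column.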
Many of you already know about my love of photography. I have been taking
photos for many years, mostly people photos. Portraits at conferences like
PyCon or Fedora events. I regularly post photos to Wikipedia too, especially
for the people/accounts that do not have good quality profile photos. I
stopped doing photography as we moved to Sweden; my digital camera was old,
and settling down in a new country (details in a different future blog) takes
time. But, last year Anwesha bought me a new camera, actually two
different cameras. And I started taking photos again.
I started regular photos of the weekly climate protests / demonstrations of Fridays for
Future Stockholm group.
And then more different street protests and some dance/music events too. I
don't have a Facebook account, and most people asked me to share over Instagram,
so I did that. But as I covered more & more various protests as a photographer,
I noticed my Instagram posts are showing up
less and less in people's feeds. I was wondering about different ways of breaking out
of the algorithmic restriction.
Pixelfed is a decentralized, federated ActivityPub
based system to share photos. I am going to share photos more on this platform,
and hoping people will slowly see more. I started my account yesterday.
You can follow me from any standard ActivityPub system, say your mastodon
account itself. Search for @kushal@pixel.kushaldas.photography or
https://pixel.kushaldas.photography/kushal in your system and you can then follow it
like any other account. If you like the photos, then please share the account
(or this blog post) more to your followers and help me to break out of the
algorithmic restrictions.
On the technology side, the server runs Debian and containers. On my Fedora
system I am super happy to have added a few scripts for GNOME Files; they help me to
resize the selected images before upload (I will write a blog post tomorrow on
this).
Josh and Kurt talk about a few things that have recently come out of CISA. They seem to be blaming the vendors for a lot of the problems, but there’s also not any actionable advice telling the vendors what they should be doing. This feels like the classic case of “just security harder”. We need CISA to be leading the way funding and defining security, not blaming vendors for giving the market what it demands.
Recently, I heard a pitch from a public cloud company. Among other characteristics, a key aspect they stressed is that they are the cheapest cloud. This aspect struck me. Not because I believe it is or is not, but because I’ve heard many companies pitch themselves as the cheapest cloud over the years. I asked the CTO if they were foreseeing consistent and planned cuts in the pricing every year or so.
One of the major reasons I use static blogging is to worry less about how the site will look. Instead the focus was to just write (which of course I did not do well this year). I did not change my blog's theme for many many years.
But, I noticed Oskar Wickström created a monospace-based site and kindly released it under the MIT license. I liked the theme, so I decided to start using it. I still don't know HTML/CSS but managed to change the template for my website.
You can let me know over Mastodon what you think :)
Version 8.4.0 Release Candidate 1 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.
RPMs are available in the php:remi-8.4 stream for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) and as Software Collection in the remi-safe repository (or remi for Fedora)
The repository provides development versions which are not suitable for production usage.
In a previous post I explained how to use a Solomon SSD1306 OLED display on Fedora with a Raspberry Pi, but that only covered displays that come with an I2C driving interface.
The SSD1306 display controller also supports a (both 3-wire and 4-wire) SPI interface, and the ssd130x DRM driver has support for it since Linux 5.19.
This blog post explains how to set up Fedora to use a SSD1306 OLED connected through the 4-wire SPI interface.
First you need to connect the SSD1306 display to the RPi; there are different ways to do this, since one can choose which GPIOs to use for the reset and data/command pins.
But to simplify the configuration, one could use the default GPIO pins that are defined in the ssd1306-spi.dtbo provided by the RPi firmware package.
To do this, connect the SSD1306 display to the RPi as follows:
Then you need to install Fedora, this can be done by downloading an aarch64 Fedora raw image (e.g: Workstation Edition) and executing the following command:
where $device is the block device for your uSD (e.g: /dev/sda) and image is the file name (e.g: Fedora-Workstation-40-1.14.aarch64.raw.xz) of the downloaded image.
Finally you need to configure the RPi firmware to enable the SPI pins and load the ssd1306-spi Device Tree Blob Overlay (dtbo) to register a SSD1306 SPI device, e.g:
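The firmware configuration might look like the following sketch (on Fedora the config.txt file typically lives on the EFI/firmware partition; the exact path can vary between releases):

```
# config.txt -- enable the SPI controller and load the SSD1306 overlay
dtparam=spi=on
dtoverlay=ssd1306-spi
```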
The ssd1306-spi.dtbo supports many parameters in case your display doesn’t match the defaults. Take a look at the RPi overlays README for more details about all these parameters and their possible values.
The ssd130x DRM driver registers an emulated fbdev device that can be bound with fbcon and use the OLED display to have a framebuffer console. If you want to do that, it’s convenient to change the virtual terminal console font to a smaller one, e.g:
sudo sed -i 's/FONT=.*/FONT="drdos8x8"/' /etc/vconsole.conf
sudo dracut -f
To mark the International Day of Tourism, Raul Jimenez Ortega has organized an online Geo Developers event, to which he has invited me to talk about what we have learned over these years about the relationship between Wikipedia, heritage and tourism. I hope you like it.
It’s been a busy year for me again, trying to focus on myself and my health. Mapping out dietary and seasonal allergies, still on the mission of No Dairy, Eggs, Caffeine, High-Fructose Corn Syrup, etc. I am able to breathe better and sleep better, which is much needed as I get older.
Anyway, I was still running the experiment of tunneling my entire home traffic (network wide) and all connections through a VPN. I first ran into MTU packet size and fragmentation issues related to the fact that the clients on my network default to a 1500 MTU, whereas the VPN tunnel interface reduces that size by at least 40-60 bytes. This can result in packet fragmentation and performance issues, which OpenVPN has support for mitigating but WireGuard does not.
I then switched to a proxy setup where I redirect and pipe all connection data at the protocol level to a server-side service which forwards it to the VPN endpoint and then out to the internet. This setup had much better performance but I then ran into some connection issues as the firewall states and the timeouts may not exactly be honoured correctly by the serving application.
I rewrote my Python-based framework to start fresh again and go back to basics in a lower-level language like C, and this seems to be working better at the moment. I will continue to run this and test it out, hopefully as the final replacement. The list of features it includes is:
Transparent Dynamic Forwarding Proxy Service (load balancing capable with ip/nftables)
This is the magic part of the code which sits in front of the proxy service and shares the UDP connection states and pre-routes them to the already established VPN tunnel related for that specific load balanced connection.
Last week I wrote about a campaign that we started to resolve issues on GitHub. Some of the fixes are coming from our enthusiastic community. Thanks to this, there is a new syslog-ng-devel port in MacPorts, where you can enable almost all syslog-ng features even for older MacOS versions and PowerPC hardware. Some of the freshly enabled modules include support for Kafka, GeoIP or OpenTelemetry.
From this blog entry, you can learn how to install a legacy or an up-to-date syslog-ng version from MacPorts.
Before you begin
If you are reading this blog, most likely you already have MacPorts installed on your machine. If not, follow the instructions from https://www.macports.org/install.php on how to install MacPorts for your operating system.
Installing syslog-ng 3.38
MacPorts has an old version of syslog-ng already included. It works, but it has some problems, and it also lacks many of the available features of syslog-ng.
Installation is simple:
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng
---> Computing dependencies for syslog-ng
The following dependencies will be installed:
bzip2
expat
gettext-runtime
glib2
json-c
libedit
libelf
libffi
libiconv
libnet
ncurses
openssl
openssl3
pcre
pcre2
py312-packaging
python312
python3_select
python3_select-312
python_select
python_select-312
sqlite3
xz
zlib
Continue? [Y/n]: y
---> Fetching archive for json-c
[…]
---> Attempting to fetch syslog-ng-3.38.1_0.darwin_22.x86_64.tbz2.rmd160 from https://packages.macports.org/syslog-ng
---> Installing syslog-ng @3.38.1_0
---> Activating syslog-ng @3.38.1_0
---> Cleaning syslog-ng
---> Updating database of binaries
---> Scanning binaries for linking errors
---> No broken files found.
---> No broken ports found.
---> Some of the ports you installed have notes:
python312 has the following notes:
To make this the default Python or Python 3 (i.e., the version run by the 'python' or 'python3' commands), run one or both of:
sudo port select --set python python312
sudo port select --set python3 python312
syslog-ng has the following notes:
To use syslog-ng, first unload OS X's built-in syslog daemon:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.syslogd.plist
Then customize /opt/local/etc/syslog-ng.conf,
and load syslog-ng.
A startup item has been generated that will aid in starting syslog-ng with launchd. It is disabled by default. Execute the following command to start it, and to cause it to launch at startup:
sudo port load syslog-ng
czanik@Peters-MacBook-Pro ~ %
It works, it is rock solid. However, it is old and misses many of the new and optional features:
root@Peters-MacBook-Pro ~ # /opt/local/sbin/syslog-ng -V
syslog-ng 3 (3.38.1)
Config version: 3.35
Installer-Version: 3.38.1
Revision:
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: timestamp,kvformat,appmodel,afprog,examples,rate-limit-filter,cef,map-value-pairs,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,dbparser,json-plugin,add-contextual-data,pseudofile,affile,csvparser,basicfuncs,syslogformat,hook-commands,graphite,tags-parser,afstomp,secure-logging,afsocket,cryptofuncs,azure-auth-header,regexp-parser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: off
Enable-Linux-Caps: off
Enable-Systemd: off
Installing syslog-ng-devel
The syslog-ng-devel port is built from a recent syslog-ng git snapshot. As syslog-ng is developed so that a stable release could be cut at any time, this is not a problem.
In this case, the same command that I used to install syslog-ng 3.38 from a binary package will first install the necessary dependencies from packages, then build syslog-ng locally.
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng-devel
---> Computing dependencies for syslog-ng-devel
The following dependencies will be installed:
bison
bison-runtime
brotli
cmake
curl
curl-ca-bundle
cyrus-sasl2
flex
gettext
gettext-tools-libs
gperf
hiredis
icu
ivykis
kerberos5
libarchive
libb2
libbson
libcomerr
libcxx
libdbi
libesmtp
libidn2
libmaxminddb
libpsl
librdkafka
libtextstyle
libunistring
libxml2
lmdb
lz4
lzo2
m4
mongo-c-driver
nghttp2
paho.mqtt.c
pkgconfig
popt
rabbitmq-c
snappy
tcp_wrappers
zstd
Continue? [Y/n]:
---> Fetching archive for gperf
[…]
---> Fetching archive for syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://packages.macports.org/syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://vie.at.packages.macports.org/syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://fra.de.packages.macports.org/syslog-ng-devel
---> Fetching distfiles for syslog-ng-devel
---> Verifying checksums for syslog-ng-devel
---> Extracting syslog-ng-devel
---> Applying patches to syslog-ng-devel
---> Configuring syslog-ng-devel
---> Building syslog-ng-devel
---> Staging syslog-ng-devel into destroot
---> Installing syslog-ng-devel @2024.09.17_0+osl
---> Activating syslog-ng-devel @2024.09.17_0+osl
---> Cleaning syslog-ng-devel
---> Updating database of binaries
---> Scanning binaries for linking errors
---> No broken files found.
---> No broken ports found.
---> Some of the ports you installed have notes:
cmake has the following notes:
The CMake GUI and Docs are now provided as subports 'cmake-gui' and 'cmake-docs', respectively.
libpsl has the following notes:
libpsl API documentation is provided by the libpsl-docs port.
syslog-ng-devel has the following notes:
To use syslog-ng, first unload OS X's built-in syslog daemon:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.syslogd.plist
Then customize /opt/local/etc/syslog-ng.conf,
and load syslog-ng.
A startup item has been generated that will aid in starting syslog-ng-devel with launchd. It is disabled by default. Execute the following command to start it, and to cause it to launch at startup:
sudo port load syslog-ng-devel
czanik@Peters-MacBook-Pro ~ %
Based on the output, my suspicion is that pre-built packages might be available soon.
You might have noticed that installing syslog-ng-devel pulled in a lot of dependencies. The reason is simple: after the fixes received from the MacPorts community and from our developers, a lot more features compile now with older compiler versions and on older operating system versions.
Compared to the old package, these are some of the additional modules available:
czanik@Peters-MacBook-Pro ~ % sudo /opt/local/sbin/syslog-ng -V
Password:
syslog-ng 4.8.0.157.gd68f5a5.dirty
Config version: 4.2
Installer-Version: 4.8.0.157.gd68f5a5.dirty
Revision: 4.8.0.157.gd68f5a5.dirty
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: timestamp,darwinosl,kvformat,redis,afamqp,appmodel,afprog,metrics-probe,cef,map_value_pairs,kafka,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,cloud_auth,correlation,json-plugin,pseudofile,affile,afsmtp,csvparser,basicfuncs,syslogformat,hook-commands,mqtt,afmongodb,graphite,tags-parser,geoip2-plugin,afstomp,http,secure-logging,afsql,mod-python,afsocket,add_contextual_data,cryptofuncs,azure-auth-header,regexp-parser,rate_limit_filter
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: off
Enable-Systemd: off
If you want even more modules, you can use “variants” to enable support for gRPC-based modules and a native macOS source:
czanik@Peters-MacBook-Pro ~ % sudo port variants syslog-ng-devel
syslog-ng-devel has the variants:
debug: Enable debug binaries
grpc: Enable GRPC modules
[+]osl: Enable support for OSLog
universal: Build for multiple architectures
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng-devel +grpc
---> Computing dependencies for syslog-ng-devel
The following dependencies will be installed:
abseil
c-ares
grpc
lbzip2
libuv
protobuf3-cpp
re2
Continue? [Y/n]:
---> Fetching archive for abseil
[…]
If you check the modules again, you will see even more available:
czanik@Peters-MacBook-Pro ~ % sudo /opt/local/sbin/syslog-ng -V
Password:
syslog-ng 4.8.0.157.gd68f5a5.dirty
Config version: 4.2
Installer-Version: 4.8.0.157.gd68f5a5.dirty
Revision: 4.8.0.157.gd68f5a5.dirty
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: bigquery,timestamp,darwinosl,kvformat,redis,afamqp,appmodel,afprog,loki,metrics-probe,cef,map_value_pairs,otel,kafka,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,cloud_auth,correlation,json-plugin,pseudofile,affile,afsmtp,csvparser,basicfuncs,syslogformat,hook-commands,mqtt,afmongodb,graphite,tags-parser,geoip2-plugin,afstomp,http,secure-logging,afsql,mod-python,afsocket,add_contextual_data,cryptofuncs,azure-auth-header,regexp-parser,rate_limit_filter
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: off
Enable-Systemd: off
What is next?
I would like to thank the involved MacPorts developers and my colleagues for making this huge step forward happen.
I would also like to ask for your feedback. Please share your experience with the syslog-ng-devel port, not only if you run into a problem, but also if it works for you as expected.
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Please join us at the next regular Open NeuroFedora team meeting on Monday 07 October at 1300 UTC.
The meeting is a public meeting, and open for everyone to attend.
You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance).
Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time.
Or, you can also use this command in the terminal:
$ date -d 'Monday, October 07, 2024 13:00 UTC'
The meeting will be chaired by @ankursinha.
The agenda for the meeting is:
Please join us at the next regular Open NeuroFedora team meeting on Monday 23 September at 1300 UTC.
The meeting is a public meeting, and open for everyone to attend.
You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance).
Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time.
Or, you can also use this command in the terminal:
$ date -d 'Monday, September 23, 2024 13:00 UTC'
The meeting will be chaired by @ankursinha.
The agenda for the meeting is:
Kamal 2 is coming with a brand new custom proxy that’s replacing Traefik. Let’s have a look at why that is and what it means.
Why Kamal needs a proxy
Kamal is a simple deployment tool built around Docker containers. While Docker itself has a Swarm mode allowing for more robust deploys, Kamal keeps things simple by running the containers with straightforward docker run calls. But starting and stopping containers this way comes without automatic replacement. Kamal needs a way to handle zero-downtime deployments for web containers, so it originally incorporated Traefik.
Why Traefik
While there are many HTTP proxies around, Kamal was in the market for something offering auto-discovery of Docker containers – something that could keep proxy configuration management minimal. Traefik is such a proxy – it was specifically built as a dynamic reverse proxy for orchestrating container traffic. This meant that Kamal only needed to label the web containers for Traefik to select them. Docker health checks then did the rest by pointing to the healthy containers that should receive new traffic.
Why a new proxy
There was one problem with Traefik. Kamal had to create a special “cord” file on the host and bind-mount it into the container so that, alongside the label, it could control the health check responsible for driving traffic. With the cord file check in place, Kamal could make any container unhealthy by deleting its associated cord file. And while it mostly worked, it was seen as a hack.
Kamal is an imperative tool, so it needs a proxy that works in a similar way. Traefik was also the most complex and misunderstood part of Kamal’s stack. So while people mostly figured things out in the end, there was an opportunity to simplify. This is especially true when it comes to the most requested feature for Kamal – hosting multiple apps on the same system with automatic TLS certificates.
Kamal Proxy
The new Kamal proxy is simply called kamal-proxy and it’s designed to make it easy to coordinate zero-downtime deployments with an option of issuing TLS certificates. Instead of labeling containers, we tell the proxy what kind of service to deploy with a hostname:port target:
$ kamal-proxy deploy my-app --target web-1:3000
This reads as: deploy the my-app service with the web-1 hostname on port 3000. We can reference the container like this thanks to Docker DNS on the same network. Deploying a service like this will wait for the containers to become healthy and reroute traffic from any previously deployed instance of the service.
But that’s not all. We can also specify a host to run more services on the same server:
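Based on kamal-proxy’s documented flags, the missing example presumably looks something like the following (the service and host names are illustrative):

```shell
$ kamal-proxy deploy my-app --target web-1:3000 --host app.example.com
$ kamal-proxy deploy other-app --target web-2:3000 --host other.example.com
```

With a host set, the proxy routes each request by its Host header, so several apps can share ports 80 and 443 on the same server.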
Note that there is no specific configuration file and the proxy itself has to be running.
Kamal 2 configuration
There will be little need for you to run the proxy commands yourself, as Kamal will do it for you. Here’s how it will look in Kamal’s config/deploy.yml file:
# config/deploy.yml
...
# The new proxy config
proxy:
  ssl: true
  host: app.example.com
The new proxy settings have two parts: a required host for routing on the same system, and an optional ssl for automatic TLS certs.
Kamal Proxy features
To sum up the new proxy feature set:
Routing based on provided hosts
Automatic TLS certificates via Let’s Encrypt
Request and response buffering
Maximum requests and response sizes
The new proxy should also bring new features to Kamal such as pausing requests, maintenance mode and gradual rollouts.
I’ve been using aerc for a bit now to test it out as my default mail client. I have to say that I’m truly loving it. Today I was trying to clean up the view so that I wasn’t looking at two dozen old IMAP folders in the folders pane, and remapping a few folder names when I ran into what I thought was a snag but was actually a failure on my part to configure things properly at first.
In the aerc account configuration file (/home/user/.config/aerc/accounts.conf) you can specify what folders you want to see, how they should be sorted, and even remap folder names. For example:
folders = Inbox, @Fedora Mailing Lists, Archive, Sent
That will tell aerc only to display those folders.
folders-sort = Inbox, @Fedora Mailing Lists,Archive, Sent
That will sort your displayed folders in that order.
folder-map = /home/user/.config/aerc/folders
That will tell aerc to look in the “folders” file for a mapping of folders to new names, such as:
Fedora = @Fedora Mailing Lists
If, for example, you’d once upon a time named a folder that way so that the @ would put it at the top of the webmail sort.
The problem I was running into was that I’d remap a folder name and *poof* it’d disappear when I restarted aerc. Initially I thought I was futzing up the configuration in the folder-map file.
Then it finally (after a bit too long) dawned on me that once I remapped the folder name, aerc took me literally and… didn’t show it because it wasn’t listed as Fedora in the folders or folders-sort directive.
Just throwing that out there in case anybody else bangs their head on the keyboard wondering why remapped folders disappear. Happy mailing!
In the last blog (Using InstructLab in Log Detective), we went through the installation and setup process for InstructLab. The post finished with knowledge preparation. We’ll continue with that and hopefully end this one with data generated by InstructLab.
So, rewind to earlier this year: There were 2 laptop announcements of interest to me.
First was the Snapdragon X Arm laptops that were going to come out. Qualcomm was touting that they would have great Linux support, and they were already working on merging things upstream. Nothing is ever that rosy, but I did pick up a Lenovo Yoga Slim 7x that I have been playing with. Look for a more detailed review and status on that one in a bit. Short summary: it’s a pretty cool laptop and mainstream Linux support is coming along, but it’s not yet ready to be a daily laptop IMHO.
The second was Framework announcing that a new batch of laptops would be coming out with some nice upgrades, so I pre-ordered one of the Ryzen ones. But reader, you may ask: “don’t you already have a Framework Ryzen laptop? And aren’t they supposed to be upgradable? So why would you order another one?” To which I answer: yes, and yes, and… because I wanted so many new things it seemed easier to just order a new one and get a spare/second laptop out of it.
I have one of the very first generation Framework 13 laptops. It was originally ordered with an Intel 11th gen CPU/mainboard and shipped in July of 2021, almost 3.5 years ago. So what’s in the newer/latest version that I wanted?
Better hinges (the old ones are kinda weak and you can cause the display to ‘flop’ if you carry the laptop by it).
New top cover. The old one is the old multipart design; the new ones are a single piece of aluminum.
New camera that’s supposedly better.
New battery (ok, I replaced the battery in my old one a while back, but always nice to have a new battery)
Replacement input cover (the thing with the keyboard/touchpad). After hammering mine for 3.5 years, the Tab and/or Alt keys stick, making moving between windows frustrating. Also, the new one has no Windows key, just a ‘super’ key.
Higher resolution / refresh rate display (120 Hz at 2880×1920 and matte, vs 60 Hz at 2256×1504 and glossy). In particular, the glossy finish is very annoying in highly reflective areas.
So, I could have replaced all those things, but at that point it seemed like it would be easier to just move to a new chassis and have a spare.
Of course things didn’t go as planned. The laptop arrived, I swapped my memory and NVMe drive over to it and… it didn’t boot. I spent a fair bit of time with Framework support back and forth. They wanted movies/pictures of most everything and had me do a bunch of things to isolate the problems. They decided it was a bad monitor/display cable and input cover. So, they shipped those replacements to me (they had to replace the display because the cable is attached to it). Unfortunately, they shipped them USPS, so it took about 9 days, and because we don’t get USPS here I had to go rescue the package from the local post office before they sent it back. Today I swapped in the display and input cover and everything worked like a charm. A quick switch of memory and NVMe, and I am now booted on the new laptop.
After the last syslog-ng release, we started a campaign to close open issues on GitHub. We'd like to continue this effort and call for collaboration from our users and contributors to make OSE even more stable. While unit tests are great (and we do many tests in-house), nothing can replace using syslog-ng in real-world situations. This blog collects some resources about how you can start testing the latest syslog-ng release from GitHub.
Note that you are also welcome to take part in testing without any previous syslog-ng experience. In that case, my tutorial series can help you get started with syslog-ng: https://peter.czanik.hu/posts/syslog-ng-tutorial-toc/
Before you begin
We provide ready-to-use packages built from the latest git master snapshot for some of the major Linux distributions. Nightly containers are also available. However, most of these instructions assume that you want to test on x86_64 Linux, except for the openSUSE packages, which are also available for ARM and Power64LE. Generating the source tarball is also only documented for x86_64 Linux or FreeBSD. If snapshot packages are not available for your operating system, the goal is for you to be able to build packages for your OS yourself, as working without packages is a pain.
If you do not use any of the above Linux distributions, and do not mind using containers, then you can still try the latest nightly builds without compiling syslog-ng yourself. These containers are based on the nightly Debian packages, mentioned above: https://www.syslog-ng.com/community/b/blog/posts/nightly-syslog-ng-container-images
If none of the above solutions work for you, you should build syslog-ng from source. Of course you can build syslog-ng directly from the git sources, just check them out from https://github.com/syslog-ng/syslog-ng/. However, most operating systems build syslog-ng from the source tarball generated from the Git sources. You just have to slightly modify the files used to build syslog-ng for the given OS (ports for FreeBSD, spec files for RPM distros, etc.).
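As a rough sketch, a from-source build follows the usual autotools flow. The install prefix is just an example, and the build dependencies (bison, flex, glib, ivykis, and others, depending on which modules you enable) must already be installed via your OS’s package manager:

```shell
# Minimal from-source build sketch for syslog-ng (dependencies assumed installed).
git clone https://github.com/syslog-ng/syslog-ng.git
cd syslog-ng
./autogen.sh                    # generate the configure script from git sources
./configure --prefix=/usr/local # example prefix; pick one suitable for your OS
make
sudo make install
/usr/local/sbin/syslog-ng -V    # verify the installed version and module list
```

Running configure without options gives you a default feature set; check its --help output to enable or disable specific modules before building.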
Once you have a package, you are ready for testing. We made many fixes in the past few months, for example in the wildcard-file() source. But even if your problem was not directly addressed after the 4.8.0 release, you should join testing. If you experienced a problem in an earlier version, it might already be fixed as a “side effect” of another fix.
Note that both constructive criticism and positive feedback are very welcome! They help us to close issues at https://github.com/syslog-ng/syslog-ng/issues. This is also the place to report new problems.
Testing the latest Git version helps us to make the next syslog-ng the best syslog-ng release ever. So thank you for your time and contributions in advance!
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
The Fedora Project development team has announced the release of Fedora Linux 41 Beta. As always, this release includes exciting changes and improvements, and below we cover some of the most important ones. What is a Fedora beta release? Fedora beta releases are code-complete and very closely resemble […]
On this Tuesday, September 17, the Fedora Project community will be delighted to learn of the availability of the Beta version of Fedora Linux 41.
Despite the stability risks of a Beta release, it is important to test it! By reporting bugs now, you will discover the new features before everyone else, while improving the quality of Fedora Linux 41 and reducing the risk of delays at the same time. Development releases lack the testers and feedback needed to achieve their goals.
The final release is currently scheduled for October 22 or November 5.
User experience
Upgrade to GNOME 47;
The lightweight LXQt desktop environment moves to version 2.0;
The GIMP image editor uses the development branch that will become version 3;
The Taskwarrior task list manager moves to version 3;
Updating the core of the atomic desktop systems can be done without administrator rights, but not upgrades, i.e. moving from Fedora Linux Silverblue 40 to Fedora Linux Silverblue 41;
Availability of the KDE Plasma Mobile Spin and Fedora Kinoite Mobile images;
Likewise, the Wayland-based Miracle window manager is now available in Fedora and gets its own Spin;
Fedora Workstation installs with the Wayland display protocol only; X11 remains available and installable afterwards.
Hardware support
Installing the proprietary Nvidia driver via GNOME Software is now compatible with systems using Secure Boot;
Support for MIPI cameras on systems using Intel IPU6, found in many current laptops;
The Anaconda installer supports hardware disk encryption via the TCG OPAL2 standard, though this requires a kickstart file to customize the installation;
The tuned tool is used by default instead of power-profiles-daemon for machine power management;
Update to ROCm 6.2 to improve AI and high-performance computing support for AMD graphics cards and accelerators;
The ACPI table debugging tool acpica-tools no longer supports big-endian architectures such as s390x;
PHP no longer supports 32-bit x86 processors.
Internationalization
The default IBus input method for Taiwan's Traditional Chinese changes from ibus-libzhuyin to ibus-chewing.
System administration
The dnf package manager is updated to its 5th version;
While the rpm command uses version 4.20;
The Fedora atomic desktop systems and Fedora IoT get bootupd for updating the bootloader;
Fedora atomic images ship the dnf and bootc tools; the former is usable in a development context for now, while the latter can begin to be used to deploy bootable system images;
The OpenSSL security library no longer accepts cryptographic signatures using the SHA-1 algorithm;
Stabilization of the Fedora Linux 36 feature where ostree supports the OCI/Docker formats for container transport and deployment;
The NetworkManager network manager no longer supports configuration in the ifcfg format, which had already been deprecated for years;
In the same vein, the network-scripts package has been removed, ending network management via the ifup and ifdown scripts;
Network interfaces in the Cloud editions will use the new default names adopted by the other editions years ago, instead of keeping traditional names such as eth0;
The libvirt virtualization manager now uses the nftables firewall by default instead of iptables for its virbr0 network interface;
The Netavark tool for managing the container network stack, notably with podman, also uses the nftables firewall by default instead of iptables;
systemd system units will enable many options by default to improve service security;
Introduction of the fedora-repoquery tool to query repositories, for example to find the exact version of a specific package in another Fedora release, the last update date of a repository, or which packages depend on a specific package (reverse dependencies), etc.;
The Kubernetes container manager gets new versioned packages, allowing several versions in parallel. Here versions 1.29, 1.30 and 1.31 are provided with names such as kubernetes1.31;
The OCI implementation of the Kubernetes interfaces has its own cri-o and cri-tools packages, which are also versioned to track Kubernetes releases.
Development
Update of the GNU compilation suite: binutils 2.42, glibc 2.40 and gdb 15;
Upgrade of the LLVM compiler suite to version 19;
Removal of Python 2.7 from the repositories; only the 3 branch is maintained from now on;
Meanwhile, Python gets version 3.13;
Python is also compiled with the -O3 optimization enabled, in line with the upstream project's practice and improving performance;
The Python test framework Pytest moves to version 8;
Update of the Go language to version 1.23;
Updates in the Haskell ecosystem: GHC 9.6 and Stackage LTS 22;
The Perl language moves to version 5.40;
Node.js 22 becomes the reference version, while versions 20 and 18 remain available in parallel;
For license reasons, the Redis key-value database is replaced by Valkey;
The Python deep learning library PyTorch is updated to version 2.4;
The OpenSSL engine API is disabled, as it is unmaintained, while keeping a stable ABI.
Fedora Project
The Fedora KDE edition for the AArch64 architecture is now release-blocking: it must be stable enough for a new Fedora Linux release to go out;
Final phase 4 of the general use of SPDX short license identifiers for package licenses, rather than Fedora Project names;
Java libraries no longer have an explicit dependency on the Java runtime, to simplify maintenance; nothing changes for applications;
The systemtap-sdt-devel package no longer contains the dtrace tool, which has been moved to the systemtap-sdt-dtrace package;
Addition of a cleanup step when generating RPM packages to improve package reproducibility;
Changes in the metadata of the Fedora repositories, using the zstd compression algorithm and dropping the sqlite databases to reduce the amount of data to download and store.
Testing
During the development of a new Fedora Linux release, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing a specific feature such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, etc. The quality assurance team designs and proposes a series of tests that are generally easy to run. Just follow them and report whether the result is as expected. If not, a bug report should be filed so a fix can be developed.
It is very easy to follow and often takes little time (15 minutes to one hour at most) if you have a usable Beta at hand.
The tests to run and the reports are handled via the following page. I regularly announce on my blog when a test day is planned.
If you are interested, the images are available via Torrent or on the official website.
If you already have Fedora Linux 40 or 39 on your machine, you can upgrade to the Beta. This amounts to one big update; your applications and data are preserved.
In both cases, we recommend backing up your data beforehand.