Released on 2024-10-26.
This is a feature release that needs configuration file adjustments. See the following notes for the details.
Backwards incompatible changes
Bodhi's update status checking has been overhauled, and some configuration options have changed.
critpath.num_admin_approvals is removed. This backed the old Fedora "proventesters" concept, which has not been used for some years.
critpath.min_karma is removed and replaced by a new setting simply called min_karma. This applies to all updates, not just critical path.
critpath.stable_after_days_without_negative_karma is renamed to critpath.mandatory_days_in_testing and its behavior has changed: there is no longer any check for 'no negative karma'. Critical path updates, like non-critical path updates, can now be manually pushed stable after reaching this time threshold even if they have negative karma.
As before, these settings can be specified with prefixes to apply only to particular releases and milestones. min_karma and (critpath.)mandatory_days_in_testing now act strictly and consistently as minimum requirements for stable push. Any update may be pushed stable once it reaches either of those thresholds (and passes gating requirements, if gating is enabled). The update's stable_karma value is no longer ever considered in determining whether it may be pushed stable. stable_karma and stable_days are only used as triggers for automatic stable pushes (but for an update to be automatically pushed it must also reach either min_karma or (critpath.)mandatory_days_in_testing).
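The new threshold logic can be sketched roughly as follows (a simplified illustration only, not Bodhi's actual code; the function and parameter names are made up for clarity):

```python
# Sketch of the new stable-push rules: an update may be pushed stable once it
# meets EITHER the karma or the time threshold (plus gating, if enabled).
def may_push_stable(karma, days_in_testing, min_karma, mandatory_days,
                    gating_passed=True):
    meets_threshold = karma >= min_karma or days_in_testing >= mandatory_days
    return meets_threshold and gating_passed

def autopush_triggered(karma, days_in_testing, stable_karma, stable_days,
                       min_karma, mandatory_days):
    # stable_karma / stable_days only *trigger* an automatic push; the update
    # must still meet min_karma or (critpath.)mandatory_days_in_testing.
    triggered = karma >= stable_karma or days_in_testing >= stable_days
    return triggered and may_push_stable(karma, days_in_testing,
                                         min_karma, mandatory_days)
```

Note that the update's stable_karma value never appears in may_push_stable: it only matters for triggering automatic pushes.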
The most obvious practical result of this change for Fedora is that, during phases where the policy minimum karma requirement is +2, you will no longer be able to make non-critical path updates pushable with +1 karma by setting this as their stable_karma value. Additionally:
It is no longer possible to set an update's request to 'stable' if it has previously met requirements but currently does not
Two cases where updates that reached their unstable_karma thresholds were not obsoleted are resolved
Updates in 'pending' as well as 'testing' status have autopush disabled upon receiving any negative karma
The date_approved property of updates is more consistently set as the date the update first became eligible for stable push (#5630).
Features
When searching updates, you can now specify multiple gating statuses by passing the 'gating' query arg more than once (#5658).
Bundled fedora-bootstrap has been updated to 5.3.3-0 (#5711).
A packager can now edit a side-tag update even if the side-tag is not owned by them, provided they have commit rights on all packages included in the side-tag (#5764).
Bug fixes
The development.ini.example config - on which the BCD config is based - is now set up to listen on both IPv4 and IPv6 (#5659).
OpenID-based login support has been removed from Bodhi. python-openid and pyramid-fas-openid are EOL, and we have moved to OIDC authentication (#5601).
Fixed a build validation issue which would prevent a sidetag update from being submitted in some circumstances (#5725).
Fixed broken pagination for listing updates in the web UI and JSON output (#5738).
Development improvements
Calls to datetime.datetime.utcnow() have been changed to datetime.datetime.now(datetime.timezone.utc). We previously assumed all datetimes were UTC based, now this is explicit by using timezone aware datetimes (#5702).
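The change looks like this (a minimal illustration of the two calls):

```python
from datetime import datetime, timezone

# Old style: a naive timestamp that is implicitly assumed to be UTC
# (datetime.utcnow() is deprecated as of Python 3.12).
naive = datetime.utcnow()
assert naive.tzinfo is None

# New style: an explicitly timezone-aware UTC timestamp.
aware = datetime.now(timezone.utc)
assert aware.tzinfo is timezone.utc
```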
Documentation improvements
Bodhi's documentation is now served from ReadTheDocs pages (#5774).
Contributors
The following developers contributed to this release of Bodhi:
Flock to Fedora 2024, held in Rochester, New York from August 7th to 10th, soared to new heights, bringing together Fedora contributors and enthusiasts for four days of immersive learning, dynamic collaboration, and vibrant community building. The event seamlessly blended in-person interactions with live stream and recorded sessions via YouTube for the first two days, ensuring accessibility for a wider audience. Using Matrix Chat for seamless communication and a well-structured online schedule, Flock 2024 successfully fulfilled its mission of uniting the Fedora community, fostering connections, and sparking a wave of innovation.
Target Audience
The primary target audience was Fedora contributors, encompassing developers, packagers, designers, documentation writers, and anyone actively involved in the Fedora Project. The event also welcomed newcomers and those curious about Fedora and open source.
Attendees
Rochester welcomed a diverse and passionate group of attendees, including, but not limited to:
The majority of the Fedora Council, FESCo, and Mindshare Committee members.
Fedora Project Leader Matthew Miller, Fedora Operations Architect Aoife Moloney, Fedora Community Architect Justin W. Flory
Representatives from sponsor communities, like Rocky Linux, Lenovo, Microsoft Azure, AlmaLinux, CentOS, openSUSE, ARM, Meta, and SureStep.
Numerous passionate Fedora community members. See badge scans for a full list.
Key Highlights From The Event
Engaging Keynotes: Matthew Miller’s “State of Fedora” keynote provided valuable insights into the project’s trajectory. Other impactful keynotes included Pat Riehecky’s “It is OK not to know things” and Anita Zhang’s “How (Not) To Get Into Tech,” offering diverse perspectives on the open source journey.
Dynamic Sessions and Workshops: The event featured many sessions catering to diverse interests, with popular choices including panel discussions from the Fedora Council and FESCo, “What does Red Hat want?”, and numerous talks on Bootc.
Diverse Tracks: Attendees explored three tracks, including the Flock general track, the CentOS track, and the Mentor Summit track. These tracks took place across several rooms, including the Red Hat main stage and the breakout rooms sponsored by Lenovo, Rocky Linux, and Microsoft Azure. Topics ranged from technical deep dives (OpenQA, Konflux, eBPF, AI/ML, RISC-V) to community-building initiatives and DEI efforts.
International Candy Swap: This unique tradition fostered cross-cultural connections as attendees shared sweets and stories from their home countries.
Fedora Community Engagement
The Fedora community was actively involved throughout the event.
Community members contributed as speakers, workshop leaders, and attendees, driving discussions and knowledge sharing.
Topics like Bootc and the future of Fedora sparked lively conversations and collaboration.
Community Participation and Recognition
Booth Staff/Volunteers: The dedicated volunteer team, including those listed on the Flock 2024 – Volunteers sign-up sheet, ensured a smooth and welcoming experience for all attendees. Their tireless efforts were critical to the event’s success.
Speakers: A diverse group of speakers shared their expertise and insights, fostering learning and inspiring attendees. Two of my fellow interns, Adrian Edwards and Roseline Bassey, and I gave a hybrid presentation of pre-recorded and live sessions as part of the Mentored Projects Showcase. You can see the full speaker list on the conference schedule.
Virtual Events
Livestream & Recordings: The event’s first two days were live-streamed and recorded, allowing virtual participants to engage with the content.
Viewership Data:
Day 1:
Red Hat Room: Over 1,000 views
Lenovo Room: 888 views
Rocky Linux Room: 427 views
Microsoft Azure Room: 324 views
Day 2:
Red Hat Room: 674 views
Lenovo Room: 288 views
Rocky Linux Room: 286 views
Azure Room: 213 views
Online Engagement: Social media platforms buzzed with activity, with attendees and virtual participants sharing their experiences and insights using the official hashtags #FlockToFedora and #FlockRochester.
Challenges, Lessons Learned, and Recommendations
Streaming Capabilities: Available resources constrained our ability to live stream the entire event. While creative scheduling minimized the impact for virtual attendees, we hope to explore improvements to the event streaming setup in the future.
Weather Disruption: The scavenger hunt was unfortunately affected by a storm, highlighting the importance of contingency plans for outdoor or weather-dependent activities.
Survey Feedback: We conducted a post-event attendee survey to collect feedback from attendees and speakers. This valuable input has guided us and will continue to guide us in implementing improvements and addressing areas of concern for future Flock events.
Conclusion
Flock to Fedora 2024 was a success, showcasing the strength and vibrancy of the Fedora community. We are already looking forward to Flock to Fedora 2025! To stay informed about future events and opportunities to get involved, visit the Fedora Project website, join the Fedora Matrix room, and follow us on social media!
Special Thanks to Our Sponsors
We extend our deepest gratitude to all our sponsors, whose generous support made Flock to Fedora 2024 possible. Your commitment to open source and the Fedora community is invaluable.
Josh and Kurt talk about the Meshtastic open source project. It’s a really slick mesh radio system that runs on very cheap radio equipment. This episode isn’t very security related (there are a few things), but it is very open source.
Oddly enough, as I parked the car, Allie Sherlock's first single was playing
on the radio. I photographed
Allie Sherlock and Zoe Clark four months ago. We can look forward to
the day when Fergus's hit Roadkill comes on the radio while driving.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 21st October – 25th October 2024
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Fedora Week of Diversity took place June 17-22, 2024. This week is the successor to Fedora Women’s Day, originally started in 2016 as a celebration of the diverse people who make up our Fedora community.
From inspiring interviews to engaging virtual sessions hosted on Matrix, this year’s Fedora Week of Diversity showcased the strength and spirit of the community. Attendees registered for the event through Pretix, and session recordings were made available on YouTube for wider access.
For this year’s Flock, the team planned a half-day hackfest. This session was designed to welcome newcomers, provide insight into the DEI team’s ongoing efforts, and outline our plans for the future.
The agenda for the session included:
Introduction to the DEI team, including our GitLab workflow and team meetings
Retrospective on the Fedora Week of Diversity
Ticket triage on GitLab and collaboration on pending tickets
Next steps for the DEI team
Goals of the hackfest were:
Provide a clear understanding of our team’s mission, vision, and workflow to new people interested in learning more about the DEI team.
Outline upcoming events and gather input on event ideas/themes. Ideally, create a team of people interested in organizing the events.
Identify strategies to better support DEI initiatives and regional communities.
Flock was held from Wednesday, August 7th to Saturday, August 10th at the Hyatt Regency Rochester in Rochester, New York, USA. You will have to wait for our Q3 report to see how it went!
Planning took place for the half-day event focused on mentoring best practices. This year’s Mentor Summit aimed to focus on workshops and sessions to promote mentorship best practices and to connect mentors and mentees across the Fedora community.
FMS was held together with Flock and took place on Saturday, August 10th. You will have to wait for our Q3 report to see how it went!
Given the influx of new team members the last few months, the team decided it would be good to standardize how we onboard new members. Two changes were made:
A repository was created for the team to store pictures, videos and slides from DEI events. This was created to help us with future content creation for the team.
Looking ahead to 2024 Q3!
As mentioned above, you can look forward to summaries of how the DEI Hackfest and the Fedora Mentor Summit went.
Recently I was asked the same question both at my workplace and at EuroBSDCon, the conference where I was presenting: where are you speaking next? I had no definite answer. Of course, I am looking forward to the FOSDEM CfP, but I am also looking for new conferences where I can present syslog-ng and sudo. Do you have any recommendations?
A bit of history
I have been presenting syslog-ng at various events for almost 15 years, and sudo for 6 years. I have given many talks and held tutorials in Hungary, across Europe, and even a few in the US.
To me, giving a talk is a two-way process: not just sharing information, but also learning from the users how they use our software, what their experiences are and what features they would like to see. You can read more about it in my OpenSource.com article at: https://opensource.com/article/21/1/open-source-evangelist
Over the years some of my favorite events disappeared, like LOADays, a small Linux / DevOps conference in Belgium. Other events have shifted focus, so talking about syslog-ng is not relevant anymore. Instead of going back to an ever-shorter list of events, I want to present syslog-ng at new places.
My talk topics
I can talk about syslog-ng, sudo, and their combinations.
When it comes to sudo, I usually talk about some of the lesser known and/or latest features. Most people only know that sudo is a prefix to run commands as root, but I can show that sudo knows a lot more than simply making you root. You can record sessions, extend it in Python, log sub-commands, and a lot more: https://opensource.com/article/22/2/new-sudo-features-2022
I have some short, introductory tutorials for both syslog-ng and sudo, 2.5-3 hours each. The syslog-ng one helps you to get from the absolute beginner level to a good mid-level, allowing you to collect log messages, process them, filter them, and finally to store them to files and to Elasticsearch. The sudo tutorial lets you try some of the latest sudo features, like session recording, extending sudo in Python, or logging sub-commands.
Send me your conference recommendations
So, returning to the original question: do you have any recommendations on where I should present syslog-ng and sudo? There are way too many events, so here are a few pointers that could narrow my preferences down:
Open source events, as both sudo and syslog-ng are open source software. FOSDEM, All Things Open, and many others belong in this category.
Security events, as sudo is definitely a security tool, and syslog-ng is also often used by security professionals. Bsides events or Pass the SALT belong in this category.
Obviously, the combination of the two is the best, like Pass the SALT.
Small events with a strong focus on networking both for speakers and participants. Some examples are the previously mentioned Pass the SALT, and LOADays. Discussions at these events had huge impact both on sudo and syslog-ng development.
Large events, like FOSDEM, or All Things Open, as they allow reaching many users at once from all around the world. In-depth personal discussions are often impossible, however I often hear months or even years later that my talks have induced some great changes, like adding syslog-ng to an appliance, creating Python-based drivers, and so on.
Operating system-specific events, like the openSUSE conference, or EuroBSDCon, which have larger syslog-ng communities and/or use sudo.
Where? The main focus is the EU simply because that is the easiest to reach for me. Not even a passport is needed. However, I also gave talks at some of the largest US conferences, like All Things Open, or the RSA Conference, and in Croatia before it became an EU member.
Yes, I am aware that some of these points are contradictory. Still, I need some recommendations, as the number of events is huge. Personal experience and recommendations as a participant, speaker or organizer could be really helpful in finding a few new places to present syslog-ng and/or sudo.
This change is being made to reflect actual practice. For example, there is clear overlap between the use-cases and potential user-bases for the current Server, Cloud, and CoreOS Editions, but each takes a different approach. We are currently considering adding an exception for a KDE Desktop Edition, which would overlap with Fedora Workstation.
Currently, part of the policy reads in a way that prevents this exception from being possible:
A Fedora Edition:
addresses a distinct, relevant, and broad use-case or user-base that a Fedora Edition is not currently serving;
is a long term investment for the Fedora Project; and
is consistent with all of Fedora’s Four Foundations.
We propose an additional line:
The Council may make exceptions to the “distinct” rule when we determine that doing so best fits the Project’s Mission and Vision.
This topic is open for community discussion, following our Policy Change Policy. After two weeks, the Council will vote in a new ticket, and if approved, the policy will be updated.
Approval of this change would not automatically mean the approval of a KDE Desktop Edition, but would allow that possibility.
The latest version of iOS brought some nice updates that let you customize the look and feel even further than before. It has allowed me to set two shortcuts on the home screen (one for my SwitchBot app and one for my custom MacBook controller web app). I also like the larger dark mode icons without the text labels below them. Here are my current screenshots and setup:
When Steve Jobs came back to Apple, he drew a 2×2 grid to reorganize the Mac product lineup and offerings. I believe the iPhone needs a similar grid in 2×3 form, for example:
Kamal’s configuration comes with one primary proxy role to accept HTTP traffic. Here’s how to think about proxy roles and how to configure others.
Kamal roles
Roles are a Kamal way to split application containers by their assigned role. A web role runs the application server, a job role runs a job queue, and an api role might run an API server. They all run the same Docker image but can be started with a different command:
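A sketch of what this can look like in config/deploy.yml (the hosts and the job command below are placeholders, not values from this post):

```yaml
servers:
  web:
    hosts:
      - 192.168.0.1
  job:
    hosts:
      - 192.168.0.1
    cmd: bundle exec sidekiq
```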
A primary role is the role that's booted first. Other roles boot only once at least one container of the primary role has passed its health check. We can change the name of the primary role with the primary_role directive.
Proxy roles
A proxy role is a role that uses Kamal Proxy to accept requests. By default, Kamal sets up the web role to accept requests as we have seen. But we can have more roles to accept requests.
Let’s say we want to run an API server alongside the main application:
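For illustration, one way this might look in config/deploy.yml (a sketch only: the hosts, command, and hostname are placeholders, and the assumption here is that giving a role a proxy: section makes it a proxy role):

```yaml
servers:
  web:
    hosts:
      - 192.168.0.1
  api:
    hosts:
      - 192.168.0.2
    cmd: bin/api
    proxy:
      host: api.example.com
```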
Now Kamal will boot our application and, shortly after, start the API server. Since it's a proxy role, it also comes with the same health check run by Kamal Proxy (checking the /up path on port 80).
Similarly we could split the application into backend and frontend:
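Such a split might be sketched like this (again an assumption-laden illustration; hosts, commands, and hostnames are placeholders):

```yaml
servers:
  frontend:
    hosts:
      - 192.168.0.1
    proxy:
      host: example.com
  backend:
    hosts:
      - 192.168.0.2
    cmd: bin/backend
    proxy:
      host: api.example.com
```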
(I worked on this feature last year, before being moved off desktop related projects, but I never saw it documented anywhere other than in the original commit messages, so here's the opportunity to shine a little light on a feature that could probably see more use)
The new usb_set_wireless_status() driver API function can be used by drivers of USB devices to export whether the wireless device associated with that USB dongle is turned on or not.
This will be used by user-space OS components to determine whether the
battery-powered part of the device is wirelessly connected or not,
allowing, for example:
- upower to hide the battery for devices where the device is turned off
but the receiver plugged in, rather than showing 0%, or other values
that could be confusing to users
- Pipewire to hide a headset from the list of possible inputs or outputs
or route audio appropriately if the headset is suddenly turned off, or
turned on
- libinput to determine whether a keyboard or mouse is present when its
receiver is plugged in.
This is not an attribute that is meant to replace protocol specific
APIs [...] but solely for wireless devices with
an ad-hoc “lose it and your device is e-waste” receiver dongle.
Currently, the only two drivers to use this are the ones for the Logitech G935 headset and the SteelSeries Arctis 1 headset. Adding support for other Logitech headsets would be possible if they export battery information (the protocols are usually well documented), and support for more SteelSeries headsets should be feasible if the protocol has already been reverse-engineered.
As far as consumers for this sysfs attribute, I filed a bug against Pipewire (link) to use it to not consider the receiver dongle as good as unplugged if the headset is turned off, which would avoid audio being sent to headsets that won't hear it.
UPower supports this feature since version 1.90.1 (although it had a bug that makes 1.90.2 the first viable release to include it), and batteries will appear and disappear when the device is turned on/off.
Our team has put a lot of effort into the possibility of
building modules in Copr. This feature went through
many iterations and rewrites from scratch as the
concepts, requirements, and goals of Fedora Modularity kept
changing. This will be my last article about this topic because we are planning
to drop Modularity and all of its functionality from Copr. The only exception
is module hotfixes, which are staying for good.
Why?
The Fedora Modularity project never really took off, and building modules in
Copr even less so. We’ve had only 14 builds in the last two years. It’s not
feasible to maintain the code for so few users. Modularity has also been
retired since Fedora 39 and will die with RHEL 9.
Additionally, one of our larger goals for the upcoming years is to start using
Pulp as a storage for all Copr build results. This requires
rewriting several parts of the backend code. Factoring in reimplementation for
module builds would result in many development hours wasted for very little
benefit. All projects with modules will remain in the current storage until
Modularity is finally dropped.
Schedule
In the ideal world, we would keep the feature available as long as RHEL 9 is
supported, but we cannot wait until 2032.
October 2024 - All Modularity features in Copr are now deprecated
April 2025 - It won’t be possible to submit new module builds
October 2025 - Web UI and API endpoints for getting module information will
disappear
April 2026 - All module information will be removed from the database, and
their build results will be removed from our storage
Communication
It was me who introduced all the Modularity code into Copr, so it should also be
me who decommissions it. Feel free to ping me directly if you have any questions
or concerns, but you are also welcome to reach out on the
Copr Matrix channel, mailing list, or in the form of
GitHub issues. In the meantime, I will contact everybody who
submitted a module build in Copr in the past two years and make sure they don’t
rely on this feature.
To ensure compatibility and safety, package-installed versions of Cockpit only allow connections to remote machines without Cockpit installed when they run the same operating system version. Future releases may relax this limitation.
The systemd docs talk about UsrMerge, and while bootc works nicely with this, it does not require it and never will. In this blog we’ll touch on the rationale for that a bit.
The first stumbling block is pretty simple: for many people shipping "/usr merge" systems, a lot of backwards compatibility symlinks are required, like /bin → /usr/bin etc. Those symbolic links are pretty load-bearing, and we really want them to not just be sitting there as random mutable state.
This problem domain really scope-creeps into the question of how / (aka the root filesystem) works.
There are multiple valid models; one that is viable for many use cases is where it’s ephemeral (i.e. a tmpfs) as encouraged by things like systemd-volatile-root. One thing I don’t like about that is that / is just sitting there mutable, given how important those symlinks are. It clashes a bit with things like wanting to ensure all read files are only from verity-protected paths and things like that. These things are closer to quibbles though, and I’m sure some folks are successfully shipping systems where they don’t have those compatibility symlinks at all.
The bigger problem though is all the things that never did “/usr move”, such as /opt. And for many things in there we actually really do want it to be read-only at runtime (and more generally, versioned with the operating system content).
Finally, /opt is just a symptom of a much larger issue that there’s no “/usr merge” requirement for building application containers (docker/podman/kube style) and a toplevel, explicit goal of bootc is to be compatible with that world.
It’s for these reasons that while historically the ostree project encouraged “/usr merge”, it never required it and in fact the default / is versioned with the operating system – defining /etc and /var as the places to put persistent machine local state.
The way bootc works by default is to continue that tradition, but as of recently we default to composefs which provides a strong and consistent story for immutability for everything under / (including /usr and /opt and arbitrary toplevels). There’s more about this in our filesystem docs.
In conclusion I think what we’re doing in bootc is basically more practical, and I hope it will make it easier for people to adopt image-based systems!
Hello testers,
starting today we are introducing several important changes to our subscription backend.
New Quay.io accounts
Subscribers have the ability to access extra
container images via private docker repositories.
What is changing: your Quay.io account username will no longer be based on your email address;
instead it is based on the subscription ID, which is more stable over time.
Discovering your credentials is explained on the page above.
New accounts have been created automatically for those eligible!
Old accounts will continue to be active until December 31st 2024,
afterwards they will be removed! Make sure to update your
workflows with the new credentials before December 31st!
Automatic account creation for new subscriptions
Previously subscribers who purchased a subscription were required to create
an account on https://public.tenant.kiwitcms.org using the same email address
used during their purchase.
What is changing: user accounts for new subscriptions will be created automatically
if they do not exist, and a random password will be assigned to them. Customers will
be able to reset passwords for these accounts via
https://public.tenant.kiwitcms.org/accounts/passwordreset/! The account username
is sent as a reminder in the password reset email!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
With the transition of applications from Fedmsg to Fedora Messaging inching towards completion, today we want to introduce a new service: Webhook To Fedora Messaging. It has been researched and developed by Fedora Infrastructure team members, together with an Outreachy mentee, over the last quarter to communicate with services using webhooks.
Webhook To Fedora Messaging takes webhook events from services and translates them into semantic messages to be sent over the Fedora Messaging bus, which every Fedora Project application can listen to and act on for automation. Currently, the project supports GitHub, but going forward we plan on implementing support for services like Discourse, GitLab, Forgejo etc.
As this service was designed to be the successor to the existing Github2Fedmsg service, we are also announcing that Github2Fedmsg is now deprecated and users are encouraged to migrate to the newer service. If you are an existing user of the Github2Fedmsg service, please open a private ticket in the fedora-infra/w2fm-registration repository using the template named “Github2Fedmsg Migration Request”.
Additionally, as GitHub allows for managing webhooks at an organizational level, users migrating from the GitHub2Fedmsg service can explore the functionality by visiting the page https://github.com/organizations//settings/hooks. Once the changes have been made by the owner of the GitHub organization, the activities from all repositories can be conveniently relayed on the Fedora Messaging bus.
Recently I got myself a Thinkpad X1 Tablet Gen 3. However, unlike other Thinkpads, this model does
not seem to use the usual Thinkpad kernel module, which allows sensitivity to be set in sysfs.
Thanks to some inspiration from ChatGPT, I found out that it is possible to apply a multiplier
to the event stream by building a daemon that intercepts the input events and modifies them.
So here’s how you can do it:
Dependency
On Fedora, you will need python3-evdev installed
sudo dnf install python3-evdev
Event modification script
Save this in /usr/local/bin/alter-sensitivy.py
from evdev import InputDevice, UInput, ecodes
import time
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-d', '--device', required=True)
parser.add_argument('-m', '--multiplier', type=int, default=2)
parser.add_argument('-t', '--trigger-threshold', type=int, default=2)
args = parser.parse_args()

# Open the input device
device = InputDevice(args.device)

# Create a virtual input device to emit the modified events
ui = UInput.from_device(device, name="ModifiedDevice")

# Loop to process incoming events
for event in device.read_loop():
    if event.type == ecodes.EV_REL and abs(event.value) > args.trigger_threshold:
        # Modify only relative movement events if above threshold
        for i in range(args.multiplier):
            print(event.value)
            ui.write_event(event)
            ui.syn()
            time.sleep(0.0005)  # for smoothing out the movement
Find your device using sudo libinput list-devices and evtest [device].
Test it out. In this example I'm altering events from the /dev/input/event5 input device, which is
my trackpoint, e.g. sudo python3 /usr/local/bin/alter-sensitivy.py -d /dev/input/event5.
The -m option allows you to configure how much you want to multiply the event (default is 2).
Josh and Kurt talk to Seth Larson from the Python Software Foundation about securing the Python ecosystem. Seth is an employee of the PSF and is doing some amazing work. Seth is showing what can be accomplished when we pay open source developers to do some of the tasks a volunteer might consider boring, but which are super important work.
The config.assume_ssl setting adds the ActionDispatch::AssumeSSL middleware, which sets the following HTTPS headers:
HTTPS to on
HTTP_X_FORWARDED_PORT to 443
HTTP_X_FORWARDED_PROTO to https
rack.url_scheme to https
This is useful when running Rails with force_ssl enabled behind a load balancer or proxy that terminates the SSL connection, as it prevents ActionDispatch::SSL from auto-redirecting to HTTPS.
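For illustration, enabling this in an app served behind a TLS-terminating proxy might look like the following (a sketch; the file path is the conventional Rails one):

```ruby
# config/environments/production.rb
Rails.application.configure do
  # The proxy terminates TLS, so requests arrive over plain HTTP;
  # tell Rails to treat them as HTTPS instead of redirecting.
  config.assume_ssl = true
  config.force_ssl = true
end
```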
Cybersecurity is a vital topic for Switzerland and
social engineering attacks are a significant issue in the realm
of cybersecurity.
Organizations like Google, Facebook and LinkedIn could be seen as a
very effective social engineering attack against Swiss culture and
privacy.
Frans Pop, the
Debian Day Volunteer Suicide Victim, had sent at least one of
his suicide notes on debian-private gossip network the
night before Debian Day. If an organization can get into somebody's
head like that, such that decisions about life and death revolve
around this software, we could contemplate the possibility that
Frans Pop died under the influence of a social engineering culture.
Adrian von Bidder died on the same day that Carla and I got married.
Why can't we ask questions about that?
Switzerland reportedly has
a higher per-capita ratio of Debian Developers than any other country
except perhaps Ireland. Yet according to Shuttleworth's
email, many of these people have a loyalty to Debian culture that is above
their loyalty to Swiss employers and Swiss law. This dual
allegiance appears to be a sign that they are under the sway of
social engineering or at risk of external influence.
By way of background, in 2006, Adrian and Diana got
married. In 2007,
the suicide petition to Basel Stadt authorities
was signed by A. von Bidder.
In August 2010, we had the confirmed suicide of Frans Pop, the
warning from Mark Shuttleworth and a sustained period of stress
among volunteers in the Debian Developer world.
In April 2011, Adrian von Bidder died. It was discussed like a suicide
but they told us casually that it could be a heart attack.
There was no comment about whether the couple had any children
during the five years of their marriage.
On 28 April 2011, very soon after von Bidder died, Diana modified his blog,
adding a new post:
Sadly, I have to make an end to this blog. Adrian - my husband - died on april 17th of a heart attack.
Adrian von Bidder had made various blog posts with critical commentary
about the risks of social media and other devious enterprises.
Many of his concerns have been proven correct by the passage of time.
Yet I feel the manner in which Diana writes "I have to make an end to this
blog" has an air of disapproval for Adrian's work. Then again,
this must have been a very disturbing time for Diana and on top
of that, English may not be her native language so the tone
of her comments may not reflect her real thoughts and feelings about
the subject.
Some time later, Diana completely erased the blog, removed the DNS
entry for blog. and placed a picture on the main page
fortytwo.ch.
The picture's metadata tells us
it was taken on 20 January 2011 with a Canon EOS 40D, possibly
the camera Adrian discussed in some of his blog posts.
We know that other Debian Developers in Switzerland were subject
to social engineering attacks involving blackmail and public
humiliation. One of those cases was the blackmail of
Daniel Baumann. Did Adrian von Bidder receive similar messages
in the days before his heart attack?
Did Adrian von Bidder communicate with anybody before his
heart attack, for example, leaving a note? In English-speaking countries,
all these things are published by the coroner's office. In
Switzerland, it is the opposite, evidence is only given to those
in close proximity to the deceased. At the time, Diana may not
have known about the earlier suicide of Frans Pop. She may not
have realized there was the risk of a connection between deaths
in a single community. Now
that the suicide cluster is public knowledge, is it time for a fresh
discussion about that?
Most cybersecurity experts around the world believe that
transparency is important for education and mitigating risks.
Here is a photo of Diana and Adrian on their wedding day:
Hitler and the Nazis were obsessed with the idea that Jews
could be identified by a distinctive smell. While America was
building the A-bomb, Hitler
diverted science funding to research the Jewish smell.
The smell was rumored to resemble sulfur.
It makes the case that there was a shift in the way that smell, beginning in the late nineteenth century, was used to not simply demarcate groups but, in addition, to supposedly detect ‘race’ and ethnicity.
Prominent Debian Developer Daniel Pocock has recently released
details of the
Swiss harassment judgment. His former landlady, an organizer of
the SVP senioren (far right Swiss seniors group) had started rumors
about a smell coming from Pocock's cats. Even the judge asked
if it could be acceptable to pose questions about this imaginary smell.
Obviously the judge was not familiar with this awkward similarity
to the persecution of Jewish and African people throughout
history.
For about six years now, people have been creating gossip about
harassment and abuse against various Debian co-authors. Nobody ever
provided any evidence.
Earlier this year, when I nominated in the European elections, the
misfits were desperate to attack me but they didn't have any grounds to do so.
They waited until the last minute before voting began and on
6 June 2024, the day before voting, they
published a document that appears to be invalid, full of forgeries, racism
and nonsense.
But wait, there really was a harassment case and a judgment.
With the Irish General Election approaching, I am considering whether
to nominate again and it is really important that people can see the
truth about who really harassed who.
Swiss racism, cats of colour, women harassing women and a 10,000 Swiss franc settlement
The only mistake I made was taking black cats to Zurich.
The real Debian harassment story is about women harassing women
and occasionally, a woman harassing our cats and women harassing men.
In Switzerland, both in the law and in the culture, when you have
a harassment problem like this the matter is usually settled privately
and everybody moves on with their life as quickly as possible.
Carla and our black cats, who are also female victims, were subject
to racism from a white Swiss woman. We received a payment of CHF 10,000.
Surely I would have rushed to publish that on my blog the same day.
But I didn't publish it before. When the WeMakeFedora case was resolved,
I immediately put it on my blog. But in the case of the harassment
in Zurich, I wanted to respect all the parties involved, I wanted to respect
the Swiss cultural approach to such disputes and just
put it out of my mind and get on with serious problems.
Nonetheless, Debianists, including people like Axel Beckert at
ETH Zurich and at the Google office in Zurich have been stirring
up rumors about the harassment and paw behavior for six years.
Ironically, the Google engineering headquarters for Europe
is located in Zurich and Google's role in spreading rumors about
the harassment case had actually undermined the privacy that
people used to take for granted in Switzerland.
Women harassing women: a common problem
In the case of serious violent crime against women, the majority
of perpetrators appear to be male.
In the case of less tangible crimes, like harassment, stalking,
racism and even sexism, we can find many cases where women are
either protagonists or associates of an offender.
The recent Netflix series
Baby Reindeer
cast a spotlight on the story of a woman harassing a male
employee at a bar.
In 2021, we saw a female volunteer, Molly de Blanc, start an online
petition
harassing her former boss, Dr Richard Stallman at the FSF. Approximately
three thousand people joined the petition but a petition about a person
is not a real petition at all, it is harassment. de Blanc made the petition
more than two years after leaving her job at the FSF.
In a previous blog, I looked at the case of another non-developing
Debian volunteer, Laura Arjona,
harassing one of my female interns in the Outreachy program.
After learning that this goes on behind mentors' backs, I didn't
volunteer to be a mentor again.
Then there was Amaya Rodrigo Sastre who helped spread the rumors
that Ted Walther's partner at the DebConf6 dinner was
alleged to be a prostitute. In fact, the woman was a dentist and
these rumors were disastrous for her reputation.
Ariadne Conill from the Alpine Linux project, which has no
relationship to Switzerland as far as I can tell, was spreading
the rumor that my intern in Google Summer of Code was my girlfriend.
The rumor was offensive to me but even more offensive to the intern
because
that was the year she got married.
Shortly before DebConf15, we received
nasty messages from Margarita (Marga) Manterola of Google telling us
that Carla is not welcome to eat the food at DebConf, despite the
fact that other women like Marga go there with their husbands every year.
While waiting for the train to go down the
Uetliberg one day, Carla and I were talking to a British woman
in the playground beside the railway station. The woman told
us about her Swiss landlady, a little old lady, who had been
whinging and whining about the behavior of her small children.
The Swiss landlady had become quite obsessed and had even been
caught at the window taking pictures of the way the children played
inside their home.
Looking at the
invalid and falsified legal documents distributed
by rogue members of Debian, we can find various references to
my Irish heritage. Everybody seems to know that I was born and
raised in Australia. I acquired Irish citizenship because my mother
is from Ireland. We find that the racist women in Switzerland, and
we'll see more of them in this blog, are not classifying people based
on our skills and talents, they are obsessed about little things
like my mother's Irish heritage. In fact, some of these
documents were prepared by two women in Zurich,
Pascale Koster and Albane die Ziegler. The documents don't mention
that I am a citizen of three countries, they emphasize my Irish
heritage as some kind of a hint to their racist colleagues
that my mother and I should be treated badly in Zurich.
What we see here is another example of women being offensive to
other women.
One of the most well known examples of women exhibiting poor
behavior to other women in Zurich was the infamous Oprah Winfrey
handbag incident. A woman in the handbag shop refused to let
Oprah look at a particular handbag. Oprah gives a testimony about
her experience with the Swiss saleswoman (Kauffrau) in this
video:
This brings us to the point where we will consider the paw behavior
of a Swiss landlady towards Carla and our black cats, who are both female
cats, so there was a female offender and three female victims.
I don't wish to make the generalization that all women are like
this. I've worked with many professional women who act with
integrity in everything they do. But when we see gossipmongers
making up stories about harassment in groups like Debian, we need
to remember the risk of listening to attention seekers
and their paid lawyers/liars. Gossip and
social engineering attacks go hand in hand and if we care
about cybersecurity, we need to call out gossip behavior.
Harassment and racism are not only Swiss problems
Before rushing to any conclusion about racism in Switzerland,
we need to remember that there is racism in every country.
When we look at the concerns about Brexit in the United Kingdom,
there was a lot of racism during the campaign period before the
referendum. Some of the practical changes in the UK, like
canceling the driving licenses of foreigners, actually happened
before the Brexit referendum. Likewise, whenever there is a Swiss
referendum about the relationship with the EU, some people
may voice racist opinions about the subject but there may be
some valid political or economic discussions that take place
at the same time.
We can also ask the question: are there times when Swiss citizens
are subject to extreme acts of bullying or extreme injustice by
employers, landladies or the public authorities? In fact,
some examples do exist.
Looking at the JuristGate
affair, we can see that the rogue legal protection scheme, which
smells like a ponzi scheme, had both Swiss customers and foreign
customers. All the customers lost their money at the same time.
When FINMA shut down the rogue insurance, they hid the details
from everybody, both Swiss and foreign clients were kept in the dark
to an equal extent. Therefore, there was extraordinary injustice,
there were some foreign clients but racism wasn't the main theme
in JuristGate.
When I look at
the case of Adrian von Bidder (avbb / cmot),
the Debian Developer who died on our wedding day,
I wonder if he had one of the same bad experiences that
foreigners often complain about in Switzerland. For example, did one
of the health insurance companies bungle a treatment for his wife
or did an employer fail to make contributions to his pension scheme
and then go into liquidation?
Here is a photo of Diana and Adrian on their wedding day:
In Swiss culture, sensitivity about the cause of death is
an important consideration. After blogging the
initial evidence about how the death was discussed in
the debian-private gossip channel, I came to realize that Adrian's
widow, Diana, was listed as a member of the Basel City
parliament. In such cases, there is obviously even more opportunity
to ask questions about the interaction between the death and any
environmental or cultural factors, whether in Debian or in his community,
but at the same time, the cultural aversion to asking those questions
is a very steep obstacle.
Real harassment, real evidence ordered chronologically
Some time in 2017 or 2018, Chris Lamb, former leader of the
Debian project, started making mischievous references to harassment.
He didn't provide any facts, dates, victims or evidence.
Most of the larger property management companies in Zurich and
Switzerland are somewhat consistent in their application of tenancy
regulations.
When people find a nice apartment with a responsible landlord, they
usually keep the apartment for a very long time.
Some smaller buildings, usually sized between five and ten apartments,
are owned by a resident landlord/landlady. This gives rise to the
phenomenon where the landlady and tenant may cross paths almost every day.
It goes without saying that the turnover of tenants in some of these
owner-occupier buildings is much higher than in the buildings owned by
a silent investor.
Web sites advertising the apartments sometimes have a checkbox and
filter option for potential tenants to exclude apartments with a resident
landlady (Vermieter wohnt im Mehrfamilienhaus, "the landlord lives in the building"). Most people
who have had a bad experience with one of these will go out of
their way to avoid them in future.
Due to the very high turnover in buildings with a resident landlady,
the number of advertisements for such apartments is disproportionate
to the number of buildings that don't have a resident landlady.
Laundry duties & the status of women
Very new buildings in Switzerland have a washing machine and
clothes dryer in every apartment. Most traditional buildings and
some new buildings have a laundry room or drying room shared by all the
tenants. Most buildings have a handwritten roster where the tenants
can reserve the machines for a particular day.
You may only have one reservation to use the laundry every two weeks.
If that reservation falls on a work day and you have multiple loads
of washing to do then it can be very inconvenient. Nonetheless,
nobody sees any urgency to change this system. There is a prevailing
attitude that the wife or girlfriend will stay home on the laundry day
and ensure that all the clothes are nicely washed, dried and folded
and the laundry room is left in a proper state for the tenant
who will use it on the following day.
Switzerland is notable for its neutral status and hosting diplomats
from around the world at the United Nations in Geneva. But if
the washing machine breaks down and one tenant's drying time
runs over into the next day, there is anything but diplomacy
and tenants regress to communicating with each other through handwritten
notes written in
one of the four official Swiss languages.
The application process, religious harassment and cats
When tenants arrive to visit a prospective apartment, they are
given an application form that must be completed for the landlord
or letting manager.
They tend to ask more questions than necessary. It is not unusual
to find questions about your religious affiliations on the form.
We can quickly find examples of these forms in a search engine
by searching for words like Anmeldungformular and
Konfession (
Example 1,
Example 2,
Example 3).
In effect, if your religion has been
persecuted in Switzerland,
you may well feel that filling out the application form
is an experience of harassment.
News articles appear from time to time about whether or not
you should declare your religion. (
Example 1,
Example 2,
Example 3).
Not every Anmeldungformular asks about religion but
it is almost certain they will ask about your pets and musical
instruments. It is a good idea to answer those questions honestly
in any country. While some landlords and letting agents will decline
certain requests, others will be quite
happy to direct you to the most suitable apartments for your
lifestyle.
Whenever we applied for any apartment in Switzerland, we did
so with total honesty and integrity. We declared our cats
(Katzen):
Specifically, we have written Hauskatzen, which literally
translates to house cats. In other words, we are not
talking about something exotic like a tiger or panther.
No room for undocumented aliens
The confession of cat ownership led to a flurry of paperwork
mediated by the letting agent. Everybody who rents an apartment
in Switzerland is expected to purchase a civil liability insurance
and pay three months of rent as a security deposit.
In our case, that simply wasn't enough. The landlady insisted
that we sign a guarantee against any paw behavior by our cats:
Costs anticipated by this document were already anticipated
by the security deposit and our civil liability insurance. Therefore,
I feel this additional cat contract was superfluous. Can we call
it harassment or bullying?
Fair wear and tear
Switzerland has high standards for construction and due to
the level of wealth, even the most mundane apartments typically
have very high quality components in their bathrooms and kitchens.
It is typical to have mixer taps on the showers and sinks, good water
pressure and wall mounted toilets.
When tenancies are concluded in Switzerland, the apartment or
house is subject to a forensic examination that may last several
hours.
It is expected that the tenant leaving an apartment will arrange
to have it cleaned back to the original state before the inspection
day.
Even if the bathroom is 30 or 40 years old, the high quality
components still look like new after each cleaning.
Nonetheless, internal components like washers and gaskets don't
last forever, no matter how beautiful the sinks and toilet bowls
appear on the outside.
In this particular apartment we experienced the failure of both
the shower mixer and the gasket joining the cistern to the toilet
bowl. Both of these things failed within a short span of time.
The plumber came promptly to make the necessary repairs.
Nonetheless, after the drama about whether our cats were a national
security risk, we were never on a good footing with this
particular landlady. She was 76 years old and the far right party,
of which she was a member, was constantly warning her to be
on the lookout for mischievous foreigners.
If you look at the far right propaganda circulated in advance of
referendums and elections in Switzerland, the foreigners are
typically depicted in black, like our cats.
At Kaltbad on the Rigi, we found a white cat in the snow:
A large professional landlord company with thousands of
apartments probably wouldn't worry about the cost of repairing
these washers and gaskets. On the other hand, for these owner-occupier
landladies who like to micro-manage their tenancies, some of them
stay up all night worrying about whether
tenants (or cats) do something like this as a prank.
Here is the report about the shower defect about two weeks
after we moved in. There is no way that tenants or cats could
have put rust into the pipes. These are simply the problems of
an old building.
Subject: bath / shower water problems
Date: Thu, 1 Dec 2016 09:39:29 +0100
From: Daniel Pocock <daniel@pocock.pro>
To: Letting agent
Hi [redacted],
The plumber visited today, he replaced the dishwasher door and the
shower hose.
He also looked at the flow from the hot water tap in the shower. He
found a lot of rust inside the tap.
He removed the hot and cold taps, cleaned out the taps and ran the water
directly from the pipes in the wall. A lot of rust came out of both hot
and cold pipes.
- the hot water pipe is now flowing better, but it is still less than normal
- water from both hot and cold pipes still has a slight red colour
He said he will contact you to explain and discuss how it can be fixed.
Regards,
Daniel
Cat smell letter
While I was on a trip to the UK, Carla received this ugly letter:
It says there is an unknown smell in the common areas and
it asks if the smell could come from our cats or deficiencies
in cleanliness.
Carla and the cats were really sad.
We contacted our legal insurance and had a lawyer draft a
response. We hoped that would be the end of the matter.
The window nazi
Then came the windows. There are 11 apartments in the
building and somebody would sometimes open one of the windows
in the stairwell and leave it open.
The landlady became obsessed with closing the windows and
leaving handwritten notes on them.
Mediation requested
After some months of receiving insults in the post and in the
common areas, it reached a point
where we had to take legal action. We demanded a mediation
session at the tribunal of Zurich.
Our cats were members of our family. Everybody loved our
cats. My Italian cousin came to stay with them on several occasions:
Remarkably, the landlady sent an expensive lawyer to repeat
the accusations about a cat smell, the window in the stairs and
a dirty towel that another tenant found in the washing room.
There were no fingerprints, no paw-prints, no video evidence,
no DNA evidence, not even a whisker to link any
of these problems to us. It was just a witch hunt and, as we had
black cats, were the most recent arrivals in the building
and were foreigners, we felt we had been victimized.
Here is the accusation about a disobedient tenant who opens the window
in the stairs:
Swiss lawyer tried to deceive Swiss judge about far right membership
Early in the mediation session, the lawyer for the landlady claimed
that it wasn't clear whether or not she was really a member of the
far right political party.
We were able to show the judge that the landlady had a web site
promoting the party. Here is one of the photos, she is chairing a meeting
and the poster attached to the table has her name and face on it.
The filename tells us it is a meeting of the far right seniors committee
(SVP senioren):
In this photo, she is standing beside then president of the
Kanton parliament, Dr. Christian Huber:
Shortly after the photo
was taken, Dr Huber resigned from the parliament and resigned
from the SVP in mysterious circumstances.
Dr Huber and his spouse spent the next ten years traveling around the
European Union by houseboat. This is ironic of course, a leader
from an anti-immigration/anti-EU party living like a refugee in a boat
in the EU. In Australia the far right uses the term
boat people as a derogatory term for immigrants who travel by boat.
Mystery smell: who is defaming who?
Here is the accusation about a mysterious smell. The lawyer
is saying it is not clear where it comes from because he doesn't
want to be caught defaming foreigners directly. He doesn't provide any
expert evidence or witnesses, he basically says the landlady has a
hunch about this smell and the judge should trust the landlady.
The letting agent is also in the room and if the rumor was credible
he would have surely commented on it. I don't think he wanted
to comment about the smell at all so it came down to the expensive
lawyer to talk this imaginary smell into existence.
When I hear references to these mysterious smells, I feel it is
a way for the jurists to give each other a wink and a nod and ask
for the foreigners to be punished.
Every time Carla went down to the laundry in the basement,
the little old lady would appear. We don't know if she had
video surveillance cameras or if she spent all her day going
up and down the steps to check on the laundry.
Nonetheless, Carla had become quite upset about the cat letter
and the intrusions in the laundry and at some point I had to
start doing the laundry because it was impossible for Carla to
go down there alone.
The landlady was taken aback by the sight of a man in the laundry.
She started calling Carla's employer. We don't know what she was
hoping to achieve. Was she trying to determine if Carla had absconded?
Or was she trying to find out why the employer expected Carla to work
on laundry day?
The lawyer sent a stern letter demanding that these phone calls to
Carla's employer must cease immediately.
Frau [----] hat letzte Woche beim Arbeitsort meiner Mandantin
angerufen und unter Vorwand, sie wolle mit ihr sprechen,
gegenüber der Chefin meiner Mandnatin während ca. 30 Minuten
meine Mandanten im Zusammenhang mit dem vorliegenden Verfahren
angeschwärzt, resp. diese in ihrer Ehre verletzt.
Ich fordere Sie auf, Ihre Klientin über die Tragweite der
Bestimmungen über strafbare Handlungen gegen üble Nachrede
und Verleumdung zu informieren.
Es gab und gibt keinen Grund der direkten Kontaktaufnahme und
insbesondere keinen Grund für Ihr Klientin, beim Arbeitsort
meiner Mandantin anzurufen.
Sollte es noch einmal vorkommen, dass Ihre Klientin gegenüber
meinen Mandanten oder Dritten ausfällig wird und sich sonst
rassistisch äussert, so wird dies entsprechende Konsequenzen haben.
Ich denke auch nicht, dass das Verhalten Ihrer Klientin die
Verhandlungsbereitschaft meiner Mandanten bezüglich des
vorliegenden Verfahrens erhöht.
and translated into English:
Last week, [----] called my client's place of work and,
under the pretext that she wanted to speak to her,
spent around 30 minutes denigrating my client in connection
with the current proceedings to my client's boss, and insulted her honor.
I request that you inform your client of the scope of the
provisions on criminal offenses against slander and defamation.
There was and is no reason to make direct contact and in particular
no reason for your client to call my client's place of work.
Should your client become abusive towards my client or third parties
or otherwise make racist comments, this will have the appropriate
consequences.
I also do not think that your client's behavior increases my
client's willingness to negotiate with regard to the current proceedings.
Would a female judge in Zurich be any more sympathetic than a
female landlady? Maybe not. Here, the landlady's lawyer is explaining that
if the man (me) is busy with my job, the woman (Carla) can look for
another flat. The judge and the translator are both female.
Nobody calls out the sexism.
The search for a flat in Zurich is not a trivial task. In
German, the press refers to it as the Wohnungslotterie (apartment lottery). When
a new building is about to be completed, hundreds of prospective
tenants line up outside to submit copies of their Anmeldungformular
in person.
What we see here is Swiss feminism, that is feminism for Swiss women.
I don't think it's up to a man to give the definition of feminism.
But I feel it is safe to say that Swiss feminism or Australian feminism
are contradictions because it is basically privileged women from
rich countries who go to university and become jurists and meddle
in the lives of women from other countries.
One of the reasons we are in court in the first place is because
Carla didn't feel comfortable being that woman from Latin America
who does laundry with the Swiss landlady looking over her shoulder.
When Swiss families want to apply for apartments,
they send their foreign nannies to stand in those queues and submit
the forms.
"In the early days ... every client meeting I would be asked to get the coffee. The other male graduates
were never asked to do such things,"
She's right: in more than twenty years since I graduated, nobody ever
asked me to make coffee in the workplace. And when I tried to share
responsibility for doing the laundry in Zurich, the landlady was
opposed to the idea. She seemed to feel that women like Carla were
easier to control.
In Renens, Canton Vaud, a white cat can sleep on the steps at
the railway station and nobody complains about the risk that
somebody might trip over the cat. Every ten minutes, the metro arrives
at the top of the steps and hundreds of people come down the steps to
search for their trains. There is a serious risk that somebody could
trip over the cat and suffer an injury. If it was a black cat, would
the police come with dogs to remove it?
Everybody in west Lausanne seems to know this cat but nobody
knows who it belongs to.
Here is the part of the trial where they talk about the landlady
calling Carla's workplace about the laundry:
Who owns that towel?
Given the lack of evidence about the imaginary cat smell, the
landlady had tried to diversify her legal strategy by introducing
a dirty towel that somebody found in the washing room.
Most landlords would simply provide a basket for
lost property. Even at Swiss prices, the cost of a basket for
these elusive towels and socks would be far less than the cost
of the lawyers.
The cat smell trial consisted of four jurists, an interpreter, the
letting manager and an engineer, myself. The combined cost of our
time was over CHF 2,000 per hour for three hours in court
debating the anxieties of a landlady who didn't show up.
In comparison, many Swiss residents drive over to Germany or
France each weekend for shopping. At
Action in France, you can buy another towel and a lost property
basket for a combined cost of less than ten Swiss francs.
The fact they tried to bring this towelgate affair into the courtroom
only proves that they had no serious case in the first place.
They were clutching at straws.
Speaking English in a Zurich courtroom
I think the judge realized that the landlady had a very weak case
and on top of that, the landlady's lawyer had been somewhat deceptive
about the political connection. The judge decided to continue the
mediation session using the English language.
The far right Swiss landlady was unable to sleep due to the
imaginary smell, the sight of a man doing laundry and our stubborn refusal
to take phone calls during our working hours about every little drama
in the missing towels department. Yet my family had far
more serious concerns due to my father's health. I tried to explain
that in the court but were they listening?
Switzerland is a very small country and many people live in the same
valley where they grew up with their parents. Even if they move from
their valley to a city like Zurich, they can always reach most of
their extended family with a short journey by train.
In the most hostile company where I worked in Switzerland, a line
manager's mother had developed a terminal illness and had less than
six months to live. The manager went back to his country for a number of
months and the company's strategy, organization and culture were totally unable
to cope with this situation.
Nonetheless, in our case, Carla's aunt was getting very old and
my father was very ill. The financial cost of the mediation session
where we spoke about missing towels and the imaginary cat smell was
greater than the financial cost of a trip to Australia to see my father.
The judge and I seem to agree there are cultural differences but
the extent to which some people react to small differences is
extraordinary:
Defending the honor of black cats before a Swiss judge
Vous promettez d’être fidèle à la Constitution fédérale et à la Constitution du Canton de Vaud.
Vous promettez de maintenir et de défendre en toute occasion et de tout votre pouvoir les droits, les libertés et
l’indépendance de votre nouvelle patrie, de procurer et d’avancer son honneur et profit, comme aussi d’éviter tout ce qui
pourrait lui porter perte ou dommage
and translated into English:
You promise to be true to the federal constitution and the constitution
of the Canton of Vaud.
You promise to maintain and defend on every occasion and with all your
powers the rights, freedoms and independence of your new country,
to develop and advance her reputation and wealth and equally to
avoid all that could cause her loss or damage.
What does an oath like this mean in practice? In the Zurich courthouse,
I defended the honor and reputation of our black cats before a
Swiss tribunal:
Remarkably, the judge repeated the question about whether there
could be a smell. This was deeply offensive to us as a family.
In fact, these rumors about smells have Holocaust origins.
Hitler commissioned significant scientific research to determine
if the Jews have a distinctive smell. When the judge tried
to legitimize these black-cat-smell comments in Zurich, I couldn't believe
what I was hearing.
When the
Albanian whistleblowers came to Zurich, they slept with the
cats. Here is Anisa Kuci from OpenStreetMap, Wikimedia and
GNOME Foundation on our sofa bed with Buffy the black kitten
sleeping beside her:
If people want to confirm the cat smell was a lie, just ask Anisa.
Switzerland vs Australia: which country is more beautiful?
I feel that honesty is always important in any relationship.
When we see courtrooms on television, the witnesses promise to
tell the whole truth, the complete truth and nothing but the truth.
I guess that mantra stuck in my head. I simply told the tribunal
that I didn't really want that apartment anyway because Australia
is more beautiful. At that very moment, the jurists stopped speaking English
and reverted to German.
In fact, both Switzerland and Australia have some amazing geographic
and cultural features and I think we were just unlucky with this
particular landlady from the SVP senioren (far right seniors) cabal.
Far right dictator or eccentric old lady?
While this landlady was definitely a member of the far right party,
her behavior was rather foolish and I don't think every member of the
far right party behaves like this. Many of the people in the far right
party own small businesses and they don't want to start silly disputes
with their customers and tourists over things like a missing towel.
In this case, I suspect the propaganda of the far right party
has become mixed up with the aging process and contributed to
behavior that is erratic.
Most political parties and religions try to exploit the insecurities
of little old ladies like this in the hope little old ladies
will leave bequests to the party or the religion in question.
With that in mind, I don't blame the landlady alone for the pain
my family experienced in Zurich.
Google and Debian forcing the harassment verdict into the spotlight
While we had to collect a lot of evidence at the time of the dispute,
I never imagined publishing this case on my blog.
The only reason I am publishing this is because of vague rumors
about a harassment case being distributed on the web sites of Debian,
the World Intellectual Property Organization (WIPO) in Geneva and
some other web sites.
I don't want to encourage cat enthusiasts to seek revenge against
this little old lady. If she is still alive today, and I haven't
even bothered to check, she would be well into her eighties and there
would be no benefit whatsoever from harassing her.
The case was resolved with a cash settlement of CHF 10,000, equivalent
to EUR 10,500 or USD 10,000.
The cats were transported in a box to a new home:
Here is the judgment in German. We've redacted parts of it
to avoid identifying anybody. Ultimately, this was another case
of a woman instigating harassment, a lot like Baby Reindeer:
Chris Lamb and Molly de Blanc violated Swiss privacy
Soon after the harassment case was finished, it was Chris Lamb
and Molly de Blanc who started a gossip campaign.
Some of these women spreading rumors in the free software
community are particularly vicious.
One of the cats, Floe, died shortly after the relocation.
de Blanc then showed up at FOSDEM in Brussels with her
infamous speech about
putting cats behind bars:
de Blanc's behavior was a horrible act of trolling after the
death of our beloved cat.
Carla and I did not choose to make the harassment verdict
public. We didn't have any vendetta with that little old lady. We just
wanted to get on with our lives.
The far right landlady paid the compensation money on time. She
has a right to get on with her life too. She is well into her
eighties now and Google is violating her privacy
with the ongoing gossip about harassment.
The ten thousand Swiss francs we received is less than half
the cost of the handbag that Oprah Winfrey wanted to see in
Bahnhofstrasse, Zurich.
What we see is a range of women, both the landlady and Molly de Blanc,
meddling in peoples' lives. Women and female cats
are victims of these stalkers but the stalkers are women too.
Role Based Access Control (RBAC) as defined by NIST is based on the concept of global roles. Global, in this case, means the scope of the application. So if you have the role of ADMIN, and you are in a globally scoped RBAC based application, that role applies to all APIs and resources within the program.
OpenStack was written assuming that the ADMIN role was a global role, but it was then implemented as a non-global role, scoped to a tenant. The term tenant was the original (and, I would argue, better) term for what was later called Project, and was then expanded to Domains as well.
A project, or a domain, or a tenant, or a namespace, is a way of sub-setting the resources in a system. Each resource is explicitly labeled with exactly one such scope. When a user attempts to interact with a resource, the system checks the roles associated with the user's account, and the rules deny or allow the requested access.
However, the Nova project explicitly continued to use ADMIN in a global manner, and typically reserved it for sensitive operations. Keystone, on the other hand, assumed that ADMIN was a scoped role, and required that role to perform operations that were limited to a tenant. It is this disconnect that led to the longevity of bug 968696.
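The disconnect can be illustrated with a toy example. The data model below is invented purely for illustration and is not OpenStack code:

```python
# Toy illustration of the disconnect: one role assignment, read two ways.
# The ASSIGNMENTS table and both functions are invented for this example.
ASSIGNMENTS = {
    # (user, project) -> roles held on that project
    ("alice", "project-a"): {"admin"},
    ("bob", "project-b"): {"member"},
}

def has_role_scoped(user, project, role):
    """Scoped reading: the role only counts on the project it was assigned on."""
    return role in ASSIGNMENTS.get((user, project), set())

def has_role_global(user, role):
    """Global reading: any assignment of the role counts everywhere."""
    return any(role in roles
               for (u, _), roles in ASSIGNMENTS.items() if u == user)

# alice's admin on project-a grants her nothing on project-b under the
# scoped reading, but grants her everything under the global reading.
```

If one service uses the scoped check and another uses the global check, the same assignment yields contradictory answers, which is the heart of the bug.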
So…is there any problem with continuing to keep ADMIN as a global role, even though it is assigned locally? Yes and no. The "no" comes from the fact that you can lock down all scoped operations to roles other than admin…OpenStack is going to use the term Manager for these instead. For example, a user with the manager role on a group will be able to add and remove people from that group. Thus, if you make it so no one actually needs the ADMIN role to perform day-to-day operations, and thus ADMIN never needs to be tied to a specific scope, you can make a functioning system.
To continue the "no" from above, you have to make sure that role assignments cannot lead to an elevation of privileges. Thus, if the manager can assign roles to a user, they CANNOT be allowed to assign the ADMIN role to that user…unless they themselves have admin. Put more strictly, a user cannot be allowed to delegate any role that they themselves do not have. However, this means that a user needs to be explicitly assigned all roles that they can assign, or you need to be able to infer roles from the assigned roles. Keystone has role inference rules, and these are sufficient, if set up, to enforce this scheme. We have also hard-coded in a check that no role can imply the ADMIN role, essential to preventing elevation of privileges. Keystone also performs inference cycle checks, so that a lower role cannot imply a higher role that points back to itself. Thus, if ADMIN directly or indirectly implies all other roles, you could never make a rule that then implies ADMIN.
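These delegation constraints can be sketched in a few lines. This is an illustration of the rules just described, not Keystone's actual code; all names and data structures here are invented:

```python
# Illustrative sketch of the delegation constraints described above;
# the IMPLIED table and function names are invented, not Keystone's API.

# Role inference rules: a role on the left implies the roles on the right.
IMPLIED = {
    "admin": {"manager"},
    "manager": {"member"},
}

def expand(roles):
    """Return the given roles plus everything they transitively imply."""
    result = set(roles)
    stack = list(roles)
    while stack:
        for implied in IMPLIED.get(stack.pop(), ()):
            if implied not in result:
                result.add(implied)
                stack.append(implied)
    return result

def may_assign(assigner_roles, role):
    """A user may only delegate a role they themselves (transitively) hold."""
    return role in expand(assigner_roles)

def add_inference_rule(prior, implied):
    """Reject rules that would imply ADMIN or create an inference cycle."""
    if implied == "admin":
        raise ValueError("no role may imply admin")
    if prior in expand({implied}):
        raise ValueError("inference cycle detected")
    IMPLIED.setdefault(prior, set()).add(implied)
```

Here `may_assign({"manager"}, "admin")` is False while `may_assign({"admin"}, "member")` is True: the "cannot delegate what you do not hold" rule in miniature.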
With those two pre-conditions in place, you could make a system that allows ADMIN to continue to function globally even when assigned to a scope. However, those rules need to be enforced continually in the future. The problem would then come if some service creates or modifies an API that requires a scoped ADMIN to perform it. And, since RBAC enforcement is local to each server, this is not only possible, but likely based on history. If this happens, people will require a scoped ADMIN role, and in doing so, will get the unscoped power that comes with it. And, in the cases where there is automation and delegation involved (HEAT, Trusts, etc), there is going to be cases where these delegated credentials are stored on disk.
The current OpenStack approach is called Secure RBAC. That very term should raise alarms. RBAC by itself needs to be secure, and the need to put an additional qualifier on it indicates a problem. Changing all the APIs that required ADMIN to now require manager, where Keystone treated it as a scoped role, is opening up a huge security issue in all of the installations where ADMIN is assigned out. Perhaps this is not a real problem, as the operators were already forced to limit it due to Nova. But there is a potential for abuse there. People that could not do role assignments on other domains and projects can do so now. As Keystone core, I would not have accepted this change.
Why does OpenStack insist on using ADMIN as a global role? Momentum. Inertia. It has always been done that way, and the users have built their workflows around the existing implementation. The previous attempt to replace project scoped roles with a different scoping mechanism (system scoped roles) met with a thunderous NO from the operators. That came after a quieter 'no' to my own approach based on admin projects. They found admin projects confusing, and system scopes seemed more intuitive. I agree they were, but that ignored the inertia the change needed to overcome, and system scope comes with integration issues of its own.
I’ll write up why I think the admin-project approach is better in a follow-on article. However, I think the current SRBAC effort to make ADMIN act as a global-only role is a necessary step anyway. Distinguishing between global and scoped privileges is an essential hardening scheme. Making more fine-grained roles that have limited scopes, and using them consistently across the OpenStack services, is proper hardening. Locking down the role-assignment delegation rules in Keystone is essential. If all that is done, the addition of is_admin_project becomes an additional safety check, part of a defense in depth, and not the make-or-break it was when I originally wrote up the proposal.
The damage is already done. ADMIN is global, and has proliferated. That is why most people do not expose their OpenStack APIs to their end users, and instead use things like CloudForms as a front end to them, or write custom policy.
Version 4.8.1 was released recently. As you could guess from the version number change, it is primarily a bug fix release, but some minor features also slipped in. From this blog, you can learn what changed in syslog-ng 4.8.1 and where you can get its latest stable version.
there is a new destination for Elasticsearch data stream
the key fingerprint of the peer is now available as a syslog-ng macro
you can now control what happens if there is a parsing error in the syslog-parser
However, there were many more changes under the hood. The list of open issues on GitHub became a lot shorter. Many bugs were fixed, including some hard-to-debug crashes triggered by a space in the wrong place in the syslog-ng configuration.
One of the original goals of syslog-ng was to support many different platforms. Over the years, many commercial UNIX variants have disappeared. Recently, the main target of syslog-ng development became Linux on x86_64, as the majority of our users install syslog-ng on such systems. Some of the new features could only be compiled on the very latest Linux versions.
Version 4.8.1 brought not only stability fixes: going back to the roots, it also improved platform support a lot. Many syslog-ng features are now also available on older OS releases, and compile not just on Linux, but on macOS and FreeBSD as well. We also received help from external contributors in this: syslog-ng is now available in MacPorts again with a much more extended feature set. You can read more details on this topic at https://www.syslog-ng.com/community/b/blog/posts/huge-improvements-for-syslog-ng-in-macports.
For many years, the main development platform for syslog-ng was Debian Testing, as only this OS included all the possible dependencies of syslog-ng. Using a rolling Linux distro is good for testing but can cause unexpected problems at the worst moments (like the middle of a release process). We switched to a stable OS as a base for our development and release containers: https://www.syslog-ng.com/community/b/blog/posts/we-are-switching-syslog-ng-containers-from-debian-testing-to-stable The nightly containers were based on Debian Stable for the past couple of weeks, and with version 4.8.1 of syslog-ng, the release container is also based on Debian Stable.
Before going back to regular feature development, we will keep working a bit more on stabilizing the syslog-ng code base and enhancing platform support.
Where to get syslog-ng 4.8.1
There are many ways to get syslog-ng. Even though the chances are slim, you might want to check whether the latest syslog-ng is available for your OS: https://repology.org/project/syslog-ng/versions. While most larger Linux distributions lag behind, Alpine Linux, Homebrew, and a number of smaller Linux distributions already have syslog-ng 4.8.1.
Version 4.8.1 of syslog-ng is supposed to be our most stable release with the best platform support in many years. Still, if you run into any problems, let us know: https://github.com/syslog-ng/syslog-ng/issues
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Hello testers,
our team has worked with our largest customers in order to better define what the
Managed Hosting subscription is and how it can bring more value beyond the
Kiwi TCMS application itself. You can read about the details below.
What is Managed Hosting by Kiwi TCMS
This is our top-tier of support services, where the Kiwi TCMS team leverages
our existing experience of hosting and running the Kiwi TCMS application
in production. It is more about the batteries included which make
overall operations easier, rather than specific software features.
Who is this subscription for
Managed Hosting is suitable for large organizations which consider
their Kiwi TCMS instance to be a mission critical piece of infrastructure.
These are typically organizations with hundreds or thousands of testers
which have more requirements towards security and performance.
What do you get
Everything from lower tier subscription plans!
Most notably you get access to all versions of
community & enterprise releases and a SaaS namespace which could be useful for in-house
development and experimentation with Kiwi TCMS. The SaaS version
(or a self-deployed enterprise version) can also be used as a sandbox instance to exercise
backwards compatibility testing against the latest version of Kiwi TCMS before
we upgrade your designated production instance!
What do all of the individual items mean
1x Kiwi TCMS hosted in AWS: we've been running Kiwi TCMS in production since 2017 without
major incidents so far! This is lots of application and operation specific experience which
allows us to run a Kiwi TCMS instance securely and efficiently for you.
Managed Hosting frees up
your DevOps team from figuring it all out and lets them work on higher priority items.
Under this subscription you may choose one of the
Amazon Web Services regions
if you have team members concentrated in a specific geographic area. There is no guarantee for the
actual underlying technology, e.g. EC2, ECS, Lightsail or other - this is up to us!
IMPORTANT: Kiwi TCMS would adjust application size in order to meet your performance requirements
within reason. The total cost of all consumed cloud resources should still be covered by your
monthly payment. In extreme scenarios we would ask you to purchase a higher quantity
of the same subscription.
Fully isolated instance: means exactly that - your database, web application and any additional
services (e.g. a Redis cache) will be completely isolated from resources provisioned for other customers
due to security concerns.
As part of the Managed Hosting subscription you get an unlimited storage quota for
uploaded files and attachments. We may work with you to establish a data retention period if necessary.
Email delivery via Amazon SES: depending on the amount of testing you perform, Kiwi TCMS may be
sending lots of emails. Throughout the years we've observed that SMTP connections sometimes fail,
resulting in unreliable service, or email messages get marked as SPAM, hurting the sender's reputation.
This is also an area which requires prior configuration and does not work out of the box.
Managed Hosting deployments use Amazon SES for email delivery which has the added benefit of
automatically managing blacklists when a delivery fails or is marked as SPAM. As part of
this subscription you will have to authorize one of your email addresses to be used with SES!
Alternatively we may use an email address on one of our own domains.
DNS & SSL management: correct DNS and SSL configuration is vital for the
so called multi-tenant feature in Kiwi TCMS. This is the feature where you can create
unlimited namespaces of the type team-1.tcms.example.com and product-2.tcms.example.com
via the Kiwi TCMS web interface and have them available immediately.
Misconfigured DNS and/or misconfigured or expired SSL certificates are something that
happens regularly and leads to sub-optimal performance. With Managed Hosting we're going
to be managing all of this in the background.
Full application admin via web: as a customer you get the super-user account defined
in the Kiwi TCMS application and we promise that Kiwi TCMS staff will not log in, to minimize
the possibility of incidents. In any case we treat all of the data stored in a
Managed Hosting instance as confidential.
Can override app/host settings: most of Kiwi TCMS' settings are not exposed via the web
interface and some customers find it cumbersome to override them. This is unfortunately
how the underlying application framework is designed to work. With a Managed Hosting
subscription our team will be taking care of this for you. This also extends to settings
of the underlying host system when possible.
Kiwi TCMS upgrade upon request: currently an application upgrade involves human
interaction and is not something that can be automated as an unattended process.
Upgrading may also have implications towards backwards compatibility with 3rd
party systems and in-house software. That's why we would upgrade your instances
only after an explicit request and strive to keep downtime to a minimum.
Regular security updates: relates to the underlying host OS (where applicable)
and web server configuration for Kiwi TCMS. With a Managed Hosting subscription
our team will be keeping track of this for you. The underlying Postgres database
is not exposed via the Internet however we try to keep this up-to-date too!
Your InfoSec team is welcome to submit suggestions for improvements and we would
happily implement them where possible.
Access to encrypted backups: our team performs regular backups for every Kiwi TCMS
instance within our care. All backups are encrypted using the popular open source tool
restic. This includes database and file uploads.
As part of the Managed Hosting subscription we will work with your IT department and
provide you with shared access to these files in case you would like to keep your own
disaster recovery copy and/or to provision staging Kiwi TCMS instances with production like
data. Our team is open to further collaboration in this area!
Extended support: as part of the Managed Hosting subscription you get more support coverage,
07:00-22:00 UTC, Mon-Sun. That is 3 extra hours per day plus coverage over the weekends,
plus a video conference option when necessary in order to resolve
requests faster.
Mid-term plans
Hosting on Red Hat Enterprise Linux: in some cases we are running containers directly
on a bare-metal or virtualized machine. We are exploring the possibility of using
Red Hat Enterprise Linux throughout the entire hosting stack and tying this with
Red Hat's existing management infrastructure without adding extra charges towards
our customers.
Red Hat Enterprise Linux package upgrades: will apply to both host OS (where applicable)
and the kiwitcms/enterprise container application which is already built on top of
Red Hat UBI 9
container image.
Load balanced deployment: running the Kiwi TCMS application in a load balanced
environment for customers with high performance requirements.
Access to monitoring tools: we're exploring how to securely provide access
for each customer to our existing monitoring tools and/or implement new ones where needed.
This will allow your DevOps team to scrutinize how well we are doing and
provide us with valuable feedback. Let us know what you would be interested to have access to!
Long-term plans
Security certifications via NDA: our vision is to be able to securely share existing and
future security related certifications with our Managed Hosting customers under a
non-disclosure agreement. Ideally we would enroll provisioned instances into a penetration testing
program which will be provided by a 3rd party vendor. Let us know if you have specific suggestions here.
24/7 support: the first step here will be to refactor our existing support program
and migrate to a ticket management system. Ideally such system will be open source too.
At a later date our goal is to have 24x7 coverage in order to minimize response times!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities
please help us!
Here’s the simplest example to deploy a containerized Next application with Kamal.
What’s Kamal
Kamal is a new tool from 37signals for deploying web applications to bare metal and cloud VMs. It comes with zero-downtime deploys, rolling restarts, asset bridging, remote builds, and more.
Kamal needs SSH configured and Docker installed to run. You also need to create a cloud VM somewhere like Hetzner or Digital Ocean.
SSH configuration
Linux and macOS should come with SSH installed but you’ll need a new key pair for the server:
$ ssh-keygen -t ecdsa -b 521 -a 100 -C "admin@example.com"
This will prompt you for the key name and will save the keys to ~/.ssh/ by default.
Add the private key to the SSH agent:
$ ssh-add ~/.ssh/[KEY]
Host provisioning
Create a cloud VM on your favourite provider, choose your preferred Linux operating system (e.g. Ubuntu 24.04 LTS), and provision the server with your public key from the previous step.
Note the host public IP address.
Once the cloud VM is ready, you can check that SSH works by running:
$ ssh root@[IP_ADDRESS]
Docker configuration
Install Docker locally if you don’t have it. If you are on macOS you can install Docker Desktop or OrbStack.
Then sign up for a Docker repository such as Docker Hub and create a first private repository with your application name like my-next-app.
Create an access token and note it down.
Dockerfile
You’ll need to write a Dockerfile for your Next application. A very basic one can look like the following:
FROM node:20 AS base
WORKDIR /app
# RUN npm i -g pnpm
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
FROM node:20-alpine3.19 AS release
WORKDIR /app
# RUN npm i -g pnpm
COPY --from=base /app/node_modules ./node_modules
COPY --from=base /app/package.json ./package.json
COPY --from=base /app/.next ./.next
EXPOSE 80
CMD ["npm", "start"]
If you use pnpm you’ll need to install it and replace the expected files. Also note that we are exposing port 80. Commit the Dockerfile to your source control.
To start Next we’ll provide the -p 80 argument to next start in package.json:
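For example, the scripts section of package.json could read as follows (other fields are omitted, and your existing entries may differ):

```json
{
  "scripts": {
    "dev": "next dev",
    "build": "next build",
    "start": "next start -p 80"
  }
}
```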
You likely don’t have Ruby around, so install Kamal as a Docker image by creating a command alias.
On macOS for $HOME/.zshrc:
$ alias kamal='docker run -it --rm -v "${PWD}:/workdir" -v "/run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock" -e SSH_AUTH_SOCK="/run/host-services/ssh-auth.sock" -v /var/run/docker.sock:/var/run/docker.sock ghcr.io/basecamp/kamal:v2.2.1'
On Linux for $HOME/.bashrc:
$ alias kamal='docker run -it --rm -v "${PWD}:/workdir" -v "${SSH_AUTH_SOCK}:/ssh-agent" -v /var/run/docker.sock:/var/run/docker.sock -e "SSH_AUTH_SOCK=/ssh-agent" ghcr.io/basecamp/kamal:v2.2.1'
Kamal configuration
Now open a new terminal window and generate the basic Kamal files with kamal init:
$ kamal init
Open .kamal/secrets and provide the registry token as password:
KAMAL_REGISTRY_PASSWORD="dckr_pat..."
And open config/deploy.yml and provide a starting configuration:
# Name of your application. Used to uniquely configure containers.
service: my-next-app

# Name of the container image.
image: [REGISTRY_USER]/my-next-app

# Deploy to these servers.
servers:
  web:
    - [PUBLIC_IP_ADDRESS]

# Enable SSL auto certification via Let's Encrypt (and allow for multiple apps on one server).
# If using something like Cloudflare, it is recommended to set encryption mode
# in Cloudflare's SSL/TLS setting to "Full" to enable end-to-end encryption.
proxy:
  ssl: true
  host: [DOMAIN_NAME]

# Credentials for your image host.
registry:
  # Specify the registry server, if you're not using Docker Hub
  # server: registry.digitalocean.com / ghcr.io / ...
  username: [REGISTRY_USER]

  # Always use an access token rather than real password (pulled from .kamal/secrets).
  password:
    - KAMAL_REGISTRY_PASSWORD

# Configure builder setup.
builder:
  arch: amd64
If you have a domain name, provide it and set ssl to true; otherwise keep ssl set to false.
Deploy
Now you can deploy the app by running:
$ kamal setup
Kamal should now be able to do its thing: log in to the Docker registry, build the application, run Kamal Proxy on the server, and perform all of the other steps required to run your application.
Any subsequent deploys can be then done using kamal deploy.
It’s once again time for the outrage generators on social media to ask if SBOMs have any value. This seems to happen a few times a year. Probably lines up with the pent up excitement while we wait for the McRib to return. I could dig up a few examples of these articles but I can’t be bothered, and it doesn’t matter. I’d rather spend my time searching for a McRib … I mean, writing this blog post.
I wanted to write down some thoughts, I’m sure it won’t change the constant complaining about how SBOMs are completely useless, and how important it is to tell everyone that, because it’s very normal to remind people about useless things and how useless they are. Nothing really enforces true uselessness like having to spend time explaining how useless something is.
Let’s answer the big question first. Are SBOMs useless? It depends really. If I’m a farmer waiting for my big McRib payoff, they might be. Although with all the right to repair and tractor hacking and all that, it’s possible they’re not totally useless. But like anything, SBOMs are good at certain things, but they’re not good at everything. It’s always good to temper expectations and be reasonable. And most importantly, if there’s something you don’t really like, you can just not use it. If something does suck, just ignore it. Your time has value you know.
OK, so that’s a bad start I suppose. Here’s maybe a better start. Much like rats in New York City. You’re never more than 6 feet from an SBOM. Wait, is that actually better? We started with McRib and now we’re talking about rats … the topic is SBOMs. But really, they drive a lot of modern software, but not quite in ways you realize. That package manager you just used to install a JSON library? That’s an SBOM. You run a vulnerability scanner? Also SBOM! Your car’s entertainment system, it probably lists all the open source it uses in a well hidden menu. Let’s call it a bill of materials, for software.
SBOMs are really just lists of software. I bet you could make a list of things that use lists of software. Package managers, app stores, top ten lists. It’s a pretty common idea.
“HOLD THE #%$& ON!” we hear from the back row, filled with manufactured outrage. “THAT’S NOT WHAT WE MEAN AT ALL! We mean systems that manage SBOMs, in SPDX and/or CycloneDX format, can you show me that? That’s what I want to see. Let’s see it! Oh you can’t, WE KNEW IT, sweet sweet victory, we have proven the uselessness beyond a reasonable doubt!” Right about here I’d like to remind you your time has value, and it’s OK to ask for a hug sometimes. I mean, not from me though.
It’s also probably worth explaining such a system will never exist because SBOMs aren’t useful by themselves. SBOMs are tools used to solve other problems where having a list of software is the solution, or at least part of the solution.
You should compare this line of reasoning to going phone shopping and loudly complaining that a company making screws is failing at making devices that can look at memes on the internet. I mean, there are screws in the phones, but if you are measuring screws against their ability to look up memes on the internet, you will be very disappointed. Much like an SBOM, a screw is but a gear in a much larger machine (see what I did there).
The thing is, SBOMs aren’t something that should exist by themselves. They’re a tool, a part, in a larger system. The two SBOM formats that get much of the attention and complaints are SPDX and CycloneDX. But those are just two standards. There are formal definitions of SBOMs that tell us what sort of information they should contain – SPDX and CycloneDX meet those requirements, but it’s foolish to use such a narrow definition by itself; remember, it’s a tool. Standards are important for interoperability, but they aren’t necessarily important for solving problems.
Demanding to be shown a successful SBOM tool is the fallacy in this whole argument. Lots of tools use and create lists of software. Some of those can import and export SPDX or CycloneDX. But those formats aren’t necessarily useful by themselves. It’s time for an example.
Let’s say I have a vulnerability scanner. There are many vulnerability scanners in the world, commercial and open source. They have various levels of quality, some are great, some are horrid. Now, in order to scan something for vulnerabilities, I need to know what the software on the system is. This seems like an obvious first step. So the tool has some sort of software identification system built into it. You know what else that identification system can do? It can output the list of software it found, maybe in SPDX or CycloneDX format. More than one vulnerability scanner figured this out, and now they output SBOMs.
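The "software inventory falls out for free" idea fits in a few lines. This is a hedged sketch, not any real scanner's code: it lists the packages installed in the current Python environment and wraps them in a minimal CycloneDX-style document (the top-level field names follow the public CycloneDX JSON spec; the tiny subset of fields chosen here is my own):

```python
# Hedged sketch: turn the current Python environment's package list into a
# minimal CycloneDX-style SBOM. The field subset is my own minimal choice.
import json
from importlib.metadata import distributions

def make_sbom():
    components = [
        {"type": "library",
         "name": dist.metadata["Name"],
         "version": dist.version}
        for dist in distributions()
    ]
    return {"bomFormat": "CycloneDX",
            "specVersion": "1.5",
            "components": components}

print(json.dumps(make_sbom())[:120])  # a list of software, nothing more
```

The identification step is the hard part; serializing the result as SPDX or CycloneDX is almost an afterthought, which is exactly why scanners grew this feature.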
At this point the most cynical among us would proclaim that while this is technically correct, you can only call it an SBOM if it’s generated in the correct region of France; otherwise it’s just a sparkling list. The reality of it all is that complaining about SBOMs gets some attention, but it doesn’t really matter. The ideas behind SBOMs are used everywhere, and formats like SPDX and CycloneDX are just nice ways to communicate a list of software. If you can solve your problem with a list of software, that’s great, do that. Work in this space is progressing wonderfully and there are more tools to help than we’ve ever had before.
And the next time you see someone who seems unreasonably angry about SBOMs, maybe ask them if they need a hug. If you’re not a hugger, ask if they need a McRib.
Josh and Kurt talk about the current WordPress / WP Engine mess. The forking of Advanced Custom Fields is certainly a supply chain attack. This whole saga is weird and filled with chaos and stupidity. We have no idea how it will end, but we do know that the blog platform you use shouldn’t be this exciting. The bad sort of exciting.
Update Django from 5.0.8 to 5.0.9, addressing multiple potential security
vulnerabilities, which do not appear to affect Kiwi TCMS directly,
although this is not 100% guaranteed
Improvements
Update markdown from 3.6 to 3.7
Update psycopg from 3.2.1 to 3.2.3
Update pygithub from 2.3.0 to 2.4.0
Update python-bugzilla from 3.2.0 to 3.3.0
Update python-gitlab from 4.9.0 to 4.13.0
Update tzdata from 2024.1 to 2024.2
Update uwsgi from 2.0.26 to 2.0.27
Update node_modules/pdfmake from 0.2.10 to 0.2.14
Specify large_client_header_buffers for NGINX proxy configuration example
to match the configuration of Kiwi TCMS
Set uWSGI configuration max-requests to 1024
Settings
Explicitly set DATA_UPLOAD_MAX_NUMBER_FIELDS to 1024, default is 1000
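For reference, the corresponding line in a Django settings module is a one-line sketch:

```python
# settings.py sketch: Django rejects requests whose form data contains more
# fields than this limit; the framework default is 1000.
DATA_UPLOAD_MAX_NUMBER_FIELDS = 1024
```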
Bug fixes
Increase uWSGI configuration buffer-size to 20k to allow the creation of
a TestRun with 1000 test cases! Fixes
Issue #3387,
Issue #3800
Refactoring and testing
Update black from 24.8.0 to 24.10.0
Update pylint-django from 2.5.5 to 2.6.1
Update selenium from 4.23.1 to 4.25.0
Update sphinx from 8.0.2 to 8.1.1
Update node_modules/webpack from 5.93.0 to 5.95.0
Update node_modules/eslint from 8.57.0 to 8.57.1
Update node_modules/eslint-plugin-import from 2.29.1 to 2.31.0
Assert that password reset email contains username reminder
Update translation source strings
Kiwi TCMS Enterprise v13.6-mt
Based on Kiwi TCMS v13.6
Update django-ses from 4.1.0 to 4.2.0
Update kiwitcms-tenants from 3.1.0 to 3.2.1
Update sentry-sdk from 2.12.0 to 2.16.0
Update value for Content-Security-Policy header to match
upstream Kiwi TCMS
Private container images
quay.io/kiwitcms/version 13.6 (aarch64) 14f4599db480 12 Oct 2024 705MB
quay.io/kiwitcms/version 13.6 (x86_64) 2d925723ab4e 12 Oct 2024 693MB
quay.io/kiwitcms/enterprise 13.6-mt (aarch64) 27a5de45d8dc 12 Oct 2024 1.07GB
quay.io/kiwitcms/enterprise 13.6-mt (x86_64) f2ba176b5e0f 12 Oct 2024 1.05GB
Y and X are variables, but 5 and 19 are numbers. The numbers don’t change.
In algebra, we use letters as variables and think of digit-based numbers as the actual numbers. Here are the first four chords of a standard jazz tune:
Fm7 | Bb-7 | Eb7 | Abmaj7 |
Suppose your singer can’t reach the low notes, and wants to transpose it up a full step. You end up with:
Gm7 | C-7 | F7 | Bbmaj7 |
This works because the relationship between the chords is the same in both keys. To represent this relationship, we represent the chord using roman numerals. If we assume the first version is in the key of Ab, and the second in the key of Bb, we say the last chord of the sequence is the root, and refer to it using the roman numeral I, and will call it “the One”. We use lowercase letters to refer to minor chords. So the sequence becomes:
vi-7 | ii-7 | V7 | Imaj7
Thus, the letter version is the “actual” chords, whereas the numeric version is the variables.
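The transposition above can be sketched in a few lines of Python (a toy model: it ignores enharmonic spelling and chord qualities, and the note names are assumptions of this sketch):

```python
# Chromatic scale spelled with flats, matching the chord names above.
NOTES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def transpose(root, semitones):
    """Move a chord root up by the given number of semitones."""
    return NOTES[(NOTES.index(root) + semitones) % 12]

# A full step is 2 semitones: Fm7 | Bb-7 | Eb7 | Abmaj7 becomes G, C, F, Bb.
print([transpose(r, 2) for r in ["F", "Bb", "Eb", "Ab"]])  # ['G', 'C', 'F', 'Bb']
```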
We’re trying to increase the fwupd coverage score, so we can mercilessly refactor and improve code upstream without risk of regressions. To do this we run thousands of unit tests for each part of the libfwupd public API and the libfwupdplugin private API. This gets us a long way, but what we really want to do is emulate the end-to-end firmware update of every real device we support.
It’s not trivial (or quick) connecting hundreds of devices to a specific CI machine, and so for some time we’ve supported recording USB device enumeration, re-plug, firmware write, re-re-plug and re-enumeration. For fwupd 2.0.0 we added support for all sysfs-based devices too, which allows us to emulate a real-world NVMe disk doing actual ioctls() and reads() in every submitted CI job. We’re now going to ask vendors to record firmware-update emulations for existing plugins so we can run those in CI too.
The device emulation docs are complicated and there’s lots of things that the user can do wrong. What I really wanted was a “click, click, save-as, click” user experience that doesn’t need to use the command line. The tl;dr: is that we’ve now added the needed async API in fwupd 2.0.1 (probably going to be released on Monday) and added the click, click UI to gnome-firmware:
There’s a slight niggle when the user starts recording the first “internal” device (e.g. a NVMe disk) that we need to ask the user to restart the daemon or the computer. This is because we can’t just hotplug the internal non-removable device, and need to “start recording” then “enumerate device(s)” rather than the other way around. Recording all the device enumeration isn’t free in CPU or RAM (and is possibly a security problem too), and so we don’t turn it on by default. All the emulation is also all controlled using polkit now, so you need the root password to do anything remotely interesting.
Some of the strings are a bit unhelpful, and some a bit clunky, so if you see anything that doesn’t look awesome or is hard to translate please tell us and we can fix it up. Of course, even better would be a merge request with a better string.
If you want to try it out there’s a COPR with all the right bits for Fedora 41. It might also work on Fedora 40 if you remove gnome-software. I’ll probably switch the Flathub build to 48.alpha when fwupd 2.0.1 is released too. Feedback welcome.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.
RPMs of PHP version 8.3.13RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.2.25RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
The packages are available for x86_64 and aarch64.
PHP version 8.1 is now in security mode only, so no more RC will be released.
This is the 124th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
NEWS
Version 4.8.0 of syslog-ng improves FreeBSD and MacOS support
Recently One Identity released version 4.8.0 of its open-source log management application. Learn about some of the new features and bug fixes: why upgrade to the latest syslog-ng version, not only on FreeBSD :-)
Why it is useful to set the version number in the syslog-ng configuration
The syslog-ng configuration starts with a version number declaration. Up until recently, if it was missing, syslog-ng did not start. With syslog-ng 4.8, this is changing. From this blog, you can learn why version information is useful, what workaround you can use if you do not want to edit your syslog-ng configuration on each update, and what changed in version 4.8.
We are switching syslog-ng containers from Debian Testing to Stable
For many years, the official syslog-ng container and development containers were based on Debian Testing. We are switching to Debian Stable now. Learn about the history and the reasons for the change now.
Here are the release notes from Cockpit 326, cockpit-podman 96, and cockpit-files 9:
cockpit/ws container: Connect to any Linux server
The quay.io/cockpit/ws container provides Cockpit’s web server in environments such as Kubernetes and Fedora CoreOS, connecting to a remote machine via ssh.
Now, the cockpit/ws container gained the ability to connect to any Linux server, even those without Cockpit installed. This provides a convenient way to perform basic administration tasks remotely. This is similar to the Cockpit Client flatpak (available since release 295), but can run anywhere an OCI container can run.
Currently, the following pages are available when connecting to a server without Cockpit: Overview, Metrics, Terminal, Accounts, and Networking. More pages will be added soon, after vetting them for this use.
cockpit/ws container: Support host-specific SSH keys
The cockpit/ws container now supports adding multiple SSH private keys.
In addition to the existing $COCKPIT_SSH_KEY_PATH environment variable, you can now set host-specific $COCKPIT_SSH_KEY_PATH_{HOSTNAME} variables, where {HOSTNAME} is the host name (in capital letters) used in the Connect to: field of the login page.
Thanks to benniekiss for contributing this feature!
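As a sketch, running the ws container with a key for a host entered as build1 on the login page might look like this (the key paths and host name are illustrative, not from the release notes):

```shell
podman run -d --name cockpit-ws -p 9090:9090 \
  -v ~/.ssh/build1_ed25519:/keys/build1:ro,Z \
  -e COCKPIT_SSH_KEY_PATH_BUILD1=/keys/build1 \
  quay.io/cockpit/ws
```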
Bridge supports PCP; cockpit-pcp package is now obsolete
The Python cockpit-bridge, introduced in Cockpit 294, now implements PCP functionality. As a result, the separate cockpit-pcp package is now obsolete.
Storage: Manage Stratis virtual filesystem sizes
Cockpit supports setting a default and maximum size limit for Stratis virtual filesystems, both during creation and when making adjustments.
Podman: pull images from registries without search API
Container creation now supports registries lacking a search API, including GitHub’s container registry (ghcr.io). You can also select a specific tag for images.
Files: basic keyboard shortcuts
Try it out
Cockpit 326, cockpit-podman 96, and cockpit-files 9 are available now:
It has been a long time without sharing some words here, but there is something that may be worth writing about.
At the end of last year, Luis Bazan (from Fedora Panama) invited me to share some words with the attendees of Hackathon Copa 2023. It was a fun time sharing with organizers, students and teachers, and it led to an invitation to become part of the organizing team for this year’s hackathon.
The subject of this year’s HackathonCopa was Linux, so it was really fun and interesting. With the lead of Hugo Aquino from Copa Airlines (the main sponsor), the Fedora Panama team and Floss-pa helped by providing workshops and talks about Linux. Many other sponsors joined us, including the Red Hat team from our area, so it was a great time to meet and share experiences and activities.
A full year of work, meetings and planning came to an end on October 4 and 5, when sponsors, volunteers and mentors worked with over 500 students, starting with talks on the first day.
On the first day we shared with hundreds of students about Fedora, Alma Linux and RISC-V, showing a RISC-V board that we got from http://risc-v.org. Many stickers and t-shirts were handed out at our stand.
The second day was for the challenges: 115 teams of 4 members accepted the challenge. Our group from Fedora Panama and Flosspa served as part of the mentors; our task was to guide the teams without solving the challenges for them, and to grade them. It was a long day, but at the end these teams won this edition:
"Los Reales" - first place
"ZeroTrust" - second place
"Tux Team" - third place
"Pingüinos Pioneros" - fourth place
"KernelKnights" - fifth place
"Sudo rm-rf" - sixth place
The winning team photos are not in order.
It was fun to share and to meet the new Linux talent in our country. Some of them are willing to learn and contribute to our beloved Fedora distro, so you may hear from them.
As with every experience, we learned some new tricks. Thanks to the Fedora Project and Alma Linux for sponsoring this event.
I have to add special thanks to all the members of Fedora Panama and Flosspa who became sponsors of this event.
Many thanks to all of you for being willing to share knowledge and time with us at these kinds of events.
One of the tasks in standardizing Terraform modules is having a README.md file for each module. Writing such a file for a Terraform module is time-consuming, especially when there are many modules. In this post, we will look at terraform-docs to automate generating README.md files for Terraform modules […]
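A typical terraform-docs invocation (a sketch; the module path is illustrative) generates or updates the README.md in place:

```shell
# Render the module's inputs/outputs as Markdown tables into README.md
terraform-docs markdown table --output-file README.md ./modules/vpc
```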
Josh and Kurt talk about the recent CUPS issue. The vulnerability itself wasn’t all that exciting, but the whole disclosure process was wild. There’s a lot to talk about, many things didn’t quite go as planned and it all leaked early. Let’s talk about why and what it all means.
Kamal 2 finally brings the most requested feature to reality and allows people to run multiple applications simultaneously on a single server. Here’s how.
The Kamal way
Kamal is an application-centric deploy tool rather than a small PaaS. And this hasn’t changed with the new version 2. But what does it even mean?
Let’s look at a typical config/deploy.yml to run a generic application:
# config/deploy.yml
service: [APP_NAME]
image: [DOCKER_REGISTRY]/[APP_NAME]
servers:
  web:
    - 165.22.71.211
  job:
    hosts:
      - 165.22.71.211
    cmd: bin/jobs
proxy:
  ssl: true
  host: [APP_DOMAIN]
registry:
  username: [DOCKER_REGISTRY]
  # Always use an access token rather than real password when possible.
  password:
    - KAMAL_REGISTRY_PASSWORD
env:
  ...
As you can notice the configuration describes only one particular service. And this hasn’t changed. Applications still have their own configuration. The only thing that changed is the possibility to share their servers.
Kamal Proxy
Kamal 2 adds support for multiple apps with the new Kamal Proxy. The new proxy registers new deployments for services and handles their gapless switchover.
The only thing Kamal Proxy needs to know is the host (domain) of the service so it can route the traffic to the service web containers. This is done with the following Kamal configuration:
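Concretely, that is the proxy section of config/deploy.yml; a minimal sketch (the host value is illustrative):

```yaml
proxy:
  ssl: true
  host: app.example.com
```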
The extra ssl option lets us also request automatic SSL/TLS certificates via Let’s Encrypt.
There will only be one instance of Kamal Proxy running on a given server, installed by the first deployed application.
Multiple apps
Let’s say we want to deploy three different applications on the same server for a local dealership: the main app, the API server, and their marketing website. Then we need three different configurations, one for each app.
Kamal cannot do redirects right now, so the auto redirect from www to non-www variant has to be done on the application side. Also any accessory that needs to expose an HTTP endpoint should actually be another app like these three.
To deploy them we would run kamal setup for each. The first one will install Docker, set up the kamal network, and make sure kamal-proxy is running. The other two would safely skip these steps and deploy to existing proxy.
dealership$ kamal setup
dealership-api$ kamal setup
marketing$ kamal setup
Debugging
If we want to check what apps should Kamal Proxy be running, we can do so on the server with kamal-proxy list:
$ ssh [USER]@[SERVER]
# docker exec kamal-proxy kamal-proxy list
Service Host Target State TLS
dealership dealership.com cde2433e86d6:80 running yes
dealership-api api.dealership.com 82361b53174f:80 running yes
...
We’ll get what hosts and revisions are running for each deployed service as well as their status.
This is not exposed directly to Kamal but we can fix it with an alias:
# config/deploy.yml
...
aliases:
  ...
  apps: server exec docker exec kamal-proxy kamal-proxy list
Now running kamal apps gives us a nice rundown of what’s running which is pretty sweet.
I tried other gaming oriented Linux distributions and found them all to be… well, to be frank, bloated. I don’t need
Lutris preinstalled, I don’t use it. Just like I don’t need 20 other tools or launchers or managers installed, with
services running and draining my battery. Furthermore, I love the ability to simply “tab out” of my Steam gamescope session and
enjoy a full desktop environment.
And so I’ve come to the conclusion that the best thing I can do is to simply start from scratch. I’ve got a little video showcasing what my install looks like; below is the minimum you’d need to take care of to get something similar.
Note that this is about standard rpm based Fedora - NOT Atomic Fedora (NOT Silverblue/Kinoite/etc)
fsync-kernel
Most of the tools necessary to achieve this are already there. Some devices, including GPD Win 4 do benefit from enabling the fsync kernel copr repository. In order to use it you need to:
$ sudo dnf copr enable sentry/kernel-fsync
Edit /etc/yum.repos.d/fedora.repo and /etc/yum.repos.d/fedora-updates.repo
Add exclude=kernel* to the main “[fedora]” and “[fedora-updates]” categories respectively
You should be able to now simply $ sudo dnf update --refresh to grab the fsync kernel and reboot. Verify with uname -a or rpm -qa | grep kernel to see what you’ve got.
Should the update above fail, for instance because Fedora has got a higher version, you can force it with $ sudo dnf distro-sync kernel*
hhd
HandHeld Daemon provides various functionality necessary to run your handheld seamlessly, such as TDP limits, both through the decky integration as well as using the desktop UI. Installing it is quite simple - enable the hhd-dev copr repository.
$ sudo dnf copr enable hhd-dev/hhd
Install all the things
Obviously we need steam and gamescope, both of which are already available, plus hhd from the repository we enabled above.
$ sudo dnf install steam gamescope hhd hhd-ui
Executing steam in gamescope
I’ve got a little .desktop file in my taskbar that I use to run it. Shove it wherever you like, tldr this is what I run:
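As a sketch, a minimal .desktop launcher along these lines works (the exact gamescope flags are assumptions; adjust them for your device and resolution):

```ini
[Desktop Entry]
Type=Application
Name=Steam Gamescope
# -e enables Steam integration, -f runs fullscreen; Steam starts in gamepad UI
Exec=gamescope -e -f -- steam -gamepadui
Terminal=false
```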
We don’t want login screens for a deck-like experience. Note that you still need to use your password for sudo, etc. Again, I’m on KDE, so your configuration might vary a little bit if you’re not.
Settings -> Login Screen (SDDM) -> clicky the “behavior” button on top -> clicky the check box to automatically log in as your user, and your session.
Create the nopasswdlogin group and add yourself to it, and configure PAM appropriately - follow the Arch wiki for Passwordless login
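The PAM part boils down to one line, per the Arch wiki recipe (a sketch; for SDDM the file is /etc/pam.d/sddm, and the rule goes before the other auth lines):

```
auth  sufficient  pam_succeed_if.so user ingroup nopasswdlogin
```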
Autostart gamescope
I use KDE, your configuration might differ. I simply first created an autostart application in the KDE settings (search for autostart) and then replaced it with a link to my existing .desktop file:
I replaced the default deck startup video with a single black frame .webm here, as I haven’t figured out a better way to get rid of it heh (-nointro cli argument doesn’t seem to work)
Today I tagged fwupd 2.0.0, which includes lots of new hardware support, a ton of bugfixes and more importantly a redesigned device prober and firmware loader that allows it to do some cool tricks. As this is a bigger-than-usual release I’ve written some more verbose releases notes below.
The first notable thing is that we’ve removed the requirement of GUsb in the daemon, and now use libusb directly. This allowed us to move the device emulation support from libgusb up into libfwupdplugin, which now means we can emulate devices created from sysfs too. This means that we can emulate end-to-end firmware updates on fake hidraw and nvme devices in CI just like we’ve been able to emulate using fake USB devices for some time. This increases the coverage of testing for every pull request, and makes sure that none of our “improvements” actually end up breaking firmware updates on some existing device.
The emulation code is actually pretty cool; every USB control request, ioctl(), read() (and everything inbetween) is recorded from a target device and saved to a JSON file with a unique per-request key for each stage of the update process. This is saved to a zip archive and is usually uploaded to the LVFS mirror and used in the device-tests in fwupd. It’s much easier than having a desk full of hardware and because each emulation is just that, emulated, we don’t need to do the tens of thousands of 5ms sleeps in between device writes — which means most emulations take a few ms to load, decompress, write and verify. This means you can test [nearly] “every device we support” in just a few seconds of CI time.
Another nice change is the removal of GUdev as a dependency. GUdev is a nice GObject abstraction over libudev and then sd_device from systemd, but when you’re dealing with thousands of devices (that you’re poking in weird ways), and tens of thousands of device children and parents the “immutable device state” objects drift from reality and the abstraction layers really start to hurt. So instead of using GUdev we now listen to the netlink socket and parse those events into fwupd FuDevice objects, rather than having an abstract device with another abstract device being used as a data source. It has also allowed us to remove at least one layer of caching (that we had to work around in weird ways), and also reduce the memory requirement both at startup and at runtime at the expense of re-implementing the netlink parsing code. It also means we can easily start using ueventd, which makes it possible to run fwupd on Android. More on that another day!
The biggest change, and the feature that’s been requested the most by enterprise customers is the ability to “stream” firmware from archives into devices. What fwupdmgr used to do (and what 1_9_X still does) is:
Send the cabinet archive to the daemon as a file descriptor
The daemon then loads the input stream into memory (copy 1)
The memory blob is parsed as a cabinet archive, and the blocks-with-header are re-assembled into whole files (copy 2)
The payload is then typically chunked into pieces, with each chunk being allocated as a new blob (copy 3)
Each chunk is sent to the device being updated
This worked fine for a 32MB firmware payload — we allocate ~100MB of memory and then free it, no bother at all.
Where this fails is for one of two cases: huge firmware or underpowered machine — or in the pathological case, huge video conferencing camera firmware with inexpensive Google ChromeBook. In that example we might have a 1.5GB firmware file (it’s probably a custom Android image…) on a 4GB-of-RAM budget ChromeBook. The running machine has a measly 1GB free system memory, and then fwupd immediately OOMs when just trying to parse the archive, let alone deploy the firmware.
So what can we do to reduce the number of in memory copies, or maybe even remove them all completely? There are two tricks that fwupd 2.0.x uses to load firmware now, and those two primitives we now use all over the source tree:
Partial Input Stream:
This models an input stream (which you can think of like a file descriptor) that is made up of a part of a different input stream at a specific offset. So if you have a base input stream of [123456789] you can build two partial input streams of, say, [234] and [789]. If you try and read() 5 bytes from the first partial stream you just get 3 bytes back. If you seek to offset 0x1 on the second partial input stream you get the two bytes of [89].
Composite Input Stream
This models a different kind of input stream, which is made up of one or more partial input streams. In some cases there can be hundreds of partial streams making up one composite stream. So if you take the first two partial input streams defined a few lines before, and then add them to a composite input stream you get [234789] — and reading 8 bytes at offset 0x0 from that would give you what you expect.
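The two primitives can be sketched in a few lines of Python (an illustration of the idea only, not fwupd’s C implementation; all class and method names here are made up):

```python
import io

class PartialStream:
    """A read-only view of bytes [offset, offset+size) of a base stream."""
    def __init__(self, base, offset, size):
        self.base, self.offset, self.size = base, offset, size
        self.pos = 0

    def seek(self, pos):
        self.pos = pos

    def read(self, n=-1):
        # Clamp the request to what the partial view actually contains.
        avail = max(self.size - self.pos, 0)
        if n < 0 or n > avail:
            n = avail
        self.base.seek(self.offset + self.pos)
        data = self.base.read(n)
        self.pos += len(data)
        return data

class CompositeStream:
    """Concatenates several partial streams into one logical stream."""
    def __init__(self, parts):
        self.parts = parts
        self.pos = 0

    def seek(self, pos):
        self.pos = pos

    def read(self, n):
        out, pos = b"", self.pos
        for part in self.parts:
            if pos >= part.size:      # this part lies before our offset
                pos -= part.size
                continue
            part.seek(pos)
            out += part.read(min(n - len(out), part.size - pos))
            pos = 0
            if len(out) == n:
                break
        self.pos += len(out)
        return out

base = io.BytesIO(b"123456789")
p1 = PartialStream(base, 1, 3)        # views b"234"
p2 = PartialStream(base, 6, 3)        # views b"789"
comp = CompositeStream([p1, p2])      # behaves like b"234789"

print(p1.read(5))                     # short read: only 3 bytes exist -> b'234'
p2.seek(1)
print(p2.read(2))                     # b'89'
comp.seek(0)
print(comp.read(8))                   # b'234789' (only 6 bytes available)
```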
This means the new way of processing firmware archives can be:
Send the cabinet archive to the daemon as a file descriptor
The daemon parses it as a cab archive header, and adds the data section of each block to a partial stream that references the base stream at a specific offset
The daemon “collects” all the partial streams into a composite stream for each file in the archive that spans multiple blocks
The payload is split into chunks, with each chunk actually being a partial stream of the composite file stream
Each chunk is read from the stream, and sent to the device being updated
Sooo…. We never actually read the firmware payload from the cabinet file descriptor until we actually send the chunk of payload to the hardware. This means we have to seek() all over the place, possibly many times for each chunk, but in the kernel a seek() is really just doing some pointer maths to a memory buffer and so it’s super quick — even faster in real time than the “simple” process we used in 1_9_X. The only caveat is that you have to use uncompressed cabinet archives (the default for the LVFS) — as using MSZIP decompression currently does need a single copy fallback.
This means we can deploy a 1.5GB firmware payload using an amazingly low 8MB of RSS, and using less CPU than copying 1.5GB of data around a few times. Which means you can now deploy that huge firmware to that $3,000 meeting room camera from a $200 ChromeBook — but it also means we can do the same in RHEL for 5G mobile broadband radios on low-power, low-cost IoT hardware.
Making such huge changes to fwupd meant we could justify branching a new release, and because we bumped the major version it also made sense to remove all the deprecated API in libfwupd. All the changes are documented in the README file, but I’ve already sent patches for gnome-firmware, gnome-software and kde-discover to make the tiny changes needed for the library bump.
My plan for 2.0.x is to ship it in Flathub, and in Fedora 42 — but NOT Fedora 41, RHEL 9 or RHEL 10 just yet. There is a lot of new code that’s only had a little testing, and I fully expect to do a brown paperbag 2.0.1 release in a few days because we’ve managed to break some hardware for some vendor that I don’t own, or we don’t have emulations for. If you do see anything that’s weird, or have hardware that used to be detected, and now isn’t — please let us know.
A long over-due release which has accumulated a bunch of bugfixes but also some
fancy new features…read on!
As always, big thanks to everyone who reported issues and contributed to QCoro.
Your help is much appreciated!
QCoro::LazyTask<T>
The biggest new feature in this release is the brand-new QCoro::LazyTask<T>.
It’s a new return type that you can use for your coroutines. It differs from QCoro::Task<T>
in that, as the name suggests, the coroutine is evaluated lazily. What that means is that when
you call a coroutine that returns LazyTask, it will return immediately without executing
the body of the coroutine. The body will be executed only once you co_await on the returned
LazyTask object.
This is different from the behavior of QCoro::Task<T>, which is eager, meaning that it will
start executing the body immediately when called (like a regular function call).
QCoro::LazyTask<int> myWorker() {
    qDebug() << "Starting worker";
    co_return 42;
}

QCoro::Task<> mainCoroutine() {
    qDebug() << "Creating worker";
    const auto task = myWorker();
    qDebug() << "Awaiting on worker";
    const auto result = co_await task;
    // do something with the result
}
This will result in the following output:
mainCoroutine(): Creating worker
mainCoroutine(): Awaiting on worker
myWorker(): Starting worker
If myWorker() were a QCoro::Task<T> as we know it, the output would look like this:
mainCoroutine(): Creating worker
myWorker(): Starting worker
mainCoroutine(): Awaiting on worker
The fact that the body of a QCoro::LazyTask<T> coroutine is only executed when co_awaited has one
very important implication: it must not be used for Qt slots, Q_INVOKABLEs or, in general, for any
coroutine that may be executed directly by the Qt event loop. The reason is that the Qt event loop
is not aware of coroutines (or QCoro), so it will never co_await on the returned QCoro::LazyTask
object - which means that the code inside the coroutine would never get executed. This is the
reason why the good old QCoro::Task<T> is an eager coroutine - to ensure the body of the coroutine
gets executed even when called from the Qt event loop and not co_awaited.
Defined Semantics for Awaiting Default-Constructed and Moved-From Tasks
This is something that wasn’t clearly defined until now (both in the docs and in the code), which is
what happens when you try to co_await on a default-constructed QCoro::Task<T> (or QCoro::LazyTask<T>):
co_await QCoro::Task<>(); // will hang indefinitely!
Previously this would trigger a Q_ASSERT in debug builds and most likely a crash in production builds.
Starting with QCoro 0.11, awaiting such a task will print a qWarning() and will hang indefinitely.
The same applies to awaiting a moved-from task, which is identical to a default-constructed task:
QCoro::LazyTask<int> task = myTask();
handleTask(std::move(task));
co_await task; // will hang indefinitely!
Compiler Support
We have dropped official support for older compilers. Since QCoro 0.11, the officially supported compilers are:
GCC >= 11
Clang >= 15
MSVC >= 19.40 (Visual Studio 17 2022)
AppleClang >= 15 (Xcode 15.2)
QCoro might still compile or work with older versions of those compilers, but we no longer test it and
do not guarantee that it will work correctly.
The reason is that the coroutine implementations in older versions of GCC and Clang were buggy and behaved differently
than they do in newer versions, so making sure that QCoro behaves correctly across a wide range of compilers was
getting more and more difficult as we implemented increasingly complex and advanced features.
If you enjoy using QCoro, consider supporting its development on GitHub Sponsors or buy me a coffee
on Ko-fi (after all, more coffee means more code, right?).
A few years ago Mike and I discussed adding video support to zink, so that we could provide vaapi on top of vulkan video implementations.
This of course got onto a long TODO list and we nerdsniped each other into moving it along; this past couple of weeks we finally dragged it over the line.
This MR adds initial support for zink video decode on top of Vulkan Video. It provides vaapi support. Currently it only supports H264 decode, but I've implemented AV1 decode and I've played around a bit with H264 encode. I think adding H265 decode shouldn't be too horrible.
I've tested this mainly on radv, and a bit on anv (but there are some problems I should dig into).
TLDR: if you know what EVIOCREVOKE does, the same now works for hidraw devices via HIDIOCREVOKE.
The HID standard is the most common hardware protocol for input devices. In the Linux kernel HID is typically translated to the evdev protocol which is what libinput and all Xorg input drivers use. evdev is the kernel's input API and used for all devices, not just HID ones.
evdev is mostly compatible with HID but there are quite a few niche cases where they differ a fair bit. And some cases where evdev doesn't work well because of different assumptions, e.g. it's near-impossible to correctly express a device with 40 generic buttons (as opposed to named buttons like "left", "right", ...[0]). In particular for gaming devices it's quite common to access the HID device directly via the /dev/hidraw nodes. And of course for configuration of devices accessing the hidraw node is a must too (see Solaar, openrazer, libratbag, etc.). Alas, /dev/hidraw nodes are only accessible as root - right now applications work around this by either "run as root" or shipping udev rules tagging the device with uaccess.
evdev too can only be accessed as root (or the input group) but many many moons ago when dinosaurs still roamed the earth (version 3.12 to be precise), David Rheinsberg merged the EVIOCREVOKE ioctl. When called the file descriptor immediately becomes invalid, any further reads/writes will fail with ENODEV. This is a cornerstone for systemd-logind: it hands out a file descriptor via DBus to Xorg or the Wayland compositor but keeps a copy. On VT switch it calls the ioctl, thus preventing any events from reaching said X server/compositor. In turn this means that a) X no longer needs to run as root[1] since it can get input devices from logind and b) X loses access to those input devices at logind's leisure so we don't have to worry about leaking passwords.
Fast-forward to 2024, and kernel 6.12 has now gained HIDIOCREVOKE for /dev/hidraw nodes. The corresponding logind support has also been merged. The principle is the same: logind can hand out an fd to a hidraw node and can revoke it at will, so we don't have to worry about data leakage to processes that should no longer receive events. This is the first of many steps towards more general HID support in userspace. It's not immediately usable since logind will only hand out those fds to the session leader (read: compositor or Xorg) so if you as an application want that fd you need to convince your display server to give it to you. For that we may have something like the inputfd Wayland protocol (or maybe a portal, but right now it seems a Wayland protocol is more likely).
But that aside, let's hooray nonetheless. One step down, many more to go.
One of the other side-effects of this is that logind now has an fd to any device opened by a user-space process. With HID-BPF this means we can eventually "firewall" these devices from malicious applications: we could e.g. allow libratbag to configure your mouse's buttons but block any attempts to upload new firmware. This is very much an idea for now, there's a lot of code that needs to be written to get there. But getting there we can now, so full of optimism we go[2].
[0] to illustrate: the button that goes back in your browser is actually evdev's BTN_SIDE and BTN_BACK is ... just another button assigned to nothing particular by default.
[1] and c) I have to care less about X server CVEs.
[2] mind you, optimism is just another word for naïveté
Two weeks ago, I was at EuroBSDcon and received a feature request for syslog-ng. The user wanted to collect FreeBSD audit logs together with other logs using syslog-ng. Writing a native driver in C is time-consuming. However, creating an integration based on the program() source of syslog-ng is not that difficult.
This blog shows you the current state of the FreeBSD audit source, how it works, and its limitations. It is also a request for feedback. Please share your experiences at https://github.com/syslog-ng/syslog-ng/discussions/5150!
Before you begin
First, you must enable audit logging on your FreeBSD box. If you do not want to enable it permanently, enable it only for testing:
service auditd onestart
The syslog-ng package does not have XML parsing enabled by default. The sample configuration I show in the blog uses XML parsing. You should compile the sysutils/syslog-ng port with XML parsing enabled. If you do not enable XML parsing, you can still forward XML logs without parsing, or change the praudit command line to switch to plain text output.
Configuring syslog-ng
The actual “driver” is very simple, just a few lines. Once it is ready, it will be added to the syslog-ng configuration library (SCL), and you will not have to copy it into your configuration. For now, append it to syslog-ng.conf, or create a new configuration file if your configuration uses an include directory.
It is a configuration block, typically used in the SCL. It uses the tail command to follow the FreeBSD audit log (with -F to detect file rotations), and praudit is configured to keep running, and print single lines in XML format.
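The block itself is not reproduced above, so here is a minimal sketch of what such a program()-based source block can look like. The block name, the praudit flags, and the audit trail path are my assumptions, not the final SCL version:

```
# Sketch only: follow the active audit trail (tail -F survives file
# rotation) and have praudit keep converting each binary BSM record
# into a single XML line.
block source freebsd-audit() {
    program("tail -F /var/audit/current | praudit -l -x"
        flags(no-parse)
    );
};
```

On FreeBSD, /var/audit/current is a symlink to the active audit trail, which is why tail -F is used to follow it across rotations.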
And here comes the rest of the configuration, utilizing the new source in multiple setups. Once the above configuration block is part of SCL, you only need to include this part in your configuration.
This configuration contains two test setups. The first one calls the freebsd-audit() source without any additional parameters, thus having an XML formatted output. It parses the incoming message using the XML parser, and saves it to a JSON formatted file, so you can see the name-value pairs parsed from the XML.
The second one sets the output to single line plain text format and writes it as a regular syslog message.
Note that in both cases we use the no-parse flag, so the original message is stored in the $MESSAGE macro of syslog-ng, and syslog-ng creates the syslog message header.
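As the full example configuration is not reproduced above, here is a rough sketch of what those two setups can look like. The destination file names are my assumptions, and the format() parameter on the second source is hypothetical:

```
source s_audit_xml { freebsd-audit(); };

# First setup: parse the XML records and save the name-value pairs as JSON
parser p_xml { xml(); };
destination d_json {
    file("/var/log/freebsd-audit.json" template("$(format-json .xml.*)\n"));
};
log { source(s_audit_xml); parser(p_xml); destination(d_json); };

# Second setup: praudit switched to single-line plain text output,
# written as a regular syslog message (format() is hypothetical)
source s_audit_txt { freebsd-audit(format("text")); };
destination d_file { file("/var/log/freebsd-audit.log"); };
log { source(s_audit_txt); destination(d_file); };
```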
Bugs and limitations
While testing the freebsd-audit() source, I ran into a few bugs and limitations.
If a field has a space in it, syslog-ng adds double quotation marks.
The XML parser adds a leading underscore to tag names, which cannot be changed or removed.
As we read the logs using the tail command, there might be some missing or duplicate audit logs when syslog-ng is restarted (depending on the rate of audit logs and the downtime of syslog-ng).
The XML parser throws an error message while reading the beginning of the XML formatted audit log, as it cannot properly parse the header lines.
Testing
Add the above lines to your syslog-ng configuration (disable the XML parser if you do not have it compiled into syslog-ng), and restart syslog-ng. The easiest way to generate some audit log messages is to log in on the FreeBSD console or via ssh. Once you have generated some audit logs, you should see two new log files under the /var/log/ directory.
The above configuration collects FreeBSD audit logs to some local files. In most cases you want to collect log messages centrally.
Also, do not forget to provide us with feedback at https://github.com/syslog-ng/syslog-ng/discussions/5150. We are happy to hear problem reports, as it also means that you are actually testing what we are working on. It helps us to make syslog-ng even better. And of course we are also very happy to hear success stories!
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I am using this page as the official support documentation page for a new iOS app called “Widgets Factory”.
It was recently approved and released to the iOS App Store!
~
Known Issues
The background task and refresh processing capabilities on iOS are severely limited due to some of the more hostile protection mechanisms built into the system. To be fair, these protection mechanisms do serve a good purpose: they protect battery life and prioritize foreground processing to give the user a better mobile experience. It would still be nice if Apple could improve the background system to guarantee some additional service capabilities:
Don’t sleep or stop an app’s non-main thread if it is registered and expecting a background refresh; instead, provide a minimal percentage of dedicated processing time overall
Ensure that the requesting caller gets background refresh processing time after waiting a maximum of 5-15 minutes
Provide specific background-approved API calls that perform the basic needed tasks on behalf of the calling application with a minimal guaranteed processing time:
Location Coordinates
Map Snapshots
Web Fetches
Widget Views
Push Notifications
Future Wishes
It would also be nice for apps that are free and without ads to be able to have a one-time (or even recurring) tip jar, so users could donate whatever amount they feel comfortable with to the developer.
As I upload photos to various services, I generally resize them as required
based on portrait or landscape mode. I used to do that for all the photos in a
directory and then pick which ones to use. But I wanted to do it selectively:
open the photos in the Gnome Nautilus (Files) application, right click, and
resize only the ones I want.
This week I noticed that I can do that with scripts. They can be written in any
language; the selected files are passed as command line arguments, and their
full paths are also available in the NAUTILUS_SCRIPT_SELECTED_FILE_PATHS
environment variable, joined by newline characters.
To add a script to the right click menu, you just need to place it in the
~/.local/share/nautilus/scripts/ directory. It will show up in the right click menu under Scripts.
Below is the script I am using to reduce image sizes:
#!/usr/bin/env python3
import os
import sys
import subprocess
from PIL import Image

# paths = os.environ.get("NAUTILUS_SCRIPT_SELECTED_FILE_PATHS", "").split("\n")
paths = sys.argv[1:]

for fpath in paths:
    if fpath.endswith(".jpg") or fpath.endswith(".jpeg"):
        # Assume that it is a photo
        try:
            img = Image.open(fpath)
            name, extension = os.path.splitext(fpath)
            new_name = f"{name}_ac{extension}"
            w, h = img.size
            # If w > h then it is a landscape photo
            if w > h:
                subprocess.check_call(["/usr/bin/magick", fpath, "-resize", "1024x686", new_name])
            else:  # It is a portrait photo
                subprocess.check_call(["/usr/bin/magick", fpath, "-resize", "686x1024", new_name])
        except Exception:
            # Don't care, continue
            pass
You can see it in action (I selected the photos and right clicked, but the recording missed that part):
You may also have found yourself with access to a Linux system and wanting to determine its disk type (SSD or HDD). There are various methods and tools for this, and in this post we intend to cover some of them. Note that running some of the commands requires […]
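One common check on Linux (not necessarily one of the tools the post covers) is the per-device rotational flag in sysfs: 1 means a spinning HDD, 0 means an SSD or NVMe drive. `lsblk -d -o NAME,ROTA` reports the same flag.

```shell
# Print each block device and its rotational flag (1 = HDD, 0 = SSD/NVMe).
for f in /sys/block/*/queue/rotational; do
    [ -e "$f" ] || continue        # skip if the glob matched nothing
    dev=${f#/sys/block/}           # strip the leading path...
    dev=${dev%/queue/rotational}   # ...and the trailing path, leaving the name
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```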
Many of you already know about my love of photography. I have been taking
photos for many years, mostly photos of people: portraits at conferences like
PyCon or Fedora events. I regularly post photos to Wikipedia too, especially
for people/accounts that do not have good quality profile photos. I stopped
doing photography when we moved to Sweden; my digital camera was old, and
becoming stable in a new country (details in a different future blog) takes
time. But last year Anwesha bought me a new camera, actually two
different cameras. And I started taking photos again.
I started regularly photographing the weekly climate protests / demonstrations
of the Fridays for Future Stockholm group.
And then more street protests and some dance/music events too. I
don't have a Facebook account, and most people asked me to share over Instagram,
so I did that. But as I covered more and more protests as a photographer,
I noticed my Instagram posts were showing up
less and less in people's feeds. I was wondering about different ways of
breaking out of the algorithmic restriction.
Pixelfed is a decentralized, federated ActivityPub
based system to share photos. I am going to share photos more on this platform,
and hoping people will slowly see more. I started my account yesterday.
You can follow me from any standard ActivityPub system, say your mastodon
account itself. Search for @kushal@pixel.kushaldas.photography or
https://pixel.kushaldas.photography/kushal in your system and you can then follow it
like any other account. If you like the photos, then please share the account
(or this blog post) more to your followers and help me to break out of the
algorithmic restrictions.
On the technology side, the server runs Debian and containers. On my Fedora
system I am super happy to have added a few scripts for Gnome Files; they help
me resize the selected images before upload (I will write a blog post tomorrow
on this).
Recently, I heard a pitch from a public cloud company. Among other characteristics, a key aspect they stressed is that they are the cheapest cloud. This aspect struck me. Not because I believe it is or is not, but because I’ve heard many companies pitch themselves as the cheapest cloud over the years. I asked the CTO if they were foreseeing consistent and planned cuts in the pricing every year or so.
One of the major reasons I use static blogging is to worry less about how the site looks. Instead the focus was to just write (which of course I did not do well this year). I did not change my blog's theme for many, many years.
But I noticed Oskar Wickström created a monospace-based site and kindly released it under the MIT license. I liked the theme, so I decided to start using it. I still don't know HTML/CSS, but managed to change the template for my website.
You can let me know over Mastodon what you think :)
Version 8.4.0 Release Candidate 1 is released. It now enters the stabilisation phase for the developers, and the test phase for the users.
RPMs are available in the php:remi-8.4 stream for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, CentOS, Alma, Rocky...) and as Software Collection in the remi-safe repository (or remi for Fedora)
The repository provides development versions which are not suitable for production usage.
In a previous post I explained how to use a Solomon SSD1306 OLED display on Fedora with a Raspberry Pi, but that only covered displays that come with an I2C driving interface.
The SSD1306 display controller also supports a (both 3-wire and 4-wire) SPI interface, and the ssd130x DRM driver has support for it since Linux 5.19.
This blog post explains how to setup Fedora to use a SSD1306 OLED when connected through 4-wire SPI interface.
First you need to connect the SSD1306 display to the RPi. There are different ways to do this, since one can choose which GPIOs to use for the reset and data/command pins.
But to simplify the configuration, one could use the default GPIO pins that are defined in the ssd1306-spi.dtbo provided by the RPi firmware package.
To do this, connect the SSD1306 display to the RPi as follows:
Then you need to install Fedora. This can be done by downloading an aarch64 Fedora raw image (e.g: Workstation Edition) and executing the following command:
where $device is the block device for your uSD (e.g: /dev/sda) and image is the file name (e.g: Fedora-Workstation-40-1.14.aarch64.raw.xz) of the downloaded image.
Finally you need to configure the RPi firmware to enable the SPI pins and load the ssd1306-spi Device Tree Blob Overlay (dtbo) to register a SSD1306 SPI device, e.g:
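The exact firmware configuration lines are not shown above; on Raspberry Pi firmware this is done in config.txt. The lines below are a sketch under that assumption (the location of config.txt varies between distributions, so adjust the path for your Fedora install):

```
# config.txt fragment: enable the SPI controller and load the
# SSD1306 SPI Device Tree overlay shipped with the RPi firmware
dtparam=spi=on
dtoverlay=ssd1306-spi
```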
The ssd1306-spi.dtbo supports many parameters in case your display doesn’t match the defaults. Take a look at the RPi overlays README for more details about all these parameters and their possible values.
The ssd130x DRM driver registers an emulated fbdev device that can be bound to fbcon, using the OLED display as a framebuffer console. If you want to do that, it’s convenient to change the virtual terminal console font to a smaller one, e.g:
sudo sed -i 's/FONT=.*/FONT="drdos8x8"/' /etc/vconsole.conf
sudo dracut -f
On the occasion of World Tourism Day, Raul Jimenez Ortega organized an online Geo Developers event and invited me to talk about what we have learned over these years about the relationship between Wikipedia, heritage, and tourism. I hope you like it.
It’s been a busy year for me again, trying to focus on myself and my health. Mapping out dietary and seasonal allergies, still on the mission of No Dairy, Eggs, Caffeine, High-Fructose Corn Syrup, etc. I am able to breathe better and sleep better, which is much needed as I get older.
Anyway, I was still running the experiment of tunneling my entire home traffic (network wide) and all connections through a VPN. I first ran into MTU packet size and fragmentation issues related to the fact that the clients on my network default to a 1500 MTU whereas the VPN tunnel interface drops that size by at least 40-60 bytes worth. This can result in packet fragmentation and performance issues which OpenVPN has support for but WireGuard does not.
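A common workaround for this class of problem (a sketch of a standard technique, not something described in the post) is to clamp the TCP MSS on forwarded traffic to the route MTU, so most TCP flows never generate oversized packets in the first place. An nftables fragment for that could look like:

```
# Hypothetical nftables fragment: clamp TCP MSS on forwarded SYN
# packets to the path MTU, so segments fit the smaller tunnel MTU.
table inet mangle {
    chain forward {
        type filter hook forward priority mangle; policy accept;
        tcp flags syn tcp option maxseg size set rt mtu
    }
}
```

Note that this only helps TCP; UDP datagrams larger than the tunnel MTU will still be fragmented.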
I then switched to a proxy setup where I redirect and pipe all connection data at the protocol level to a server-side service which forwards it to the VPN endpoint and then out to the internet. This setup had much better performance but I then ran into some connection issues as the firewall states and the timeouts may not exactly be honoured correctly by the serving application.
I rewrote my Python-based framework to start fresh again and go back to basics in a lower-level language like C, and this seems to be working better at the moment. I will continue to run this and test it out, hopefully as the final replacement. The list of features it includes is:
Transparent Dynamic Forwarding Proxy Service (load balancing capable with ip/nftables)
This is the magic part of the code which sits in front of the proxy service and shares the UDP connection states and pre-routes them to the already established VPN tunnel related for that specific load balanced connection.
Last week I wrote about a campaign that we started to resolve issues on GitHub. Some of the fixes are coming from our enthusiastic community. Thanks to this, there is a new syslog-ng-devel port in MacPorts, where you can enable almost all syslog-ng features even for older MacOS versions and PowerPC hardware. Some of the freshly enabled modules include support for Kafka, GeoIP or OpenTelemetry.
From this blog entry, you can learn how to install a legacy or an up-to-date syslog-ng version from MacPorts.
Before you begin
If you are reading this blog, most likely you already have MacPorts installed on your machine. If not, follow the instructions from https://www.macports.org/install.php on how to install MacPorts for your operating system.
Installing syslog-ng 3.38
MacPorts has an old version of syslog-ng already included. It works, but it has some problems, and it also lacks many of the available features of syslog-ng.
Installation is simple:
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng
---> Computing dependencies for syslog-ng
The following dependencies will be installed:
bzip2
expat
gettext-runtime
glib2
json-c
libedit
libelf
libffi
libiconv
libnet
ncurses
openssl
openssl3
pcre
pcre2
py312-packaging
python312
python3_select
python3_select-312
python_select
python_select-312
sqlite3
xz
zlib
Continue? [Y/n]: y
---> Fetching archive for json-c
[…]
---> Attempting to fetch syslog-ng-3.38.1_0.darwin_22.x86_64.tbz2.rmd160 from https://packages.macports.org/syslog-ng
---> Installing syslog-ng @3.38.1_0
---> Activating syslog-ng @3.38.1_0
---> Cleaning syslog-ng
---> Updating database of binaries
---> Scanning binaries for linking errors
---> No broken files found.
---> No broken ports found.
---> Some of the ports you installed have notes:
python312 has the following notes:
To make this the default Python or Python 3 (i.e., the version run by the 'python' or 'python3' commands), run one or both of:
sudo port select --set python python312
sudo port select --set python3 python312
syslog-ng has the following notes:
To use syslog-ng, first unload OS X's built-in syslog daemon:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.syslogd.plist
Then customize /opt/local/etc/syslog-ng.conf,
and load syslog-ng.
A startup item has been generated that will aid in starting syslog-ng with launchd. It is disabled by default. Execute the following command to start it, and to cause it to launch at startup:
sudo port load syslog-ng
czanik@Peters-MacBook-Pro ~ %
It works, it is rock solid. However, it is old and misses many of the new and optional features:
root@Peters-MacBook-Pro ~ # /opt/local/sbin/syslog-ng -V
syslog-ng 3 (3.38.1)
Config version: 3.35
Installer-Version: 3.38.1
Revision:
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: timestamp,kvformat,appmodel,afprog,examples,rate-limit-filter,cef,map-value-pairs,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,dbparser,json-plugin,add-contextual-data,pseudofile,affile,csvparser,basicfuncs,syslogformat,hook-commands,graphite,tags-parser,afstomp,secure-logging,afsocket,cryptofuncs,azure-auth-header,regexp-parser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: off
Enable-Linux-Caps: off
Enable-Systemd: off
Installing syslog-ng-devel
The syslog-ng-devel port is built from a recent syslog-ng git snapshot. Since syslog-ng is developed in a way that a stable release could be created at any time, this is not a problem.
In this case, the same command that I used to install syslog-ng 3.38 from a binary package will first install the necessary dependencies from packages, then build syslog-ng locally.
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng-devel
---> Computing dependencies for syslog-ng-devel
The following dependencies will be installed:
bison
bison-runtime
brotli
cmake
curl
curl-ca-bundle
cyrus-sasl2
flex
gettext
gettext-tools-libs
gperf
hiredis
icu
ivykis
kerberos5
libarchive
libb2
libbson
libcomerr
libcxx
libdbi
libesmtp
libidn2
libmaxminddb
libpsl
librdkafka
libtextstyle
libunistring
libxml2
lmdb
lz4
lzo2
m4
mongo-c-driver
nghttp2
paho.mqtt.c
pkgconfig
popt
rabbitmq-c
snappy
tcp_wrappers
zstd
Continue? [Y/n]:
---> Fetching archive for gperf
[…]
---> Fetching archive for syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://packages.macports.org/syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://vie.at.packages.macports.org/syslog-ng-devel
---> Attempting to fetch syslog-ng-devel-2024.09.17_0+osl.darwin_22.x86_64.tbz2 from https://fra.de.packages.macports.org/syslog-ng-devel
---> Fetching distfiles for syslog-ng-devel
---> Verifying checksums for syslog-ng-devel
---> Extracting syslog-ng-devel
---> Applying patches to syslog-ng-devel
---> Configuring syslog-ng-devel
---> Building syslog-ng-devel
---> Staging syslog-ng-devel into destroot
---> Installing syslog-ng-devel @2024.09.17_0+osl
---> Activating syslog-ng-devel @2024.09.17_0+osl
---> Cleaning syslog-ng-devel
---> Updating database of binaries
---> Scanning binaries for linking errors
---> No broken files found.
---> No broken ports found.
---> Some of the ports you installed have notes:
cmake has the following notes:
The CMake GUI and Docs are now provided as subports 'cmake-gui' and 'cmake-docs', respectively.
libpsl has the following notes:
libpsl API documentation is provided by the libpsl-docs port.
syslog-ng-devel has the following notes:
To use syslog-ng, first unload OS X's built-in syslog daemon:
sudo launchctl unload -w /System/Library/LaunchDaemons/com.apple.syslogd.plist
Then customize /opt/local/etc/syslog-ng.conf,
and
sudo load syslog-ng
A startup item has been generated that will aid in starting syslog-ng-devel with launchd. It is disabled by default. Execute the following command to start it, and to cause it to launch at startup:
sudo port load syslog-ng-devel
czanik@Peters-MacBook-Pro ~ %
Based on the output my suspicion is that pre-built packages might be available soon.
You might have noticed that installing syslog-ng-devel pulled in a lot of dependencies. The reason is simple: after the fixes received from the MacPorts community and from our developers, a lot more features compile now with older compiler versions and on older operating system versions.
Compared to the old package, these are some of the additional modules available:
czanik@Peters-MacBook-Pro ~ % sudo /opt/local/sbin/syslog-ng -V
Password:
syslog-ng 4.8.0.157.gd68f5a5.dirty
Config version: 4.2
Installer-Version: 4.8.0.157.gd68f5a5.dirty
Revision: 4.8.0.157.gd68f5a5.dirty
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: timestamp,darwinosl,kvformat,redis,afamqp,appmodel,afprog,metrics-probe,cef,map_value_pairs,kafka,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,cloud_auth,correlation,json-plugin,pseudofile,affile,afsmtp,csvparser,basicfuncs,syslogformat,hook-commands,mqtt,afmongodb,graphite,tags-parser,geoip2-plugin,afstomp,http,secure-logging,afsql,mod-python,afsocket,add_contextual_data,cryptofuncs,azure-auth-header,regexp-parser,rate_limit_filter
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: off
Enable-Systemd: off
If you want even more modules, you can use “variants” to enable support for grpc-based modules and a native MacOS source:
czanik@Peters-MacBook-Pro ~ % sudo port variants syslog-ng-devel
syslog-ng-devel has the variants:
debug: Enable debug binaries
grpc: Enable GRPC modules
[+]osl: Enable support for OSLog
universal: Build for multiple architectures
czanik@Peters-MacBook-Pro ~ % sudo port install syslog-ng-devel +grpc
---> Computing dependencies for syslog-ng-devel
The following dependencies will be installed:
abseil
c-ares
grpc
lbzip2
libuv
protobuf3-cpp
re2
Continue? [Y/n]:
---> Fetching archive for abseil
[…]
If you check the modules again, you will see even more available:
czanik@Peters-MacBook-Pro ~ % sudo /opt/local/sbin/syslog-ng -V
Password:
syslog-ng 4.8.0.157.gd68f5a5.dirty
Config version: 4.2
Installer-Version: 4.8.0.157.gd68f5a5.dirty
Revision: 4.8.0.157.gd68f5a5.dirty
Module-Directory: /opt/local/lib/syslog-ng
Module-Path: /opt/local/lib/syslog-ng
Include-Path: /opt/local/share/syslog-ng/include
Available-Modules: bigquery,timestamp,darwinosl,kvformat,redis,afamqp,appmodel,afprog,loki,metrics-probe,cef,map_value_pairs,otel,kafka,stardate,system-source,confgen,afuser,xml,disk-buffer,tfgetent,linux-kmsg-format,cloud_auth,correlation,json-plugin,pseudofile,affile,afsmtp,csvparser,basicfuncs,syslogformat,hook-commands,mqtt,afmongodb,graphite,tags-parser,geoip2-plugin,afstomp,http,secure-logging,afsql,mod-python,afsocket,add_contextual_data,cryptofuncs,azure-auth-header,regexp-parser,rate_limit_filter
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: off
Enable-Systemd: off
What is next?
I would like to thank the involved MacPorts developers and my colleagues for making this huge step forward happen.
I would also like to ask for your feedback. Please share your experience with the syslog-ng-devel port, not only if you run into a problem, but also if it works for you as expected.
This is an independent, censorship-resistant site run by volunteers. This site and the blogs of individual volunteers are not officially affiliated with or endorsed by the Fedora Project.