Fedora, a trademark of Red Hat, Inc., is an operating system built by volunteers around the world. This page is provided so that independent volunteers can showcase our contributions to Fedora and Free Software in general. Official Fedora Download page.
We are always looking for guests. We want guests who have an open source focus, ideally security, but security can be a strange topic, so we're open to a lot of interpretation.
We prefer technical guests who are doing the work. Your CEO may be a great and smart person, but we want your head of engineering, or even better, one of your developers!
Please get in touch if you have an idea for a topic or guest.
Open Source Security is a media project to help showcase and educate on open source security. Our goal is to give the community a platform to educate both developers and users on how open source security works.
There's a lot of good work happening that doesn't get attention because there's no marketing department behind it; they don't have a developer relations team posting on LinkedIn every two hours. Let's focus on those people and teams and learn what they do and how they do it. The goal is to hear from the people doing the work: they know what's up, and they have a lot to teach us. We just have to listen.
Dear syslog-ng users,
This is the 128th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
Recently I introduced you to my latest project: a syslog-ng container based on Alma Linux. Now I also added a syslog-ng Prometheus exporter to the container, so you can monitor syslog-ng, if you enable it.
Recently I have posted a Dockerfile to run syslog-ng in an Alma Linux container. I got some encouraging feedback, so this week I experimented with syslog-ng Premium Edition (PE) in a RHEL UBI (Universal Base Image) container. While this is not officially supported by One Identity, we are really interested in your feedback.
https://www.syslog-ng.com/community/b/blog/posts/running-syslog-ng-pe-in-rhel-ubi
Windows Subsystem for Linux (WSL) allows you to run Linux applications on a Windows host. While you can install and run syslog-ng on a default WSL installation, it is not really practical: there is no systemd and WSL is behind NAT. This blog gives you some pointers for working around these problems.
https://www.syslog-ng.com/community/b/blog/posts/running-a-syslog-ng-server-in-wsl
New webinar: “High Availability for syslog-ng Store Box”. You can register for it at https://www.syslog-ng.com/event/high-availability-for-syslog-ng-store-box-/
You can learn about upcoming webinars and browse recordings of past webinars at https://www.syslog-ng.com/events/
Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/
For more than 25 years, Sistema de Información Universitaria - SIU - has been developing solutions for the digital management of the Argentine university system and various government agencies.
This ecosystem of solutions integrates all management areas within an institution (administrative, financial, academic, human resources, purchasing and assets, data analysis), optimizing process management, data quality, transparency, and decision-making.
One of the main pillars of SIU is its collaborative network based on communities of practice. In these communities, users interact, exchange knowledge and best practices, solve common challenges, and share experiences. This governance model has gained national and international recognition and has enabled successful implementations in over 140 institutions.
In recent years, digital transformation and the evolution of the institution have led SIU to move from a structure based on specific solutions to integrated project management, requiring the search for new tools to streamline and improve software development. As part of this process, a test management system was evaluated.
This experience began with the Integrated Electronic File solution, a tool designed to seamlessly manage documents and files within an institution. One of its components, SUDOCU (Unique Document System), was developed by Universidad Nacional de General Sarmiento - UNGS - and made available to the Argentine university system through SIU. This feature added complexity for the QA team during their work, as test cases had to be replicated on different platforms both on the SIU and UNGS sides.
In order to simplify processes and avoid duplicated efforts, a tool was sought to streamline the workflow. After evaluating several options, Kiwi TCMS was selected because of its open source nature, which is in line with SIU's policy.
Evaluation began with a local installation, testing its functionalities, and optimizing its use to improve workflow. After adjustments in both test and production environments, Kiwi TCMS was incorporated into SUDOCU's projects, creating test cases and test plans. This improved the organization and visibility of testing activities for the entire team involved, including QAs, developers, and analysts.
Based on this initial implementation, a work plan was created to gradually integrate testing of more projects into Kiwi TCMS, taking into account the unique characteristics of the SIU ecosystem.
Currently, a consolidated QA team has been established within SIU, working in a fully integrated manner. Other projects are now using Kiwi TCMS in their daily operations for functional testing, system integration, regression testing and smoke testing.
The experience with Kiwi TCMS has enhanced the quality of software developed by SIU by centralizing test cases, test plans and test executions. This has improved both the traceability and transparency of QA work. It also provided the ability to generate reports and metrics to further evaluate and improve internal processes.
One of the key benefits for SIU is the time optimization achieved by reusing test cases and test plans, as well as the ability to track an unlimited number of executions within the same test management tool. In addition, Kiwi TCMS' integration with GitLab and Redmine is very useful for the institution, as these are other tools commonly used within SIU.
The flexibility of this system allows test cases to be extended as needed, making it easier to adjust and update test cases or test plans in response to the development cycle.
In the words of Lucas del Reguero Martinez, QA tester:
The implementation of Kiwi TCMS has provided significant advantages to the institution, resulting in tangible benefits to users within the SIU community. The plan for the future is to continue training teams on the tool to further democratize its use.
If you like what we're doing and how Kiwi TCMS supports various communities please help us grow and sustain development!
[dan@talos qt6-qtwebengine]$ ./prepare-ppc64.sh /path/to/chromium-openpower-patches/patches/
Removing changes for third_party/boringssl/src/crypto/test/abi_test.h from 0001-Add-PPC64-support-for-boringssl.patch ...
...
Creating patchset ...
cat: /path/to/chromium-openpower-patches/patches//ppc64le/third_party/0002-third-party-boringssl-add-generated-files.patch: No such file or directory
cat: /path/to/chromium-openpower-patches/patches//ppc64le/third_party/0001-Fix-highway-ppc-hwcap.patch: No such file or directory
cat: /path/to/chromium-openpower-patches/patches//ppc64le/third_party/dawn-fix-typos.patch: No such file or directory
Mozilla has introduced a new feature in its web browser, Firefox, for accessing AI chatbots. The feature lets users talk to popular chatbots such as Anthropic Claude, OpenAI’s ChatGPT, Google Gemini, HuggingChat and Le Chat Mistral from the browser's sidebar, without needing to switch tabs […]
The post استفاده از AI Chatbot در مرورگر فایرفاکس (Using AI chatbots in the Firefox browser) first appeared on طرفداران فدورا (Fedora Fans).
What if we need to combine multiple sitemaps for a main domain or subdomain? Here’s how to do it by creating a sitemap index.
Let’s say our site has a regular sitemap.xml and a blog/sitemap.xml, but we want Google to crawl and index both. To combine them we need to rename the original to something else, like main_sitemap.xml, and then create an index of all the sitemaps we have:
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
<sitemap>
<loc>https://deploymentfromscratch.com/main_sitemap.xml</loc>
<lastmod>2004-10-01T18:23:17+00:00</lastmod>
</sitemap>
<sitemap>
<loc>https://deploymentfromscratch.com/blog/sitemap.xml</loc>
</sitemap>
</sitemapindex>
And that’s it. Both sitemaps then stay exactly the same as before. Optionally, we can include a <lastmod> tag for the latest modification.
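If you generate the sitemap index as part of a deploy script rather than writing it by hand, a small program can emit the same structure. Here is a minimal sketch using Python's standard library; the URLs and the output file name simply mirror the example above and are not part of the original post.
import xml.etree.ElementTree as ET

NS = "http://www.sitemaps.org/schemas/sitemap/0.9"
ET.register_namespace("", NS)

index = ET.Element(f"{{{NS}}}sitemapindex")
for loc, lastmod in [
    ("https://deploymentfromscratch.com/main_sitemap.xml", "2004-10-01T18:23:17+00:00"),
    ("https://deploymentfromscratch.com/blog/sitemap.xml", None),
]:
    entry = ET.SubElement(index, f"{{{NS}}}sitemap")
    ET.SubElement(entry, f"{{{NS}}}loc").text = loc
    if lastmod:  # <lastmod> is optional
        ET.SubElement(entry, f"{{{NS}}}lastmod").text = lastmod

ET.ElementTree(index).write("sitemap.xml", encoding="UTF-8", xml_declaration=True)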
Hey, another week, another blog. Still going, still hope things are of interest to you, the reader.
I realized I had some PTO (Paid time off, basically what used to be called 'vacation') that I needed to use before it disappeared, so I have planned a number of vacation days in the coming month. Including 2 in this last week. :)
I'll also point to an old post of mine about what I personally think about vacations/pto when you are working in a community: time off when you are paid to work in a community
The biggest thing for me about this isn't completely disconnecting or ignoring everything. In fact, most days I get up at the same time (cats have to be fed!) and sit at my laptop like I usually do, but it's that I don't have to attend meetings, and I can do things I fully enjoy. Sometimes that's still working in the community, but sometimes it's like what I did on Friday: look into a battery backup for the house and the tradeoffs/ideas around that. In the end I decided not to do anything right now, but I had fun learning about it.
My next PTO days are next Thursday ( 2025-02-20 ), then next Friday is a "recharge" day at Red Hat, then I have Monday off ( 2025-02-24 ), then March 6th and 7th, and finally March 11th.
More detailed planning is ramping up for me. I have been working on when and how to move particular things, how to shuffle resources around and so forth. I plan to work on my doc more next week and then open things up to feedback from everyone who wants to provide it.
A few things to note about this move:
There's going to be a week (tentatively in May) where we do the 'switcharoo'. That is, take down services in IAD2 and bring them up in RDU3. This is going to be disruptive, but I'm hoping we can move blocks of things each day and avoid too much outage time. We will do what we can to minimize the disruption.
Once the switcharoo week is over and we are switched, there will be either no staging env at all, or a limited one. This will persist until hardware has been shipped from IAD2 to RDU3 and we can shuffle things around to bring staging entirely back up.
Once all this is over, we will be in a much better place and with much newer/faster hardware and I might sleep for a month. :)
Slow progress being made. Thanks to some help from abompard, auth is now working correctly. It was of course a dumb typo I made in a config, causing it to try and use a principal that didn't exist. Oops. Now I just need to finish the compose host, then sort out keytabs for builders, and hopefully the riscv SIG can move forward on populating it and next steps.
Oh no! This blog has AI in it? Well, not really. I wanted to talk about something from this past week that's AI related, but first, some background. I personally think AI does have some good / interesting uses if carefully crafted for that use. It's a more useful hype cycle than, say, cryptocoins or blockchain or 'web3', but less useful than virtual machines, containers or clouds. Like absolutely anything else, when someone says "hey, let's add this AI thing here" you have to look at it and decide if it's actually worth doing. I think my employer, Red Hat, has done well here. We provide tools for running your own AI things, we try to make open source AI models and tools, and we add it in limited ways where it actually makes sense to existing tools (ansible lightspeed, etc).
Recently, Christian Schaller posted his regular 'looking ahead' desktop blog post. He's done this many times in the past to highlight desktop things his team is hoping to work on. It's great information. In this post: looking ahead to 2025 and fedora workstation and jobs on offer he had a small section on AI. If you haven't seen it, go ahead and read it. It's short and at the top of the post. ;)
Speaking for myself, I read this as the same sort of approach that Red Hat is taking. Namely, work on open source AI tooling and integrations, provide those for users that want to build things with them, and see if there are any other places where it could make sense to add integration points.
I've seen a number of people read this as "Oh no, they are shoving AI in all parts of Fedora now, I'm going to switch to another distro". I don't think that is at all the case. Everything here is being done the open source way. If you don't care to use those tools, don't. If AI integration is added, it will be done in the open, after tradeoffs and feedback, and with the ability to completely disable it.
We had set up ansible-lint to run on our ansible playbooks years ago. Unfortunately, due to a bug it was always saying "ok". We fixed that a while back and now it's running, but it has some rather opinionated ideas on how things should be. The latest case of this was names. It wants any play name to start with a capital letter. Handlers in ansible are just plays that get notified when another thing changes. If you change the name of, say, "restart httpd" to "Restart httpd" in the handler, you then have to change every single place that notifies it too. This caused an annoying mess for a few weeks. Hopefully we have them all changed now, but this rule seems a bit random to me.
In case you didn't see it, we finally retired our old fedmsg bus! We switched to fedora-messaging a while back, but kept a bridge between them to keep both sides working. With the retirement of the old github2fedmsg service we were finally able to retire it.
🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉🎉
Over the years I’ve been trying to run a custom-made transparent proxy service that most of my network uses automatically (zero client configuration). It first started because I did not like the general idea of a VPN server using a sub-1500 MTU setting while all of the clients on a network auto-assume a 1500 MTU themselves. In addition, instead of reading 1500 bytes at a time off of a TUN interface, you can read 8192 bytes at a time off of a TCP socket, which you can then feed to a fast stream cipher. However, it took me quite a while to reach some stability with tracking all of the connection state types and to iron out all the issues that could arise from transparently proxying both UDP and TCP connections. Some of the lessons I learnt that might help others trying a similar approach include the following:
Ex: ulimit -n 65536
Ex: conn=$(echo "${outp}" | grep -i " src=${addr} .* dport=${port} " | grep -i "${prot}")
Ex: multiport sports 0:53133 to:192.168.1.1:3135
Ex: multiport sports 53134:57267 to:192.168.1.2:3135
I will try to post more tips as time goes on and I learn more, but these small issues can cause a lot of headaches when you're trying to translate and redirect thousands of network-wide connections down into separate processes for load balancing purposes!
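To illustrate the "read big chunks off a TCP socket" point above, here is a minimal, hypothetical Python sketch of just the forwarding loop. It is not the proxy described in the post: it skips recovering the original destination (SO_ORIGINAL_DST/TPROXY), UDP handling, connection tracking and encryption, and the addresses and port are made up.
import socket
import threading

LISTEN_ADDR = ("0.0.0.0", 3135)   # hypothetical port that iptables redirects to
UPSTREAM = ("192.168.1.1", 80)    # fixed upstream, only for illustration
CHUNK = 8192                      # read 8 KiB at a time instead of MTU-sized reads

def pump(src, dst):
    # Copy data between sockets in CHUNK-sized reads until either side closes.
    try:
        while True:
            data = src.recv(CHUNK)
            if not data:
                break
            dst.sendall(data)     # a real proxy would feed this to a stream cipher
    except OSError:
        pass                      # the peer or the other direction went away
    finally:
        src.close()
        dst.close()

def handle(client):
    upstream = socket.create_connection(UPSTREAM)
    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)

with socket.create_server(LISTEN_ADDR, reuse_port=True) as server:
    while True:
        conn, _ = server.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()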
Flock to Fedora will happen in the beautiful, historic city of Prague, Czechia from June 5 to June 8 this year. We cannot wait to welcome our wonderful contributors to the Fedora Project's annual event. Our Call for Presentations (CFP) is open until February 23. In order to enable as many of our contributors as possible to submit talks, we have put together this blog post that might help you connect your topic to a theme, or help you to look at the themes in a different way. We hope this will make the themes resonate with you, and maybe spark a talk topic to match.
Themes are just that: general topic areas. They are not meant to be binding and all-encompassing, but they are meant to help focus the conference. The good news is, there is room in our themes to tailor your topic to make it unique to you. The slightly less good news is that sometimes creating that connection between your talk topic and a theme takes a little extra… creativity.
Imagine you get an invitation to a costume party, and the theme is 'Halloween'. Easy, right? Dracula, Zombie, Witch, etc. They are the most obvious and easy ones to think of, and if you have a pointy hat or a set of vampire fangs, you are good to go. However, what if you do not have some ready-to-hand ghoulish accessories or apparel? What if you just have fairy wings or Hulk gloves? Well, there is nothing preventing you from using what you have, and with a bit of clever tweaking in the right areas, e.g. tear the fairy wings and you have a Zombie Fairy. Put fake blood on the Hulk gloves and you have a menacing Hulk. Or better yet, wear the fairy wings with the Hulk gloves for some chimera-type beast. All of a sudden, voila! You have now hit the theme right in the bullseye, but with your own clever twist!
I do not know if that analogy helps you, I hope it does, but at the very least I hope it gives you a little chuckle. Fedora is such a diverse project, it would be a downright shame to feel like you cannot submit a talk or workshop idea to the CFP just because you are struggling to connect your work to the themes.
So, here is each theme broken down with some hypothetical examples of how talk topics might fit each theme.
Dive into the tools, workflows, ecosystems and practices that enable Freedom and Features in the Fedora Project. Topics can include the upcoming Git forge change, discussion related to bug tracking systems, communication platforms, building strong community teams, the power of community driven development and project management that drives Fedora’s collaborative efforts.
Grow our community of Friends with an inclusive and welcoming space for new and current contributors. These workshops and talks should focus on beginner-friendly or hands-on sessions to introduce newcomers and long-time community members to some of the advanced topics of contributing to Fedora, and discover new ways of engaging in the project.
Embrace First by exploring the technologies shaping Fedora’s future. Topics can be related to IoT, containers, AI/ML, edge computing, RISC-V, and other advancements that align with Fedora’s commitment to innovation.
Our CFP is open until February 23 so don’t delay, get those submissions to us pronto! We hope this post will help you connect your ideas to our themes, and we look forward to reading about all the ways our community connects their work in Fedora.
The post Flock to Fedora CFP Themes – Ideas and Tips! appeared first on Fedora Community Blog.
It is a shocking fact, but ten percent of the guards in Nazi concentration camps were women.
Happy Valentine's Day
The Conversation is one of many publishers to write a feature article about these sadistic women.
When we see Nazis in the news or in the movies, we typically see pictures of the male leaders and their male soldiers.
In 1957, American engineer Russell Ryan met Braunsteiner while holidaying in Austria. She did not tell him about her past. They fell in love, married and moved to New York, where they lived quiet lives until she was tracked down by Nazi hunter Simon Wiesenthal. Russell could not believe she had been a Nazi concentration camp guard. His wife, he said, “would not hurt a fly”.
The BBC web site has their own article about women torturing Nazi victims.
The fascist regime promoted a world-view of women in traditional mothering roles. Many German women were able to use this philosophy as an opportunity to deny any personal involvement in the Holocaust and most claimed they didn't even know it was happening.
Nonetheless, given that so many women were in fact willing to work in the concentration camps, should we be more skeptical of those German women who claim they knew nothing?
News has recently emerged about young women in Switzerland today openly wearing the swastika tattoo and performing the Nazi salute.
Multiple Swiss women including Caroline Kuhnlein-Hofmann and Melanie Bron in Vaud and Pascale Köster and Albane die Ziegler at Walder Wyss in Zurich signed a document about excluding foreign software developers and obfuscating who really did the work. This is totally analogous to the behavior of the Nazis who plagiarised the work of Jewish authors in textbooks.
Here is another photo from Polymanga 2023 in Montreux, the young Swiss woman is sitting on the edge of Lake Geneva. The lake is the border with France.
The Fedora Linux team has announced that the annual "Flock to Fedora 2025" event will be held from June 5 to 8, 2025 in the beautiful city of Prague, Czech Republic. The conference is a unique opportunity for Fedora Project contributors from around the world to come together to connect, learn, and collaborate. Save the date: […]
The post رویداد سالانه Flock to Fedora 2025 در شهر پراگ (The annual Flock to Fedora 2025 event in Prague) first appeared on طرفداران فدورا (Fedora Fans).
FOSDEM 2025 is behind us. We ran the Identity and Access Management devroom at FOSDEM, where my team gave a few talks and demos about FreeIPA and Kerberos. While preparing those talks, we tried to create demonstrations that could be repeated by others as well. At first, this was an attempt to help ourselves, as we need to communicate our advances to others on the teams. Then we started to look at how to show our progress to folks outside of the development groups.
We iterated over our tools and finally ended up with something that is based on what we use in upstream CIs: we use podman containers to run what ends up being ephemeral VMs hosting the software. This doesn't give us the ability to handle all possible scenarios, and it is not a way to run actual production environments either. Yet, it allows quick reuse and sharing:
descriptive definition of the deployment configuration
standard tooling to provision the configuration as containers with podman-compose
use of Ansible playbooks to run repeatable actions against the hosts, with inventory taken from the podman-compose integration
The tool, ipalab-config, quickly became flexible enough to be used in multiple scenarios. It powers ansible-freeipa's own upstream CI, and we aim to reuse it for the new FreeIPA Web UI development and for the FreeIPA workshop.
For the demos at the FOSDEM IAM devroom we put together a separate repository that has all the scenarios and even recording files to reproduce the demos: freeipa-local-tests. You can try for yourself how the local authentication hub, IPA-IPA trust, or IPA-IPA migration work.
This project demonstrates how complex multi-system FreeIPA deployments can be tested locally or in your CI/CD. The test environment is built with the help of podman and orchestrated with the ipalab-config and podman-compose tools. The FreeIPA environment is deployed with the help of ansible-freeipa. Upstream, we run these tests in GitHub Actions as well.
The following configurations are provided as 'labs' that can be reproduced using the ipalab-config tool and the configurations from this project:
minimal deployment, consisting of a FreeIPA server and a FreeIPA client enrolled into it.
local KDC, consisting of two standalone machines, not enrolled into any domain. Each machine runs its own Kerberos KDC exposed to local applications over UNIX domain socket, with socket activation handled by systemd. See “localkdc - local authentication hub” talk at FOSDEM 2025. This is currently a work in progress.
FreeIPA deployment migration, demonstrating how IPA data can be migrated between separate test and production deployments. See “FreeIPA-to-FreeIPA Migration: Current Capabilities and Use Cases” talk at FOSDEM 2025.
FreeIPA trust, demonstrating how two separate IPA deployments can be set up to trust each other. See “Building Cross-Domain Trust Between FreeIPA Deployments” talk at FOSDEM 2025. This is currently a work in progress.
Some of the demo labs have automated recording of the operations that can be performed on them. Video recording is built upon the excellent VHS tool. A pre-built version for Fedora is provided in COPR abbra/vhs. This build also includes a fix from the upstream PR#551.
This demo recording includes a minimal use of the FreeIPA command line:
The local KDC demo is more involved:
This is a minimalistic demo of how users and groups from one IPA environment can be resolved in the other IPA environment. There is a trust agreement established between both IPA environments, similarly to how IPA can establish a forest-level trust with Active Directory.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 10th Feb – 14th Feb 2025
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.
The post Infra and RelEng Update – Week 07 2025 appeared first on Fedora Community Blog.
RPMs of PHP version 8.4.4 are available in the remi-modular repository for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.3.17 are available in the remi-modular repository for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ There is no security fix this month, so no update for version 8.1.31 and version 8.2.27.
⚠️ PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.
These versions are also available as Software Collections in the remi-safe repository.
Version announcements:
ℹ️ Installation: use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.4 installation (simplest):
dnf module switch-to php:remi-8.4/common
Parallel installation of version 8.4 as Software Collection
yum install php84
Replacement of default PHP by version 8.3 installation (simplest):
dnf module switch-to php:remi-8.3/common
Parallel installation of version 8.3 as Software Collection
yum install php83
And soon in the official updates:
⚠️ To be noted:
ℹ️ Information:
Base packages (php)
Software Collections (php83 / php84)
The 46th annual meeting of the International TeX Users Group (TUG 2025) will take place in Thiruvananthapuram (aka Trivandrum), Kerala, India, on 18–20 July 2025. The Indian TeX Users Group and TeXFolio (STMDocs), with support from the International TeX Users Group and sponsors, are organizing the event this time as it returns to India after a long hiatus of 14 years (the last two instances hosted in India were in 2011 and 2002).
Details about the registration, venue, travel, accommodation, programme, deadlines and important dates etc. are available at the conference page https://tug.org/tug2025/.
TUG conferences have always enjoyed excellent presentations and talks about TeX, Typefaces/Fonts, Typesetting, Typography and anything related. Please submit interesting papers: see the call for papers and speaker advice. Note that a visa is required for participants from most countries and obtaining one is a non-trivial undertaking. Please register and contact the program committee for a visa invitation letter as soon as possible.
The drawings for TUG 2025 are made by notable cartoonist E.P. Unny and the flyer is typeset by CVR.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 333 🥃, cockpit-podman 101, and cockpit-files 16:
When logged in as administrator and copying files with a given owner to a directory with a different owner, cockpit-files will now prompt for the desired ownership of the copied files.
Cockpit 333, cockpit-podman 101, and cockpit-files 16 are available now:
We have a build system that has grown organically. It started as a shell script. We needed to run it from gitlab, so we wrote helper scripts to insulate our code from gitlab. Then we added some helper functions to mimic the gitlab interactions when working from the command line. The helper functions grew until you could not practically run the original shell script without them.
It is a mess.
I want to refactor it.
Refactoring Shell is painful.
I want objects. I want python.
So I am rewriting the gitlab and functions layer in python with an eye to rewriting the whole thing. Here's what I have learned:
While it is possible to call a bash script from python, it is not possible to source a bash file and then call the functions it defines. Because the original code makes heavy use of bash functions, I cannot do a function-by-function port to Python. Instead, I have to start either at the top and work down, or at the bottom and work up.
What do I mean? By the top, I mean the interface where you call the code. We have a set of very short scripts that are designed to be called from gitlab. For this example, I have a script called pipeline_kernel_next.sh. I can wrap this with a python script that first just calls pipeline_kernel_next.sh, and make sure everything runs. Then I start duplicating the code from pipeline_kernel_next.sh in python, commenting out the calls to the shell functions as I go.
By the bottom I mean replacing a function with a call to a python script. I could remove a function from the bash code, and instead have bash call a remote script. That remote script would then be written in python. I avoided doing that as I did not want to get deep into the bash code. I might follow this approach for some things in the future.
So far I have only done the top down approach. I started by writing a script that just calls the existing script, and made sure that ran. Once I got that far, I started disabling pieces of the main script and re-implementing them in python, one at a time. This mostly could be done without modifying the original scripts, and instead was done by only implementing a subset of their functionality in each pass.
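As a concrete illustration of that first "top down" step, here is a hedged sketch of the thin wrapper: it only invokes the existing shell entry point and reports its exit code. The script name comes from the example above; the argument passing and everything else is assumed.
import subprocess
import sys

def main() -> int:
    # Run the untouched shell pipeline, streaming its output to our console,
    # and pass through whatever arguments we were given.
    completed = subprocess.run(["./pipeline_kernel_next.sh", *sys.argv[1:]])
    return completed.returncode

if __name__ == "__main__":
    sys.exit(main())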
The standard python library has gone through a few iterations of how to run another process. The current approach is a mechanism called subprocess. While this is a very full implementation, it is not always intuitive to use.
If I wanted to run the bash command:
git reset --hard
I would have to execute the following code.
subprocess.run(["git", "reset", "--hard"])
Which is pretty simple. However, if I want to capture the output from that command and append it to the console, it gets more complex. Add in the need for pipes and other bash-isms and you end up writing a bit of boilerplate code for each invocation. Other people have gone through this and come up with wrappers that make it simpler.
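For instance, a sketch of capturing a command's output while still echoing it to the console might look like this (the git command here is just an example, not taken from the original scripts):
import subprocess
import sys

result = subprocess.run(
    ["git", "status", "--short"],
    capture_output=True,   # collect stdout/stderr instead of inheriting them
    text=True,             # decode bytes to str
    check=True,            # raise CalledProcessError on a non-zero exit code
)
sys.stdout.write(result.stdout)             # re-emit what we captured
changed_files = result.stdout.splitlines()  # ...and keep it for later use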
Ricardo, a fellow Python-Over-Coffee participant, suggested I take a look at the plumbum library. It has been a great starting point. With it, I can convert the majority of the shell scripts to python line for line. It is syntactic sugar for the various mechanisms in python for working with other processes: you could do it directly, but you would end up writing a lot of boilerplate code around each line.
With plumbum, a section that looked like this in shell:
git config rerere.enabled true
git rebase --quit >/dev/null 2>&1
git reset --hard >/dev/null 2>&1
git clean -f -x -d >/dev/null 2>&1
git remote update
$ARGH pull
can be converted to this in python
from plumbum import local
import os

argh = local[os.getcwd() + '/argh']
git = local['git']

git("config", "rerere.enabled", "true")
git("rebase", "--quit")
git("reset", "--hard")
git("clean", "-f", "-x", "-d")
git("remote", "update")
argh("pull")
I am trying to minimize the code structure changes from shell to python to make it easier for my teammates to read the code in transition. However, one issue that I find I cannot work around inline is the building of directory paths: our naming scheme is complex enough that I keep getting things wrong. I have collected all of the directories into a single object that gets statically allocated when the python file is first run. Yeah, it is a singleton approach, and I don't foresee it staying like this permanently, but it works to get me to python. The directories are then cached properties. This means that they are lazy loaded and built on demand. An example:
import os
from functools import cached_property

top_dir = os.getcwd()

class Directories:
    @cached_property
    def stage(self):
        # Created on first access, then cached for subsequent lookups.
        dirname = top_dir + "/stage"
        os.makedirs(dirname, exist_ok=True)
        return dirname

    @cached_property
    def repo(self):
        return top_dir + "/linux"

dirs = Directories()
Now access to the repo directory can be referenced as:
dirs.repo
One thing that has tripped me up in the shell code base is that we are constantly changing directories: we have to be in the parent directory to run git clone, but inside the repo directory to run any other git commands. If we then forget to change back, we are running commands in the wrong directory.
Plumbum allows me to isolate the commands that are to be run in a subdirectory using the python keyword with. The above code then gets indented and looks more like this:
with local.cwd(dirs.repo):
    git("config", "rerere.enabled", "true")
    git("rebase", "--quit")
    git("reset", "--hard")
    git("clean", "-f", "-x", "-d")
    git("remote", "update")
    argh("pull")
I found it essential in doing this work to insulate my development from production. This may seem obvious, but in git ops stuff you often find you are working in and on the production pipeline, because otherwise you don't know that things work end-to-end. When all you are doing is checking out from the main git repo, it is ok to use the production git repo. When you are building new trees and tagging them, then pushing those trees to a remote repo, you want to work with your own remote repo, not production. However, much dev-git-ops code is written with remote repos hard coded in, and so you need to make that stuff overridable. Which in my case means I do have to modify the shell scripts at least that far.
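One way to make that overridable, sketched here with made-up variable names and URLs, is to keep the production remote as the default and let an environment variable point the pipeline at a personal fork during development:
import os
from plumbum import local

git = local["git"]

# Production stays the default; export KERNEL_REMOTE_URL to use a personal fork.
remote_url = os.environ.get(
    "KERNEL_REMOTE_URL",
    "https://gitlab.example.com/kernel/linux.git",
)
git("remote", "set-url", "origin", remote_url)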
Edit: a big thanks to the members of the Boston Python Users Group Slack that provided feedback and editing for this article.
Yesterday I wrote about how I am using a different tool for git signing and verification. Next, I replaced my pass usage. I have a small patch to use the stateless OpenPGP command line interface (SOP). It is an implementation-agnostic standard for handling OpenPGP messages. You can read the whole SPEC here.
cargo install rsop rsop-oct
Then I copied the bash script from my repository to somewhere on my path.
The rsoct binary from rsop-oct follows the same SOP standard but uses the card for signing/decryption. I stored my public key in the ~/.password-store/.gpg-key file, which is in turn used for encryption.
Nothing changed in my daily pass usage, except the number of times I am typing my PIN :)
I am really pleased to announce that the 1st run of the executive level program called “Chief Open Source Officer” will be held in April 2025.
We all know that open source software, hardware, data and methods are the baseline of how the world today benefits across all industries, governments, academia, NGOs, global NGOs, and civil society.
We also recognize that not all of those in positions of decision making have a background in technology, let alone open source technologies and the norms and values that made open source so successful.
With that preamble, I am collaborating with the forward thinking Singapore Institute of Technology‘s SITLearn to offer a 4-week, executive level program called “Chief Open Source Officer“.
This program is meant for the C-suite (-1 and -2 levels) from all industries, so that they can understand how it is that, when empowered individuals from around the world put their minds to solving problems in a collaborative and mind-expanding way, vast stretches of economies become dramatically more productive and gain significant advantage. They will learn how to bring these ideas and methods in tangible ways to their organizations to make a difference.
The current artificial intelligence renaissance would never have happened as rapidly as it has without the millions of lines of open source code that has been created, shared and improved upon.
This course is pegged at the executive level. It is about business decisions, understanding the technical basis, balancing the open source ethos, and challenging and learning from participants on how these make a difference. We will have well-known open source industry luminaries and community thought leaders to hear from, learn from, challenge, and gain insights from.
The key decision makers (CxOs, IT heads, legal, procurement, marketing and sales leaders) are encouraged to consider this program.
Naturally, it is also meant for all those who aspire to technology leadership roles and who, by internalising how and why open source works, can craft new ideas to bring business benefits.
The program is held in Singapore and is open to anyone, from anywhere.
It will be held in a hybrid form with 2 days each in week 1 and week 4 being in person at the brand new SIT Campus in Punggol and the remaining sessions all online.
Contact me directly if you have questions or want to run this for your organization.
[There is an earlier post on LinkedIn as well: https://www.linkedin.com/feed/update/urn:li:activity:7285682907514335233/]
#foss #futureisNOW #ChiefOpenSourceOfficer #freesoftware #opensourceway
The Recommender Systems course at IEBS has been an enriching experience that allowed us to explore everything from the theoretical foundations to the practical implementation of recommendation algorithms. Over three sprints and a final project, we covered key topics such as data-driven personalization, collaborative filtering, and the current applications and challenges of recommender systems. We also applied this knowledge in a practical project using the Surprise library in Python with the MovieLens dataset. Below is a structured summary of what we learned and the conclusions drawn from the process.
In the first sprint, we dove into the fundamental concepts of recommender systems (RS). We began by exploring the different types of RS, such as those based on popularity, content, and hybrid approaches. These systems personalize recommendations according to users' preferences and behavior, which is essential in applications such as streaming platforms, e-commerce, and social networks.
A key topic was the utility matrix, which represents the interactions between users and items. We learned how to turn this data into useful predictions, which laid the groundwork for more advanced models. We also did a practical exercise in which, starting from a movie dataset, we built our first recommendation model. This exercise helped us understand the internal architecture of an RS, including how the data is structured and how the results are evaluated.
The second sprint focused on collaborative filtering, a technique that relies on user interactions to generate recommendations. Here we explored two main approaches: user-based and item-based. The former analyzes user behavior to identify patterns and similarities, while the latter focuses on item characteristics and ratings in order to group items and suggest them to users with similar preferences.
One of the key questions that came up was: how do we measure the similarity between users or items? We learned that this is done with metrics such as the Euclidean distance, which measures the proximity between two points in a multidimensional space. In the practical classes, we implemented functions to measure similarity and refined our calculations to improve prediction accuracy. For example, we required that two users had rated at least 10 movies in common for the distance calculation to be meaningful.
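As an illustration (not the course's actual code), a user-to-user similarity along those lines could look like this in Python, with ratings kept as dictionaries mapping movie ids to scores:
from math import sqrt

def euclidean_similarity(ratings_a, ratings_b, min_common=10):
    # Only compare users who rated at least `min_common` movies in common.
    common = set(ratings_a) & set(ratings_b)
    if len(common) < min_common:
        return 0.0
    distance = sqrt(sum((ratings_a[m] - ratings_b[m]) ** 2 for m in common))
    return 1.0 / (1.0 + distance)   # closer users get a similarity nearer to 1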
In the third sprint, we focused on the practical applications of RS and the challenges they face today. Here we went deeper into the Surprise library, a powerful and easy-to-use tool for implementing recommendation algorithms in Python. We explored algorithms such as SVD (Singular Value Decomposition) and KNN (K-Nearest Neighbors), and learned to tune their hyperparameters to optimize the models.
We also discussed ethics and responsibility in the development of RS. We reflected on the importance of responsible data handling, anonymization, and security protocols to prevent data leaks. These discussions reminded us that, beyond the technique itself, it is crucial to consider the legal, moral, and reputational implications of our work.
The final project was the culmination of everything we had learned. We used the Surprise library to analyze the dataset provided by the course, which contains movie ratings from users. The process included installing the library, loading the data, and splitting the set into training and test data.
We implemented two models, KNN and KMeans, and evaluated their performance using the RMSE (Root Mean Squared Error) metric. The results showed that the KMeans model performed better, with an error closer to 0, indicating more accurate predictions. However, it is worth noting that both models still show a relatively high RMSE, which suggests that their accuracy could improve significantly with a larger and more diverse training dataset.
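For reference, a minimal version of that workflow with the Surprise library looks roughly like this; the algorithm shown (KNNBasic) and its parameters are illustrative and not the exact models or tuning used in the project:
from surprise import Dataset, KNNBasic, accuracy
from surprise.model_selection import train_test_split

data = Dataset.load_builtin("ml-100k")   # downloads MovieLens 100k on first use
trainset, testset = train_test_split(data, test_size=0.25, random_state=42)

algo = KNNBasic(sim_options={"name": "cosine", "user_based": True})
algo.fit(trainset)
predictions = algo.test(testset)

accuracy.rmse(predictions)   # lower is better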
In summary, this course has not only equipped us with technical skills, but has also invited us to reflect on the impact of our work on society. Recommender systems are a key piece of the digital era, and implementing them correctly can make the difference between a satisfying and a frustrating experience for users.
One of the powers of Unix systems comes from the various small tools and how they work together. One such new tool I have been using for some time handles git signing and verification using OpenPGP, with my Yubikey doing the actual signing operation, via openpgp-card-tool-git. I replaced the standard gpg for this use case with the oct-git command from this project.
cargo install openpgp-card-tool-git
Then you will have to update your git configuration (in my case the global configuration).
git config --global gpg.program <path to oct-git>
I am assuming that you already had signing configured before; otherwise you have to run the following two commands too.
git config --global commit.gpgsign true
git config --global tag.gpgsign true
Before you start using it, you want to save the pin in your system keyring.
Use the following command.
oct-git --store-card-pin
That is it; now git commit will sign your commits using the oct-git tool.
In the next blog post I will show how to use the other tools from the author for various other OpenPGP operations.
In September 2023, Gaelle Jeanmonod at FINMA published a summary of the judgment against Parreaux, Thiébaud & Partners and their successor Justicia SA.
Madame Jeanmonod redacted the name of the company, the dates and other key details. We have recreated the unredacted judgment.
Many paragraphs are missing. The document released by Madame Jeanmonod only includes paragraphs 55 to 65 and paragraph 69.
Some entire sentences appear to be missing, replaced with the symbol (…).
Details about the original publication on the FINMA site.
Key to symbols:
Symbol | Meaning
---|---
PTP | Parreaux, Thiébaud & Partners
A | Mathieu Parreaux
X | Parreaux, Thiébaud & Partners
Y | Justicia SA
Important: we recommend reading this together with the full chronological history published in the original blog post by Daniel Pocock.
Provision of insurance services without authorisation
Judgment of the financial markets regulator FINMA, 2023
Summary
Following numerous reports that Parreaux, Thiébaud & Partners was operating an insurance business without authorisation, FINMA conducted investigations that led to the opening of enforcement proceedings. In fact, Parreaux, Thiébaud & Partners offered legal subscriptions for companies and individuals, which provided unlimited access to various legal services for an annual fee. In addition, Parreaux, Thiébaud & Partners also financed, in certain situations, advances on costs to pay lawyers' and court fees in the form of a loan at a 0% interest rate. According to its general terms and conditions, Parreaux, Thiébaud & Partners then obtained reimbursement of this loan from the legal costs to be received at the end of the proceedings in the event of victory. In the event of loss, the balance constituted a non-repayable loan. With regard to the areas of law that were partially covered and to disputes prior to the signing of the contract, the claim was covered at 50%.
During the procedure, FINMA appointed an investigation officer within Parreaux, Thiébaud & Partners. While the investigation officer's work had already begun, the activities of Parreaux, Thiébaud & Partners were taken over by Justicia SA in [late 2021 or early 2022]. From that point on, Parreaux, Thiébaud & Partners ceased its activities for new clients. Clients who had taken out a subscription with Parreaux, Thiébaud & Partners prior to the month of (…) were informed when renewing their subscription that their subscription had been transferred to Justicia SA. FINMA then extended the procedure and the mandate of the investigation officer to the latter. The business model of Justicia SA is almost identical to that of Parreaux, Thiébaud & Partners. The main difference concerns the terms of repayment of the loan which, according to the general terms and conditions of Justicia SA, was also repayable in the event of defeat according to the "terms agreed between the parties".
The report of the investigating officer contains in particular a detailed analysis of the activity of the two companies as well as a sample examination of client files.
By decision of [April?] 2023, FINMA held that the conditions set by case law to qualify an insurance activity were met and therefore found that Parreaux, Thiébaud & Partners, Justicia SA as well as Mathieu Parreaux, managing partner of Parreaux, Thiébaud & Partners and director of Justicia SA, carried out an insurance activity without having the required authorisation.
FINMA then found that Parreaux, Thiébaud & Partners, Justicia SA and Mathieu Parreaux had carried out insurance activities without the necessary authorisation, appointed a liquidator and ordered the immediate liquidation of the two companies. FINMA also ordered the confiscation of the liquidation proceeds in favour of the Confederation, ordered Mathieu Parreaux to refrain from carrying out, without the necessary authorisation, any activity subject to authorisation under the financial market laws and published the order to refrain for a period of 2 years on its website.
Key points from the judgment
(…)
1. Engaging in insurance transactions without the right to do so
(55) The LSA is intended in particular to protect policyholders against the risks of insolvency of insurance companies and against abuse2. Insurance companies established in Switzerland that carry out direct insurance or reinsurance activities must first obtain authorisation from FINMA and are subject to its supervision3. Where special circumstances justify it, FINMA may release from supervision an insurance company for which the insurance activity is of little economic importance or only affects a limited circle of policyholders4.
(56) In accordance with Art. 2 para. 4 LSA, it is up to the Federal Council to define the activity in Switzerland in the field of insurance. In an ordinance dated 9 November 2005, the Federal Council clarified that, regardless of the method and place of conclusion of the contract, there is an insurance activity in Switzerland when a natural or legal person domiciled in Switzerland is the policyholder or insured5. Furthermore, the LSA applies to all insurance activities of Swiss insurance companies, both for insurance activities in Switzerland and abroad. Thus, even insurance contracts concluded from Switzerland but which relate exclusively to risks located abroad with policyholders domiciled abroad are subject to the LSA. In such cases, there may also be concurrent foreign supervisory jurisdiction at the policyholder's domicile6.
(57) Since the legislature did not define the concept of insurance, the Federal Court developed five cumulative criteria to define it7: the existence of a risk, the service provided by the policyholder consisting of the payment of a premium, the insurance service, the autonomous nature of the transaction and the compensation of risks on the basis of statistical data. It is appropriate to examine below whether the services provided by Parreaux, Thiébaud & Partners and Justicia SA respectively meet the criteria of the given definition of the insurance activity.
(58) The existence of a risk: this is the central element for the qualification of insurance. The object of an insurance is always a risk or a danger, i.e. an event whose occurrence is possible but uncertain. The risk or its financial consequences are transferred from the insured to the insurer8. The uncertainty assumed by the insurer typically consists of determining whether and when the event that triggers the obligation to pay benefits occurs. The uncertainty can also result from the consequences of an event (already certain)9. In a judgment of 21 January 2011, the Federal Court, for example, acknowledged that the rental guarantee insurer who undertakes to pay the lessor the amount of the rental guarantee in place of the tenant while reserving the right to take action against the latter to obtain reimbursement of the amount paid, bears the risk of the tenant's insolvency. Thus, the risk of non-payment by the tenant is sufficient in itself to qualify this risk as an insurance risk10.
(59) In this case, the purpose of the legal subscriptions offered by Parreaux, Thiébaud & Partners / Justicia SA is the transfer of a risk from the clients to Parreaux, Thiébaud & Partners / Justicia SA. Indeed, when the client concludes a legal subscription, Parreaux, Thiébaud & Partners / Justicia SA assumes the risk of having to provide legal services and bear administrative costs, respectively lawyers' fees, court fees or expert fees incurred by legal matters. When a client reports a claim, Parreaux, Thiébaud & Partners / Justicia SA bears the risk and therefore the financial consequences arising from the need for legal assistance in question. In cases where there is a claim prior to the conclusion of the subscription, Parreaux, Thiébaud & Partners / Justicia SA will cover 50&percnt of the costs for this claim, but will continue to bear the risk for any future disputes that may arise during the term of the subscription. In this sense, Parreaux, Thiébaud & Partners / Justicia SA provide services that go beyond those offered by traditional legal protection insurance, which, however, has no influence on the existence of an uncertain risk transferred to Parreaux, Thiébaud & Partners / Justicia SA upon conclusion of the subscription. Furthermore, it was found during the investigation that, in at least one case, Parreaux, Thiébaud & Partners covered the fees without entering into a loan agreement with the client; it was therefore not provided for these advances to be repaid, contrary to what was provided for in the general terms and conditions of Parreaux, Thiébaud & Partners. Furthermore, it could not be established that the new wording of the general terms and conditions of Justicia SA providing for the repayment of the loan regardless of the outcome of the proceedings had been implemented. To date, no loan has been repaid. These elements allow us to conclude that the risk of having to pay for legal services and advances on fees are borne by Parreaux, Thiébaud & Partners and Justicia SA in place of the clients. Finally, in accordance with the case law of the Federal Court, even if the loan granted by Justicia SA is accompanied by an obligation to repay, the simple fact of bearing the risk of insolvency of its clients is sufficient to justify the classification of insurance risk.
(60) The insured's benefit (the premium) and the insurance benefit: In order to qualify a contract as an insurance contract, it is essential that the policyholder's obligation to pay the premiums is offset by an obligation on the part of the insurer to provide benefits. The insured must therefore be entitled to the insurer's benefit at the time of the occurrence of the insured event11. To date, the Federal Court has not ruled on the question of whether the promise to provide a service (assistance, advice, etc.) constitutes an insurance benefit. However, recent doctrine shows that the provision of services can also be considered as insurance benefits. Furthermore, this position is confirmed and defended by the Federal Council with regard to legal protection insurance, which it defined in Art. 161 OS as follows: "By the legal protection insurance contract, the insurance company undertakes, against payment of a premium, to reimburse the costs incurred by legal matters or to provide services in such matters"12.
(61) In this case, when a client enters into a legal subscription contract with Parreaux, Thiébaud & Partners/Justicia SA, he agrees to pay an annual premium which then allows him to have access to a catalogue of services depending on the subscription chosen. Parreaux, Thiébaud & Partners/Justicia SA undertakes for its part to provide legal assistance to the client if necessary, provided that the conditions for taking charge of the case are met. Parreaux, Thiébaud & Partners/Justicia SA leaves itself a wide margin of discretion in deciding whether a case is a pre-existing dispute or whether the case has little chance of success. In these cases, the services remain partially covered, up to 50%. This approach is more generous than the practice of legal insurance companies on the market. In fact, pre-existing disputes are not in principle covered by legal protection insurance, and certain areas are also often excluded from the range of services included in the contract.
(62) The autonomous nature of the transaction: The autonomy of the transaction is essential to the insurance business, even though the nature of an insurance transaction does not disappear simply because it is linked in the same agreement to services of another type. In order to determine whether the insurance service is presented simply as an ancillary agreement or a modality of the entire transaction, the respective importance of the two elements of the contract in the specific case must be taken into account and this must be assessed in the light of the circumstances13.
(63) In this case, the obligation for Parreaux, Thiébaud & Partners/Justicia SA to provide legal services to clients who have subscribed to the subscriptions and to bear administrative costs, respectively lawyers' fees, court fees or expert fees does not represent a commitment that would be incidental or complementary to another existing contract or to another predominant service between Parreaux, Thiébaud & Partners/Justicia SA and the clients. On the contrary, the investigation showed that the legal subscriptions offered are autonomous contracts.
(64) Risk compensation based on statistical data: Finally, the case law requires, as another characteristic of the insurance business, that the company compensates the risks assumed in accordance with the laws of statistics. The requirements set by the Federal Court for this criterion are not always formulated uniformly in judicial practice. The Federal Court does not require a correct actuarial calculation but rather risk compensation based on statistical data14. Furthermore, it has specified that it is sufficient for the risk compensation to be carried out according to the law of large numbers and according to planning based on the nature of the business15. In another judgment16, the Federal Court adopted a different approach and considered that the criterion of risk compensation based on statistical data is met when the income from the insurance business allows expenses to be covered while leaving a safety margin. Finally, in another judgment17, the High Court deduced from the fact that the products were offered to an indeterminate circle of people that the risks would be logically distributed among all customers according to the laws of statistics and large numbers18.
(65) In this case, the risks assumed by Parreaux, Thiébaud & Partners/Justicia SA are offset in accordance with the laws of statistics, at the very least through risk compensation according to the law of large numbers. Knowing that only a very small part of their clientele will use the services provided by Parreaux, Thiébaud & Partners/Justicia SA, the latter count on the fact that the income from legal subscription contributions will cover the expenses incurred for clients whose cases must be handled by Parreaux, Thiébaud & Partners/Justicia SA, while leaving a safety margin. Indeed, the analysis of the files revealed that when a client reports a case to Parreaux, Thiébaud & Partners/Justicia SA, the costs incurred to handle the case are at least three times higher than the contribution paid. Coverage in this proportion is only possible on the assumption that only a few clients will need legal assistance and that all contributions are used to cover these costs. (…).
(66) (…) The investigation, however, revealed that there is indeed an economic adequacy between the services provided to clients by Parreaux, Thiébaud & Partners / Justicia SA and the subscription fees it collects. In this way, Parreaux, Thiébaud & Partners / Justicia SA offsets its own risks, namely the costs related to the legal services it provides as well as the risk of not obtaining repayment of the loan granted to the client, by the diversification of risks that occurs when a large number of corresponding transactions are concluded, i.e. according to the law of large numbers. In view of the above, there is no doubt that the risk compensation criterion is met within the framework of the business model of Parreaux, Thiébaud & Partners / Justicia SA.
(69) (…) In view of the above, it is established that Parreaux, Thiébaud & Partners and Justicia SA have exercised, or still exercise, an insurance activity within the meaning of Art. 2 para. 1 let. a in conjunction with Art. 3 para. 1 LSA and Art. 161 OS without having the required authorisation from FINMA. Indeed, upon conclusion of a subscription, clients can request legal services from Parreaux, Thiébaud & Partners/Justicia SA against payment of an annual premium. In addition to these services, the latter grant a loan to clients to cover legal costs and lawyers' fees. Although these loans are repayable "according to the agreed terms", no such terms appear to exist in practice and no loan repayments have been recorded. Finally, the mere fact of bearing the risk of insolvency of clients is sufficient for the insurance risk criterion to be met. Furthermore, in view of the current number of legal subscription contracts held by Justicia SA, the turnover generated by its legal subscriptions and the fact that Justicia SA, and before it Parreaux, Thiébaud & Partners, offers its services to an unlimited number of persons, there are no special circumstances within the meaning of Art. 2 para. 3 LSA allowing Parreaux, Thiébaud & Partners and Justicia SA to be released from supervision under Art. 2 para. 1 LSA.
(…)
Operative part (dispositif)
- [1] Federal Act on the Supervision of Insurance Companies (LSA; RS 961.01).
- [2] Art. 1 para. 2 LSA.
- [3] Art. 2 para. 1 let. a in conjunction with Art. 3 para. 1 LSA.
- [4] Art. 2 para. 3 LSA.
- [5] Art. 1 para. 1 let. a OS.
- [6] HEISS/MÖNNICH, in: Hsu/Stupp (eds.), Basler Kommentar, Versicherungsaufsichtsgesetz, Basel 2013, nos. 5 f. ad Art. 2 LSA and the references cited.
- [7] ATF 114 Ib 244 consid. 4.a and the references cited.
- [8] HEISS/MÖNNICH, op. cit., nos. 15 ff. ad Art. 2 LSA and the references cited.
- [9] HEISS/MÖNNICH, op. cit., nos. 5 f. ad Art. 2 LSA and the references cited.
- [10] TF 2C_410/2010 of 21 January 2011, consid. 3.2 and 4.2.
- [11] HEISS/MÖNNICH, op. cit., nos. 23 ff. ad Art. 2 LSA and the references cited.
- [12] HEISS/MÖNNICH, op. cit., nos. 26 ff. ad Art. 2 LSA and the references cited.
- [13] HEISS/MÖNNICH, op. cit., nos. 30 ff. ad Art. 2 LSA and the references cited.
- [14] ATF 107 Ib 54 consid. 5.
- [15] Ibid.
- [16] ATF 92 I 126 consid. 3.
- [17] TF 2C_410/2010 of 21 January 2010, consid. 3.4.
- [18] HEISS/MÖNNICH, op. cit., nos. 34 ff. ad Art. 2 LSA and the references cited.
Please join us at the next regular Open NeuroFedora team meeting on Monday 10 February 2025 at 1300 UTC. The meeting is public and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:
$ date -d 'Monday, February 10, 2025 13:00 UTC'
The meeting will be chaired by @ankursinha. The agenda for the meeting is:
We hope to see you there!
In the world of artificial intelligence, there are many tools for running large language models (LLMs) locally. One of them is LM Studio, which lets users run advanced models without needing a permanent internet connection. In this article we introduce LM Studio, its key features, and how to install it […]
The post "Running AI with LM Studio" first appeared on Fedora Fans (طرفداران فدورا).
When I started Open Source Security I knew one of the topics that could use more attention was the security of CI/CD systems. All the talk about securing the supply chain seems to focus almost exclusively on the development and deployment stages. Not enough attention is being paid to the build stage (spoiler: most of the successful attacks have happened at this stage). When François Proulx reached out to chat about CI/CD systems, I couldn't say yes fast enough.
by Alexander Bokovoy and Andreas Schneider
FOSDEM 2025 is just behind us and it was a great event as always. Alexander and I had a chance to talk about the local authentication hub project. Our FOSDEM talk was “localkdc – a general local authentication hub”. You can watch it and come back here for more details.
But before going into details, let us provide a bit of background. It is 2025 now and we should go almost three decades back (ugh!).
Authentication on Linux systems is interwoven with the identity of the users. Once a user has logged in, their processes run under a certain POSIX account identity. Many applications validate the presence of the account prior to the authentication itself. For example, the OpenSSH server checks the POSIX account and its properties, and if the user is not found, it intentionally corrupts the password passed to the PAM authentication stack. The authentication request will fail, but the attempt will be recorded in the system journal.
This joint operation between authentication and identification sources on Linux makes it important to maintain a coherent information state. No wonder that in corporate environments it is often handled centrally: user and group identities are stored on a central server and sourced from it by local software such as SSSD. In order to consume these POSIX users and groups, SSSD needs to be registered with the centralized authority or, in other words, enrolled into the domain. Domain enrollment enables more than user identity and authentication: both the central server and the enrolled client machine can mutually authenticate each other and be sure they talk to the right authority when authenticating the user.
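For readers who have not done this before, a minimal enrollment on a Fedora-style client typically looks something like the sketch below; the domain name and admin account are placeholders, and depending on the environment you might use ipa-client-install instead of realmd.
# discover what kind of domain answers for this name (placeholder domain)
realm discover ipa.example.com
# enroll the machine; this configures SSSD and the local Kerberos client
realm join --user admin ipa.example.com
# verify that centrally managed users now resolve to POSIX identities
id alice@ipa.example.com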
FreeIPA provides a stable mechanism for building a centralized domain management system. Each user account has POSIX attributes associated with it and is represented by a Kerberos principal. Kerberos authentication can be used to transfer the authentication state across multiple services and gives services a chance to discover user identity information beyond POSIX. It also makes strong linking between the POSIX-level identity and the authentication structure possible: for example, a Kerberos service may introspect a Kerberos ticket presented by a user's client application to see how this user was originally authenticated, with a password or some specific passwordless mechanism, or perhaps that a client application performs operations on behalf of the user after claiming the user was authenticated through a different (non-Kerberos) mechanism.
Local user accounts lack this experience. Each individual service needs to reauthenticate a user again and again. Local system login: authenticate. Elevating privileges through SUDO? Authenticate again, if not explicitly configured otherwise. Details of the user session state, like how long this particular session has been active, are not checked by applications, making it harder to limit access. There is no information on how this user was authenticated. Finally, the overall user experience differs between local (standalone) authentication and domain-enrolled systems, making it harder to adjust and educate users.
Local authentication is also typically password-based. This is not a bad thing in itself, but depending on the applications and protocols, worse choices can be made, security-wise. For example, the contemporary SMB 3.11 protocol is quite secure if authenticated using Kerberos. For non-Kerberos usage, however, it has to rely on the NTLM authentication protocol, which requires the use of the RC4 stream cipher. There are multiple known attacks that break RC4-based encryption, yet it is still used in the majority of non-domain-joined communications over the SMB protocol simply because, so far, there has been no practical alternative. To be precise, there has always been an alternative, the Kerberos protocol, but setting it up for individual isolated systems wasn't practical.
The Kerberos protocol assumes three different parties: a client, a service, and a key distribution center (KDC). In corporate environments a KDC is part of the domain controller system, the client and the service are both domain members, and the computers are enrolled in the domain. The client authenticates to the KDC and obtains a Kerberos ticket-granting ticket (TGT). It then requests a service ticket from the KDC by presenting its TGT, and then presents this service ticket to the service. The service application, on its side, is able to decrypt the service ticket presented by the client and authenticate the request.
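On a domain-joined client this exchange can be observed with the standard MIT Kerberos tools; the principal and service names below are only examples.
kinit alice@EXAMPLE.COM          # authenticate to the KDC and obtain a TGT
kvno cifs/server.example.com     # use the TGT to request a service ticket
klist                            # the cache now holds the TGT plus the service ticket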
In the late 2000s Apple realised that on individual computers the number of user accounts is typically small, so a KDC can run as a service on the computer itself. When both the client and server are on the same computer, this works beautifully. The only problem is that when a user needs to authenticate to a different computer's service, the client cannot reach the KDC hosted on the other computer because it is not exposed to the network directly. Luckily, the MIT Kerberos folks had already thought about this problem a decade earlier: in 1997 a first idea was published for a Kerberos extension that allowed tunnelling Kerberos requests over a different application protocol. This specification later became known as “Initial and Pass Through Authentication Using Kerberos V5 and the GSS-API” (IAKerb). An initial implementation for MIT Kerberos was done in 2009/2010, while Apple introduced it in 2007 to enable remote access to your own Mac across the internet. It came in Mac OS X 10.5 as the “Back to My Mac” feature and was even specified in RFC 6281, only to be retired from macOS in 2019.
In the 2020s Microsoft continued to work on NTLM removal. In 2023 they announced that all Windows systems would get a local KDC as their local authentication source, accessible externally via selected applications through the IAKerb mechanism. By the end of 2024 we had only seen demos published by Microsoft engineers at various events, but this is a promising path forward. The presence of a local KDC in Windows raises an interoperability requirement: Linux systems will have to handle access to Windows machines in a standalone environment over the SMB protocol. Authentication is currently done with NTLM; since NTLM will eventually be removed, we need to support the IAKerb protocol extension.
The NTLM removal for Linux systems requires several changes. First, the Samba server needs to learn how to accept authentication with the IAKerb protocol extension. Then, the Samba client code needs to be able to establish a client connection and advertise the IAKerb protocol extension. For kernel-level access, the SMB filesystem driver needs to learn how to use IAKerb as well; this will also need to be implemented in the user-space cifs-utils package. Finally, to be able to use the same feature in a pure Linux environment, we need to be able to deploy a Kerberos KDC locally, and to do it in an easy manner on each machine.
This is where we had an idea. If we are going to have a local KDC running on each system, maybe we should use it to handle all authentication and not just for the NTLM removal? This way we can make both the local and domain-enrolled user experience the same and provide access locally to a whole set of authentication methods we support for FreeIPA: passwords, smartcards, one-time passwords and remote RADIUS server authentication, use of FIDO2 tokens, and authentication against an external OAuth2 Identity Provider using a device authorization grant flow.
On standalone systems it is often not desirable to run daemons continuously. It is also not desirable to expose these services to the connected network if they really don't need to be exposed. A common approach to this problem is to provide a local inter-process communication (IPC) mechanism to talk to the server components. We chose to expose the local KDC via UNIX domain sockets. A UNIX domain socket is a well-known mechanism with known security properties. With the help of the systemd feature called socket activation, we can also start the local KDC on demand, when a Kerberos client connects over the UNIX domain socket. Since actual authentication requests don't happen often on local systems, this helps reduce memory and CPU usage in the long run.
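As an illustration of the socket-activation pattern (the unit name and socket path below are hypothetical; the real units are shipped by the localkdc tooling), a systemd .socket unit for a UNIX-domain-socket KDC could look roughly like this, with a matching localkdc.service that receives the listening socket from systemd:
# /etc/systemd/system/localkdc.socket -- hypothetical example
[Unit]
Description=Local Kerberos KDC (socket activated)
[Socket]
# KDC clients connect here instead of to a network port
ListenStream=/run/localkdc/kdc.socket
[Install]
WantedBy=sockets.target
With something like this in place, systemd only starts the KDC when a client actually connects, which matches the "rarely used, should not linger" profile described above.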
If the local KDC is only accessible over a UNIX domain socket, remote applications cannot reach it directly. This means they need help from a server application that can use the IAKerb mechanism to pass the communication between a client and the KDC through. That would let us authenticate as a local user remotely from a different machine. Due to how the IAKerb mechanism is designed and integrated into GSS-API, this only allows password-based authentication. Anything that requires passwordless methods cannot obtain initial Kerberos authentication over IAKerb, at least at this point.
Here is a small demo on Fedora, using our localkdc tool to start a local KDC and obtain a Kerberos ticket upon login. The tickets can then be used effortlessly to authenticate to local services such as SUDO or Samba. For remote access we rely on Samba support for IAKerb and authenticate with GSSAPI, but local smbclient uses a password first to obtain the initial ticket over IAKerb. This is purely a limitation of the current patches we have for Samba.
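The rough shape of that flow, expressed as plain commands (host, share and user names are invented, and sudo only picks up the ticket if a GSSAPI-aware PAM module is configured):
kinit alice                      # obtain the initial ticket from the local KDC
klist                            # verify the ticket cache
sudo -i                          # local service; can use the Kerberos ticket via GSSAPI if so configured
smbclient //otherhost/share -U alice   # remote access; with the IAKerb patches the ticket is obtained over IAKerb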
Pause here for a moment and think about the implications. We have an initial Kerberos ticket from the local system. The Kerberos ticket embeds details of how this authentication happened. We might have used a password to authenticate, or a smartcard, or any other supported pre-authentication method. We could reuse the same methods FreeIPA already provides in the centralized environment.
The Kerberos ticket can also contain details about the user session, including current group membership. It does not currently include that in the local KDC case, but we aim to fix that. This ticket can be used to authenticate to any GSS-API or Kerberos-aware service on this machine. If a remote machine accepts Kerberos, it could theoretically accept a ticket presented by a client application running on the local machine as well. To do that, however, it needs to be able to communicate with our local KDC, which it cannot reach directly.
Luckily, a local KDC deployment is a full-featured Kerberos realm and thus can establish cross-realm agreements with other Kerberos realms. If two “local” KDC realms have trust agreements between each other, they can issue cross-realm Kerberos tickets which applications can present over IAKerb to the remote “local” KDC. Then a Kerberos ticket to a service running on the target system can be requested and issued by the system’s local KDC.
Thus, we can achieve passwordless authentication locally on Linux systems and have the ability to establish peer-to-peer agreements across multiple systems, allowing authentication requests to flow and operate on commonly agreed credentials. The problem now moves to the management area: how do we manage these peer-to-peer agreements and permissions in an easy way?
The MIT Kerberos KDC implementation provides a flexible way to handle Kerberos principals' information. A database backend (KDB) implementation can be dynamically loaded and replaced. This is already used by both FreeIPA and Samba AD to integrate the MIT Kerberos KDC with their own database backends based on different LDAP server implementations. For the local KDC use case, running a full-featured LDAP server is neither required nor intended. However, it would be great if different applications could expose parts of the data needed by the KDB interfaces and cooperate. Then a single KDB driver implementation could be used to streamline and provide a uniform implementation of Kerberos-specific details in a local KDC.
One of the promising interfaces to achieve that is the User/Group record lookup API via varlink from systemd. Varlink allows applications to register themselves and listen on UNIX domain sockets for communication, similar to D-Bus but with much less implementation overhead. The User/Group API technically also allows merging data coming from different sources when an application inquires about the information. “Technically”, because the io.systemd.Multiplexer API endpoint currently does not support merging non-overlapping data representing the same account from multiple sources. Once that becomes possible, we could combine the data dynamically and interact with users on demand when corresponding requests come in. Or we can implement our own blending service.
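For the curious, the systemd user database can already be queried over varlink today with the stock tooling; a call along these lines (the user name is just an example) returns the JSON user record that such a KDB driver would consume:
varlinkctl call /run/systemd/userdb/io.systemd.Multiplexer \
    io.systemd.UserDatabase.GetUserRecord \
    '{"userName": "alice", "service": "io.systemd.Multiplexer"}'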
Blending data requests from multiple sources within the MIT KDC needs a specialized KDB driver. We certainly don't want this driver to duplicate the code from other drivers, so making these drivers stackable would be a good option. Support for one level of stacking has been merged into MIT Kerberos through a quickly processed pull request and will be available in the next MIT Kerberos release. This allows us to have a single KDB driver that loads other drivers specialized in storing Kerberos principals and processing additional information like the MS-PAC structure or applying additional authorization details.
If Alice and Bob are on the same network and want to exchange some files, they could do this using SMB and Samba. But for Alice to be able to authenticate on Bob's machine, a Kerberos cross-realm trust would need to be established. With the current tooling this is a complex task. We need to make this more accessible for users. We want to allow users to request trust on demand and validate these requests interactively. We also want to allow trust to exist for a limited timeframe, expiring automatically or removed manually.
If we have a Kerberos principal lookup on demand through a curated varlink API endpoint, we also can have a user-facing service to initiate establishing the trust between two machines on demand. Imagine a user trying to access SMB share on one desktop system that triggers a pop-up to establish trust relationship with a corresponding local KDC on the remote desktop system. Both owners of the systems would be able to communicate out of band that provided information is correct and can be trusted. Once it is done, we can return back the details of the specific Kerberos principal that represents this trust relationship. We can limit lifetime of this agreement so that it would disappear automatically in one hour or a day, or a week.
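In plain MIT Kerberos terms, such a short-lived trust boils down to both local KDCs holding a matching cross-realm krbtgt principal. A hand-rolled, one-directional version (realm names invented, secret shared out of band) would look roughly like the line below, run on each of the two machines, which is exactly the kind of ceremony the proposed service is meant to automate and time-limit:
# on both Alice's KDC (realm ALICE.LOCAL) and Bob's KDC (realm BOB.LOCAL),
# create the same cross-realm principal with the same password:
kadmin.local addprinc -pw SHARED-ONE-TIME-SECRET krbtgt/BOB.LOCAL@ALICE.LOCAL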
We started with two individual implementation paths early in 2024:
MIT Kerberos has had support for the IAKerb protocol extension for more than a decade, but since Microsoft introduced some changes to the protocol, those changes needed to be integrated as well. This was completed during summer 2024, though no upstream release is available yet. MIT Kerberos typically releases new versions yearly in January, so we hope to get some updates in early 2025.
Samba integration with IAKerb is currently under implementation. Originally, Microsoft was planning to release Windows 11 and Windows Server 2025 with IAKerb support enabled during autumn 2024. However, the Windows engineering team faced some issues and IAKerb is still not enabled in the Windows Server 2025 and Windows 11 releases. We are looking forward to getting access to Windows builds that enable IAKerb support to ensure interoperability before merging the Samba changes upstream. We also need to complete the Samba implementation to properly support locally-issued Kerberos tickets, and not only ticket acquisition based on a password.
Meanwhile, our cooperation with the MIT Kerberos development team led to advancements in local KDC support. The MIT Kerberos KDC can now be run over a UNIX domain socket. On systemd-enabled systems we also allow socket activation, transforming the local KDC into an on-demand service. We will continue our work on a dynamic database for the local KDC, to allow on-demand combination of resources from multiple authoritative local sources (Samba, FreeIPA, SSSD, the local KDC, and a future dynamic trust application).
For experiments and ease of deployment, a new configuration tool was developed: localkdc. The tool is available in the localkdc repository, and a COPR repository can be used to try the whole solution on Fedora.
If you want to try this in a simple setup, you might be interested in a tool that we developed initially for FreeIPA: FreeIPA local tests. It allows you to provision and run a complex test environment in podman containers. The video of the local KDC usage was actually generated automatically by the scripts at https://github.com/abbra/freeipa-local-tests/tree/main/ipalab-config/localkdc.
FOSDEM 2025 is just behind us and it was a great event. I had a chance to talk about the local authentication hub project. The talk was well received and I got a lot of questions about it. We ran the Identity and Access Management devroom for the second time in a row and it was a great success. I had two talks in the IAM devroom, both progress reports on the activities we announced at FOSDEM 2024. Now that the recordings of both talks have been published, I can share articles that go into more detail.
Let's keep the blogging rolling. This week went by really fast, but a lot of it for me was answering emails, reviewing pull requests, and attending meetings. Those are all important, but sometimes it makes it seem like not much was actually accomplished in the week.
I got some x86 buildvms set up. These are to do tasks that don't need to be done on a riscv builder, like createrepo/newrepos or the like. I'm still having an issue with auth on them however, which is related to the auth issue with the web interface. Will need to get that sorted out next week.
Tuesday was the f42 branching day. It went pretty smoothly this cycle I think, but there's always a small number of things to sort out. It's really the most complex part of the release cycle for releng: so many moving parts and disparate repos and configs needing changes. This time I tried to stay out of actually doing anything, in favor of just providing info or review for Samyak, who was doing all the work. I mostly managed to do that.
Planning for the datacenter move is moving along. I've been working on internal documents around the stuff that will be shipped after we move, and next week I am hoping to start a detailed plan for the logical migration itself. It's a pretty short timeline, but I am hoping it will all go smoothly in the end. We definitely will be in a better place with better hardware once we are done, so I am looking forward to that.
As always, comment on mastodon: https://fosstodon.org/@nirik/113969409712070764
Oracle's cloud, while not as well known as Amazon's, is an interesting option for those who need a cloud service provider. Its range of services is less extensive, but let's be honest: who uses every resource AWS offers?
A big advantage of Oracle's service is that it lets you create a couple of small virtual machines for free: either two small x86_64 virtual machines or up to four ARM-based virtual machines.
You can easily create a new virtual machine instance:
Unfortunately Fedora is not available in Oracle's cloud, but Oracle Linux, Rocky Linux and Alma Linux are:
ssh -i ssh-key-####-##-##.key opc@###.##.###.###
When trying to log in for the first time I got this error message:
Permissions 0644 for 'ssh-key-#########.key' are too open.
I had not run into this error before; the solution is to make the file readable only by the current user with:
chmod 600 ssh-key-#########.key
Once logged into the server, no password is needed to switch to an administrator shell with:
sudo bash
Once you have terminal access, the main tasks are:
Update the system:
dnf update -y
I prefer to use Cockpit, which can be installed with:
dnf install -y cockpit
systemctl enable --now cockpit.socket
sudo firewall-cmd --add-service=cockpit
sudo firewall-cmd --add-service=cockpit --permanent
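To check that everything is up, something like the following should be enough (the IP is of course your instance's public address); note that on Oracle's cloud you may also need to allow TCP port 9090 in the subnet's security list before the web console is reachable from outside:
systemctl status cockpit.socket     # confirm the socket is enabled and listening
ss -tln | grep 9090                 # Cockpit listens on TCP port 9090
# then browse to https://<public-ip>:9090 and log in with the opc user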
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below it.
Week: 03 – 07 February 2025
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
If you have any questions or feedback, please respond to this report or contact us on #redhat-cpe channel on matrix.
The post Infra and RelEng Update – Week 6 appeared first on Fedora Community Blog.
The Log Detective service has been live for more than two weeks now. Running an LLM inference server in production is a challenge.
We started with llama-cpp-python's server initially but switched over to the llama.cpp server because of its parallel execution feature. I still need to benchmark it to see how much speedup we are getting.
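For reference, a llama.cpp server invocation with parallel slots looks roughly like the line below; the model path, slot count and context size are placeholders and need tuning for the actual deployment:
llama-server -m /models/model.gguf --port 8080 --parallel 4 --ctx-size 8192
# --parallel sets the number of request slots; the configured context is split across the slots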
This blog post highlights a few common challenges you might face when operating an inference server.
RPMs of PHPUnit version 12 are available in the remi repository for Fedora ≥ 40 and Enterprise Linux (CentOS, RHEL, Alma, Rocky...).
Documentation:
ℹ️ This new major version requires PHP ≥ 8.3 and is not backward compatible with previous versions, so the package is designed to be installed beside versions 8, 9, 10 and 11.
Installation:
dnf --enablerepo=remi install phpunit12
Notice: This tool is an essential component of PHP QA in Fedora. This version should be available soon in the Fedora ≥ 41 official repository (19 new packages).
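After installation you can do a quick sanity check; assuming the package installs a versioned phpunit12 command alongside the older versions, something like:
phpunit12 --version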
So, one of the things I've wanted when doing presentations in person or online is for the mouse pointer of my GNOME desktop to be large enough that it is obvious. My current system, running Fedora Linux 41 with GNOME 47, does not have a GUI control to change the size of the mouse pointer.
Well, here’s the trivial way to do that in the terminal and all’s well.
First, know what the size of the current mouse pointer is via:
% gsettings get org.gnome.desktop.interface cursor-size
That will return the current size, which on my system is 24.
Then, you can run the following:
% gsettings set org.gnome.desktop.interface cursor-size 80
The line above sets the size to 80.
Go ahead and experiment with different sizes to suit your needs. For me, 80 works just fine. And once my presentation/conf call is over, I reset it to 24.
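If you don't want to remember the default value, gsettings can also put the key back for you:
% gsettings reset org.gnome.desktop.interface cursor-size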
We're happy to announce Kiwi TCMS version 14.0!
IMPORTANT:
This is a major version release which includes security related updates, backwards incompatible changes, several improvements and new translations.
Recommended upgrade path:
13.7 -> 14.0
You can explore everything at https://public.tenant.kiwitcms.org!
---
Public container image (x86_64):
pub.kiwitcms.eu/kiwitcms/kiwi latest a4c45db53541 681MB
IMPORTANT: version tagged and multi-arch container images are available only to subscribers!
hub.kiwitcms.eu/kiwitcms/version 14.0 (aarch64) 9aaf5f3e5c7e 05 Feb 2025 695MB
hub.kiwitcms.eu/kiwitcms/version 14.0 (x86_64) 0152d6ac4cec 05 Feb 2025 681MB
hub.kiwitcms.eu/kiwitcms/enterprise 14.0-mt (aarch64) f28044190b68 05 Feb 2025 1.08GB
hub.kiwitcms.eu/kiwitcms/enterprise 14.0-mt (x86_64) 317f8f14a984 05 Feb 2025 1.06GB
IMPORTANT: version tagged, multi-arch and Enterprise container images are available only to subscribers!
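If you are using the public image, pulling it with podman (or docker) should be as simple as referencing the repository shown above:
podman pull pub.kiwitcms.eu/kiwitcms/kiwi:latest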
Backup first! Then follow the Upgrading instructions from our documentation.
Happy testing!
---
If you like what we're doing and how Kiwi TCMS supports various communities please help us grow!
In the world of machine learning, recommender systems have gained significant importance due to their ability to personalize the user experience on many platforms, from streaming services like Netflix and Spotify to e-commerce sites like Amazon. These systems use complex algorithms to predict users' preferences and recommend products or content that may interest them. In this context, the Surprise library for Python emerges as a specialized tool for implementing and evaluating recommender systems.
Surprise (Simple Python Recommendation System Engine) is a Python library designed specifically for building and evaluating recommender systems. Developed by Nicolas Hug, Surprise focuses on collaborative filtering algorithms, one of the most popular techniques in the field of recommender systems. Collaborative filtering is based on the idea that users who have had similar tastes in the past will probably have similar tastes in the future.
Surprise is a lightweight, easy-to-use yet powerful library: it offers a wide range of recommendation algorithms, tools for model evaluation, and utilities for data handling. It is also built on top of NumPy and SciPy, which makes it efficient and compatible with other machine learning libraries in Python.
The Surprise library is widely used in the research and development of recommender systems, and its applications cover a variety of scenarios. One of its main uses is the implementation of collaborative filtering, both user-based and item-based. In user-based collaborative filtering, items preferred by users with similar tastes are recommended, while in item-based filtering, items similar to those the user has already liked are recommended. This approach is especially useful on platforms where user interaction with items (such as movies, songs or products) is the main indicator of preferences.
Beyond implementing algorithms, Surprise stands out for its ability to evaluate recommendation models rigorously. The library offers metrics such as root mean squared error (RMSE), mean absolute error (MAE) and recommendation precision (precision@k), which allow developers to compare different algorithms and select the one that best fits their needs. These metrics are fundamental to ensuring that recommendations are accurate and relevant for users.
Another important application of Surprise is hyperparameter optimization. The library includes tools for cross-validation, which makes it possible to tune models and find the optimal combination of parameters. This is especially useful in settings where recommendation accuracy is critical, such as e-commerce platforms or streaming services, where a bad recommendation can lead to a poor user experience.
Surprise also integrates easily with other machine learning libraries, such as scikit-learn, which makes it easier to build complex pipelines and combine different machine learning techniques. For example, it is possible to combine collaborative filtering with content-based filtering, where specific item features (such as a movie's genre or a product's category) are used to improve recommendations. This flexibility makes Surprise a versatile tool for tackling a wide range of recommendation problems.
Let's now talk a bit about the advantages and disadvantages of using the Surprise library; I list them below to make things even simpler.
Before wrapping up, I wanted to show you a small practical example of how to implement a simple recommender system with Surprise based on collaborative filtering.
First we install the library so we can use it.
!pip install surprise
Now we run the following:
# Import the required libraries
from surprise import Dataset
from surprise import Reader
from surprise import SVD
from surprise.model_selection import cross_validate
# Load a sample dataset (in this case, the MovieLens dataset)
data = Dataset.load_builtin('ml-100k')
# Define the algorithm to use (in this case, SVD)
algo = SVD()
# Run cross-validation and evaluate the model
cross_validate(algo, data, measures=['RMSE', 'MAE'], cv=5, verbose=True)
# Train the model on the full dataset
trainset = data.build_full_trainset()
algo.fit(trainset)
# Make a prediction for a specific user and item
user_id = '196'
item_id = '302'
pred = algo.predict(user_id, item_id)
print(pred)
We use the test dataset called MovieLens to build a movie recommender for user 196 and predict whether they will like item (movie) 302.
The simple SVD model we built achieves acceptable performance in predicting ratings, with relatively low RMSE and MAE. The prediction for user 196 and item 302 is 4.02, which suggests the user would probably like this item.
The Surprise library is a powerful and versatile tool for implementing recommender systems in Python. Its ease of use, combined with a wide range of built-in algorithms and evaluation tools, makes it an attractive option for both researchers and developers. However, it is important to keep its limitations in mind, especially in terms of scalability and support for unstructured data.
In short, if you are looking for a library specialized in recommender systems that is easy to use yet powerful, Surprise is an excellent option worth exploring. Whether you are working on a personal project or a commercial application, Surprise can help you build effective and robust recommendation models. With its combination of advanced algorithms and evaluation tools, Surprise stands out as one of the best options for those who want to dive into the fascinating world of recommender systems.
Fedora had a booth at BalCCon for the 8th time in a row. I'm personally very happy that we kept this streak, as the first BalCCon where Fedora had a presence was when I first became an Ambassador in Serbia. I'm not aware of any other booth that has had such a strong presence.
For those unaware of this conference, it is focused on hacking, information security, privacy, technology & society, making, lock-picking, and electronic arts. For a sharp eye, these are well-rounded topics for Free Software-oriented people, which makes it a rather important conference in the region.
In addition to all that, the people who organize these events are extremely welcoming and heart-warming! Founders, volunteers, and speakers all work in tandem and show teamwork in very challenging times, both now and in the past… To an untrained eye it might seem that every BalCCon is smooth sailing, but the truth is that a lot of effort and sacrifice goes into making it happen. Every single person behind it is a devoted, humble, and passionate contributor to the event.
Attendees range from teenagers and students to younger and older adults, all interested in learning, sharing and socializing with each other. In other words, the conference is not solely focused on lectures and workshops, but also on bringing like-minded people together and providing them a safe place to connect.
Fun is also a huge part of the event, as there is a karaoke evening (don't miss that one) and a rakia tasting and sharing night adequately named “Rakija Leaks”.
At the Fedora booth we try to engage people and encourage questions, to better understand what people like and dislike, to provide them guidance, and to invite them to join the community. We always keep a positive attitude towards all Free and Open Source Software and never fuel or support distro wars. We love Fedora, but that doesn't exclude love towards other distros as well (it just may not be as strong).
This approach had an impact: many non-Fedora users liked to talk to us and stuck around. That relationship grew into constructive discussions about the strengths of Fedora and their OSes of choice. Many of them converted over time, or at least found a perfect sweet spot for Fedora in their everyday life.
Due to import customs in Serbia, swag has sometimes been hit and miss, but we try to keep the booth entertaining even in the absence of the much-adored stickers.
This year we’ve had a revamp of our booth with new Fedora logo for the roll-up banners and the table cloth. And it was at a perfect time, as Fedora booth was visible in an article of country’s most popular printed IT magazine Svet Kompjutera.
This was all thanks to the amazing support from jwflory, who displayed a great amount of innovative and proactive energy!
The booth's appearance has evolved over the years and become more and more inviting to everyone. An organizing volunteer even approached me to say that people have been asking whether Fedora will have a booth again this year, as they found it very interesting – not only from the Project's perspective, but also because, since the first year, we have tried to bring something that draws people in to come and talk to us.
To give some examples:
There are plans and ideas for future booths too, such as a SyncStar setup, a SELinux challenge box, a DIY pin machine, other quizzes, …
Here is the timeline in photos from 2015 – 2024 (there are missing photos due either to COVID or to an unfortunate oversight on my end):
Huge thanks for the support from jwflory, thunderbirdtr, nmilosev, nsukur, bitlord, and especially to my dear wife littlecat, who makes the booth incredibly appealing.
If you’ve never been to Serbia, Novi Sad, or BalCCon, you should definitely consider visiting and we’ll do our best to be good hosts and dedicate some of our times just for you!
The post Balkan Computer Congress, Novi Sad, Serbia appeared first on Fedora Community Blog.
At LaOficina we are currently working on a project to digitize family photographs and one of the challenges is the correct traceability of intellectual property. In this case we have encountered the difficulty of knowing the exact conditions of the received material, a situation that is not new and which is already addressed by the RightsStatements vocabulary, which includes 12 terms that are used, among others, by the Europeana community. Therefore, it is obvious that we need to add this vocabulary to our Wikibase Suite instance. By the way, as an exercise, I have taken the opportunity to compose it from scratch as an independent OWL ontology. It is very simple, but probably it has some conceptual flaws. If it is useful to someone, please use it without restrictions: righstatements-ontology.ttl
If you find something wrong, please reach out to me.
So, as we are a little bit into the new year, I hope everybody had a great break and a good start to 2025. Personally, I had a blast, having gotten the kids an air hockey table as a Yuletide present :). Anyway, I wanted to put this blog post together to talk about what we are looking at for the new year and to let you all know that we are hiring.
Artificial Intelligence
One big item on our list for the year is looking at ways Fedora Workstation can make use of artificial intelligence. Thanks to IBM’s Granite effort we now have an AI engine that is available under proper open source licensing terms and which can be extended for many different use cases. The IBM Granite team also has an aggressive plan for releasing updated versions of Granite, incorporating new features of special interest to developers, like making Granite a great engine to power IDEs and similar tools. We have been brainstorming various ideas in the team for how we can make use of AI to provide improved or new features to users of GNOME and Fedora Workstation. This includes making sure Fedora Workstation users have access to great tools like RamaLama, making sure that setting up accelerated AI inside Toolbx is simple, offering a good code assistant based on Granite, and coming up with other cool integration points.
Wayland
The Wayland community had some challenges last year, with frustrations boiling over a few times due to new protocol development taking a long time. Some of it was simply the challenge of finding enough people across multiple projects with the time to follow up and help review, while other parts were genuine disagreements about what kinds of things should be Wayland protocols or not. That said, I think the problem has been somewhat resolved, with a general understanding now that we have the ‘ext’ namespace for a reason: to give people a space to review and make protocols without an expectation that they will be universally implemented. This allows protocols of interest only to a subset of the community to go into ‘ext’, and thus gives protocols that might not be of interest to GNOME and KDE, for instance, a place to live.
The other, more practical problem is that of having people available to help review protocols or provide reference implementations. In a space like Wayland, where you need multiple people from multiple different projects, it can be hard at times to get enough people involved at any given time to move things forward, as different projects have different priorities and of course the developers involved might be busy elsewhere. One thing we have done to try to help out is to set up a small internal team, led by Jonas Ådahl, to discuss in-progress Wayland protocols and assign people the responsibility to follow up on those protocols we have an interest in. This has been helpful as a way for us to develop internal consensus on the best way forward, and I think it has also made our upstream contributions more efficient.
All that said, I also believe Wayland protocols will fade a bit into the background going forward. We are currently at the last stage of a community ‘ramp up’ on Wayland and thus there is a lot of focus on it, but once we are over that phase we will probably see what we saw with X.org extensions over time: most of the time new extensions are so niche that 95% of the community doesn’t pay attention or care. There will always be some new technology creating the need for important new protocols, but those are likely to come along at a relatively slow cadence.
High Dynamic Range
HDR support in GNOME Control Center
As for concrete Wayland protocols, the single biggest thing for us for a long while now has of course been HDR support for Linux, and it was great to see the HDR protocol get merged just before the holidays. I also want to give a shout out to Xaver Hugl from the KWin project. As we were working to ramp up HDR support in both GNOME Shell and GTK+, we ended up working with Xaver and using KWin for testing, especially for the GTK+ implementation. Xaver was very friendly and collaborative, and I think HDR support in both GNOME and KDE is more solid thanks to that collaboration, so thank you Xaver!
Speaking of concrete progress on HDR support, Jonas Ådahl submitted merge requests for HDR UI controls for GNOME Control Center. This means you will be able to configure the use of HDR on your system in the next Fedora Workstation release.
PipeWire
I have been sharing a lot of cool PipeWire news here over the last couple of years, but things might slow down a little going forward, simply because all the major features are basically working well now. The PulseAudio support is working well and we get very few bug reports against it now. The reports we are getting from the pro-audio community are that PipeWire works just as well as or better than JACK for most people, for instance in terms of latency, and when we do see issues with pro-audio, they tend to be caused by driver issues triggered by PipeWire trying to use the device in ways that JACK didn’t. We have been resolving those by adding more and more options to hardcode certain behaviors in PipeWire, so that just as with JACK you can force PipeWire not to try things the driver has problems with. Of course fixing the drivers would be the best outcome, but some of these pro-audio cards are so niche that it is hard to find developers who want to work on them or who have hardware to test with.
We are still maturing the video support, although even that is getting very solid now. The screen capture support is considered fully mature, but the camera support is still a bit of a work in progress, partially because we are going through a generational change in the camera landscape, with UVC cameras being supplanted by MIPI cameras. Resolving that generational change isn’t just on PipeWire of course, but it does make for a more volatile landscape in which to mature something. An advantage here is that applications using PipeWire can easily switch between V4L2 UVC cameras and libcamera MIPI cameras, helping users have a smooth experience through this transition period.
But even with these challenges we are moving rapidly forward, with Firefox’s PipeWire camera support now being on by default in Fedora, Chrome coming along quickly, and OBS Studio having had PipeWire support for some time already. And last but not least, SDL3 is now out with PipeWire camera support.
MIPI camera support
Hans de Goede, Milan Zamazal and Kate Hsuan keep working on making sure MIPI cameras work under Linux. MIPI cameras are a step forward in terms of technical capabilities, but at the moment a bit of a step backward in terms of open source, as a lot of vendors believe they have ‘secret sauce’ in their MIPI camera stacks. Our work focuses mostly on getting the Intel MIPI stack fully working under Linux, with the Lattice MIPI aggregator being the biggest hurdle currently for some laptops. Luckily Alan Stern, the USB kernel maintainer, is looking at this now, as he has the hardware himself.
Flatpak
Some major improvements to the Flatpak stack have happened recently, with the USB portal merged upstream. The USB portal came out of the Sovereign Tech Fund’s funding for GNOME, and it gives us a more secure way to give sandboxed applications access to your USB devices. On a somewhat related note, we are still working on making system daemons installable through Flatpak, with the use case being applications that have a system daemon to communicate with a specific piece of hardware, for example (usually through USB). Christian Hergert has this on his to-do list, but we are at the moment waiting for Lennart Poettering to merge some prerequisite work into systemd that we want to base this on.
Accessibility
We are putting a lot of effort into accessibility these days. This includes working on portals and Wayland extensions to help facilitate accessibility, working on the Orca screen reader and its dependencies to ensure it works great under Wayland, working on GTK4 to ensure we have top-notch accessibility support in the toolkit, and more.
GNOME Software
Last year Milan Crha landed support for signing the NVIDIA driver for use with Secure Boot. The main feature Milan is looking at now is getting support for DNF5 into GNOME Software. Doing this will resolve one of our longest-standing annoyances: the dnf command line and GNOME Software maintaining two separate package caches. Once the DNF5 transition is done, that should be a thing of the past, with less risk of disk space being wasted on an extra set of cached packages.
Firefox
Martin Stransky and Jan Horak have been working hard at making Firefox ready for the future, with a lot of work going into making sure it supports the portals needed to function as a Flatpak and into bringing HDR support to Firefox. In fact, Martin just got his HDR patches for Firefox merged this week. So with the PipeWire camera support, Flatpak support and HDR support in place, Firefox will be ready for the future.
We are hiring! Looking for two talented developers to join the Red Hat desktop team
We are hiring! We have two job openings on the Red Hat desktop team, so if you are interested in joining us in pushing the boundaries of desktop Linux forward, please take a look and apply. For these two positions we are open to remote workers across the globe, and while the job ads list specific seniorities, we are somewhat flexible on that front for the right candidate. So be sure to check out the two job listings and get your application in! If you ever wanted to work full time on GNOME and related technologies, this is your chance.
When I thought doing an episode about authentication would be a good idea, Marc Boorshtein was the first person who came to mind for me. Marc knows more about authentication than anyone I know, and he’s really good at talking about it in a coherent way. Marc is the CTO of Tremolo Security, he’s been doing authentication for more than 20 years, long before many of us even knew this whole identity and authentication thing was something we should care about.
January has gone by pretty fast. Here's some longer form thoughts about a few things that happened this last week.
We did a mass update/reboot cycle on (almost) all our instances. The last one was about two months ago (before the holidays), so we were due. We apply security updates on weekdays (i.e., Monday through Friday), but we don't apply bugfix updates except during these scheduled windows. Rebooting everything makes sure everything is on the latest kernel versions, and also ensures that if we had to reset something for other reasons, it would come back up in the correct/desired/working state. I did explore the idea of having things set up so we could do these sorts of things without an outage window at all, but at the time (a few years ago) the sticking point was database servers. It was very possible to set up replication, but it was all very fragile and required manual intervention to make sure failover/failback worked right. There's been a lot of progress in that area though, so later this year it might be time to revisit that.
We also use these outage windows to do some reinstalls and dist-upgrades. This time I moved a number of things from f40 to f41, and reinstalled the last vmhost we had still on rhel8. That was a tricky one as it had our dhcp/tftp vm and our kickstarts/repos vm. So, I live migrated them to another server, did the reinstall and migrated them back. It all went pretty smoothly.
There was some breakage with secure boot signing after these upgrades, but it turned out to be completely my fault. During the _last_ upgrade cycle, opensc changed the name of our token from 'OpenSC Card (Fedora Signer)' to 'OpenSC Card'. The upstream logic was "oh, if you only have one card, you don't need to know the actual token name", which is bad for a variety of reasons, like if you suddenly add another card or swap cards. In any case, I failed to update my notes on that and was trying the old name and getting a confusing and bad error message. Once I figured it out, everything was working again.
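If you ever need to double-check what OpenSC is currently calling a token, listing the slots with OpenSC's pkcs11-tool should show the label. A quick sketch (output format varies by version):

pkcs11-tool --list-slots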
Just for fun, here's our top 5 os versions by number:
237 Fedora 41
108 RHEL 9.5
31 Fedora 40
23 RHEL 8.10
4 RHEL 7.9
The 7.9 ones will go away once fedmsg and github2fedmsg are finally retired. (Hopefully soon).
A bunch of planning work on the upcoming datacenter move. I'm hoping next week to work a lot more on a detailed plan. Also, we in infrastructure should kick off some discussions around whether there's anything we can change/do while doing this move. Of course adding in too much change could be bad given the short timeline, but there might be some things to consider.
I also powered off 11 of our old arm servers. They had been disabled in the buildsystem for a while to confirm we didn't really need them, so I powered them off and saved us some energy usage.
The riscv-koji secondary hub is actually installed and up now. However, there's still a bunch of things to do:
Need to set up authentication so people/I can log in to it.
Need to install some buildvm-x86 builders to do newrepos, etc
Need to install a composer to build images and such on
At next week's riscv sig meeting hopefully we can discuss the steps after that. Probably we would just set up tags/targets/etc. and import a minimal set of rpms for a buildroot.
Need to figure out auth for builders and add some.
Overall progress finally. Sorry I have been a bottleneck on it, but soon I think lots of other folks can start in on working on it.
We have been having annoying lockups of our power9 hypervisors. I filed https://bugzilla.redhat.com/show_bug.cgi?id=2343283 on that. In the meantime, I have been moving them back to the 6.11.x kernels, which don't seem to have the problem. I did briefly try a 6.13.0 kernel, but the network wasn't working there. I still need to file a bug on that when I can take one down and gather debugging info. It was the i40e module not being able to load due to some kind of memory error. ;(
One thing that was bugging me last year is that I get a lot of notifications on chat platforms (in particular Slack and Matrix) where someone asks me something or wants me to do something. That's perfectly fine, I'm happy to help. However, when I sit down in the morning, I usually want to look at what's going on and prioritize things, not get sidetracked into replying to or working on something that's not the most important issue. This resulted in me trying to remember which things needed responses, and sometimes missing going back to them or getting distracted by them.
So, a few weeks ago I started actually noting things like that down as I came to them, then after higher pri things were taken care of, I had a nice list to go back through and hopefully not miss anything.
It's reduced my stress, and I'd recommend it for anyone with similar workflows.
As always, comment on mastodon: https://fosstodon.org/@nirik/113930147248831003
Artificial intelligence, especially generative AI, is advancing rapidly, and access to it is becoming easier for everyday users by the day. With the rise of large language models (LLMs) such as GPT and LLaMA, interest in running these models locally on personal hardware has grown. This article is a simple guide to setting up Ollama (a tool for […]
The post “Running AI with Ollama and Open WebUI” first appeared on طرفداران فدورا (Fedora Fans).
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, just check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 27th January – 31st January 2025
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
If you have any questions or feedback, please respond to this report or contact us on the #redhat-cpe channel on Matrix.
The post Infra and RelEng Update – Week 05 2025 appeared first on Fedora Community Blog.
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation (the perfect solution for such tests), and also as base packages.
RPMs of PHP version 8.4.4RC2 are available
RPMs of PHP version 8.3.17RC1 are available
ℹ️ The packages are available for x86_64 and aarch64.
ℹ️ PHP version 8.2 is now in security mode only, so no more RC will be released.
ℹ️ Installation: follow the wizard instructions.
ℹ️ Announcements:
Parallel installation of version 8.4 as Software Collection:
yum --enablerepo=remi-test install php84
Parallel installation of version 8.3 as Software Collection:
yum --enablerepo=remi-test install php83
Update of system version 8.4:
dnf module switch-to php:remi-8.4
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.3:
dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*
ℹ️ Notice:
Software Collections (php83, php84)
Base packages (php)
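As a quick sanity check after a system update through the module workflow above, something like the following should confirm the active version and the enabled module stream (illustrative, not part of the official instructions):

php -v
dnf module list --enabled php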
I have been running a Pixelfed instance for some time now at https://pixel.kushaldas.photography/kushal. This post contains quick setup instructions using docker/containers for the same.
We will need the .env.docker file and modify it as required, especially the following; you will have to fill in the values for each one of them.
APP_NAME=
APP_DOMAIN=
OPEN_REGISTRATION="false" # because personal site
ENFORCE_EMAIL_VERIFICATION="false" # because personal site
DB_PASSWORD=
# Extra values to db itself
MYSQL_DATABASE=
MYSQL_PASSWORD=
MYSQL_USER=
CACHE_DRIVER="redis"
BROADCAST_DRIVER="redis"
QUEUE_DRIVER="redis"
SESSION_DRIVER="redis"
REDIS_HOST="redis"
ACTIVITY_PUB="true"
LOG_CHANNEL="stderr"
The actual docker compose file:
---
services:
  app:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    depends_on:
      - db
      - redis
    # The port statement makes Pixelfed run on Port 8080, no SSL.
    # For a real instance you need a frontend proxy instead!
    ports:
      - "8080:80"
  worker:
    image: zknt/pixelfed:2025-01-18
    restart: unless-stopped
    env_file:
      - ./.env
    volumes:
      - "/data/app-storage:/var/www/storage"
      - "./.env:/var/www/.env"
    entrypoint: /worker-entrypoint.sh
    depends_on:
      - db
      - redis
      - app
    healthcheck:
      test: php artisan horizon:status | grep running
      interval: 60s
      timeout: 5s
      retries: 1
  db:
    image: mariadb:11.2
    restart: unless-stopped
    env_file:
      - ./.env
    environment:
      - MYSQL_ROOT_PASSWORD=CHANGE_ME
    volumes:
      - "/data/db-data:/var/lib/mysql"
  redis:
    image: zknt/redis
    restart: unless-stopped
    volumes:
      - "redis-data:/data"
volumes:
  redis-data:
I am using nginx as the reverse proxy. The only thing to remember there is to pass .well-known/acme-challenge to the correct directory for letsencrypt; the rest should point to the container.
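With the .env file and the compose file above in place, bringing everything up is the usual routine. A rough sketch, assuming the compose plugin (or podman-compose) is available:

docker compose up -d
docker compose logs -f app   # watch the app container start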
Rails comes with built-in support for saving and uploading files to S3 and S3-compatible storage services via Active Storage. Here’s how to set up Cloudflare R2.
To start using Cloudflare R2, select R2 Object Storage from the menu on the left navbar. If you are on the free plan, you’ll need to subscribe first.
Then you can click + Create bucket and give your bucket a name. Choose meaningful names for your buckets. I usually append the environment to the name.
Cloudflare will auto-assign your bucket location based on your actual location, so providing a hint might not work. Not great if you are doing this while on holiday.
TIP: If you are hosting in EU, you can at least choose Specify jurisdiction and force it to be within EU.
Then on the bucket page choose Settings and include the following CORS:
[
{
"AllowedOrigins": [
"*"
],
"AllowedMethods": [
"PUT"
],
"AllowedHeaders": [
"*"
],
"ExposeHeaders": [
"Origin", "Content-Type", "Content-MD5", "Content-Disposition"
],
"MaxAgeSeconds": 3600
}
]
Once done click on API next to the + Create Bucket button. Select Manage API tokens and create a new token.
Note down all the secrets, most importantly the access key, the access key secret, and the URL.
Install Active Storage and configure config/storage.yml similarly to the following:
test:
  service: Disk
  root: <%= Rails.root.join("tmp/storage") %>

local:
  service: Disk
  root: <%= Rails.root.join("storage") %>

cloudflare_dev:
  service: S3
  endpoint: https://[HASH].eu.r2.cloudflarestorage.com
  access_key_id: <%= ENV["R2_ACCESS_KEY"] %>
  secret_access_key: <%= ENV["R2_SECRET_ACCESS_KEY"] %>
  bucket: app-dev
  region: auto

cloudflare_production:
  service: S3
  endpoint: https://[HASH].eu.r2.cloudflarestorage.com
  access_key_id: <%= ENV["R2_ACCESS_KEY"] %>
  secret_access_key: <%= ENV["R2_SECRET_ACCESS_KEY"] %>
  bucket: app
  region: auto
To connect Cloudflare we’ll need an endpoint, access_key_id, secret_access_key, and region. You can use environment variables or Rails Credentials.
Note that unlike Amazon S3, where we fill in the region, and DigitalOcean Spaces, where the region is unused, here we set it to auto.
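If you go the environment variable route, the two keys referenced in the storage.yml above just need to be present in the Rails process environment, for example (placeholder values, not real credentials):

export R2_ACCESS_KEY="your-access-key-id"
export R2_SECRET_ACCESS_KEY="your-access-key-secret"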
I usually set up two different backends, one for development and one for production:
# config/environments/development.rb
config.active_storage.service = :cloudflare_dev
# config/environments/production.rb
config.active_storage.service = :cloudflare_production
And that’s really it!
Here are some simple good practices to ensure the health of your HDDs, using SMART.
I have been running a home server for decades with multiple services such as:
Hardware setup is:
This server is critical for my family, yet several hard drives have failed over all these years (that is why I have set up automated backups).
It was not until recently that I learned how to monitor the health of these HDDs using SMART. Get it installed with the smartmontools package.
When you get a new HDD, after connecting it but before putting it into production, you must run a self-health check with the command:
smartctl -t long /dev/sdz
Where /dev/sdz is the device name of your new HDD. This test will take more than 10 hours and can be done even with the HDD in use, but it will degrade performance, so I prefer to unmount all filesystems first.
You should also rerun this health check from time to time.
You can also run 10-minute health checks without bothering to take your services offline:
smartctl -t short /dev/sdz
To see the results of these tests, I prefer using Alexander Shaduri’s GSmartControl GUI, even if the smartctl command can also display them. Here is the view of one HDD, on the Self-Tests tab, with the completion status of the health checks we triggered with the smartctl commands above.
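If you prefer to stay on the command line, the same information is available from smartctl itself:

smartctl -l selftest /dev/sdz   # self-test log, the equivalent of the Self-Tests tab
smartctl -H /dev/sdz            # overall health self-assessment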
As a general good practice, you must immediately stop using an HDD if you get many errors in the Error Log tab.
SMART error messages are very confusing and proprietary, and I have seen HDDs marked as healthy even if they present many error messages. So again, as a general good practice, if the smartctl -l error /dev/sdz output is very long and is not a simple “No Errors Logged”, it is time to migrate data off of the HDD and replace it.
The smartmontools package also installs a service that checks HDD health daily and sends a report by e-mail. On Fedora Linux this is activated simply by installing the package, but it is useless if your system can’t send e-mails, which is how the reports are delivered. So make sure your server can send e-mails. There are many ways of doing this; the simplest one is using Gmail as a relay.
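For reference, where the report goes is controlled by the DEVICESCAN line in /etc/smartmontools/smartd.conf. A minimal sketch (the address is illustrative; the stock Fedora file already ships a DEVICESCAN line you can adapt):

DEVICESCAN -a -m admin@example.com -M daily

After editing, restart the daemon with systemctl restart smartd so it picks up the change.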
If your HDD is starting to fail, you’ll get an e-mail from the SMART daemon. If that happens, you must run the more extensive health checks described above, check your backups, and move your data off of the failing HDD.
Also, you should check the warranty of your HDDs. The more premium product lines from Seagate (IronWolf) and Western Digital (Red) offer a 3-year warranty, and you should make sure you get it when you buy. Use your HDD serial number to check whether warranty coverage is still valid on the Seagate and Western Digital websites. You can fill in some forms on the manufacturer’s website to get a brand-new replacement HDD.
Recently I have posted a Dockerfile to run syslog-ng in an Alma Linux container. I got some encouraging feedback, so this week I experimented with syslog-ng Premium Edition (PE) in a RHEL UBI (Universal Base Image) container. While this is not officially supported by One Identity, we are really interested in your feedback.
If you do not have syslog-ng PE yet, you can get it at https://www.syslog-ng.com/register/115582/ . You also need RHEL with a valid subscription to build the container image. You need Podman and Buildah installed, and also git, unless you want to download files one by one.
You can download the Dockerfile and the related files from GitHub from my personal repo at https://github.com/czanik/syslog-ng-pe-ubi.
The Dockerfile contains all information needed to build the container image. Let’s check some of the most important lines from the file.
FROM registry.access.redhat.com/ubi9/ubi-init
This line means that we use the RHEL Universal Base Image as a base image. It has a couple of variants: “init” means that this image has systemd with service management included. It results in a slightly larger image size, but it also means that unlike the method described in the syslog-ng PE documentation, multiple services can run in the same container. In this case, syslog-ng starts automatically, but optionally you can also enable the syslog-ng Windows Event Collector (WEC) and the syslog-ng Prometheus exporter.
ENV SNGRPM="syslog-ng-premium-edition-8.0.0-1+20250116+1752.rhel9.x86_64.rpm"
Setting this environment variable ensures that you have to set the syslog-ng PE installer filename only once.
The next lines update software in the container, install syslog-ng PE and clean the package management system. (Note that even if we remove the syslog-ng PE installer, it persists in one of the lower layers, enlarging the final container image. I still need to find a workaround for this problem.)
Next, files related to the syslog-ng Prometheus exporter are added. Only the syslog-ng service is enabled in the Dockerfile, and you can easily enable other services later. However, you can also add syslog-ng-wec and sngexporter here, so you do not have to enable these services manually for the container.
Default ports related to syslog-ng PE are collected in the Dockerfile. Note that these are not automagically forwarded from the outside network to the containers. These are just reminders that these ports could be opened by the application(s) running in the container.
The bundled syslog-ng.conf does not collect local logs but opens a couple of network sources.
Once you have set the name of the installer in the Dockerfile, and optionally also included other services to be enabled, you are ready to build the container. You can name it whatever you want or skip naming altogether. The following command builds a new container within a few minutes (depending on the network speed):
docker build -t ubipe9 .
Note that while I still use the command name “docker” in my examples, I actually use Podman and Buildah with a compatibility link.
There are many options to run the container. You will most likely map some directories from the host or open some ports to collect logs from remote hosts. Here I show only a very basic command line. It starts syslog-ng in the background, and maps the license file into the container from the host:
docker run -d --name=sng -v /data/license.txt:/opt/syslog-ng/etc/license.txt:Z ubipe9
The license file is needed to run syslog-ng PE as a server. Without it, syslog-ng PE can save logs only from local log sources or run in relay mode (forward logs without saving them).
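If you also want to collect logs from remote hosts, you can publish ports when starting the container. For example, the traditional syslog ports could be mapped like this (just a sketch; adjust the ports to whatever your syslog-ng.conf actually listens on):

docker run -d --name=sng -p 514:514/udp -p 601:601 \
  -v /data/license.txt:/opt/syslog-ng/etc/license.txt:Z ubipe9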
You have seen how to enable services in the Dockerfile. You can also enable (or disable) WEC or the Prometheus exporter from the command line. The container will keep the changed state as long as you do not rm it. Here is a sample session:
[root@localhost syslog-ng-pe-ubi]# docker run -d --name=sng -v /data/license.txt:/opt/syslog-ng/etc/license.txt:Z ubipe9
e09706c27aa4a10e83959030604473c9f3e2a0d45c8f011256d2a4ae03ba732e
[root@localhost syslog-ng-pe-ubi]# docker exec -ti sng systemctl enable sngexporter
Created symlink /etc/systemd/system/multi-user.target.wants/sngexporter.service → /usr/lib/systemd/system/sngexporter.service.
[root@localhost syslog-ng-pe-ubi]# docker stop sng
sng
[root@localhost syslog-ng-pe-ubi]# docker start sng
sng
[root@localhost syslog-ng-pe-ubi]# docker exec -ti sng /bin/bash
[root@e09706c27aa4 /]# ps aux
USER  PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root    1  0.5  0.3  22072 11648 ?     Ss   09:07  0:00 /sbin/init
root   10  0.0  0.2  35748  9244 ?     Ss   09:07  0:00 /usr/lib/systemd/systemd-journald
root   16  3.1  0.8 375848 32272 ?     Ssl  09:07  0:00 /opt/syslog-ng/libexec/syslog-ng -F --enable-core
root   21  0.6  0.5  25152 18944 ?     Ss   09:07  0:00 python3 /usr/local/bin/sng_exporter.py --socket-path=/opt/syslog-ng/var/syslog-ng.ctl
root   25  0.0  0.0   4840  3712 pts/0 Ss   09:07  0:00 /bin/bash
root   36  0.0  0.0   7552  3200 pts/0 R+   09:07  0:00 ps aux
This blog is aimed at helping you get started with syslog-ng PE in a RHEL UBI container. This is still experimental and not officially supported, but your feedback is very welcome, both problem and success reports.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 332, cockpit-machines 327, cockpit-podman 100, and cockpit-files 15:
The cockpit/ws container now includes cockpit-files. When you log into a remote machine that does not have any Cockpit packages installed (package-less mode), the “Files” page is now available.
Cockpit 327 introduced support for connecting to remote machines without any installed Cockpit packages. That only worked if the local and remote machine had the same operating system version. This version extends this “package-less session” feature to all operating systems which Cockpit continuously tests. At the time of writing this includes:
This list will change over time, due to new distribution releases and the end of support for each version.
The “Launch viewer” button in a virtual machine’s “Console” card now uses the same address that Cockpit uses to connect to the virtual machine host. Previously, the button incorrectly attempted to connect to either the local machine where the browser is running (when the listening address was set to “localhost”) or to an invalid “0.0.0.0” address (when configured to listen to all interfaces).
If podman.socket is not already running on the system or user session, cockpit-podman will now automatically start the unit. It is no longer necessary to manually start or enable the podman service.
Container-based services (aka “quadlets”) are now labeled with a “service” badge in the containers overview. Additionally, the option to rename quadlets has been removed, as renaming must take place in the systemd service .container file, which is often found in /etc/containers/systemd/.
The footer now shows the user and group of the current directory.
Cockpit 332, cockpit-machines 327, cockpit-podman 100, and cockpit-files 15 are available now:
In October 2024 some members of our community gathered in Medellín, Colombia, for another edition of GNOME Latam. Some of us joined remotely for a schedule packed with talks about GNOME and its ecosystem.
The talks, in Spanish and Portuguese, are now published on YouTube. Check it out!
Photo by William White on Unsplash.
Please join us at the next regular Open NeuroFedora team meeting on Monday 27 January 2025 at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:
$ date -d 'Monday, January 27, 2025 13:00 UTC'
The meeting will be chaired by @ankursinha. The agenda for the meeting is:
We hope to see you there!
What a way to spend an evening.
As I attempted to rewrite some Gitlab automation from bash to python, I stumbled over what should have been a trivial thing: collapsing section headers. They were not trivial.
In bash the code to do a section header looks like this:
section_start() {
    name="$1"
    description="${2:-$name}"
    echo -e "\e[0Ksection_start:`date +%s`:${name}[collapsed=true]\r\e[0K${description}"
}
But those escape sequences are not valid escape sequences in Python, and what they turn into is not intuitive either. Here is the code I ended up with:
import sys

from plumbum import local  # assumed: `local["date"]` looks like plumbum's command runner


def section_start(name, description=None):
    if description is None:
        description = name
    # \x1b[0K ("erase in line"), the sequence GitLab expects around section markers
    escape = bytearray([27, 91, 48, 75])
    log_date = local["date"]("+%s").strip()
    message = "section_start:" + log_date + ":" + name + "[collapsed=true]\r"
    b = escape + message.encode('utf-8') + escape + description.encode('utf-8')
    sys.stdout.buffer.write(b)
    print()
The print call in Python accepts only a limited set of escape sequences. It turns out that the “\e” in my original echo command is not one of them.
To find out what was actually sent to the screen, I fell back on strace. Specifically, this call:
strace -xx -e trace=write echo -e "\e[0Ksection_start:`date +%s`:name[collapsed=false]\r\e[0Kdescription" 2> /tmp/trace.txt
The output from that was:
write(1, "\x1b\x5b\x30\x4b\x73\x65\x63....
The \x1b became the 27 in my program: the ASCII escape character.
With that, when I print output to the screen, the name portion is covered up and I only see the description. Since this is what happens with the bash code, I think I have something that will work on gitlab.
Maybe you are ...
A Tesla is stupid fun to drive, even if you treat it as if it were just a normal car. The electric motor(s) have high torque for acceleration, and the machine features precise steering and smooth tracking. For some models, the suspension is a little stiff, but that's normal for high-performance cars. It's easy to manoeuvre the thing exactly the way you want to. The audio system is great, and you don't even have to pump up the volume to enjoy it. They look pretty sharp. Recharging and winter are fine. Fun!
A decade ago, one major selling point of electric vehicles was that they are purportedly good for the environment. Yeah, it's nowhere near that simple, and should never be mandated, but whatever. They don't burn gasoline, so they don't make that kind of mess. It's possible to get severe battery fires happening, but with a Tesla that seems to require a catastrophic crash. So don't crash! But how?
Got you, fam. Safety is the defining feature that sets Tesla apart from practically all other vehicles. Their way to get there is full automation, to the point where the car literally drives itself. Today's top implementation is called "Full Self Driving (Supervised)", and it feels pretty close. It can already take me door-to-door from this city to another one, including on city streets, traffic, highways, with zero or almost zero input. The driver just needs to "supervise", i.e., look outside and confirm the car is reading his mind properly. (If the car's about to make a mistake, the driver can instantly take over with a touch on the pedals or the steering wheel.)
What makes this possible is a bunch of AI hardware and software in the car, fed by eight or so cameras that are looking in every direction. The hardware recognizes objects and positions; the software infers velocities, vulnerabilities, intent, and decides on driving control inputs. It needs all those cameras to build up a complete-enough view of the world to track lanes, safely do turns, lane changes, passing. The system draws a live 3D map of the surroundings of the car, including all other visible nearby vehicles, people, obstacles.
What this also makes possible is to exceed the capacity of a human being to pay attention. The car is literally looking in every direction, and constantly, so it can respond to impending trouble in places where the human would not be looking. There are lots of crash-evasion videos out there, where the robot makes a rapid turn or brake to avoid hitting something that came out of the blue. The rear view cameras watch for bikers and pause your door opening if one is nearby, to protect them from "door prizes". It does not get tired, so even on a boring highway or with a sleepy driver, the thing stays solid.
The FSD system is tied to the navigation software, so it knows where you want to go. A destination that you are unfamiliar with is not a problem. I recently had FSD drive in a foreign city, at night, where if I were driving by gps/map alone, it would have been a challenge. Constantly cross-checking street names, distances, GPS guidance with driving in an unknown city is super stressful. FSD is confident, knows where it's going, and turns the driver's job into just confirming tactical safety. So much easier!
Another way to describe the feeling of supervising FSD will resonate with aviators. Supervising FSD feels just like flying with the autopilot engaged. A pilot can dismiss second-by-second minutiae, and switch to a strategic level of executive thinking. On the plane, with the autopilot on, one doesn't need to worry "am I centered in this airway? holding altitude?". Normally, the answer will be "yes". Similarly, with FSD, the driver can stop thinking "how exactly can we pass this guy?", and instead consider "how many minutes until the next turn? how is the battery usage compared to predictions? is there a washroom at the supercharger?". It's soothing. It makes long drives feel short.
Yes, FSD is not perfect, though the monthly updates improve it constantly. Crashes can/do still occur, with or without FSD in control of the car. Tesla engineers wrapped the cabin in enough airbags and steel to protect people inside from normal crashes - and not just the cybertruck. It is unfortunately possible to manually race a Tesla like a drunken madman into concrete pillars, beyond the capacity of any car to protect the passengers, but so it goes.
It feels like full self-driving - without supervision - is pretty close. That way, even people who grew too old to drive, or those who can't or don't want to own a Tesla, will be able to ride. This may take out the worst aspects of taxicab industries, and maybe parts of public transit.
The Tesla hardware + software systems are designed to keep people comfy. During driving, the car is pretty quiet, with only tire and wind noise getting in. The electric motors are silent, and there is no engine + exhaust to rumble. You can talk quietly inside. Even the smallest model 3 is spacious, and the larger ones really feel large on the inside.
The heating/cooling system is amazing. It conspires to scavenge excess heat from wherever in the entire car it may be generated or found, and send it to wherever it is needed. It's controllable remotely and by automated schedule, so the car can heat or cool or completely defrost its windows without anyone being there. There's no engine, so no fumes: this process can safely run anywhere - a parking lot or indoors. When you arrive at the car, it's ready and cozy. No more squirming in cold/hot seats for the first 15 minutes. Excellent air filtration - driving beside a landfill site is hard to notice.
There's no fuel, so there's no need to stand outdoors to refuel. (Road-trip recharging is often outdoors, but you don't have to babysit the wire connection. You can just sit inside or visit a fast food joint and relax.) If you need to park somewhere strange, the car will auto-lock when you walk away, can keep an eye out around itself as a security camera while you're away, and (in a parking lot) can drive itself to you when you're ready to go.
The car's main user interface is a big touchscreen, running some OS, maybe Linux, exposing all the systems in the car. There is also voice control, as in: "navigate to a gas station". Just kidding! The visualizations on the screen include a birds-eye view of the car and its surroundings, as detected by the AI system from all the cameras. Rear view cameras pop up when changing lanes. Interactive mapping is the default app to take up the bulk of the screen, but there are other options too. Several phone-app type gadgets are available. Several games(!) also, for use while the car is parked. Tesla likes expressing humour, so there are toy programs that play with the lights, make fart sounds, change the surroundings-visualization to a rendering of mars, or of Santa, a sketchpad to make blobfish-shaped-dad-portraits, that kind of silly stuff.
The navigation software is pretty important, because it is tied into the battery management system. The car knows precisely how much charge it has, and when you tell it where you're going, it can integrate wind, temperature, traffic, topography, and maybe other factors, to estimate how much energy it'll take to get to your destination. If the battery is too low, it'll offer detours to en-route supercharger stations, and will tell you how long you'll need to charge for. (Often it's 5-15 minutes.) If you don't like the built-in navigation package, third-party phone apps with more routing/charging flexibility can be authorized to exchange telemetry with the car.
If you want to know how the car works and how to work on it, you can read the entire service manuals. I hope you like software updates: every couple of weeks, new versions and features flow down, sometimes big improvements, sometimes small ones, and occasionally some regressions. The car literally improves with age.
But if you're a weirdo who doesn't want to play with the tech, you can just take the car's navigation advice, use voice commands for everything, plug it in at home every night, and ignore the screen.
Tesla cars are perpetually Internet-connected via a built-in cellular data connection. This enables various remote control functions, clever navigation, software updates, and other things. In the case of a crash, I heard the car attempts to upload to the mothership a black-box worth of info about the last few seconds. At your option, you can volunteer to supply Tesla with much more data, via the cameras outside and inside the car. Basically, when something happens that surprises the AI system, a brief recording is uploaded to the mothership. This is used to train the next version of the AI system. In turn, everyone will receive the updated AI system before long. Unfortunately, this trust was violated a couple of years ago. We are assured discipline has improved, and again it's opt-in (and opt-out anytime).
Unfortunately, they are not yet cheap. A Tesla is priced similarly to high-end hybrids, well above entry-level sedans. On the other hand, operating costs are hilariously small. I've calculated that, compared to our Toyota RAV4 Hybrid (an amazingly fuel efficient SUV, with an 18-month wait to deliver), taking the Tesla for a similar ride costs about 10%, when recharged at home overnight! It's practically free for commuting. Also, there's basically no regular maintenance, as the most mechanical systems from a normal car just don't exist. At the end of every quarter and especially at the end of the year, some serious purchase discounts are normally available. Plus, any current Tesla owner can give you a referral link, from which you can get an additional $1-2K off the price (and they get some freebies too). Plenty of older used ones can be found, which can still run modern FSD AI.
So yeah, the Tesla is not quite for everyone. But in case it might be for you, I'd love to show you ours - drop me an email. If you decide you want one, you can order it entirely online, in a few minutes, just like on Amazon. One could be in your garage literally in days. (We picked up our "custom" Model Y, just 4 days after a Sunday order.)
comments? additions? reactions?
As always, comment on mastodon: https://fosstodon.org/@nirik/114009437084125263