Beaker one liner using xmllint:
bkr job-results J:61 | xmllint --xpath /job/recipeSet/recipe/roles/role -
<role value="RECIPE_MEMBERS"/>
The dash at the end makes it read from stdin
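To see the stdin behaviour without a Beaker server, you can feed xmllint a hand-written stand-in for the job results XML (the file below is my own example, not real bkr output):

```shell
# Hypothetical input standing in for the output of `bkr job-results J:61`
cat > /tmp/job.xml <<'EOF'
<job>
  <recipeSet>
    <recipe>
      <roles>
        <role value="RECIPE_MEMBERS"/>
      </roles>
    </recipe>
  </recipeSet>
</job>
EOF

# The trailing "-" tells xmllint to read the document from stdin
cat /tmp/job.xml | xmllint --xpath '/job/recipeSet/recipe/roles/role' -
```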
The official syslog-ng container image is based on Debian Stable. However, we’ve been getting requests for an RPM-based image for many years. So, I made an initial version available based on Alma Linux and now I need your feedback about it!
This image uses the “init” variant of Alma Linux 9 containers as a base image. What does this mean? Well, it uses systemd service management inside, making it possible to run multiple services from a single container. While only syslog-ng is included right now, I also plan to add the syslog-ng Prometheus exporter to the image.
Note that while the example command lines show Docker, I also tested it using Podman.
I want everyone to be able to rebuild the image from scratch (Dockerfile). When using RHEL UBI as a base image, you have to run RHEL to build the container image based on RHEL UBI, as certain repositories cannot be enabled without a Red Hat subscription. While many syslog-ng users use RHEL to run syslog-ng, even more of them use one of the RHEL-compatible operating systems, like Alma Linux, CentOS, or Rocky Linux. Using Alma Linux as a base image ensures that you do not need an RHEL subscription to rebuild the image locally.
Needless to say: if you use RHEL, you can easily modify the Dockerfile to use RHEL UBI as a base image.
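As a sketch (the exact image tags, and whether your build host has the entitlements needed to enable the required RHEL repositories, are assumptions on my part), that modification is a one-line change at the top of the Dockerfile:

```dockerfile
# Original base image (Alma Linux 9, "init" variant with systemd):
# FROM almalinux/9-init
# Hypothetical RHEL UBI replacement, using the matching "init" variant:
FROM registry.access.redhat.com/ubi9-init:latest
```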
Yes, I know that RHEL 8 and its compatibles are still more popular than version 9. However, version 8 includes ancient software versions both in the base OS and in EPEL. With RHEL 9 as a base, a lot more syslog-ng features can be enabled.
The image includes all features that can be compiled on RHEL 9, except for Java support. That is missing because it would significantly enlarge an already huge container. If you really need it, just add syslog-ng-java to the package list in the Dockerfile and build the image yourself.
I also plan to add the syslog-ng Prometheus exporter to the image. It is small, just a few lines of Python script.
You can list the enabled modules with the following command:
czplaptop:~ # docker run -it --rm --name syslog-ng czanik/syslog-ng:alma9_481 /usr/sbin/syslog-ng -V
syslog-ng 4 (4.8.1.7.g6113797)
Config version: 4.2
Installer-Version: 4.8.1.7.g6113797
Revision:
Compile-Date: Oct 3 2024 00:00:00
Module-Directory: /usr/lib64/syslog-ng
Module-Path: /usr/lib64/syslog-ng
Include-Path: /usr/share/syslog-ng/include
Available-Modules: add-contextual-data,afamqp,affile,afmongodb,afprog,afsmtp,afsocket,afstomp,afuser,appmodel,azure-auth-header,basicfuncs,cef,cloud_auth,confgen,correlation,cryptofuncs,csvparser,disk-buffer,ebpf,examples,geoip2-plugin,graphite,hook-commands,http,json-plugin,kafka,kvformat,linux-kmsg-format,loki,map-value-pairs,metrics-probe,mod-python,otel,pacctformat,pseudofile,rate-limit-filter,redis,regexp-parser,riemann,sdjournal,secure-logging,stardate,syslogformat,system-source,tags-parser,tfgetent,timestamp,xml
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: off
Enable-Linux-Caps: on
Enable-Systemd: on
czplaptop:~ #
This container includes the exact same configuration as the official syslog-ng container image. You can check it at https://github.com/syslog-ng/syslog-ng/blob/master/docker/syslog-ng.conf. The Dockerfile opens four ports to collect log messages:
514 UDP and TCP for RFC3164 log messages
601 TCP for RFC5424 log messages
4317 TCP for OpenTelemetry (not used in the default configuration)
6514 TCP TLS for encrypted RFC5424 log messages (does not work out of the box)
All logs are saved into two files: /var/log/messages and /var/log/messages-kv, the latter of which also stores all name-value pairs received.
Of course, not many people run syslog-ng using the default configuration. You can map your own custom syslog-ng.conf to /etc/syslog-ng/syslog-ng.conf to use your own configuration.
If you have a test syslog-ng configuration in the /tmp directory, you can test it by mounting it into the container as a volume and running syslog-ng. If you are lucky, there is no output at all:
czplaptop:~ # docker run -it --rm -v /tmp/syslog-ng.conf:/etc/syslog-ng/syslog-ng.conf -p 514:514/udp -p 601:601 --name syslog-ng czanik/syslog-ng:alma9_481 /usr/sbin/syslog-ng --no-caps -s
czplaptop:~ #
Note the --no-caps option. It silences an error message. Without it, the output of the above command would be:
czplaptop:~ # docker run -it --rm -v /tmp/syslog-ng.conf:/etc/syslog-ng/syslog-ng.conf -p 514:514/udp -p 601:601 --name syslog-ng czanik/syslog-ng:alma9_481 /usr/sbin/syslog-ng -s
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
As you can see from the previous paragraphs, syslog-ng needs the --no-caps parameter to run in a container. Normally, when you start the syslog-ng container, syslog-ng is started by init, so you cannot pass parameters on the command line. Instead, you can add parameters to the /etc/sysconfig/syslog-ng file. By default, it already contains the --no-caps parameter:
SYSLOGNG_OPTS="--no-caps"
If you want to pass more parameters, for example to enable extra log messages from syslog-ng, create a file with this content and add the extra parameters to it. Note that not all syslog-ng parameters make sense when a proper init runs inside the container. For example, verbose logging does not work: for that you have to call syslog-ng directly, as in the syntax check case above. You can, however, point syslog-ng at a different configuration file, among other things.
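For example, a /tmp/syslog-ng file that keeps --no-caps and switches to an alternative configuration file might look like this (a sketch; the -f flag comes from syslog-ng's usual command-line options, so treat the exact parameters as an assumption):

```shell
# Contents of /tmp/syslog-ng, to be mapped over /etc/sysconfig/syslog-ng.
# --no-caps is required in containers; -f points syslog-ng at a
# different configuration file inside the container.
SYSLOGNG_OPTS="--no-caps -f /etc/syslog-ng/custom.conf"
```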
Once you have saved the file, map it into the container:
docker run -v /tmp/syslog-ng:/etc/sysconfig/syslog-ng czanik/syslog-ng
If you use volume mapping to store your logs outside the container, you will run into trouble on SELinux-enabled systems. Many organizations simply disable SELinux, as it can greatly complicate life. However, you do not have to do so: in my testing with Podman on Alma Linux 8, appending “:Z” to the volume name resolves the SELinux problems:
docker run -d -v /data/log/:/var/log:Z -p 514:514 czanik/syslog-ng
This is just the first version of the Alma Linux-based syslog-ng container. I plan to add Prometheus exporter support soon. And I need your feedback! Does it work for you? Do you miss something? Is there anything that you would do differently about it?
You can reach the Docker image on Docker Hub at https://hub.docker.com/repository/docker/czanik/syslog-ng/general
You can find the Dockerfile at https://github.com/czanik/syslog-ng-ubi
Pull requests are also welcome! You can share your feedback on GitHub, on the syslog-ng mailing list, or on LinkedIn, Twitter, or Mastodon.
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Jérémy Bobbio (Lunar) passed away on 8 November 2024. Uncannily, that is exactly 30 years after Oregon voted to legalize euthanasia, and exactly ten years after Lunar disclosed his cancer diagnosis on a Debian mailing list, debian-private, the gossip network that is used to spread rumors about developers. While Lunar advanced computer security through many technical initiatives, the gossip on debian-private enables social engineering attacks; therefore, gossip is like a cancer too.
Here is the message from 26 November 2014. It was sent some days after the diagnosis, hence the observation that he lived with the disease for almost exactly ten years. He thought it would just be a couple of months. It turns out people make mistakes.
Subject: [semi-VAC] A couple of months?
Date: Wed, 26 Nov 2014 20:42:26 +0100
From: Lunar <lunar@debian.org>
To: debian-private@lists.debian.org

My fellow Debianites,

I have been diagnosed with kidney cancer, and it seems there's some metastases in my lungs. A few doctors have decided it's worth doing many things to make me live some more. (Another privilege to acknowledge.) I'm going to follow them and see where it goes…

The first things are going to come up pretty quickly now, but I'm not sure how long treatments will last or how much they will affect me. What is clear is that getting better will take most of my life in the next weeks and probably months.

Please take care of my packages if I'm not responsive; or if I'm responsive but don't follow-up afterwards. Basically, consider me unreliable.

I want to keep working on reproducible builds at least a little, as it's a good source of pleasure. You are welcome to join the fun. :)

[ Never to be disclosed. ]

Be well,
--
Lunar
.''`. lunar@debian.org
: :Ⓐ : # apt-get install anarchism
`. `'`
`-
Moving on from that, we can see that Lunar made significant contributions to the harassment of Dr Jacob Appelbaum. It appears that Lunar was opposed to everything that has been achieved for the right to due process since the signing of the Magna Carta over eight hundred years ago.
It is really important to look at this email now because the cool kids on debian-private might be using similar tactics in the GNOME conspiracy against Sonny Piers.
Lunar did not allow the cancer to get in the way of his fight against due process. He even went beyond that, advocating for gaslighting. Look at the comments about creating a "support group" to brainwash Dr Appelbaum to believe he really might be a rapist:
Subject: Re: Jacob Appelbaum and harrassement
Date: Fri, 17 Jun 2016 02:12:01 +0200
From: Jérémy Bobbio <lunar@debian.org>
To: debian-private@lists.debian.org
CC: da-manager@debian.org

Hi!

Since these stories have been published, I kept myself referring to a zine [1] jointly done by two organizations working on supporting survivors of sexual assault and ending rape culture. I strongly recommend you have a look, it's not that long. The zine list nine principles on how to support survivors of sexual assault. They've been very helpful to me, you might want to have a look.

[1]: http://www.phillyspissed.net/sites/default/files/survivor-support.pdf

Konstantinos Margaritis:
> I'm curious, is everyone else OK with expulsion, without having heard the side of
> the accused first? Disclaimer, I do not have any interest to the Tor project, and
> this is the first time I've actually heard of Jacob and his behaviour. I'm assuming
> that all the stories are true, but I'm not at all comfortable with expulsion,
> or any other "punishment" coming officially from Debian, without first hearing his
> side and at least having given him the right to respond to many of the emails here.

The process you describe is modeled on “coercive justice”. I think we should instead be supporting survivors, and making sure that Debian can be as safe and welcoming as possible. The first principle given in the above zine is “Health & Safety First”, and the second is “Restore Choice”. So let's hear what people who have been abused has to say on what needs to be done for their health and safety:

Alison Macrina who has been advocating Debian and its derivatives to many libraries and activists has made the following demand in her statement, amongst others [2]:

Jake must be excluded from all community activities as a precondition for healing.

Isis Lovecruft who attended DebConf13 and is a longtime Debian user and advocate has made the following demand to the communities, amongst others [3]:

We need to entirely remove abusers from our communities, until such a time as they have sufficiently demonstrated to their victims that their abusive behaviours will no longer continue. Jake should be removed from all places where his victims, their loved ones, and friends might come into any form of contact with him. Given the enormous amounts of pain myself and the other victims have gone through, the draining emotional stress, and (please excuse my rather dark humour) the development time wasted, I am not willing to revisit this issue for at least four years. After that time has passed, it may be possible to reassess whether there is any path forward for Jake.

As such I support preventing Jacob Appelbaum from participating in Debian until a process took place for him to work on his issues enough to make those who have been abused and their friends confident that he will not commit more abuses.

[2]: https://medium.com/@flexlibris/theres-really-no-such-thing-as-the-voiceless-92b3fa45134d
[3]: https://blog.patternsinthevoid.net/the-forest-for-the-trees.html

If people want to care about Jake, I suggest listening to Alison again:

People who love Jake and want him to heal should make a support group for him. Those people should bear in mind that he has not apologized nor admitted to any wrongdoing, and they should hold him accountable for what he’s done.

For how this can be done, you can get some ideas from the work of Philly Stands Up which they have documented [4].

[4]: https://communityaccountability.files.wordpress.com/2012/06/philly-stands-up.pdf

Given some of the emails I've read here, we do have work to do in order to keep Debian as safe and welcoming as possible. We do have to educate ourselves and newcomers on boundaries, consent, gender-based violence, abuse prevention, accountability processes… Others already wrote a few things about this, so I'm not going to develop further, and discussing this on public channels might probably more effective.

Oh, and principle 8 is “It's Not About You”. Seriously, it's not. Your needs won't help the people who have been abused or Jake.

--
Lunar
.''`. lunar@debian.org
: :Ⓐ : # apt-get install anarchism
`. `'`
`-
In my last couple of blog posts on my own site and the Software Freedom Institute site, I've discussed the invalid Swiss judgment based on lies and forgeries. One thing that people have failed to notice is the false judgment attacks the principle of free redistribution. It is this paragraph here:
Translated to English, it says:
Daniel Pocock has revoked the Debian Project Code of Conduct and stated that he has the right to authorize joint authors to use the name Debian in domains. On the site debiangnulinux.org, he has used the Debian open use logo and he has offered a copy of Debian for people to download.
The Debian Code of Conduct was never accepted or consented to by the vast majority of Debian co-authors. Less than 25% of co-authors consented to the Code of Conduct. Therefore, it was not even valid in the first place. You can't revoke a Code of Conduct that wasn't valid in the first place.
The use of the Debian open use logo is authorized very clearly.
If somebody was distributing a virus or some other random software under the name Debian it would be confusing and wrong. But what they accuse me of doing is distributing a genuine copy of Debian. Debian was the birthplace of the Debian Free Software Guidelines and the right to distribute genuine copies of Debian has always been there.
Moreover, any co-author or joint author of intellectual property has the right to unilaterally redistribute copies of the joint work as they see fit, according to this legal guidance from UC Berkeley:
Joint authorship occurs when two or more people work together on a creative work. In this case, all creators have equal rights to distribute and alter the work, and they must split profits among each other.
That is always the case unless we have entered into a signed agreement with colleagues whereby we agree to only distribute the work through a specific agent or channel. Employment contracts for IT workers almost always include provisions to prevent such unilateral distribution but in Debian, we are not employees and we never signed anything giving up our rights under copyright law.
Pretending that some of us don't have the right to redistribute copies of Debian is a form of gaslighting, a lot like gaslighting Dr Appelbaum with a "support group" to brainwash him to believe the social media rumors that he could be a rapist.
Therefore, while the judgment is invalid, it also appears to contradict the DFSG. What they have written is actually worse than IBM Red Hat's decision to restrict access to the RHEL source code.
Lunar, Magna Carta & Debian Social Contract. Rest In Peace.
I recently received 4 Milk-V Jupiter development boards, and one Banana Pi F3 through RISC-V International. All of these boards have the same (or very similar) SpaceMIT X60 SoC which is a fairly capable 8 core RISC-V processor.
model name : Spacemit(R) X60
isa : rv64imafdcv_zicbom_zicboz_zicntr_zicond_zicsr_zifencei_zihintpause_zihpm_zfh_zfhmin_zca_zcd_zba_zbb_zbc_zbs_zkt_zve32f_zve32x_zve64d_zve64f_zve64x_zvfh_zvfhmin_zvkt_sscofpmf_sstc_svinval_svnapot_svpbmt
Since we’ll be using all of these boards for Fedora package building I ran some simple benchmarks of how well they perform. The benchmark is to recompile this grub2 package to RPMs:
# dnf builddep grub2-2.12-11.0.riscv64.fc41.src.rpm
$ time rpmbuild --recompile grub2-2.12-11.0.riscv64.fc41.src.rpm
(I did a few builds in a row until the times settled down, so these are all “hot cache” builds on an otherwise unloaded board.)
Board | CPU | Build time
Milk-V Jupiter | RISC-V SpaceMIT X60, 8 cores, 16 GB RAM | 748s
Banana Pi F3 | RISC-V SpaceMIT X60, 8 cores, 16 GB RAM | 962s
VisionFive 2 | JH7110, 4 cores, 8 GB RAM | 923s
Raspberry Pi 4 | ARM Cortex-A72, 4 cores, 8 GB RAM | 753s
AMD gaming PC | AMD Ryzen 9 7950X, 16 cores, 64 GB RAM | 104s
We should be getting a SiFive P550 development board soon which is the first widely available out-of-order RISC-V core.
(Thanks Andrea Bolognani for benchmarking the VF2)
A blog has appeared on the Software Freedom Institute web site proving that the defamatory Swiss judgment was invalid from the beginning. In fact, within a few weeks, it had been shot down in a response from the Swiss Intellectual Property Office.
The rogue Debianists clearly knew since December 2023 that their judgment document was invalid. Nonetheless, on the day before Ireland voted, they published the invalid judgment anyway along with a deceptive translation of what was in it.
By choosing to publish a document that they knew to be invalid and by doing so the day before voting, just hours before the news moratorium, it is clear that they intended to deceive the Irish media and the voters of Midlands-North-West.
On some metrics, like infrastructure, the region Midlands-North-West is one of the most disadvantaged in Europe. Therefore, the attempt by foreigners funded by Google to deceive this community is outrageous.
The blog by the Software Freedom Institute reveals that the publishers of the false document received Google funding. Looking at the DebConf web site, we can see that Google regularly sponsors the annual conference for Debianists.
The legal dispute started in early 2023. We can see that one of the leading sponsors of DebConf23 was Infomaniak, a competitor of the Software Freedom Institute in the Swiss market.
From the DebConf23 sponsors page, we can see Google and we can see Infomaniak, a competitor of the Institute, all had a hand in Debian's funding during the period of this dispute:
It's time for that rarest of events: a blog post! And it's another debugging adventure. Settle in, folks!
Recently I got interested in improving the time it takes to do a full compose of Fedora. This is when we start from just the packages and a few other inputs (basically image recipes and package groups), and produce a set of repositories and boot trees and images and all that stuff. For a long time this took somewhere between 5 and 10 hours. Recently we've managed to get it down to 3-4, then I figured out a change which has got it under 3 hours.
After that I re-analyzed the process and figured out that the biggest remaining point to attack is something called the 'pkgset' phase, which happens near the start of the process, not in parallel with anything else, and takes 35 minutes or so. So I started digging into that to see if it can be improved.
I fairly quickly found that it spends about 20 minutes in one relatively small codepath. It's created one giant package set (this is just a concept in memory at the time, it gets turned into an actual repo later) with every package in the compose in it. During this 20 minutes, it creates subsets of that package set per architecture, with only the packages relevant to that architecture in it (so packages for that arch, plus noarch packages, plus source packages, plus 'compatible' arches, like including i686 for x86_64).
I poked about at that code a bit and decided I could maybe make it a bit more efficient. The current version works by creating each arch subset one at a time by looping over the big global set. Because every arch includes noarch and src packages, it winds up looping over the noarch and src lists once per arch, which seemed inefficient. So I went ahead and rewrote it to create them all at once, to try and reduce the repeated looping.
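As a rough illustration of that restructuring (the data shapes and names below are mine, not pungi's), the idea is one scan of the global list with a small inner loop over arches, instead of one full scan of the global list per arch, so noarch and src packages are visited once each:

```python
# Hypothetical sketch of the arch-subsetting problem, not pungi's actual code.
packages = [
    ("glibc", "x86_64"), ("glibc", "aarch64"),
    ("python3-requests", "noarch"), ("glibc", "src"),
]
arches = ["x86_64", "aarch64"]
# Each arch accepts its own packages plus 'compatible' arches (e.g. i686 on x86_64).
compatible = {"x86_64": {"x86_64", "i686"}, "aarch64": {"aarch64"}}

# Single pass over the global set: each package is examined once and
# appended to every per-arch subset it belongs to.
subsets = {arch: [] for arch in arches}
for name, pkg_arch in packages:
    for arch in arches:
        if pkg_arch in compatible[arch] or pkg_arch in ("noarch", "src"):
            subsets[arch].append(name)

print(subsets)
```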
Today I was testing that out, which unfortunately has to be done more or less 'in production', so if you like you can watch me here, where you'll see composes appearing and disappearing every fifteen minutes or so. At first of course my change didn't work at all because I'd made the usual few dumb mistakes with wrong variable names and stuff. After fixing all that up, I timed it, and it turned out about 7 minutes faster. Not earth shattering, but hey.
So I started checking it was accurate (i.e. created the same package sets as the old code). It turned out it wasn't quite (a subtle bug with noarch package exclusions). While fixing that, I ran across some lines in the code that had bugged me since the first time I started looking at it:
if i.file_path in self.file_cache:
    # TODO: test if it really works
    continue
These were extra suspicious to me because, not much later, they're followed by this:
self.file_cache.file_cache[i.file_path] = i
that is, we check if the thing is in self.file_cache and move on if it is, but if it's not, we add it to self.file_cache.file_cache? That didn't look right at all. But up till now I'd left it alone, because hey, it had been this way for years, right? Must be OK. Well, this afternoon, in passing, I thought "eh, let's try changing it".
Then things got weird.
I was having trouble getting the compose process to actually run exactly as it does in production, but once I did, I was getting what seemed like really bizarre results. The original code was taking 22 minutes in my tests. My earlier test of my branch had taken about 14 minutes. Now it was taking three seconds.
I thought, this can't possibly be right! So I spent a few hours running and re-running the tests, adding debug lines, trying to figure out how (surely) I had completely broken it and it was just bypassing the whole block, or something.
Then I thought...what if I go back to the original code, but change the cache thing?
So I went back to unmodified pungi code, commented out those three lines, ran a compose...and it took three seconds. Tried again with the check corrected to self.file_cache.file_cache instead of self.file_cache...three seconds.
I repeated this enough times that it must be true, but it still bugged me. So I just spent a while digging into it, and I think I know why. These file caches are kobo.pkgset.FileCache instances; see the source code here. So, what's the difference between foo in self.file_cache and foo in self.file_cache.file_cache? Well, a FileCache instance's own file_cache is a dict. FileCache instances also implement __iter__, returning iter(self.file_cache). I think this is why foo in self.file_cache works at all - it actually does do the right thing. But the key is, I think, that it does it inefficiently.
Python's preferred way to do foo in bar is to call bar.__contains__(foo). If that doesn't work, it falls back on iterating over bar until it either hits foo or runs out of iterations. If bar doesn't support iteration it just raises an exception.
Python dictionaries have a very efficient implementation of __contains__. So when we do foo in self.file_cache.file_cache, we hit that efficient algorithm. But FileCache does not implement __contains__, so when we do foo in self.file_cache, we fall back to iteration and wind up using that iterator over the dictionary's keys. This works, but is massively less efficient than the dictionary's __contains__ method would be. And because these package sets are absolutely fracking huge, that makes a very significant difference in the end (because we hit the cache check a huge number of times, and every time it has to iterate over a huge number of dict keys).
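The difference is easy to reproduce with a toy version of the class (my own minimal sketch, assuming only the __iter__-over-a-dict structure described above, not kobo's actual code):

```python
import timeit

class FileCache:
    """Minimal sketch of a FileCache-like wrapper: it holds a dict and
    implements __iter__ but not __contains__."""

    def __init__(self):
        self.file_cache = {}

    def __iter__(self):
        # 'x in cache' falls back to this iterator: an O(n) scan of the keys
        return iter(self.file_cache)

cache = FileCache()
for i in range(100_000):
    cache.file_cache[f"/pkg/{i}.rpm"] = object()

missing = "/pkg/not-there.rpm"

# Membership test on the wrapper: Python finds no __contains__,
# so it iterates over all 100,000 keys for every check.
slow = timeit.timeit(lambda: missing in cache, number=100)
# Membership test on the inner dict: a single hash lookup per check.
fast = timeit.timeit(lambda: missing in cache.file_cache, number=100)

print(f"iteration fallback: {slow:.4f}s, dict __contains__: {fast:.4f}s")
assert slow > fast
```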
So...here's the pull request.
Turns out I could have saved the day and a half it took me to get my rewrite correct. And if anyone had ever got around to TODOing the TODO, we could've saved about 20 minutes out of every Fedora compose for the last nine years...
A while ago I was looking at Rust-based parsing of HID reports but, surprisingly, outside of C wrappers and the usual cratesquatting I couldn't find anything ready to use. So I figured, why not write my own, NIH style. Yay! Gave me a good excuse to learn API design for Rust and whatnot. Anyway, the result of this effort is the hidutils collection of repositories which includes commandline tools like hid-recorder and hid-replay but, more importantly, the hidreport (documentation) and hut (documentation) crates. Let's have a look at the latter two.
Both crates were intentionally written with minimal dependencies, they currently only depend on thiserror and arguably even that dependency can be removed.
As you know, HID Fields have a so-called "Usage" which is divided into a Usage Page (like a chapter) and a Usage ID. The HID Usage tells us what a sequence of bits in a HID Report represents, e.g. "this is the X axis" or "this is button number 5". These usages are specified in the HID Usage Tables (HUT) (currently at version 1.5 (PDF)). The hut crate is generated from the official HUT json file and contains all current HID Usages together with the various conversions you will need to get from a numeric value in a report descriptor to the named usage and vice versa. Which means you can do things like this:
let gd_x = GenericDesktop::X;
let usage_page = gd_x.usage_page();
assert!(matches!(usage_page, UsagePage::GenericDesktop));
Or the more likely need: convert from a numeric page/id tuple to a named usage.
let usage = Usage::new_from_page_and_id(0x1, 0x30); // GenericDesktop / X
println!("Usage is {}", usage.name());
90% of this crate are the various conversions from a named usage to the numeric value and vice versa. It's a huge crate in that there are lots of enum values but the actual functionality is relatively simple.
The hidreport crate is the one that can take a set of HID Report Descriptor bytes obtained from a device and parse the contents. Or extract the value of a HID Field from a HID Report, given the HID Report Descriptor. So let's assume we have a bunch of bytes that are HID report descriptor read from the device (or sysfs) we can do this:
let rdesc: ReportDescriptor = ReportDescriptor::try_from(bytes).unwrap();
I'm not going to copy/paste the code to run through this report descriptor but suffice to say it will give us access to the input, output and feature reports on the device together with every field inside those reports. Now let's read from the device and parse the data for whatever the first field is in the report (this is obviously device-specific, could be a button, a coordinate, anything):
let input_report_bytes = read_from_device();
let report = rdesc.find_input_report(&input_report_bytes).unwrap();
let field = report.fields().first().unwrap();
match field {
    Field::Variable(var) => {
        let val: u32 = var.extract(&input_report_bytes).unwrap().into();
        println!("Field {:?} is of value {}", field, val);
    },
    _ => {}
}
The full documentation is of course on docs.rs and I'd be happy to take suggestions on how to improve the API and/or add features not currently present.
The hidreport and hut crates are still quite new but we have an existing test bed that we use regularly. The venerable hid-recorder tool has been rewritten twice already. Benjamin Tissoires' first version was in C, then a Python version of it became part of hid-tools and now we have the third version written in Rust. Which has a few nice features over the Python version and we're using it heavily for e.g. udev-hid-bpf debugging and development. An example output of that is below and it shows that you can get all the information out of the device via the hidreport and hut crates.
$ sudo hid-recorder /dev/hidraw1
# Microsoft Microsoft® 2.4GHz Transceiver v9.0
# Report descriptor length: 223 bytes
# 0x05, 0x01, // Usage Page (Generic Desktop) 0
# 0x09, 0x02, // Usage (Mouse) 2
# 0xa1, 0x01, // Collection (Application) 4
# 0x05, 0x01, // Usage Page (Generic Desktop) 6
# 0x09, 0x02, // Usage (Mouse) 8
# 0xa1, 0x02, // Collection (Logical) 10
# 0x85, 0x1a, // Report ID (26) 12
# 0x09, 0x01, // Usage (Pointer) 14
# 0xa1, 0x00, // Collection (Physical) 16
# 0x05, 0x09, // Usage Page (Button) 18
# 0x19, 0x01, // UsageMinimum (1) 20
# 0x29, 0x05, // UsageMaximum (5) 22
# 0x95, 0x05, // Report Count (5) 24
# 0x75, 0x01, // Report Size (1) 26
... omitted for brevity
# 0x75, 0x01, // Report Size (1) 213
# 0xb1, 0x02, // Feature (Data,Var,Abs) 215
# 0x75, 0x03, // Report Size (3) 217
# 0xb1, 0x01, // Feature (Cnst,Arr,Abs) 219
# 0xc0, // End Collection 221
# 0xc0, // End Collection 222
R: 223 05 01 09 02 a1 01 05 01 09 02 a1 02 85 1a 09 ... omitted for brevity
N: Microsoft Microsoft® 2.4GHz Transceiver v9.0
I: 3 45e 7a5
# Report descriptor:
# ------- Input Report -------
# Report ID: 26
# Report size: 80 bits
# | Bit: 8 | Usage: 0009/0001: Button / Button 1 | Logical Range: 0..=1 |
# | Bit: 9 | Usage: 0009/0002: Button / Button 2 | Logical Range: 0..=1 |
# | Bit: 10 | Usage: 0009/0003: Button / Button 3 | Logical Range: 0..=1 |
# | Bit: 11 | Usage: 0009/0004: Button / Button 4 | Logical Range: 0..=1 |
# | Bit: 12 | Usage: 0009/0005: Button / Button 5 | Logical Range: 0..=1 |
# | Bits: 13..=15 | ######### Padding |
# | Bits: 16..=31 | Usage: 0001/0030: Generic Desktop / X | Logical Range: -32767..=32767 |
# | Bits: 32..=47 | Usage: 0001/0031: Generic Desktop / Y | Logical Range: -32767..=32767 |
# | Bits: 48..=63 | Usage: 0001/0038: Generic Desktop / Wheel | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# | Bits: 64..=79 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Input Report -------
# Report ID: 31
# Report size: 24 bits
# | Bits: 8..=23 | Usage: 000c/0238: Consumer / AC Pan | Logical Range: -32767..=32767 | Physical Range: 0..=0 |
# ------- Feature Report -------
# Report ID: 18
# Report size: 16 bits
# | Bits: 8..=9 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: 0001/0048: Generic Desktop / Resolution Multiplier | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 12..=15 | ######### Padding |
# ------- Feature Report -------
# Report ID: 23
# Report size: 16 bits
# | Bits: 8..=9 | Usage: ff00/ff06: Vendor Defined Page 0xFF00 / Vendor Usage 0xff06 | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bits: 10..=11 | Usage: ff00/ff0f: Vendor Defined Page 0xFF00 / Vendor Usage 0xff0f | Logical Range: 0..=1 | Physical Range: 1..=12 |
# | Bit: 12 | Usage: ff00/ff04: Vendor Defined Page 0xFF00 / Vendor Usage 0xff04 | Logical Range: 0..=1 | Physical Range: 0..=0 |
# | Bits: 13..=15 | ######### Padding |
##############################################################################
# Recorded events below in format:
# E: . [bytes ...]
#
# Current time: 11:31:20
# Report ID: 26 /
# Button 1: 0 | Button 2: 0 | Button 3: 0 | Button 4: 0 | Button 5: 0 | X: 5 | Y: 0 |
# Wheel: 0 |
# AC Pan: 0 |
E: 000000.000124 10 1a 00 05 00 00 00 00 00 00 00
Welcome to the Fedora Operations Report, and I hope everyone is enjoying the latest Fedora Linux release! Read on for more information, such as important upcoming dates for F42 and a few tidbits of things happening around the project lately.
Fedora Linux 41 is here! Get the latest version of our operating system from the website and read about some of the new features you can enjoy on Fedora Magazine.
Now that we have F41, we will have to say goodbye to F39. This release will go end of life on November 26th 2024, and all Bugzilla reports open against this release will be automatically closed.
Also, with the advent of F41, we will now have some new elections! We will have elections in FESCo and Mindshare, and nominations are open from now until 27th November! Please feel free to nominate yourself, or someone you know (with their permission) who would be a great fit and able to volunteer for 12 months. We have five seats up for election in FESCo, and one seat up for election in the Mindshare Committee.
As a reminder, the Fedora Council has moved to a once-per-year election, so there will be no elective seats open in Council this cycle. The next election cycle will be after the F42 release. Similarly, the EPEL Steering Committee also holds elections once per year, in May, and seats on that committee come up for election on a voluntary basis.
Fedora Linux 42 is currently in development, and for the most recent set of changes planned in this release, please refer to our change set page. Our release schedule is also live, and a reminder of some key dates are below:
The changes that are currently in our community feedback period are :
For all the latest on bootc, check out the latest bootc post on Discourse: https://discussion.fedoraproject.org/t/bootc-this-week-2024-11-15/136985
Our Git Forge evaluation is still active; details of how to get access to either the GitLab or Forgejo instance can be found on this discussion thread.
The request to grant KDE Desktop Spin edition status has been approved! Congratulations to the folks involved in the KDE SIG and we look forward to seeing this option as an official edition for Fedora in the near future!
FOSDEM 2025 returns on Saturday 1st and Sunday 2nd February, and the CfP is now open.
CentOS Connect also returns to Brussels on 30th and 31st January 2025. For more information on CfPs and to register for the event, check out the event page.
Do you have an idea for an episode of the Fedora Podcast, or want to see what some of the upcoming episodes will be? Bookmark The IT Guy’s discussion post on planning for the podcast!
Did you know there are EPEL Office Hours? If not, check out the details on how to join and when they happen on the announcement post!
Missed the Fedora Linux 41 release watch parties? Never fear! All talks are now available on the Fedora YouTube channel in the ‘latest’ video tab. Happy streaming!
The post Fedora Operations Architect Report appeared first on Fedora Community Blog.
Please join us at the next regular Open NeuroFedora team meeting on Monday 02 December at 1300 UTC. The meeting is public and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:
$ date -d 'Monday, December 02, 2024 13:00 UTC'
The meeting will be chaired by @ankursinha. The agenda for the meeting is:
We hope to see you there!
Please join us at the next regular Open NeuroFedora team meeting on Monday 18 November at 1300 UTC. The meeting is public and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.
You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:
$ date -d 'Monday, November 18, 2024 13:00 UTC'
The meeting will be chaired by @ankursinha. The agenda for the meeting is:
We hope to see you there!
This is a summary of the work done by Fedora Infrastructure & Release Engineering teams as of Q3 2024. As these teams are working closely together, we will summarize the work done in one blog post by both teams.
This update consists of infographics and detailed notes. If you just want to see what’s new, check the infographics. If you want more details, continue reading.
The purpose of these teams is to take care of day-to-day business regarding Fedora Infrastructure and Fedora Release Engineering work. They are responsible for developing and maintaining services running in Fedora and preparing things for the new Fedora Linux release (mirrors, mass branching, new namespaces etc.).
Issue trackers
The post Infra & Releng Update Q3 2024 appeared first on Fedora Community Blog.
When the Software Freedom Institute was established in April 2021, we stated that "Our mission is to put humans like you in charge of technology" and "We reject the false sense of empowerment cultivated by the cloud."
These simple statements brought a furious response from larger rivals in the technology industry.
IBM Red Hat used a UDRP dispute to try to seize a domain name from the Institute and publicly denounce the Institute’s founder, Daniel Pocock. The UDRP verdict confirmed that Mr Pocock was a victim of harassment by these competitors.
Copy-cat attacks from other organizations quickly followed.
In April 2023, Mr Pocock was informed that the Swiss law office providing legal expenses insurance to the Institute had been placed in liquidation (the JuristGate affair, which we come back to later).
As the lawyers were no longer available to handle disputes about the trademark, Mr Pocock unilaterally canceled the trademark registration in Switzerland.
More significantly, the legal insurance office was being liquidated by the Swiss law firm Walder Wyss. Walder Wyss was also representing the rogue Debianists in the trademark dispute. Therefore, it appears that Walder Wyss would gain access to privileged information that Software Freedom Institute had shared with the legal expenses insurer. Mr Pocock informed Mathieu Elias Parreaux, the disqualified director of the legal expenses company, and this was the response:
Subject: RE: Fermeture de Justicia SA - Organisation de notre nouvelle structure
Date: Fri, 14 Apr 2023 17:16:47 +0200
From: m.parreaux@justiva.ch
To: 'Daniel Pocock' <daniel@softwarefreedom.institute>
Cher Monsieur,
Tout à fait. Il vous suffit de leur écrire pour les aviser que vous avez
constaté qu'ils sont liquidateurs de la société et que comme il y a un
conflit d'intérêt clair, vous souhaitez qu'ils suppriment vos données. Ils
devront le faire.
Je constate que vous devez normalement avoir un renouvellement chez Justicia
SA le 8 juin prochain. Afin que cela n'arrive pas, et si vous souhaitez
rester avec nous, je vous soumet un nouveau contrat à me renvoyer signé.
Nous pouvons en parler au téléphone dès lundi si vous le souhaitez, avec
grand plaisir. Cordialement,
Mathieu Parreaux
Associé www.justiva.ch
Translated to English:
Quite so. You only have to write to them (Walder Wyss) to advise them that you have noted they are the liquidators of the company and that, as there is a clear conflict of interest, you would like them to delete your data. They will have to do so. I note that you would normally have a renewal with Justicia SA on 8 June. So that this does not happen, and if you wish to remain with us, I am sending you a new contract for you to sign and return to me. We can discuss it by telephone from Monday onwards if you like, with great pleasure.
The Institute did not purchase the alternative legal expenses insurance proposed by Mathieu Parreaux and his new scheme Justiva SA.
Mr Pocock completed the trademark cancelation form in August 2023.
A copy of the cancelation form was given to the tribunal on 21 August 2023. There is no record of the Swiss Intellectual Property Office acting on this cancelation request and it is not clear if they received it at all.
Moreover, Mr Pocock told the tribunal that he had no confidence in their procedure. He had already informed the tribunal that his lawyers were in liquidation but the judges did not listen. We can see it in the invalid judgment document, Mr Pocock told the Swiss judges he doesn’t trust them:
As the trademark was canceled, there was no basis for any tribunal to make any judgment about a transfer of the trademark. It would be impossible to do so.
On 5 September 2023, the unprotected Swiss entity was renamed from Software Freedom Institute SA to Open Source Developer Freedoms SA.
On 19 September 2023, Gaelle Jeanmonod at FINMA published a summary of the FINMA judgment that closed down the legal expenses insurance scheme, leaving the Software Freedom Institute and 20,000 other clients without a lawyer.
FINMA went to great lengths to redact the names of the lawyers in the judgment.
Mr Pocock was concerned that FINMA’s handling of the legal expenses scandal was a factor in the Institute not having the resources to defend itself against these attacks. Many people have asked why the Institute was under attack like this. To respond to these questions, Mr Pocock analyzed the legal expenses scandal and, on 7 October 2023, published a detailed report with all the names and dates that FINMA had obfuscated.
Mr Pocock’s attempt to bring transparency to the Swiss legal expenses insurance crisis was resented by Swiss jurists and this is the type of thing that provokes a fierce reprisal in Switzerland.
Yet rather than an immediate reprisal, Mr Pocock was granted Swiss citizenship in the Canton of Vaud.
Despite the fact Mr Pocock had sent multiple letters canceling the trademark, it appears that the judges did not read them and were unaware of the procedures relating to trademarks. They may have accidentally created a judgment that was invalid before they even put their stamp on it. This happens sometimes; for example, the 1616 verdict against Copernicanism says a lot about the mentality behind such judgments.
Thus, on February 26, the Inquisition’s most authoritative cardinal, Robert Bellarmine (1542-1621), met with Galileo in private and gave him the following warning: the Church was going to declare the idea of the earth’s motion false and contrary to Scripture, and so this theory could not be held or defended. Galileo agreed to comply.
The Swiss Intellectual Property Office has confirmed that the judgment document being circulated by rogue Debianists is not legally valid.
The judge sent two more letters acknowledging the opinion of the Swiss Intellectual Property Office and confirming that they will not issue any other judgment to replace the invalid judgment.
As the judgment has effectively been withdrawn, Mr Pocock believed that was the end of the matter.
On 7 March 2024 Mr Pocock began publishing the JuristGate.com web site exploring the operation of the rogue Swiss legal expenses scheme in more detail. This web site complements the original blog post Mr Pocock wrote about the scandal
In April 2024, Mr Pocock nominated as a candidate in the European Parliament election for the district of Ireland, Midlands-North-West.
Mr Pocock has done voluntary work with amateur radio and open source software since he was fourteen years old. Political rivals from the open source “community” spent much of May 2024 denouncing Mr Pocock and his family to undermine his election campaign.
Finally, a few hours before the news moratorium for the voting day one of the rogue Debianists, Jean-Pierre Giraud, an archaeologist from Toulouse, France, published the invalid judgment documents and falsely claimed that the invalidated document is a complete and correct account of the dispute.
The timing of Mr Giraud’s actions gives us the impression that this was a deliberate and possibly premeditated attack on the Irish democratic process.
At the time the state of hostility began in 2018, the former Debian leader wrote a report mentioning a large donation from Google. Google’s identity as the donor is obfuscated:
* Acted a central contact point connecting the various parties
involved in receiving a significant donation to Debian from their
Open Source office [14].
...
[14] https://opensource.google.com/
Some months later, an email hinted that the amount was $300,000.
So, there were two $300k donations in the last year.
One of these was earmarked for a DSA equipment upgrade.
DSA has a couple of options to pursue, but it's possible they may
actually spend $400k on an equipment refresh.
The manner in which rogue Debianists obfuscate their employment and the source of these funds demonstrates that some people don’t want to admit the organization has become a soft target for social engineering exploits by large corporate sponsors.
Any root cause analysis of the dispute needs to consider the role of this money and the people who left their fingerprints on it.
The invalid judgment document has a date stamp on the final page, 27 Nov. 2023.
It is purportedly signed by two parties: Caroline Kuhnlein, president of the civil bench in the tribunal and Melanie Bron, “greffière”, a jurist who assists a judge or tribunal. Their names are listed on the web site of the tribunal
Judge Kuhnlein has a LinkedIn profile but it doesn’t reveal anything about her employment or relevant experience prior to becoming a judge.
The Wayback Machine has old copies of the membership lists for the bar association in the Canton of Vaud. We can see the name of her husband, Vivian Kuhnlein in the lists but we can not see any evidence that Caroline Kuhnlein was admitted to the bar in Canton Vaud or Canton Geneva.
It is also noteworthy that the Debianist judge Caroline Kuhnlein-Hofmann is listed on the Swiss Corruption Info index. It is alleged there that she was a participant in the controversial Légeret scandal. Mr Légeret is of Indian origin and there is a perception of wrongful persecution.
The Swiss Info news web site explains
In some countries, judges are not allowed to be members of political parties. Not the case in Switzerland, however, where party membership is in practice a prerequisite. Elected judges even pay a part of their income to their affiliated party.
…
There is no specific training for judges in Switzerland. Usually (but not necessarily) aspiring judges study law
Judge Kuhnlein was appointed by a meeting of the Cantonal parliament. The minutes of the meeting are concerning. They are available online, in French
Fonctionnement de la Commission de présentation
La Commission de présentation s’est réunie les 24 et 25 février ainsi que le 1er mars 2010 pour traiter de ce préavis. Elle était composée des députés suivants : Mmes Fabienne Freymond Cantone (présidente), Fabienne Despot, Béatrice Métraux, MM.Régis Courdesse, Jean-Michel Dolivo, Claude-André Fardel, Olivier Feller (viceprésident), Jacques Haldy et Nicolas Mattenberger. La commission a aussi eu le privilège d’être accompagnée dans ses auditions et réflexions par ses quatre experts indépendants, MM. Philippe Richard, Jean-Jacques Schwaab ; Bertil Cottier (excusé le 24.02.10) et Philippe Reymond (excusé le 01.03.10).
Translated to English, the paragraph explains that a committee has already reviewed the details of some political party members who want to be judges.
How the advisory committee functions
The committee met on 24 and 25 February and also 1 March 2010 to prepare advice for the parliament. The committee was composed of these members of the parliament: Fabienne Freymond Cantone (president), Fabienne Despot, Béatrice Métraux, Régis Courdesse, Jean-Michel Dolivo, Claude-André Fardel, Olivier Feller (Vice president), Jacques Haldy and Nicolas Mattenberger.
The committee had the privilege of being assisted in their interviews and deliberations by four independent experts, Philippe Richard, Jean-Jacques Schwaab ; Bertil Cottier (excused 24.02.10) and Philippe Reymond (excused 01.03.10).
The committee members are from political parties and the people they interview are also from political parties.
The minutes describe the way the interviews were conducted and then they give the results of the committee stage evaluation:
Préavis de la Commission de présentation
A l’issue des auditions, les experts ont rendu, à l’unanimité, les préavis suivants: Préavis positif pour les candidats Mihaela Amoos (moins une abstention d’un des experts pour cette dernière en raison du fait qu’il était son maître 110 Bulletin du Grand Conseil du canton de Vaud / 2007-2012 9 mars 2010 de stage), Yasmina Bendani, Dina Charif Feller, Caroline Kuhnlein (moins une abstention pour cette dernière, l’un des experts s’étant récusé pour des raisons de lien de parenté)
The above states that one of the experts abstained on the selection of candidate Amoos because the expert had previously supervised Amoos’ internship. On the evaluation of Kuhnlein, one of the experts had to be recused because of a family relationship with Kuhnlein.
En outre, la commission a été sensible à la recherche d’équilibres politiques. A noter que la commission a finalement préavisé favorablement sur une candidature appréciée négativement par les experts.
This tells us that the committee was keen to have a balance of political affiliations, in other words, each party is entitled to have one judge. It goes on to say that the committee decided to override a negative report that the experts had given for one candidate and they gave the candidate a net positive report instead.
The entire parliament was asked to vote. The parliament decided to appoint Judge Kuhnlein as a part-time (50%) judge.
In January 2017, Judge Kuhnlein applied to move from a part-time to a full-time appointment, from 50% to 100%. The committee and expert panel was reconvened. A report was produced.
This time, the same four experts participated in the panel. The report notes that one of the experts, Philippe Richard, was excused from the meeting, but it does not mention why, so we cannot determine whether he is the same expert who had a conflict of interest in the 2010 meeting.
La durée de l’entretien a avoisiné trente minutes.
The judge was interviewed for thirty minutes.
À l’issue de l’audition, les experts, après délibérations, ont souligné l’investissement marqué de cette juge dans l’exercice de son mandat à 50%, en particulier son sens pratique de sa fonction et de son souci de la bonne gestion de sa juridiction. Sur la base de ces considérations, les experts approuvent à l’unanimité sa candidature au poste de juge à 100% au Tribunal cantonal.
This paragraph summarises the unanimous opinion of the experts: they feel Judge Kuhnlein has performed her duties well as a part-time judge, and they approve her for a full-time role.
The name of her political party is not mentioned.
On 4 December 2023 the Swiss Intellectual Property Office sent the following letter to the judges:
Translated:
However, it turns out that at the request of Software Freedom Institute SA on 6 November 2023, the trademark registration was already deleted from the register on 13 November 2023 (see attachment). We consider that in these circumstances it is difficult for us to execute the judgment you sent us.
Please explain to us how you want us to implement the judgment.
Judge Richard Oulevey and his family are members of the PLR, the Liberal-Radical political party. More evidence.
On 22 December 2023, Judge Oulevey sends a very short letter stating:
The tribunal takes note of the fact that the Swiss Intellectual Property Office finds itself in an impossible position executing a judgment.
On 27 December 2023, Judge Oulevey sends the letter again, adding another sentence:
The tribunal takes note of the fact that the Swiss Intellectual Property Office finds itself in an impossible position executing a judgment. The tribunal does not give any replacement judgment.
On the letter from 27 December 2023, there is a handwritten note to the effect that it annuls and replaces the previous communications.
After these exchanges, as the judgment was impossible and as the tribunal had decided not to make any replacement judgment, the judgment had completely unravelled. There was no reason to appeal a judgment that had already been annulled.
When fresh character attacks were made using the WIPO UDRP procedure, the rogue Debianists did not make any reference to any judgments from Switzerland. If such a judgment was valid, they certainly would have mentioned it very loudly in their UDRP demands.
The rogue Debianists have spread multiple rumors about harassment. Mr Pocock belatedly published the harassment judgment demonstrating that Carla and his cats were victims of harassment by a far-right Swiss landlady. Mr Pocock’s detailed analysis includes recordings from the trial in Zurich.
Finally, if we make a search in the Swiss trademark database today, using their online form to search for the registration number 782335 we can see that the trademark was canceled (radié). The cancelation date (radiation) is 13.11.2023. There has never been any subsequent transfer to the rogue Debianists.
When Software in the Public Interest, Inc completed their financial reports, they disclosed $119,000 in professional fees. This appears to coincide with the monthly reports, which add up to $120,000 in legal expenses.
If the rogue Debianists had simply apologized to Mr Pocock’s family and withdrawn the character attacks that began when his father died, he may well have been willing to transfer the trademark voluntarily and for no cost more than the registration fees. In other words, rogue Debianists spent over $120,000 to fight a dispute that could have been resolved for less than $1,000.
Software in the Public Interest, Inc proudly claims to be a non-profit organization. Being a non-profit organization does not mean you have to deliberately lose all your money on expenses that are entirely avoidable.
On 27 May 2024 the tribunal realized the documents cited in their invalid judgment had never been signed in the first place. This further emphasizes the fact that these legal documents have no credibility.
In October 2024, the judges delivered a new verdict, a verdict against themselves. According to a report in the Swissinfo news site:
Swiss judges want to dissolve the traditional link between parties and court members, including mandatory contributions to a political party. A majority would like to see a reform of the current system.
Software in the Public Interest, Inc is a tax-deductible non-profit established in the United States. The organization was established around a belief in American values of freedom and transparency.
In 2018, the organization received what appears to be an obfuscated donation of $300,000 from Google.
Ever since the payment, there has been a heightened state of conflict between co-authors in the open source community.
A wide range of international observers and even the Swiss judges themselves agree that the Swiss judicial system is corrupt and politicized.
Nonetheless, the US non-profit organization has pumped over $120,000 in legal fees into that system in the pursuit of Mr Pocock, a volunteer who resigned from mentoring in Google Summer of Code (GSoC) at the time his father died.
The use of an unreliable, corrupt and opaque legal system to harass a volunteer and censor communications appears to go against the very principles that Debian was founded on.
The judgment obtained with this money was determined to be invalid due to the fact the trademark was already canceled and some of the documents submitted to the tribunal had never been signed.
Despite multiple letters confirming the impossibility and invalidity of a judgment to transfer the former trademark, the rogue Debianists decided to distribute the document hours before the news moratorium on the day before Ireland voted in the European elections.
The invalid document distributed at that moment was presented in a foreign language and with a deceptive interpretation of its contents.
There was a real possibility that the Irish news media might have been suckered into publishing references to this document. Nonetheless, it does not appear that any media outlet published any reference to it.
Had any media outlet published reference to it on the day of the moratorium, it is unlikely that Mr Pocock would have had time to respond and present the evidence to prove this document is actually invalid.
Midlands-North-West is one of the most disadvantaged regions in Europe on certain metrics. The attempt to deceive voters in such a region is especially outrageous.
Read more about Daniel Pocock’s campaign to represent Dublin Bay South in the Dáil
Josh and Kurt talk about the way WordPress vets their plugins. While WordPress has been in the news lately, they do some clever things to get plugins approved, including a static analyzer that runs against new submissions. We discuss using static analysis, securing open source, contributing and more.
Cloning the Linux Kernel repository takes time. We don’t need every commit ever for our work. But we do need multiple branches. Here are some numbers for how long it takes to do various operations.
First, the full clone
time git clone git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 7m56.985s
user 22m41.394s
sys 6m26.205s
Now the shallow clone. For us, this is actually the wrong branch, but there should be significant overlap.
time git clone --depth=1 git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 0m55.075s
user 0m27.710s
sys 0m7.393s
Now a blobless clone
time git clone --filter=blob:none git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 3m43.477s
user 5m37.896s
sys 2m35.509s
Now a treeless clone
time git clone --filter=tree:0 git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 1m53.469s
user 1m5.809s
sys 1m1.048s
Combining treeless and shallow?
time git clone --depth=1 --filter=tree:0 git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 1m11.402s
user 0m31.235s
sys 0m6.813s
What about a shallow clone since a certain tag:
time git clone --shallow-exclude=v6.10 git@gitlab.com:AmpereComputing/linux/linux.git
Cloning into 'linux'...
...
Updating files: 100% (81108/81108), done.
real 0m56.481s
user 0m27.536s
sys 0m7.306s
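The individual commands above can be wrapped in a small harness so the comparison is repeatable. This is only a sketch: `time_clone` and `STRATEGIES` are names invented here, and the repository URL should be whichever mirror you are testing against.

```python
import subprocess
import tempfile
import time

def time_clone(repo: str, opts: list) -> float:
    """Clone `repo` with the given extra git options into a throwaway
    directory and return the wall-clock time in seconds."""
    with tempfile.TemporaryDirectory() as tmp:
        start = time.monotonic()
        subprocess.run(["git", "clone", "--quiet", *opts, repo, tmp + "/linux"],
                       check=True)
        return time.monotonic() - start

# The strategies compared in the text.
STRATEGIES = {
    "full":             [],
    "shallow":          ["--depth=1"],
    "blobless":         ["--filter=blob:none"],
    "treeless":         ["--filter=tree:0"],
    "shallow+treeless": ["--depth=1", "--filter=tree:0"],
    "since v6.10":      ["--shallow-exclude=v6.10"],
}
```

Then `for name, opts in STRATEGIES.items(): print(name, time_clone(repo_url, opts))` prints one wall-clock figure per strategy (each run does a real clone, so expect it to take a while against a full kernel mirror).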
We have a long build process that makes use of extensive cherry picking of branches. It starts with a git clone of the Linux Kernel, and I want to see the timing differences using the different cloning options.
Because the shallow clone does not include the tree information, we cannot use it to do the cherry-picking, and so I will restrict my testing to the full clone, blob-less, and tree-less variants.
Full clone:
real    61m17.366s
user    507m48.236s
sys     171m28.766s
Tree-less:
real    69m8.975s
user    492m34.034s
sys     157m14.231s
Treeless was actually slower. Here is blob-less:
real    52m9.143s
user    486m23.953s
sys     163m25.927s
Nine minutes faster. This makes sense: once it has the base tree synchronized, the only additional blobs it needs to sync are the ones specific to each of the topic branches. This limits the additional communication to one stream of blobs per topic branch. It does not need to synchronize the older blobs, which I assume were the additional cost of the full clone, and it already has the tree information it needs to perform the cherry-pick metadata operations.
For our purposes, we are going to go with the blob-less option.
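A nice property of the blob-less option is that you can ask git which blobs have not been downloaded yet, via rev-list's `--missing=print` support for partial clones. Here is a small sketch; `missing_blobs` is a helper name invented for this example:

```python
import subprocess

def missing_blobs(repo_dir: str) -> list:
    """List object IDs of blobs a partial (blob-less) clone has not
    yet fetched.

    Relies on `git rev-list --missing=print`, which prints absent
    objects prefixed with '?' instead of trying to fetch them.
    """
    out = subprocess.run(
        ["git", "-C", repo_dir, "rev-list", "--objects",
         "--missing=print", "--all"],
        check=True, capture_output=True, text=True).stdout
    return [line[1:].split()[0] for line in out.splitlines()
            if line.startswith("?")]
```

Running this right after a blob-less clone (before a checkout pulls blobs in) shows how much data git deferred; an empty list means everything has been fetched locally.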
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, see the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 11 – 15 November 2024
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues
If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.
The post Infra and RelEng Update – Week 46 2024 appeared first on Fedora Community Blog.
Euthanasia legislation is currently subject to very active discussion in the Oireachtas, Irish parliament and Westminster, the British parliament.
The news media has given a lot of weight to testimonials from those with terminal illness and those who are close family members facing the ethical challenges alone. By way of example, the Guardian published a report about Paola Marra, the late wife of Blur drummer Dave Rowntree.
At the same time, we can see rival news reports have appeared. For example, Switzerland is well known for making euthanasia available lawfully but nonetheless, there are still cases of euthanasia that have taken place outside the legal framework. Police in the Kanton of Schaffhausen are currently investigating use of the 3D-printed "Sarco" suicide pod.
In Canada, a judge had to intervene to prevent a euthanasia taking place on economic rather than medical grounds.
The injunction, signed by Justice Simon R. Coval, is the first of its kind issued in the province and was issued on Saturday, the day before the woman was scheduled to die.
It prevents Dr. Ellen Wiebe or any other doctor from “causing the death” of the 53-year-old woman “by MAID or any other means.” It followed a notice of civil claim alleging Wiebe negligently approved the procedure for a patient who does not legally qualify.
When talking about the appearance of a Debian Suicide Cluster, or a suicide cluster in any organization for that matter, it is important to try to work out where it began and who appeared to be the first suicide: the search for suicide zero, so to speak. Given that the cause of death is not always disclosed, the perception that a particular death was a suicide could be as significant, for contagion purposes, as a confirmed suicide.
After all the evidence about a Debian Suicide Cluster began to appear, one of the racist Swiss women, Pascale Koester from Walder Wyss, had her man-slave apprentice send me the insult below. His reference to "persisting" reminded me of the email Jason Gunthorpe wrote about Joel "Espy" Klecker.
Subject: RE: 753935 - Debian [WW-DMS.FID901178]
Date: Wed, 26 Apr 2023 16:32:17 +0000
From: Nassisi Gillian <gillian.nassisi@walderwyss.com>
To: daniel@softwarefreedom.institute <daniel@softwarefreedom.institute>
CC: Köster Pascale <pascale.koester@walderwyss.com>

Mr Pocock,

We acknowledge receipt of both your emails dated 31 March and 21 April 2023.

First of all, our client does not see any offer to settle the case in your emails. Furthermore, as long as you persist in your actions, we are instructed not to enter into any discussion or settlement agreement with you.

Please instruct your lawyer to contact us for any further communications.

All rights reserved.

Kind regards,
Pascale Köster and Gillian Nassisi

Gillian Nassisi
MLaw, Trainee Lawyer
Walder Wyss Ltd. | www.walderwyss.com
Zurich | Geneva | Basel | Berne | Lausanne | Lugano
3 Boulevard du Théâtre, P.O. Box, 1211 Geneva 3, Switzerland
Phone direct: +41586583077 | Phone: +41 58 658 30 00 | Fax: +41 58 658 59 59

This e-mail message is for the sole attention and use of the intended recipient. It is confidential and may contain privileged information. Please notify us immediately if you have received it in error and delete it from your system. Thank you.
Here is the full version of the email quoted in the earlier blog post about Klecker. In the previous blog, I only showed the first half of the email. Now I show the full email. Notice the comment that Klecker didn't have the will to continue; in other words, he didn't have the will to persist. That is the link to the racist Swiss email above.
Racist Swiss women like Pascale Koester and Albane de Ziegler interfering in the lives of volunteers are a lot like this terrible disease, Duchenne muscular dystrophy, which wears people down both physically and emotionally to the point where they may decide to end it before it kills them in a physical sense.
It is important to remember that the manner in which these borderline nazis at Walder Wyss intruded on my family and me occurred after we already had the horrible experience of racial harassment from the cat-hating organizer of the far-right seniors group. After that incident in 2017, I left Zurich. As any cat will tell you, if you lie down with dogs, you get up with fleas.
Subject: RE: [jwk@espy.org: Joel Klecker] Date: Fri, 14 Jul 2000 20:40:00 -0600 (MDT) From: Jason Gunthorpe <jgg@ualberta.ca> To: debian-private@lists.debian.org On Tue, 11 Jul 2000, Brent Fulgham wrote: > > It's very hard for me to even send this message. This is a > > great loss to us all. > First, I'd like to extend my condolences to Joel's family. It > is still very hard to believe this has happened. Joel was > always just another member of the project -- no one knew (or > at least I did not know) that he was facing such terrible > hardships. Debian is poorer for his loss. Some of us did know, but he never wished to give specifics. I do not think he wanted us to really know. I am greatly upset that I was unable to at least be there with him on the 9th when he decided he could no longer continue.. > I know we are dedicating the release to him, but what about > having a "Klecker" box as a permanent part of Debian? If we I will see to this. It is nice to see all the support for Joel Jason
Am I the only Debian Developer who read the entire history and formed the impression that Klecker may be a case of euthanasia, in other words, Klecker chose to take his own life in the context of medically-assisted-suicide?
It is awkward to ask questions like this but it is important to remember that the former Debian leader, Chris Lamb, decided to force these discussions into the open at the time my father was dying. Lamb is an arrogant toff who forced that upon my family. Here is that recording from the Swiss harassment judgment where I was explaining to the court the difficulty of dealing with a racist Swiss landlady at the same time that my father had a stroke on the other side of the world in Australia:
Stepping back a bit, we can see that Klecker's home state, Oregon, was the first US state to introduce assisted-suicide legislation. Therefore, as Klecker was growing up, he and his family were exposed to a lot of public debate about euthanasia laws. The debate began in the early nineteen nineties, when Klecker was twelve or thirteen years old. A public referendum, Ballot Measure 16, approved the concept on 8 November 1994, when Klecker was 16 years old. (Coincidentally, Jérémy Bobbio aka ‘Lunar’ died of cancer in Rennes, France on 8 November 2024, 30 years after Oregon's referendum). Implementation of the act was delayed for almost three years due to interventions by the former president and the US Supreme Court. Oregon residents suffering from terminal illness could begin to avail themselves of euthanasia in October 1997.
It must have been an unpleasant time for his family to have this heightened public awareness of the euthanasia issue at almost exactly the same time that Klecker's childhood was visibly robbed from him by the evolution of his medical condition.
Under the euthanasia law, the doctor who completes the death certificate will not mention that euthanasia was a factor:
Q: What is listed as the cause of death on death certificates for patients who die under the Death with Dignity Act?
A: The Oregon Health Authority, Center for Health Statistics recommends that physicians record the underlying terminal disease as the cause of death and mark the manner of death “natural”. This is intended to balance the confidentiality of patients and their families, while ensuring that we have complete information for statistical purposes.
A death certificate is a legal document that has two primary purposes: (1) to certify the occurrence of death for legal matters (e.g., settling the estate), and (2) to document causes of death for public health statistics. To ensure that we have accurate and complete data on patients who have ingested the medications, the Oregon Health Authority regularly matches the names of persons for whom a DWDA prescription is written with death certificates. The Attending Physician is then required to complete a follow-up form with information about whether the death resulted from ingesting the medications, or from the underlying disease.
Therefore, unless the patient or their family chooses to disclose the decision to euthanise, we have no way to be sure that it was euthanasia.
Nonetheless, if Klecker did choose euthanasia, it would make him a pioneer of the concept. Cumulative statistics are published about the program; here is the report including all data up to 2023.
In the year that Klecker died, 39 people received a prescription for the medication and 29 died after taking the medication.
The statistics go into greater detail and provide further fuel for debate. For example, we can see that only sixty-three percent of people die within an hour of taking the medication while almost seven percent take more than six hours to die.
It is now seven years since my father had a stroke and the Debian people are still sooking that I wasn't fully available to do things like mentoring for Google, things that Google never pays us to do anyway.
Klecker's father sent the report below. Like all the other emails, it does not state whether it was medically-assisted suicide, but his time of death in the early hours of the morning, at 4:29 am, is consistent with the pattern of taking the medication before going to bed. Another key point in the email is that the Sun Ultra 30 and monitor were already packed up to be sent back to Ben Collins. We should not jump to conclusions about this: Klecker was somebody who was well organized, and even if it was not euthanasia, the doctors may have been able to give him some advance warning about the rate at which his illness would progress.
Subject: Re: [jwk@espy.org: Joel Klecker] Resent-Date: 11 Jul 2000 15:47:21 -0000 Resent-From: debian-private@lists.debian.org Date: Tue, 11 Jul 2000 10:47:19 -0500 (EST) From: matthew.r.pavlovich.1 <mpav@purdue.edu> To: Ben Collins <bcollins@debian.org> CC: Debian Private List <debian-private@lists.debian.org> To hear how the Internet and the Debian Project brought importance to someone who was battling a tragic illness is uplifting. It makes all the disagreements look very insignificant. Lets rock this release. For Joel. Matthew R. Pavlovich On Tue, 11 Jul 2000, Ben Collins wrote: > It's very hard for me to even send this message. This is a great loss to > us all. > > ----- Forwarded message from J Klecker <jwk@espy.org> ----- > > X-From_: jwk@espy.org Tue Jul 11 10:52:41 2000 > Date: Tue, 11 Jul 2000 07:58:57 -0700 > From: J Klecker <jwk@espy.org> > X-Mailer: Mozilla 4.61 (Macintosh; I; PPC) > X-Accept-Language: en,pdf > To: bcollins@debian.org, Jeff Klecker <jklecker@norpac.com> > CC: Dianne Klecker <dgk@espy.org>, jwk@espy.org > Subject: Joel Klecker > > Mr. Collins, > My son Joel died this morning at 4:29 am after a lifelong battle with > Duchenne Muscular Dystrophy. Joel was 21 years old. The Debian Project > was of paramount importance to him. > I am not technically skilled enough to maintain Joel's system or to > repair glitches. However, the system will continue to run as long as > possible (access as per normal). Please continue to use any information > or resources of value to the project. > I have sent the Sun Ultra 30 and monitor to you via UPS. Please notify > me of arrival. (I feel quite responsible since it was of great > importance to Joel) > Will you please notify any fellow developers with whom Joel worked. We > would welcome contacts from any of you. 
Either through this E-mail > channel, jwk@espy.org by phone at 503-769-7373 or by that "other" > mail: Jeffrey and Dianne Klecker and brother Ben Klecker > 1385 East Virginia St > Stayton, Or 97383 > Thank you and we miss him very much, Jeff Klecker > > > > ----- End forwarded message ----- > > -- > -----------=======-=-======-=========-----------=====------------=-=------ > / Ben Collins -- ...on that fantastic voyage... -- Debian GNU/Linux \ > ` bcollins@debian.org -- bcollins@openldap.org -- bcollins@linux.com ' > `---=========------=======-------------=-=-----=-===-======-------=--=---' > > > -- > Please respect the privacy of this mailing list. > > To UNSUBSCRIBE, email to debian-private-request@lists.debian.org > with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org > >
For the purposes of a suicide cluster analysis, we really don't need to violate Klecker's privacy or make any assertion that he chose euthanasia. All we are concerned with is whether exposure to the emails about this death might have been a contagion factor in any subsequent suicide decisions.
I first volunteered to be a mentor for Debian and Google Summer of Code back in 2012. I had volunteered for this program every year for six years. Not long after my father had a stroke, I sent an email to Molly de Blanc telling her that I wouldn't be fully available for mentoring in Google Summer of Code in 2018. Here is that email:
Subject: Re: Google Summer of Code 2018 Date: Mon, 22 Jan 2018 08:41:49 +0100 From: Daniel Pocock <daniel@pocock.pro> To: Molly de Blanc <deblanc@riseup.net> On 22/01/18 02:25, Molly de Blanc wrote: > I mmissed this on the application before! We need 2-5 administrators for > the application. Who else wants to be one? > You can use my name temporarily while looking for other people to help you in this role. Google knows me as a previous administrator for Ganglia in GSoC and I've met most of the Google people too. However, I can't officially commit to help with the duties of an administrator right now. Regards, Daniel
The Walder Wyss Swiss lawyers demanded more publicity about the Debian toxic culture. They have been extremely rude to my family and me. By spreading gossip and rumors about volunteers and our families, they create a situation where the only way to proceed is to publish emails like this so people can trace the root cause of these deaths.
Here is where the borderline nazis from Zurich demand that I publish something:
In this email, we see an example of Klecker's thinking about a week before he died: he was making arrangements for the Ultra Sparc 30. The machine was on loan from Sun Microsystems, now part of Oracle Corporation. In other words, they had sent this machine to the home of a dying young man so he could do unpaid work for them, and even as he prepared to die, he had to make arrangements to pack it up and return it to them.
Subject: Re: [jwk@espy.org: Joel Klecker] Resent-Date: 12 Jul 2000 01:23:28 -0000 Resent-From: debian-private@lists.debian.org Date: Tue, 11 Jul 2000 21:23:06 -0400 From: Ben Collins <bcollins@debian.org> To: Erick Kinnee <erick@kinnee.net>, Brent Fulgham <brent.fulgham@xpsystems.com>, Dirk Eddelbuettel <edd@debian.org>, Debian Private List <debian-private@lists.debian.org> On Tue, Jul 11, 2000 at 06:04:30PM -0500, Erick Kinnee wrote: > On Tue, Jul 11, 2000 at 03:05:10PM -0700, Brent Fulgham wrote: > > > As far as the naming goes, shouldn't it be 'espy' rather than > > > 'Klecker' ? > > > > > You're absolutely right... that's a much better choice. > > > > It might be nice to mention this to his family whenever a > > machine is made available. > > What about that Ultra 30? The UltraSPARC 30 wont be on a network. Joel was already making arrangements to send it back to me, about a week before this had occured. My intention then was to hold on to it so that I could have a SCSI system to test with (I don't have any SCSI UltraSPARC's to work with), and send my U5 to another developer so that they could assist with the SPARC port. Since those arrangements are already in place, I sort of need to stick with them to avoid calling Sun too much. Ben -- -----------=======-=-======-=========-----------=====------------=-=------ / Ben Collins -- ...on that fantastic voyage... -- Debian GNU/Linux \ ` bcollins@debian.org -- bcollins@openldap.org -- bcollins@linux.com ' `---=========------=======-------------=-=-----=-===-======-------=--=---'
On the day before Klecker died, he used IRC to tell people about it. Once again, I didn't want to tell people everything about my family but Google and the social media cabals have continued spreading gossip and smear campaigns about "behavior" at a time of grief.
Subject: Distressing news Date: Sun, 9 Jul 2000 10:30:01 -0400 From: Ben Collins <bcollins@debian.org> To: debian-private@lists.debian.org For those of you that do not IRC.... 06:42 *Espy* is saying 'goodbye' on IRC today, spread the word 06:42 <woot> Espy: oh? 06:44 <woot> Espy: would you elaborate on that? 06:46 <Espy> terminal illness close to death 06:46 <Joy> Espy: you're kidding, right? 06:47 <Espy> uh, no 06:47 <woot> Espy: what brought this on? 06:48 <Joy> Espy: omg! 06:50 <woot> Espy: have you known about this long? 06:53 <Espy> I've had this disease my entire life 06:55 <Espy> been bedridden at least as long as I've been a developer 06:57 <Espy> Duchenne Muscular Dystrophy 07:02 <Espy> http://mdausa.org/disease/dmd.html Espy is Joel Klecker, our long time devoted glibc maintainer. A few days ago he joined IRC and told us he was giving up his packages to deal with this very important juncture in his life. He has unsub'd from all lists but -private, and one other. Our thoughts and best wishes should go out to him. It might sound corny to some, but since Joel has dedicated so much time to Debian, even though he has also had to deal with this condition, I think we should dedicate our next release to him. -- -----------=======-=-======-=========-----------=====------------=-=------ / Ben Collins -- ...on that fantastic voyage... -- Debian GNU/Linux \ ` bcollins@debian.org -- bcollins@openldap.org -- bcollins@linux.com ' `---=========------=======-------------=-=-----=-===-======-------=--=---'
When we talk about contagion, it is interesting to see if any subsequent suicides have features that can be traced to the communications about Klecker's death.
As we saw above, Klecker wanted to tell people, to communicate, before the end came. Frans Pop, the Debian Day Volunteer Suicide Victim, did much the same thing: he also sent a note on debian-private the night before Debian Day. It is eerie: Klecker and Pop both wanted to transfer control of their packages before they died.
Klecker and Pop both made some effort to coordinate the transfer of their equipment to other developers too. We can see how people removed the machines (and possible evidence) from his home.
Finally, a more mundane matter. Frans was hosting/using a number of machines at his house and asked that they be passed back to Debian. Please contact me *off-list* if you can help. His parents live a fair way from the town where he lived, so will need to arrange to travel there to meet people. They'd therefore appreciate it if one person can take care of everything.
Klecker, Pop and Adrian von Bidder had all been outspoken about philosophical aspects of computing, technology and free software. Does this make them more predisposed to suicide, or was this simply a coincidence?
Klecker and von Bidder both had a Debian logo on their grave:
After the death of Klecker, people decided to dedicate the release to him and name a machine after him. Here is an email about the release; notice it was sent before the time of death:
Subject: Re: draft dedication note Date: Sun, 9 Jul 2000 23:18:37 +0300 From: Antti-Juhani Kaijanaho <gaia@iki.fi> To: Debian Private List <debian-private@lists.debian.org> On Sun, Jul 09, 2000 at 03:58:51PM -0400, Ben Collins wrote: > Dedicated to Joel Klecker > > Although he has not left us, and will always be in our thoughts, we are > decicating this release of Debian to him. Throughout his long association > with our project, he has given unselfishly to Free Software. Most of us > were oblivious to what Joel was facing, and are only now seeing what a > true and giving person he was, and what a friend we will be losing. So as > a show of our appreciation, this one's for you, Joel. > > * The "Joel Klecker" release If I did not know what is going on, I'd find this note puzzling. What is it that "Joel was facing"? Why are we "losing" him? If we want to publicize this, I think it should be with a little more clarity. Also, I believe it is customary to use the past tense only of people who have died. (BTW^2, will this release be Debian 2.2 "potato" or Debian 2.2 "Joel Klecker" or something else?) -- %%% Antti-Juhani Kaijanaho % gaia@iki.fi % http://www.iki.fi/gaia/ %%%
Here is the discussion about naming the machine after him:
Subject: Re: [jwk@espy.org: Joel Klecker] Date: Tue, 11 Jul 2000 14:57:32 -0700 (PDT) From: Sean 'Shaleh' Perry <shaleh@valinux.com> Organization: VA Linux To: Dirk Eddelbuettel <edd@debian.org> CC: Debian Private List <debian-private@lists.debian.org> On 11-Jul-2000 Dirk Eddelbuettel wrote: > > Seconded. > > As far as the naming goes, shouldn't it be 'espy' rather than 'Klecker' ? > probably, just did not feel like arguing over something like this.
In the end, they named the machine klecker.debian.org and it is still there today.
Klecker's death was not the only influence in the Debian world. For example, Thiemo Seufer died and Frans Pop commented on it before his own death:
Subject: Re: Thiemo Seufer Date: Sat, 27 Dec 2008 04:45:10 +0100 From: Frans Pop <elendil@planet.nl> To: debian-private@lists.debian.org On Friday 26 December 2008, Martin Michlmayr wrote: > I'm sorry to inform you that Thiemo Seufer died in a car accident > this morning. I was told that a big, fast moving car collided with > his car, forcing his car from the high way. That is very sad news and a great loss to the project. My condolences to his family and friends. Frans
Looking through these discussions, it is noteworthy that human grief is being trivialized into text-based communications. This could be another factor: when a community doesn't have real-world mechanisms to process grief and resorts to a gossip channel like debian-private, its members may never process the grief properly.
Subject: Re: Privacy of -private list ; missing even one bit of common sense and decency Date: Sun, 22 Aug 2010 01:01:34 -0500 From: Gunnar Wolf <gwolf@gwolf.org> To: George Danchev <danchev@spnet.net> CC: debian-private@lists.debian.org George Danchev dijo [Sun, Aug 22, 2010 at 08:06:01AM +0300]: > > > > And I guess that quite a few people share your view. It's considered > > > > as a crime or a sin in many religions. > > > > > > > > I personally do not like this point of view at all, but it > > > > unfortunately probably has to be respected. > > > > > > As a general rule I don't think that we have to respect the views of a > > > religious minority > > > > I wouldn't say it's a minority. According to wikipedia, the catholic > > dogmas consider this as a crime, and islam considers this as a sin or > > even a crime. That already makes quite a few people. > > Just as a data point, there are few countries where Euthanasia [1] is legal > and the Netherlands is one of them, so I guess first of all we must respect > countries legislation and peoples own legal decision. Yes, I know, we do. > Also, I don't want to speculate if that is the case with our fellow DD Frans, > since I simply don't know. I've never worked with him closely, however I > acknowledge the huge amount of valuable contributions he invested in Debian. Please. Stop it. So much arguing about what you know shit about, and that took the life of one of our group, a respected and hard-working person, makes me sick. We know nothing about the fact. Probably, we will never know. Stop, please, hallucinating about the situations that led him to do what he did.
Now we can see things are getting even worse: we have these fascist lawyers who come along and look for any excuse to extinguish people. In the Debian world, we are not employees, so we can't be sacked. Therefore, the lawyers will try anything, whether it is the rogue UDRP harassment in the WeMakeFedora case or even trying to have a developer forcefully euthanised. If there is a law for it, and if the rogue corporations are willing to pay for it, some lawyer will try to exploit that law to meet their evil objectives.
There is a huge contradiction: one of the world's most advanced superpowers, the United States, remains very skeptical of euthanasia. Very few states copied the example from Oregon to authorize any form of assisted suicide. Yet at the same time, US industry is leading the world in artificial intelligence, a technology that could be even more catastrophic than nuclear war if we lose control of it.
Subject: Re: ideology an free software Date: Wed, 29 Sep 2004 12:43:59 +0200 From: Jose Carlos Garcia Sogo <jsogo@debian.org> To: debian-private@lists.debian.org El mié, 29-09-2004 a las 11:12 +0100, David Spreen escribió: > hey there, > > the recent dwn issue's story about a debian surveillance robot made me > think about some ethical issues raised by free software. what about > debian powered weapon systems? > > who of you would not work for a weapon-company but conforms with the > dfsg-guideline "no discrimination of use"? would you like to see your > work to be part of a weapon killing people? No discrimination means no discrimination. If you start discriminating, you'll never end doing it. Every developer can have some reason for discriminating some group, person or goal. (Do you like Debian(-med) being used in abortions, euthanasia, investigation with animals, ...?) Also, we cannot impose further restrictions in licenses as (L)GPL, OSI*, BSD,... Cheers, P.S: BTW, I guess that we would like to see Debian being used in any spatial program... take into account that the technology needed for launching people out in the space is the same than the one needed for putting nuclear warfares out there. -- Jose Carlos Garcia Sogo jsogo@debian.org
As we rush to construct Artificial Intelligence systems for the military, is it a sign that humanity as a whole is engaging in a form of euthanasia?
The recent news about the World Central Kitchen drone killings in Gaza put the risk of autonomous weapons back into public consciousness. For those who have a background in this industry, those concerns are not actually very new; we've been thinking about these possibilities for a long time. The message above was from 2004.
My old friend Peter Eckersley warned that Google should not help the US military with AI.
Then again, how did I find out so much about the first AI-powered autonomous drone being tested at Graytown in Australia and then described in a journal on the Pentagon web site in 2004? The software quoted in the article, JACK, was a student project at the University of Melbourne in 1999. The top guns (excuse the pun) of the CS department were put on it. One of our fellow team members is now Google Professor of Computer Science at Oxford. Peter would neither be amused nor surprised.
This is a summary of the work done on initiatives by the CPE Team. Every quarter, the CPE team works together with CentOS Project and Fedora Project community leaders and representatives to choose projects that will be worked on in that quarter. The CPE team is then split into multiple smaller sub-teams that work on the chosen initiatives plus the day-to-day work that needs to be done. Some of the sub-teams are dedicated to the team's continuous efforts, while others are created only for the purposes of an initiative.
This update is made up of infographics and detailed updates. If you just want to see what's new, check the infographics. If you want more details, continue reading.
The Community Platform Engineering Team is a Red Hat team working exclusively on community projects. Its members are part of the Fedora Infrastructure, Fedora Release Engineering and CentOS Infrastructure teams. The team works on initiatives, which are projects with a larger scope related to community work that needs to be done. It also investigates possible initiatives with the ARC (the Advance Reconnaissance Crew), which is formed from a subset of the Infrastructure & Release Engineering sub-team members, depending on the initiative being investigated.
Issue trackers
PDC is the Product Definition Center, running at: https://pdc.fedoraproject.org/.
However, this application, which was developed internally, is no longer maintained. The codebase has been “orphaned” for a few years now, and we need to find a solution for it.
We are taking a critical look at what we store in there to see what is really needed, and then we will find a solution for its replacement.
Status: Done
Issue trackers
Documentation
Application URLs
In the last quarter of 2021, a mini-initiative finished and deployed the discourse2fedmsg application. In short, this application is a simple Flask app that receives POST requests from Discourse (i.e. “webhooks”), turns them into Fedora Messages, and then sends them through to the Fedora Messaging queue.
Webhooks are a fairly common feature in current web applications, so this proposal is to create a new web application that reuses the common parts of discourse2fedmsg and is set up to be extended to send messages from other webhook-enabled apps.
This would allow us to easily add support for apps like GitLab without having to create and deploy an additional Flask application for each app added in the future.
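The core of such a generalised webhook bridge is small: verify the sender's signature on the raw request body, then translate the payload into a message envelope with a topic derived from the source app and event. The sketch below illustrates that pattern in plain Python with only the standard library; the function names, topic scheme, and secret handling are illustrative assumptions, not the actual discourse2fedmsg or fedora-messaging API.

```python
import hashlib
import hmac
import json

def verify_signature(secret: bytes, body: bytes, signature_header: str) -> bool:
    """Webhook senders typically sign the raw request body with a shared
    secret (e.g. an X-...-Signature header); verify before trusting it."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

def webhook_to_message(source: str, event: str, payload: dict) -> dict:
    """Turn one webhook delivery into a message envelope. Keeping this
    translation generic is what lets the same app serve Discourse,
    GitLab, or any other webhook-enabled tool."""
    return {
        "topic": f"{source}.{event}",  # e.g. "discourse.post_created"
        "body": payload,
    }

# Simulate one delivery end to end:
body = json.dumps({"post": {"id": 42}}).encode()
sig = "sha256=" + hmac.new(b"s3cret", body, hashlib.sha256).hexdigest()
assert verify_signature(b"s3cret", body, sig)
msg = webhook_to_message("discourse", "post_created", json.loads(body))
print(msg["topic"])  # discourse.post_created
```

In the real application the envelope would be published to the Fedora Messaging queue rather than returned as a dict, and each supported app would contribute its own signature scheme and payload-to-topic mapping.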
Status: In Progress
Issue trackers
Documentation
This investigation is looking at a potential replacement for the dist-git used by Fedora and which forge would be the best candidate. It examines the user stories for the current dist-git and whether it is possible to apply them to Forgejo or GitLab.
Status: In Progress
Documentation
If you got this far, thank you for reading. If you want to contact us, feel free to do so on Matrix.
As CPE members are part of Fedora Infrastructure, Fedora Release Engineering and CentOS Infrastructure, I'm adding links here to the Fedora Infra & Releng update and the CentOS Infrastructure update.
The post CPE Update Q3 2024 appeared first on Fedora Community Blog.
Date: Fri, Nov. 15th, 2024, three time slots to connect our global community
Register and RSVP here:
With Fedora’s tradition of celebrating each release, we’re thrilled to introduce a more inclusive format this time around: Fedora Global Watch Parties. Instead of a single release party event, Fedora’s Global Watch Parties create three time slots across APAC, EMEA, and LATAM/NA, making it easier for everyone to participate in a way that fits their schedule and time zone.
Each Watch Party is designed to be easy to join, running around 1.5 hours. This shorter format means that more Fedora users, contributors, and community members can attend, connect, and engage. It’s a perfect opportunity to meet other Fedora community members, learn more about Fedora 41, and share the excitement of the release—no matter where you are.
With the change to holding sessions across three different time zones, we’re making it easier than ever for contributors and users worldwide to attend and participate. Whether you’re joining to learn about Fedora 41’s features or to connect with others who share your passion for open-source, we want to bring our community closer together.
Pick the time slot that works best for you, grab your Fedora swag, and join us for our Fedora 41 release event. See you there!
The post Announcing the Fedora 41 Global Watch Parties appeared first on Fedora Community Blog.
A news report by Edouard Bolleter of PME Magazine.
He has written a news report that feels like a paid advertisement.
He wrote "a legal services insurance unlimited for the private individuals and the small businesses" and later on "We are the only insurer to accept businesses marked like a risk, those who are most frequently rejected by the legal expenses insurance market".
Monsieur Bolleter does not ask any difficult questions. The journalists in Switzerland are afraid of criminal prosecution/persecution for writing any inconvenient truths.
If it seems too good to be true, it probably is.
The law office that wants to democratize the law
No, jurists are not only for big companies! The proof is Real-Protect, an unlimited legal services insurance for private individuals and small businesses.
Edouard Bolleter, 20.07.2018
Mathieu Parreaux, employee and co-founder of the law office Parreaux, Thiébaud & Partners, launched Real-Protect, whose terribly democratic concept and startup spirit should appeal to the bosses of French-speaking SMEs. The young company offers unlimited legal protection for individuals and businesses. The firm is made up of general lawyers (more than 10 people) and works with a network of partner lawyers registered with the Geneva, Vaud, Valais, Fribourg and Neuchâtel Bars. Real-Protect already has 450 clients with rates starting at 24.90 francs per month.
75% of clients are small businesses
Originality of the approach: any client can receive legal advice, orally or in writing, and without limits. “We are the only ones to accept companies labeled as being at risk, which are most often rejected by legal protection insurers on the market. These are primarily companies active in real estate. Paradoxically, we are also the only legal protection to enter into the matter when it comes to attacking the opposing party. These different points allow us to welcome everyone,” defends Mathieu Parreaux.
In addition to legal protection, the firm meets the tailor-made needs of SMEs, which represent 75% of its clientele. “Starting a business requires funds and 99% of SMEs start their business without contracts or general conditions, or with models taken from the internet, which is extremely dangerous. Whether they are partnerships or capital companies, we structure our prices according to their budget in order to allow them to build solid legal positions from the start,” explains Mathieu Parreaux.
The service is indeed targeted at SMEs, covering corporate law, contract law, tax law and debt enforcement law. “Our entire legal apparatus is built to support SMEs from A to Z, advising them on their structure, drafting their contracts, general conditions, etc. And also for more specific questions, in the event of a merger, acquisition, or transformation of companies,” concludes the lawyer.
At the International TeX Users Group Conference 2023 (TUG23) in Bonn, Germany, I presented a talk about using Metafont (and its extension MetaPost) to develop traditional orthography Malayalam fonts, on behalf of C.V. Radhakrishnan and K.H. Hussain, who were the co-developers and authors. And I forgot to post about it afterwards — as always, life got in the way.
In early 2022, CVR started toying with Metafont to create a few complicated letters of the Malayalam script, and he showed us a wonderful demonstration that piqued the interest of many of us. With the same code base, by adjusting the parameters, different variations of the glyphs can be generated, as seen in a screenshot of that demonstration: 16 variations of the same character ഴ generated from the same Metafont source.
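That is the essence of the Metafont approach: a glyph is a program, and its parameters (pen width, slant, and so on) are knobs that produce a family of variants from one description. The actual Metafont source is not shown here, so the following is only a stand-in sketch in Python: a single parameterised function emitting SVG path data, evaluated over a grid of hypothetical width and slant values to yield 16 distinct variants of one stroke.

```python
# A stand-in for the parameter-driven variation idea; the shape, the
# parameter names and their ranges are illustrative, not the real
# Metafont source for the Malayalam glyphs.

def stroke_path(width: float, slant: float, height: float = 100.0) -> str:
    """Emit SVG path data for a simple slanted stroke. The same code,
    driven by different (width, slant) parameters, yields a different
    variant of the shape, much as Metafont parameter files do."""
    x0, y0 = 0.0, height                # bottom-left of the stroke
    x1, y1 = slant * height, 0.0        # slant shifts the top of the stroke
    return (f"M {x0:.1f} {y0:.1f} L {x1:.1f} {y1:.1f} "
            f"L {x1 + width:.1f} {y1:.1f} L {x0 + width:.1f} {y0:.1f} Z")

# 16 variants from one description, as in the demonstration screenshot:
variants = [stroke_path(w, s) for w in (6, 9, 12, 15)
                              for s in (0.0, 0.1, 0.2, 0.3)]
print(len(variants))       # 16
print(len(set(variants)))  # 16 -- every parameter combination is distinct
```

In real Metafont the shapes are drawn with pens along Bézier paths rather than assembled from polygon corners, but the workflow is the same: change the parameters, regenerate, and a whole consistent family of glyph variants falls out.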
Hussain, quickly realizing that the characters could be programmatically assembled from a set of base/repeating components, collated an excellent list of basic shapes for Malayalam script.
I bought a copy of ‘The METAFONTbook’ and started learning and experimenting. We soon found that Metafont, developed by Prof. Knuth in the late 1970s, generates bitmap/raster output; but its extension MetaPost, developed by his Ph.D. student John Hobby, generates vector output (PostScript), which is required for OpenType fonts. We also found that ‘MetaType1’, developed by Bogusław Jackowski et al., has very useful macros and ideas.
We had a lot of fun programmatically generating the character components and assembling them, splicing them, sometimes cutting them short, and transforming them in every useful manner. I developed a new set of tools to generate the font from the vector output (SVG files) produced by MetaPost; these tools are also used in later projects such as the Chingam font.
At the annual TUG conference 2023 in Bonn, Germany, I presented our work, and we received good feedback. There were three presentations about Metafont itself at the conference. I also had the pleasure of meeting Linus Romer, who shared some ideas about designing variable-width reph shapes for Malayalam characters.
The video of the presentation is available on YouTube.
The article was published in the TUGboat conference proceedings (volume 44): https://www.tug.org/TUGboat/tb44-2/tb137radhakrishnan-malayalam.pdf
Postscript (no pun intended): after the conference, I visited some of my good friends in Belgium and the Netherlands. En route, my backpack, with passport, identity cards, laptop, a phone, money, etc., was stolen at Liège. I can’t thank my friends in Belgium and back at home enough for their unbridled care, support and help in the face of a terrible affliction. On the day before my return, the stolen backpack, with everything except the money, was found by the railway authorities and I was able to claim it just in time.
I made yet another visit to the magnificent Plantin–Moretus Museum (it holds the original Garamond types!), where I could myself ink and print a metal-typeset block of a sonnet by Christoph Plantijn from 1575; the print now hangs in the office of a good friend.
Dear syslog-ng users,
This is the 125th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
After the last syslog-ng release, we started a campaign to close open issues on GitHub. We'd like to continue this effort and call for collaboration from our users and contributors to make OSE even more stable. While unit tests are great (and we do many tests in-house), nothing can replace using syslog-ng in real-world situations. This blog collects some resources about how you can start testing the latest syslog-ng release from GitHub.
https://www.syslog-ng.com/community/b/blog/posts/a-call-for-syslog-ng-testing
We are always looking for new ways to store log messages. Quickwit is a new contender, designed for log storage, and among others, it also provides an Elasticsearch-compatible API. From this blog, you can learn about Quickwit, and how to forward log messages from syslog-ng to it using the Elasticsearch-compatible API.
https://www.syslog-ng.com/community/b/blog/posts/first-steps-with-quickwit-and-syslog-ng
You can also use the OpenTelemetry protocol to send logs to Quickwit with syslog-ng.
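As a rough illustration of what an Elasticsearch-compatible ingestion API expects, the _bulk endpoint takes newline-delimited JSON: an action line followed by a document line per record. The port, index name, and field names below are my assumptions for a local test setup, not details from the article:

```python
import json

def to_es_bulk(records, index="logs"):
    """Build an Elasticsearch-style _bulk NDJSON payload:
    one action line followed by one document line per record."""
    lines = []
    for rec in records:
        lines.append(json.dumps({"create": {"_index": index}}))
        lines.append(json.dumps(rec))
    return "\n".join(lines) + "\n"

payload = to_es_bulk([{"message": "hello from syslog-ng", "severity": "info"}])
# A local Quickwit instance would typically receive this via something like:
# POST http://localhost:7280/api/v1/_elastic/_bulk   (hypothetical setup)
print(payload)
```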
Recently I wrote about a campaign that we started to resolve issues on GitHub. Some of the fixes are coming from our enthusiastic community. Thanks to this, there is a new syslog-ng-devel port in MacPorts, where you can enable almost all syslog-ng features even for older macOS versions and PowerPC hardware. Some of the freshly enabled modules include support for Kafka, GeoIP and OpenTelemetry. From this blog entry, you can learn how to install a legacy or an up-to-date syslog-ng version from MacPorts.
https://www.syslog-ng.com/community/b/blog/posts/huge-improvements-for-syslog-ng-in-macports
New webcast: “Log management superpowers: a PAM Essentials case study”. You can watch it at https://www.syslog-ng.com/webcast-ondemand/log-management-superpowers-a-pam-essentials-case-study/
You can learn about upcoming webinars and browse recordings of past webinars at https://www.syslog-ng.com/events/
Your feedback and news, or tips about the next issue are welcome. To read this newsletter online, visit: https://syslog-ng.com/blog/
Kamal 2 can deploy multiple apps on a single server so it’s easy to lose track of what’s deployed. This alias will fix it.
To see all running apps on a server we need to query Kamal Proxy:
$ ssh [USER]@[SERVER]
$ docker exec kamal-proxy kamal-proxy list
Service Host Target State TLS
dealership dealearship.com cde2433e86d6:80 running yes
dealership-api api.dealearship.com 82361b53174f:80 running yes
...
But even better, we can create an alias inside config/deploy.yml:
# config/deploy.yml
...
aliases:
apps: server exec docker exec kamal-proxy kamal-proxy list
Now running kamal apps gives us a nice rundown of what’s running, which is pretty sweet.
Here’s a short tip on opting out a specific model from Single Table Inheritance (STI).
Imagine a Vehicle model which is implemented using STI and extended via a type column into Sedan and Wagon models:
# Superclass
class Vehicle < ApplicationRecord
self.inheritance_column = :type
end
# STI model
class Sedan < Vehicle
end
# STI model
class Wagon < Vehicle
end
All of this is nice, except we might want to also subclass the Vehicle model as usual, without any STI magic. To do that, we simply set inheritance_column to type_disabled:
# Non-STI subclassing, will behave as Vehicle
class Product < Vehicle
self.inheritance_column = :type_disabled
end
Now that I have cleaned the loop up somewhat, we can continue with the process of refactoring the code. This is a continuation of the article series I started here.
This next step really moves beyond refactoring. I have identified that the intended backtracking was not actually implemented in the code. Instead, the whole board is wiped and rebuilt many times, causing a decent slowdown in execution. My refactoring process thus far has allowed me to understand the code well enough to attempt an improvement.
Let’s get a quick “before” timing:
time python tree_sudoku.py
...
real 0m8.289s
user 0m8.195s
sys 0m0.020s
Considering the problem size and the computing power available, I think this should be almost instantaneous. 8 seconds indicates a problem.
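For timing a single call from inside Python, instead of the shell’s time, a small helper like this works; the solver call in the comment is hypothetical, not the article’s exact API:

```python
import time

def timed(fn, *args):
    """Run fn(*args), print the elapsed wall-clock time, and return the result."""
    start = time.perf_counter()
    result = fn(*args)
    print("took %.3fs" % (time.perf_counter() - start))
    return result

# e.g. timed(solver.tree_to_solution_string, original_board)  # hypothetical call
total = timed(sum, range(1_000_000))
```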
The bug I identified earlier is that the retreat function does not reset the board to its original state. First I will re-enable the failing test:
diff --git a/test_tree_sudoku.py b/test_tree_sudoku.py
index 462b75b..c18de03 100755
--- a/test_tree_sudoku.py
+++ b/test_tree_sudoku.py
@@ -60,7 +60,7 @@ def test_advance():
node.write(test_board)
assert (test_board[0][3] == '9')
back_node = node.retreat()
- # assert (test_board[0][3] == '0')
+ assert (test_board[0][3] == '0')
assert (node.value == "9")
back_node.write(test_board)
assert (test_board[0][2] == '3')
In order to return the old value, we need to record it. This is probably the first time we see the real benefit of extracting the write method: we change it once and get the change executed everywhere it is called.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index de5c7fe..a161a99 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -202,6 +202,7 @@ class Tree_Node:
return new_node
def retreat(self):
+ self.board[self.row][self.col] = self.old_value
node = self.last_node
node.next_node = None
return node
@@ -213,6 +214,8 @@ class Tree_Node:
return self.value
def write(self, board):
+ self.board = board
+ self.old_value = board[self.row][self.col]
board[self.row][self.col] = self.value
def check_solved(self, board):
Putting the board in the parameter list to remember it is a hack: this dependency should be in the constructor. However, you can see that the board is changed when we restart the loop. But maybe now it doesn’t have to be… let’s see. If I make the following change:
diff --git a/tree_sudoku.py b/tree_sudoku.py
index a161a99..dadb004 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -39,9 +39,9 @@ class SudokuSolver:
index = 0
head_node = Tree_Node(None, index)
curr_node = head_node
+ test_board = copy.deepcopy(original_board)
while True:
filler = head_node
- test_board = copy.deepcopy(original_board)
filler.write(test_board)
while filler.next_node:
filler = filler.next_node
Things break. Specifically, the retreat function attempts to retreat beyond the start of the first node, and we get an exception. Hmmm.
OK, before I tackle that, I see some simplification I can do. The if and the while statement around retreat are doing the same check. Remove the if.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index a161a99..e5209c8 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -53,10 +53,9 @@ class SudokuSolver:
curr_node = curr_node.advance(test_board)
curr_node.check_solved(test_board)
else:
- if len(curr_node.possible_values) == 0:
- # backtrack
- while len(curr_node.possible_values) == 0:
- curr_node = curr_node.retreat()
+ # backtrack
+ while len(curr_node.possible_values) == 0:
+ curr_node = curr_node.retreat()
curr_node.next()
return self.build_solution_string(head_node)
OK, back to the looping. Note that the looping is based on the array of possible_values set in the constructor of the Tree_Node. This holds the set of values that have not yet been tried.
But it seems like they should get reset when we retreat. Retreat means we have exhausted all the possibilities forward of the point that the node points to, and we are going to try the next available option in a previous node. The reason this works is that we create a new node each time we advance. This is wasteful, but not the problem at hand.
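A toy reduction of that per-node candidate bookkeeping may help (my sketch, reusing the article’s names but not its exact code): each node starts trying ‘9’, and next() consumes the current candidate, so an empty possible_values list means the cell is exhausted and we must retreat.

```python
class TreeNodeSketch:
    """Toy model of Tree_Node's candidate handling (not the real class)."""
    def __init__(self):
        # values for this cell that have not been tried yet
        self.possible_values = [str(v) for v in range(1, 10)]
        self.value = self.possible_values[-1]  # start trying at '9'

    def next(self):
        # consume the current candidate; the next untried one becomes value
        self.possible_values.pop()
        if self.possible_values:
            self.value = self.possible_values[-1]

node = TreeNodeSketch()
node.next()
print(node.value, len(node.possible_values))  # '8' with 8 candidates left
```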
Why does retreat stop when the deepcopy happens each time, but not when it happens once? This would be hard to debug interactively, but we can spew log information and see if it helps generate a hypothesis. Here are the first and last couple of lines of output:
write row = 0, col = 0 value = 9
write row = 0, col = 0 value = 9
write row = 0, col = 0 value = 8
write row = 0, col = 0 value = 8
write row = 0, col = 0 value = 7
write row = 0, col = 0 value = 7
write row = 0, col = 0 value = 6
write row = 0, col = 0 value = 6
write row = 0, col = 0 value = 5
write row = 0, col = 0 value = 5
write row = 0, col = 0 value = 5
write row = 0, col = 1 value = 9
....
retreat row = 0, col = 1 value = 1 old value = 1
write row = 0, col = 0 value = 4
write row = 0, col = 0 value = 4
write row = 0, col = 0 value = 3
write row = 0, col = 0 value = 3
write row = 0, col = 0 value = 2
write row = 0, col = 0 value = 2
write row = 0, col = 0 value = 1
write row = 0, col = 0 value = 1
retreat row = 0, col = 0 value = 1 old value = 1
That first set of entries looks wrong: once it writes a value, it should immediately move to the next cell. Instead we see that both row and col stay the same.
I just looked back over the original code to see if I changed the semantics of row/col from what the longer version used: no. They always used the board_spot value attached to the node in the constructor. Here it is from the very first version, pre-refactoring:
test_board[int(current_board_filling_node.board_spot[0])][int(current_board_filling_node.board_spot[1])] = current_board_filling_node.value
So pre-calculating row and col is just an optimization, not a change in logic.
The check in the while loop that causes it to exit is:
while len(curr_node.possible_values) == 0:
This array is reduced in the call to check_solved.
def check_solved(self, board):
if board[self.row][self.col] != '0':
self.value = board[self.row][self.col]
self.possible_values = []
If that function is never called for the first node, it will still have possible values left. That function gets called in the advance function. It gets called immediately after that function as well, which means we can probably remove one of the calls. Let’s revert to HEAD and do that cleanup.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index e5209c8..61a8d6a 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -51,7 +51,6 @@ class SudokuSolver:
if curr_node.index + 1 >= MAX:
break
curr_node = curr_node.advance(test_board)
- curr_node.check_solved(test_board)
else:
# backtrack
while len(curr_node.possible_values) == 0:
It works.
OK, back to the loop issue. That deep copy offends me. It should not be necessary if we are cleaning up after ourselves. But the thing that puzzles me is the loop:
while filler.next_node:
filler = filler.next_node
filler.write(test_board)
The nodes it is looping through have values in them because we wrote to them last pass. In delete we removed all nodes past the high-water mark. So the first node we hit should be one that…
But I just noticed another duplicate write…Let me remove that before progressing.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index 61a8d6a..cb920ed 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -46,7 +46,6 @@ class SudokuSolver:
while filler.next_node:
filler = filler.next_node
filler.write(test_board)
- filler.write(test_board)
if self.box_index.is_value_valid(test_board, curr_node):
if curr_node.index + 1 >= MAX:
break
OK. So let’s look at the line that throws the exception:
# backtrack
while len(curr_node.possible_values) == 0:
curr_node = curr_node.retreat()
curr_node.next()
curr_node.next() pops the value out of the array of potential solutions.
The error seems to happen when we go off the beginning of the list. This feels like it should be an error condition: we were given an unsolvable problem. I think, though, that something must be resetting the list.
Maybe we can fix it by checking for None. I don’t like it, as I don’t know why we are going off the list, but maybe it is sufficient, for now, to not go off the list. It seems like the next() call should fail for the same reason…but it doesn’t.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index cb920ed..939c1da 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -54,6 +54,9 @@ class SudokuSolver:
# backtrack
while len(curr_node.possible_values) == 0:
curr_node = curr_node.retreat()
+ if curr_node is None:
+ break
+
curr_node.next()
return self.build_solution_string(head_node)
Maybe the break statement is breaking out of multiple levels of the loop? But then our tests should fail. Let’s try warning if we try to retreat past the beginning.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index cb920ed..1cf2817 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -39,9 +39,9 @@ class SudokuSolver:
index = 0
head_node = Tree_Node(None, index)
curr_node = head_node
+ test_board = copy.deepcopy(original_board)
while True:
filler = head_node
- test_board = copy.deepcopy(original_board)
filler.write(test_board)
while filler.next_node:
filler = filler.next_node
@@ -201,6 +201,9 @@ class Tree_Node:
def retreat(self):
self.board[self.row][self.col] = self.old_value
node = self.last_node
+ if node is None:
+ print("off the rails row = %d col = %d" % (self.row, self.col))
+ print (self.board)
node.next_node = None
return node
Yep, that reports:
off the rails row = 0 col = 0
[['2', '2', '3', '2', '2', '2', '6', '2', '0'], ['9', '0', '0', '3', '0', '5', '0', '0', '1'], ['0', '0', '1', '8', '0', '6', '4', '0', '0'], ['0', '0', '8', '1', '0', '2', '9', '0', '0'], ['7', '0', '0', '0', '0', '0', '0', '0', '8'], ['0', '0', '6', '7', '0', '8', '2', '0', '0'], ['0', '0', '2', '6', '0', '9', '5', '0', '0'], ['8', '0', '0', '2', '0', '3', '0', '0', '9'], ['0', '0', '5', '0', '1', '0', '3', '0', '0']]
Something about the dirty board is causing it to go off the rails. The puzzle starts out like this: 003020600900305. What should be clear is that the 0s did not get replaced in the places where there are 2s in the first row. Obviously, that many 2s in a row should not have gotten that far, either.
Aha… I was replacing the value every time write was called. But what needs to happen is to reset the cell to the board’s original value. While the node gets deleted each time, if the board stays around, there is no way to reset it to its original value. The hacky way to enforce this is:
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -191,6 +191,7 @@ class Tree_Node:
self.next_node = None
self.value = '9'
self.index = index
+ self.old_value = None
def advance(self, test_board):
new_node = Tree_Node(self, self.index + 1)
@@ -212,7 +213,8 @@ class Tree_Node:
def write(self, board):
self.board = board
- self.old_value = board[self.row][self.col]
+ if self.old_value is None:
+ self.old_value = board[self.row][self.col]
board[self.row][self.col] = self.value
def check_solved(self, board):
With that change, we can keep the earlier change of deepcopying the board only once.
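The effect of capturing old_value only on the first write can be shown in isolation. This is a toy reduction of the pattern, not the article’s actual class:

```python
class CellWriter:
    """Toy model: a node writes its trial value into a shared board and
    can restore the board's pristine value when we retreat."""
    def __init__(self, row, col, value):
        self.row, self.col, self.value = row, col, value
        self.old_value = None

    def write(self, board):
        if self.old_value is None:       # capture the pristine value only once
            self.old_value = board[self.row][self.col]
        board[self.row][self.col] = self.value

    def retreat(self, board):
        board[self.row][self.col] = self.old_value  # restore on backtrack

board = [['0']]
cell = CellWriter(0, 0, '9')
cell.write(board)
cell.write(board)     # a second write must NOT clobber old_value
cell.retreat(board)
print(board[0][0])    # back to '0', the board's original value
```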
If that is correct, then we can also do away with all the board filling done in the early stages of the algorithm. What happens if we remove all the filling, and just use curr_node to write the value it is holding?
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -36,16 +36,12 @@ class SudokuSolver:
self.solved_board_strings[key] = return_string
def tree_to_solution_string(self, original_board):
index = 0
test_board = copy.deepcopy(original_board)
head_node = Tree_Node(None, index)
curr_node = head_node
while True:
- filler = head_node
- filler.write(test_board)
- while filler.next_node:
- filler = filler.next_node
- filler.write(test_board)
+ curr_node.write(test_board)
if self.box_index.is_value_valid(test_board, curr_node):
if curr_node.index + 1 >= MAX:
break
It works. How long does it take?
real 0m0.776s
user 0m0.741s
sys 0m0.020s
Less than a second per run: we cut the runtime of this algorithm by a factor of ten. This makes sense now that we understand how much extra work it was doing on every pass. Speeding up the time to test means you are less likely to get distracted on each cycle and drop out of the zone.
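The speed-up is easy to model: rewriting the whole node chain on every pass does quadratic work in the chain length, while writing only the current node is constant per pass. A simplified count (my model, not the article’s code) makes the gap obvious for an 81-cell board:

```python
def writes_rewriting_chain(n):
    """Total writes if every pass rewrites all nodes placed so far."""
    return sum(depth for depth in range(1, n + 1))

def writes_current_only(n):
    """Total writes if each pass writes only the current node."""
    return n

print(writes_rewriting_chain(81), writes_current_only(81))  # 3321 vs 81
```

Real runs do far more passes because of backtracking, so the measured factor of ten is smaller than this worst-case ratio, but the shape of the saving is the same.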
This article is a bit longer than I anticipated, because I fooled myself in my initial implementation of the retreat function. In retrospect, I could have written a unit test for retreat, and, if I had made it exhaustive, I might have found the problem. That is likely the case for all of the internal functions I wrote here: unit tests for inner functions tell you that they do what you expect.
Note that the code I produced here does not line up with the final code in the original article series. I seem to have lost that, and there are some helper functions where I do not know exactly what I did. I hope there is enough context for you to figure out what to do if you were to continue this refactoring yourself.
I have posted all of the steps of the refactoring to GitHub.
In the last blog about Klecker, I looked at how Debianists deceived the community about their knowledge of his illness. In fact, while Klecker was alive, it looks like they deceived him too and after he died, they bounced the cheque, metaphorically.
Looking at Debian mailing lists today we frequently see people using gmail.com addresses to obfuscate their identity and who they work for.
Back in the days of Klecker and Bruce Perens, it was more common for people to use an email address associated with their employment or institution. Perens had used his pixar.com email address and Mark Shuttleworth used a thawte.com email address. We can look through the debian-private archives that have been disclosed in recent years and see many other company names too.
Was young Klecker starstruck by these big names, and was this a reason he was willing to work for free while bed-ridden with a terrible illness? Apparently not. I found various pieces of evidence suggesting why Klecker was willing to work on Debian without payment. This email from debian-private gives us a hint about how much weight Klecker put on the names/trademarks of institutions in email addresses:
Subject: Re: State of the project
Date: Thu, 20 Nov 2014 18:31:09 +0100
From: Marc Haber <mh+debian-private@zugschlus.de>
Organization: private site, see http://www.zugschlus.de/ for details
To: debian-private@lists.debian.org

On Tue, 18 Nov 2014 23:43:06 +1000, Anthony Towns <aj@erisian.com.au> wrote:
>"Debian always was known for its communication "style". There were
>even shirts sold
><http://www.infodrom.org/Debian/events/LinuxTag2002/t-shirts.html> in
>memory of Espy Klecker with a quote he is known for: Morons. I'm surrounded
>by morons."
>
>That's totally my favourite Debian shirt.

Mine as well. And it would be soooo un-CoC-compliant!

Greetings
Marc
--
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber | " Questions are the | Mailadresse im Header
Mannheim, Germany | Beginning of Wisdom " | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834
--
Please respect the privacy of this mailing list. Some posts may be declassified 3 years after posting as per http://www.debian.org/vote/2005/vote_002
Here is the shirt:
If Klecker felt these people were morons, why did he hang around?
The answer is on Klecker's web site which has been preserved for us like a time capsule at the Wayback Machine.
It is on the bottom of every page too.
The web site is at www.espy.org.
At the bottom of the page we can see a copy of the logo for the Free Speech Online Blue Ribbon Campaign from the EFF.
It goes much further than Klecker. After Klecker died, they named a server after him, klecker.debian.org and people were keen to place their own personal web sites and opinions there. Here is an example of a message that appeared after the September 11 attacks, which initiated many other private discussions too.
Subject: On WTC events sympathize or comdemnation expression
Date: Wed, 12 Sep 2001 19:52:13 -0300
From: Pablo Lorenzzoni <spectra@debian.org>
Reply-To: spectra@debian.org
Organization: Projeto Debian
To: debian-private@lists.debian.org

Hello ALL!

I were watching this nonsense flame-war about a possible Debian manifestation regarding WTC events. Let me state my position:

1 - I am not against any way of life... let it be islamic, catholic, agnostic, american, whatever...

2 - I am against every violation of Human Rights. It doesn't matter which circunstances surround it.

3 - I believe Debian Project is not a political organization per se. But it is very much clear (at least to me), that been the *only* (AFAIK) open-source distribution of a universal OS, Debian project have political strength. This also means we have a political responsability, just like the ones who vote for president but weren't ever candidate, or political-faction-affiliate.

4 - I'd like to express my sympathy to the victims of every Human Rights violation... but, of course, it is out of my reach. However, I **do** want to express it every chance I've got. This is one good chance.

5 - I believe that there's no distinction between technical and other fields. After all, the humans made everything happen. In a moment like the present one, these "walls" of "we are a technical community, so we have nothing to do with it" just don't apply at all. First of all we are humans.... then geeks.

6 - The first thing I've done as soon as I heard what happened was to try to find out if any of us were among the victims. AFAIK we are all safe. The reason I've mentioned it is that if one of us were hurt, probably this discussion would never start and our main webpage would be all black for the next 30 days.... The fact we are having this discussion scaries me very much. Are we loosing our humanity?

Well... once this said, I think a polite, well-written expression of sympathy in our main project webpage would be appreciatted by everybody. Not a comdemnation. Not pointing fingers. Just a brief statement that we think human lifes are too important to be ended the way those hundreds ended.

Maybe there're people here that disagree with me... maybe this never reach our main webpage... so, in my webspace under klecker, I've already pointed to Orange Ribbon Campaign Against Terrorism (same way I've pointed to EFF's Blue Ribbon one).... I suggest that everybody that agrees with me do the same with the webpages they host (not just under klecker).... the URL is http://www.comnet.com.br/or/

Now, please: observe that I am behind a thick wall of amianthus and all flames will go straight and silently to /dev/null ;-)

Feel free to quote or Cc this message.

[]s

Pablo
--
Pablo Lorenzzoni (Spectra) <spectra@debian.org>
GnuPG PubKey at search.keyserver.net (Key ID: 268A084D)
Webpage: http://people.debian.org/~spectra/
What we can see from this email is that people did not agree with each other on certain topics but they could still post blogs about those topics. Many people supported the Blue Ribbon Campaign. The Debian Social Contract, clause 3, states "We will not hide problems" and many people interpret that as a free speech philosophy.
It looks like Joel Espy Klecker was one of many people who gave their time and effort to co-authorship of Debian based on a philosophical belief that they were contributing to a free future for humanity.
Given that Klecker knew he didn't have long to live (he even mentioned his imminent death before dying), the time that he contributed had a greater value than the time other people of similar skill level contribute.
When the Code of Conduct gaslighting was imposed upon Debian, the people who pushed for that CoC were bouncing the cheque that was due to Debian's founders and earliest co-authors.
Only 25% of Debianists actually consented to the CoC. Those who voted NO and those who did not vote at all did not give consent. Klecker, being deceased, was unable to consent. His copyright interest in Debian would have passed to his family and technically, they would have the right to consent in his place up to the point where his copyright expires in the year 2070. They were never consulted or asked if they consent to this retrospective change to the agreement between authors.
We can see that Klecker's beliefs were betrayed again when Debian listmasters began censoring people for using phrases like "wayward communications".
After this outbreak of fascism on debian.org mailing lists, people moved the metaphors and serious discussions to other web sites. Financial reports from Software in the Public Interest, Inc. show over $120,000 spent on legal fees, which appears to coincide with censorship of domain names.
The censorship decision, with overtones of Nazism, was another insult to Klecker's legacy. Oddly enough, the censorship decision was signed on World Press Freedom Day.
This is why it is so important for me to document the story of Joel Espy Klecker & Debian. As we are all co-authors, we are all in a relationship with each other under copyright law until 70 years after the last one of us dies. We can't ignore that or let somebody's girlfriend come along and snuff out our moral rights for the sake of pretending to be a family all the time.
During the Enron era, Enron employees were encouraged to invest their pension funds into the sharemarket and purchased the shares of their own employer, Enron.
From the New York Times:
The lawsuit says that Enron schemed to pump up the price of the stock artificially and violated its fiduciary duty to its employees by failing to act in their best interests.
Developers pooling our copyright interests into the co-authorship of Debian GNU/Linux are, in some ways, like the Enron employees pooling their pension assets into the stock of their employer.
When Klecker was contributing to Debian, he thought he was advancing Free Speech Online and when Enron employees contributed to the 401k pension scheme, they thought they were advancing their future retirement interests. In both cases, the developers and the Enron employees, our futures are being ripped out underneath us when small groups of people change the rules or rig the system.
Red Hat's refusal to release source code for RHEL also feels like a betrayal of the principles that encouraged unpaid volunteers to co-author Fedora and RHEL in the first place. The copyright interests of Fedora joint authorship are very similar to the interests of Debian joint authorship.
Read more about Red Hat unilaterally restricting source code access without consent of the joint authors.
This point is not really clear.
Looking through the leaked debian-private emails gives some hints that some people anticipated fooling future contributors.
Some of these discussions only took place after Joel Espy Klecker was already subscribed to debian-private so he may have been aware this type of thing was going on or could happen in the future.
Please see the chronological history of how the Debian harassment and abuse culture evolved.
Every few days somebody asks me what was the wayward word or comment that snowballed into Debian's $120,000 legal bills.
We know that in the case of Dr Norbert Preining, he was punished for using the word "it" as a pronoun for a person. Dr Preining's native language is not English and he doesn't live in a country where English has a significant role.
Back in the day, the German administration we came to know as the Nazis was obsessed with both censorship and the micro-managing of language. Even in choosing a word for journalists (Schriftleiter), they were very conscious of the implications of the word they chose.
When we talk about the Nazis in English, sometimes we use the original German word and sometimes we use an English word. For example, the Germans used the phrase Endlösung der Judenfrage, which in English we translate as Final Solution to the Jewish question. There was no "question" (Frage) as such; the phrase simply obfuscates the reference to genocide.
Alexander Wirt (formorer), an employee of NetApp, is one of the Debian mailing list censors. His role could be thought of like those journalists and newspaper editors who agreed to become trained and registered as good schriftleiter.
The word wayward is used in various contexts. For example, in an article about the racist Utopia, they tell us who would be exterminated and it wasn't just the Jews and gypsies:
These included, on the one hand, members of their own 'Aryan race' who they considered weak or wayward (such as the 'congenitally sick', the 'asocial', and homosexuals), and on the other those who were defined as belonging to 'foreign races'.
The word wayward is a very general adjective that can be used in many contexts. For example, it has also been used to describe people who are ethnically Jewish but don't identify as such:
Wayward Jews, God-fearing Gentiles, or Curious Pagans? Jewish Normativity and the Sambathions
... At stake was whether these people were Jews and the ways in which diaspora Jews and their host communities influenced one another ...
Back in the day, it looks like being wayward, whether Jewish or LGBT, would attract undue attention from the state.
Now, in some groups like Debian, it appears the LGBT agitators have taken things to the opposite extreme. Even referring to a wayward horse that I saw escaping last week would get me in trouble, just as this reference to wayward communication caused a knee-jerk fascist reaction from Debian censorship.
Is there some secret list of words that we are not allowed to use any more? When I heard about the defamation of Sony Piers by GNOME fascism and their refusal to tell us why they attacked him, I wondered if it was something trivial like this, did Sony use a word like "it" or "wayward" without permission?
When a family, workplace or community works like this, where people are attacked for things they had no way to anticipate, we use the metaphor that you feel like you are walking on eggshells. Metaphors have been banned too.
Subject: Re: Your attitude on debian mailinglists
Date: Sat, 29 Dec 2018 14:59:04 +0100
From: Alexander Wirt <formorer@formorer.de>
To: Daniel Pocock <daniel@pocock.pro>
CC: listmaster@lists.debian.org

[ ... snip various iterations of threats and blackmail ... ]

> Hi Alex,
>
> Please tell me which email and which insults you are referring to

<5c987a44-b6c6-ce21-020c-9402940f2fde@pocock.pro>

That is exactly that type of mail I was talking about. Starting with the subject and continueing with the body. I don't want to get too much into details, but phrases like "sustained this state of hostility" or "wayward" are not acceptable. Especially since I asked you to cool down and step back a bit.

Alex
Alexander wants to create a fake community where everybody pretends to be happy all the time, even when we are targeted with insults, threats, plagiarism and other offences by the people who think they are holier-than-thou.
In my last article in the Long Refactoring series, I elided the process I went through to solve the bug. While preparing to turn the articles into a presentation, I went through the steps myself again, and came across the bug. When I was writing the articles, I was pressed for time, and didn’t go through the process of solving it step by step, which in turn means there is a gap between the pre- and post-states of the code: you can’t get there from here.
Let me take it from where I mentioned that I found a bug, and added the unit test that shows it.
Let’s start by running the unit test. Running it gets me the following error message:
________ test_advance ________
def test_advance():
test_board = tree_sudoku.build_board(puzzle0)
node = tree_sudoku.Tree_Node(None, 0)
node.write(test_board)
assert (test_board[0][0] == '9')
node = node.advance(test_board)
node = node.advance(test_board)
node.write(test_board)
assert (test_board[0][3] == '0')
node = node.advance(test_board)
node.write(test_board)
assert (test_board[0][3] == '9')
back_node = node.retreat()
> assert (test_board[0][3] == '0')
E AssertionError: assert '9' == '0'
E
E - 0
E + 9
From a quick look, the retreat function is either going forward or creating a new node. Let’s use the debugger to get more information.
diff --git a/test_tree_sudoku.py b/test_tree_sudoku.py
index 1dfcdec..25a5ab9 100755
--- a/test_tree_sudoku.py
+++ b/test_tree_sudoku.py
@@ -59,6 +59,7 @@ def test_advance():
node = node.advance(test_board)
node.write(test_board)
assert (test_board[0][3] == '9')
+ import pdb; pdb.set_trace()
back_node = node.retreat()
assert (test_board[0][3] == '0')
assert (node.value == "9")
Since the other unit test runs the code and calls retreat far too many times, we want to focus on just the test_advance method. To run this code, I activate the virtual environment and call the pytest executable directly. Note that I have updated my tox.ini to use the more modern Python 3.12.
pytest test_tree_sudoku.py::test_advance
What becomes clear is that the retreat function does not write to the board. The “9” is not the value returned from the node; the previous node has the value “3”. It looks like retreat was supposed to erase the board value, returning the board to the state it was in before the write. However, if that were the case, we should have written a specific unit test for it. Looking at the original code, that was not the case, and the functional test still passes.
Looking at my comments in the original article, I think I decided to store the old value in the node when writing a new value. Then, when retreating, I restored the old value prior to returning the previous node. Let’s see if that change fixes the unit test.
diff --git a/tree_sudoku.py b/tree_sudoku.py
index 4c20d34..042cd69 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -196,6 +196,8 @@ class Tree_Node:
self.next_node = None
self.value = '9'
self.index = index
+ self.old_value = '0'
+ self.board = None
def advance(self, test_board):
new_node = Tree_Node(self, self.index + 1)
@@ -204,6 +206,9 @@ class Tree_Node:
return new_node
def retreat(self):
+ curr_row = int(self.board_spot[0])
+ curr_col = int(self.board_spot[1])
+ self.board[curr_row][curr_col] = self.old_value
node = self.last_node
node.next_node = None
return node
@@ -215,8 +220,10 @@ class Tree_Node:
return self.value
def write(self, board):
+ self.board = board
curr_row = int(self.board_spot[0])
curr_col = int(self.board_spot[1])
+ self.old_value = board[curr_row][curr_col]
board[curr_row][curr_col] = self.value
def check_solved(self, board):
Yes, this works. Note the ugliness of capturing the board in the write function. That is a sign that there is a dependency missing: the board should be passed in to the constructor of the Node.
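A minimal sketch of that suggestion (hypothetical names, not the code from the article): if the board is injected through the constructor, write() and retreat() no longer need to capture it as a side effect.

```python
# Sketch only: a Node that receives the board up front, so write() does not
# have to stash it in a member as a side effect. Names are illustrative.

class Node:
    def __init__(self, board, row, col, last_node=None):
        self.board = board        # dependency injected in the constructor
        self.row = row
        self.col = col
        self.last_node = last_node
        self.value = '9'
        self.old_value = '0'

    def write(self):
        # remember what was there, so retreat() can undo this write
        self.old_value = self.board[self.row][self.col]
        self.board[self.row][self.col] = self.value

    def retreat(self):
        # restore the cell we overwrote, then return the previous node
        self.board[self.row][self.col] = self.old_value
        return self.last_node


board = [['0'] * 9 for _ in range(9)]
first = Node(board, 0, 0)
first.write()
second = Node(board, 0, 1, last_node=first)
second.write()
assert board[0][1] == '9'
previous = second.retreat()
assert board[0][1] == '0'     # the write was undone
assert previous is first
```

The behavior mirrors the old_value fix above; the only change is where the board dependency enters.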
This seems to be the point at which I decided I no longer wanted to start from a blank board every time. I would argue that this steps beyond refactoring, as it actively changes the logic of the original program. How far can we go before we make that functional change?
One thing that seems clear is that the use of an array (really a string) for row/column is making the code much more cluttered. Let’s tackle that now.
Start by making row and col member variables of the node, initialized in the constructor.
commit a4d4fa4856275ba9b5be903129b4de712ae1297b
Author: Adam Young <adam@younglogic.com>
Date: Tue Nov 12 13:47:39 2024 -0500
intro row and col
diff --git a/tree_sudoku.py b/tree_sudoku.py
index 4c20d34..ca6b936 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -192,6 +192,8 @@ class Tree_Node:
def __init__(self, last_node, index):
self.possible_values = ['1', '2', '3', '4', '5', '6', '7', '8']
self.board_spot = board_index.table[index]
+ self.row = int(self.board_spot[0])
+ self.col = int(self.board_spot[1])
self.last_node = last_node
self.next_node = None
self.value = '9'
Now replace the accesses to the array with these members. Include the test case.
diff --git a/test_tree_sudoku.py b/test_tree_sudoku.py
index 2710ca4..462b75b 100755
--- a/test_tree_sudoku.py
+++ b/test_tree_sudoku.py
@@ -64,4 +64,5 @@ def test_advance():
assert (node.value == "9")
back_node.write(test_board)
assert (test_board[0][2] == '3')
- assert (back_node.board_spot == '02')
+ assert (back_node.row == 0)
+ assert (back_node.col == 2)
diff --git a/tree_sudoku.py b/tree_sudoku.py
index ca6b936..3be293a 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -46,8 +46,8 @@ class SudokuSolver:
while curr_board_filling_node.next_node:
curr_board_filling_node = curr_board_filling_node.next_node
curr_board_filling_node.write(test_board)
- curr_row = int(curr_board_filling_node.board_spot[0])
- curr_col = int(curr_board_filling_node.board_spot[1])
+ curr_row = curr_board_filling_node.row
+ curr_col = curr_board_filling_node.col
test_board[curr_row][curr_col] = curr_board_filling_node.value
if self.box_index.is_value_valid(test_board, curr_node):
if curr_node.index + 1 >= MAX:
@@ -106,9 +106,7 @@ class BoxIndex:
self.table = self.fill_box_index_table()
def is_value_valid(self, board, node):
- row = int(node.board_spot[0])
- col = int(node.board_spot[1])
- return self.value_valid(board, row, col)
+ return self.value_valid(board, node.row, node.col)
def value_valid(self, board, row_index, column_index):
row = ['1', '2', '3', '4', '5', '6', '7', '8', '9']
@@ -191,9 +189,9 @@ board_index = BoardIndexTable()
class Tree_Node:
def __init__(self, last_node, index):
self.possible_values = ['1', '2', '3', '4', '5', '6', '7', '8']
- self.board_spot = board_index.table[index]
- self.row = int(self.board_spot[0])
- self.col = int(self.board_spot[1])
+ board_spot = board_index.table[index]
+ self.row = int(board_spot[0])
+ self.col = int(board_spot[1])
self.last_node = last_node
self.next_node = None
self.value = '9'
@@ -217,15 +215,11 @@ class Tree_Node:
return self.value
def write(self, board):
- curr_row = int(self.board_spot[0])
- curr_col = int(self.board_spot[1])
- board[curr_row][curr_col] = self.value
+ board[self.row][self.col] = self.value
def check_solved(self, board):
- row = int(self.board_spot[0])
- col = int(self.board_spot[1])
- if board[row][col] != '0':
- self.value = board[row][col]
+ if board[self.row][self.col] != '0':
+ self.value = board[self.row][self.col]
self.possible_values = []
With the code clearer, I can see I missed an opportunity to call write.
index d9df07f..de5c7fe 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -46,7 +46,7 @@ class SudokuSolver:
while filler.next_node:
filler = filler.next_node
filler.write(test_board)
- test_board[filler.row][filler.col] = filler.value
+ filler.write(test_board)
if self.box_index.is_value_valid(test_board, curr_node):
if curr_node.index + 1 >= MAX:
break
Now the main body of the loop looks like this:
def tree_to_solution_string(self, original_board):
index = 0
head_node = Tree_Node(None, index)
curr_node = head_node
while True:
filler = head_node
test_board = copy.deepcopy(original_board)
filler.write(test_board)
while filler.next_node:
filler = filler.next_node
filler.write(test_board)
filler.write(test_board)
if self.box_index.is_value_valid(test_board, curr_node):
if curr_node.index + 1 >= MAX:
break
curr_node = curr_node.advance(test_board)
curr_node.check_solved(test_board)
else:
if len(curr_node.possible_values) == 0:
# backtrack
while len(curr_node.possible_values) == 0:
curr_node = curr_node.retreat()
curr_node.next()
return self.build_solution_string(head_node)
It is still obtuse, but a lot easier to work with. This is the point where I would consider the refactoring done, and would move on to actually improving the implementation.
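To see the shape of that loop in isolation, here is a toy backtracking example in the same advance/retreat style (a simplified stand-in, not the article’s Tree_Node; it fills slots with pairwise-distinct values and assumes a solution exists):

```python
# Toy illustration of the advance/retreat loop shape; this is a simplified
# stand-in for the article's Tree_Node, not the real solver.

class Node:
    def __init__(self, last_node, index):
        self.possible_values = ['1', '2', '3']
        self.value = self.possible_values.pop()
        self.last_node = last_node
        self.index = index

    def advance(self):
        return Node(self, self.index + 1)

    def retreat(self):
        return self.last_node

    def next(self):
        self.value = self.possible_values.pop()


def solve(size):
    """Fill `size` slots with pairwise-distinct values, backtracking on
    conflicts. Assumes a solution exists (size <= 3 here)."""
    node = Node(None, 0)
    while True:
        # Walk the chain back to the root, the way the article's filler loop
        # replays every node's write onto a fresh board.
        values, current = [], node
        while current:
            values.append(current.value)
            current = current.last_node
        if len(set(values)) == len(values):   # constraint satisfied
            if node.index + 1 >= size:
                return list(reversed(values))
            node = node.advance()
        else:
            # backtrack past exhausted nodes, then try the next candidate
            while len(node.possible_values) == 0:
                node = node.retreat()
            node.next()


print(solve(3))   # prints ['3', '2', '1']
```

The structure is the same as tree_to_solution_string: test the whole chain, advance on success, retreat and try the next candidate on failure.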
Wikipedia has a long article on Aktion T4, the Nazi-era euthanasia program. It is not necessary to read the whole thing, simply picking out a couple of lines gives us the gist of it:
From August 1939, the Interior Ministry registered children with disabilities, requiring doctors and midwives to report all cases of newborns with severe disabilities; the 'guardian' consent element soon disappeared.
...
The reports were assessed by a panel of medical experts, of whom three were required to give their approval before a child could be killed.
...
When the Second World War began in September 1939, less rigorous standards of assessment and a quicker approval process were adopted. Older children and adolescents were included and the conditions covered came to include ...
In effect, it became a slippery slope. The euthanasia program wasn't even well-intentioned to begin with, but once the legal framework existed, enthusiasts were constantly looking for ways to evade checks and balances.
Now we see the same slippery slope phenomenon with the WIPO UDRP.
In the beginning, it was an attempt to prevent extreme and obvious acts of cybersquatters hijacking trademarks.
Have a look at the most recent Debian UDRP defamation:
One of the disputed domain names, <debian.video>, shows videos of the Respondent at a DEBIAN development conference in 2013, as well as audio recordings from software development conferences in 2012.
In fact, Debian funds paid for volunteers to travel to those conferences and give the presentations. There is nothing in "bad faith" about publishing the videos of those events.
Most websites at the disputed domain names display the Complainant’s trademarked “swirl” logo in the upper left corner
In fact, the Debian logo page tells us that it is an open use logo: an unrestricted license to use the logo. Therefore, what we see in practice is that WIPO UDRP lawyers such as W. Scott Blackmer are well and truly in the slippery slope phase. Here is the open logo license:
The Debian Open Use Logo(s) are Copyright (c) 1999 Software in the Public Interest, Inc., and are released under the terms of the GNU Lesser General Public License, version 3 or any later version, or, at your option, of the Creative Commons Attribution-ShareAlike 3.0 Unported License.
W. Scott Blackmer was clearly informed that it was an open use logo but he simply ignored the evidence in the response.
The Aktion T4 report notes:
More pressure was placed on parents to agree to their children being sent away. Many parents suspected what was happening and refused consent, especially when it became apparent that institutions for children with disabilities were being systematically cleared of their charges. The parents were warned that they could lose custody of all their children and if that did not suffice, the parents could be threatened with call-up for 'labour duty'
Clearly, the nasty accusations of "bad faith" are being used to scare other joint authors of large copyrighted works that they can't use the name of their work or they will be publicly shamed on the WIPO web site.
In some cases families could tell that the causes of death in certificates were false, e.g. when a patient was claimed to have died of appendicitis, even though his appendix had been removed some years earlier.
These comments about appendicitis sound a lot like the open use logo case. If the appendix had been removed there can not be appendicitis. If the open logo can be used under a license then there can not be bad faith.
It appears that the Nazi euthanasia doctors and some WIPO UDRP panels are simply pushing headstrong over the top of the facts and working to targets. The Nazis had targets for killing, and the UDRP panels appear headstrong, obsessed with censoring.
Many domain name owners are only paying a small fee of $10 to $20 per year for their domain name. The cost of paying lawyers to respond to every frivolous UDRP demand is disproportionate to the cost of the domain name. Furthermore, the cost of going to court to appeal a blatantly wrong defamation is even more astronomically out of proportion to the cost of the domain name.
Therefore, when dealing with volunteers, the WIPO UDRP lawyers seem to know they can get away with anything.
The report on child euthanasia notes that children were still being euthanised even after allied troops had taken over:
The last child to be killed under Aktion T4 was Richard Jenne on 29 May 1945, in the children's ward of the Kaufbeuren-Irsee state hospital in Bavaria, Germany, more than three weeks after US Army troops had occupied the town.
In other words, the medical panels and the legal panels that make these decisions seem to be operating out of habit. Even when the legal environment changed and the territory was under western law, the medical and legal processes in the clinics continued to kill out of habit alone.
When some institutions refused to co-operate, teams of T4 doctors (or Nazi medical students) visited and compiled the lists, sometimes in a haphazard and ideologically motivated way.
In the W. Scott Blackmer defamation, section 6 concludes: "The Panel finds that the Complainant has established the third element of the Policy with respect to all fourteen of the disputed domain names."
In other words, W. Scott Blackmer hasn't really looked for the merits of the content on a site-by-site basis, he has decided to extinguish them all with a single brush stroke. In the following paragraph, like the Nazi doctors, he compiles a big list: "For the foregoing reasons, in accordance with paragraphs 4(i) of the Policy and 15 of the Rules, the Panel orders that the disputed domain names <debian.chat>, <debiancommunity.org>, <debian.day>, <debian.family>, <debian.finance>, <debian.giving>, <debiangnulinux.org>, <debian.guide>, <debian.news>, <debian.plus>, <debianproject.community>, <debianproject.org>, <debian.team>, and <debian.video> be transferred to the Complainant."
Ironically, one of the domains that the WIPO UDRP panel was so eager to censor was the former debian.day site with the story of the Debian Day Volunteer Suicide. This is significant because the death appears to be part of a wider suicide cluster, giving weight to the argument that discussion of the suicides is in the public interest. A single, one-off case of suicide may be a private matter but a suicide timed around the project anniversary and forming part of a cluster suggests there is good cause for public discussion.
Radio Lac is the third most popular radio station in the Lake Geneva region covering Switzerland and France.
The reception area includes all the lakeside cities of Geneva, Nyon, Morges, Lausanne, Vevey and Montreux as well as the cross-border regions.
The transmitter for the region of Geneva is actually situated on Mont Salève, at the cable car station in French territory. The inhabitants of the French towns of Annemasse, Thonon, Evian, Saint-Julien-en-Genevois, Saint-Genis-Pouilly, Ferney-Voltaire, Gex and Divonne are in the reception area.
The jurists of Mathieu Parreaux published several documents about their legal insurance services for cross-border commuters and residents of France.
In our last blog we discovered that Monsieur Parreaux didn't pass the bar exam either in Switzerland or in France.
Each week, Mathieu Parreaux and his colleague Nati Gomez responded to legal questions on the radio program of Benjamin Smadja, Radio Lac (Media One Group).
The insurance company gained 20,000 clients. How many clients found Parreaux, Thiébaud & Partners thanks to free publicity on Radio Lac? How many clients killed themselves?
A program from 5 November 2018 where they discuss customs charges for cross-border commuters:
He has provided many services on a voluntary basis since he was 14 years old. Why do the Swiss jurists insult the families of unpaid volunteers? Is that racism?
A news report was published by Clémence Lamirand at the bureau AGEFI.
She wrote (original in French) The cabinet is young, like the majority of employees who work there. The founder, Mathieu Parreaux, has not yet passed the bar exam. For the moment, the business is his priority, the final exams will come later.
The reporter, Madame Lamirand doesn't pose difficult questions. Journalists in Switzerland fear criminal prosecution for writing any form of inconvenient truth.
A legal practice under construction
The firm Parreaux, Thiébaud & Partners, based in Geneva, offers legal protection on a subscription basis. A portrait of the very young company.
Clémence Lamirand, 21 May 2018, 8:49 pm
The firm is young, like most of the employees who work there. Its founder, Mathieu Parreaux, has not yet passed his bar exam. For the moment he is giving priority to his business; the final exams will come later. Founded in 2017 and based in Geneva, the legal practice seems to be evolving rapidly. Recently, ten jurists were hired. At the beginning of the year, the firm merged with the Lausanne services company Thiébaud to create the legal practice Parreaux, Thiébaud & Partners. "That company was specialized in insurance," explains Mathieu Parreaux, "while we had our own expertise in legal protection. Our recent merger now allows us to be present in both fields, in both cantons." The legal practice today employs around twenty people. Among the jurists, some hold the bar certificate (six), others do not. Parreaux, Thiébaud & Partners also works with external, independent lawyers, who can take over when the jurists cannot continue the defence of their clients, for example in a trial before the criminal court. "The status of jurist has many advantages, but it does not open every door. We have therefore formed partnerships with some fifteen professionals present in all the French-speaking cantons," Mathieu Parreaux explains. "In the future, we would like to cover all of Switzerland. For that, we must find the right lawyers and the right jurists and persuade them to join our structure. However, we do not want to grow too fast, and we want to progress intelligently." A firm that wants to be different from the others. Today, Parreaux, Thiébaud & Partners, which also works with notaries, wants to offer broad and affordable legal services.
"That is our whole philosophy," enthuses the young entrepreneur. "Our firm is a unique structure that wants to offer its clients varied services and high-performing customer service, all at a suitable price." Its founder specializes in contract law, tax law and company law. He has surrounded himself with specialists in different fields. "With varied skills, our advisers can respond quickly and effectively to our clients," explains Mathieu Parreaux. "We currently cover 44 areas of law." The firm thus offers legal advice and conciliation. The specialists draft all types of legal documents for businesses, such as employment contracts and general terms and conditions. A private legal helpline is provided. "We do everything to anticipate and be proactive," the founder summarizes. "We try to settle disputes upstream." Legal protection by subscription: for a few weeks now, Parreaux, Thiébaud & Partners has been offering legal protection, for individuals as well as businesses, in the form of a subscription. With a commitment of 3, 5 or 8 years, a business can subscribe to Real-Protect. "We give oral advice but also written advice," Mathieu Parreaux points out, "which binds us. Moreover, our advice is unlimited. We truly want to be there for our clients. Always at a reasonable cost." "The price is what attracted me first," admits Jessy Kadimadio, a client who has just launched a property management agency and who engaged the firm to draft contracts, "but I was then pleasantly surprised by their availability. I was also won over by the outsider side of this young company."
Josh and Kurt talk to Brian Fox from Sonatype and Donald Fischer from Tidelift about their recent reports, as well as open source. There are really interesting connections between the two reports. The overall theme seems to be that open source is huge, everywhere, and needs help. But all is not lost! There are some great ideas on what the future needs to look like.
Earlier this year, I traveled to Marion in Kansas, United States, for the anniversary of the raid on the Marion County Record.
We watched the documentary about the raid, Unwarranted: The Senseless Death of Journalist Joan Meyer which was produced by Jaime Green and Travis Heying. The moment where Joan Meyer called the police nazis jumped out at me. I made a mental note to include it here in the nazi.compare web site but I wanted to review it carefully and give it the justice it deserves.
I opened up the video on the anniversary of the Kristallnacht and the evidence jumped out at me. I don't think anybody has noticed it before but Joan was right on the money about nazi stuff.
The Kristallnacht occurred on the night of 9 to 10 November 1938. It was a giant pogrom by Nazi party members. The police did not participate but they didn't try to stop it either.
However, the Jewish press were not attacked during the Kristallnacht.
In fact, Hitler's Nazis attacked the Jewish press on the previous night, 8 November.
Looking at the body cam footage where Joan Meyer accuses Gideon Cody and his police colleagues of "nazi stuff", we can see a time and date stamp at the bottom right corner. The date of the raid is written in the United States date format, Month/Day/Year, 08/11/2023 which was 11 August 2023. When we see the date 08/11/2023 in Europe, for example, in Germany, we would interpret that as Day/Month/Year, in other words, that is how Europeans and Germans write 8 November 2023, the day that Nazis raided the Jewish press in advance of the Kristallnacht.
Here is the section of the video where Joan Meyer makes the Nazi comment, look at the date stamp at the bottom right corner, it is 08/11/2023 as in 8 November for Europe:
While thinking about the way the Nazis gave these censorship orders the night before the Kristallnacht, I couldn't help thinking about the orders from Matthias Kirschner and Heiki Lõhmus at FSFE when they wanted to censor communications from the elected Fellowship representatives.
Berlin police have declined to help FSFE shut down web sites that are making accurate FSFE / Nazi comparisons.
This policy determines conditions and rights of the FSFE bodies (staffers, GA members, local and topical teams) or members of the FSFE community to mass mail registered FSFE community members who have opted in to receive information about FSFE's activities.

## Definitions

For the purpose of this document:

* all registered FSFE community members who have opted in to receive information about FSFE's activities are referred to as "recipients".
* mass emails that we send out to recipients are referred to as "mailings".
* mailings that are only sent to recipients who live in a certain area (a municipality or a language zone or similar) or that are part of a topical team are referred to as "select mailings", and mails to all recipients of the FSFE are referred to as "overall mailings".

## Considerations

* Mailings should be sent to better integrate our community in important aspects of our work, which can be for example - but is not limited to - information about critical happenings that we need their input or activity for, milestones we have achieved and thank-yous, engagement in the inner FSFE processes, and fundraising.
* Mailings should be properly balanced between delivering information and getting to the point.
* Mailings should contain material/information that can be considered worthy of our supporters' interest.
* Mailings are not to spread general news - that is what we have the newsletter and our news items for.
* You can find help on editing mailings by reading through our press release guidelines: https://wiki.fsfe.org/Internal/PressReleaseGuide
* All community members are invited to use select mailings for evaluations, to inform about certain aspects of FSFE's work, to organise events and activities, or for other extraordinary purposes.

## Policies

* Mailings must not be against FSFE's interests and must conform to our Code of Conduct.
* All overall mailings have to involve the PR team behind pr@lists.fsfe.org for a final edit. In urgent cases, review by the PR team may be skipped with approval of the responsible authority.
* All select mailings need approval by the relevant country or topical team coordinator or - in absence - by the Community Coordinator or the Executive Council.
* All overall mailings need the approval of the Executive Council.
* All mailings need to be reviewed by someone with the authority to approve the mailing. Nobody may review or approve a mailing they have prepared on their own.
After nearly eight years of intensive use of Jeedom, a month ago I decided to test and then migrate to Home Assistant. Thanks to my intensive use of MQTT in Jeedom, I was able to test Home Assistant in depth without risking breaking everything. Indeed, for three weeks, the two solutions thus [...]
The article "Migration de Jeedom vers Home Assistant" appeared first on Guillaume Kulakowski's blog.
According to the official history of Debian, which was moved here after my last blog on Klecker (see snapshot / archive copy), no one knew that Joel "Espy" Klecker was a terminally ill teenager working without pay from his sickbed. Here is the same text that I copied in my first step into the Klecker case.
On July 11th, 2000, Joel Klecker, who was also known as Espy, passed away at 21 years of age. No one who saw 'Espy' in #mklinux, the Debian lists or channels knew that behind this nickname was a young man suffering from a form of Duchenne muscular dystrophy. Most people only knew him as 'the Debian glibc and powerpc guy' and had no idea of the hardships Joel fought. Though physically impaired, he shared his great mind with others.
Joel Klecker (also known as Espy) will be missed.
In fact, they did know. The Debian history page goes on the list of Debian's lies.
Subject: RE: [jwk@espy.org: Joel Klecker] Date: Fri, 14 Jul 2000 20:40:00 -0600 (MDT) From: Jason Gunthorpe <jgg@ualberta.ca> To: debian-private@lists.debian.org CC: Debian Private List On Tue, 11 Jul 2000, Brent Fulgham wrote: > > It's very hard for me to even send this message. This is a > > great loss to us all. > First, I'd like to extend my condolences to Joel's family. It > is still very hard to believe this has happened. Joel was > always just another member of the project -- no one knew (or > at least I did not know) that he was facing such terrible > hardships. Debian is poorer for his loss. Some of us did know, but he never wished to give specifics. I do not think he wanted us to really know. I am greatly upset that I was unable to at [ ... snip .... ]
This case is so bad that I am going to have to write multiple blogs to dissect some of the messages in the threads about the casualty.
An obituary was published in the newspaper:
Joel Edmund Klecker
Aug. 29, 1978 - July 11, 2000
STAYTON - Joel Klecker, 21, died Tuesday of muscular dystrophy.
He was born in Salem and raised in Stayton. He attended Stayton public schools and Stayton High School. He was a Debian software project developer, one of 500 worldwide, worked on Apple computers and was a computer enthusiast.
Survivors include his parents, Dianne and Jeffrey Klecker of Stayton; brother, Ben of Stayton; and grandparents, Roy and Yvonne Welstad of Aumsville.
Services will be 2 p.m. Saturday at Calvary Lutheran Church, where he was a member. Interment will be at Lone Oak Grier Cemetery in Sublimity. Arrangements are by Restlawn Funeral Home in Salem.
Contributions: Muscular Dystrophy Association, 4800 Macadam Ave., Portland, OR 97201.
Klecker was born 29 August 1978. These messages hint that his first packages may have been contributed in November or December 1997. Message 1, message 2 and message 3.
At the time, he would have been 19 years old, still a teenager, when he began doing unpaid work for the other Debian cabal members.
Many of the Debianists today obfuscate who they really work for, to try and make it look like Debian is a hobby or a "Family", but the impersonation of family is a fallacy.
Jason Gunthorpe (jgg), who is now with NVIDIA, clearly knew some things about it.
We don't know which people had knowledge of Klecker's situation or which organizations they worked for. During the Debian trademark dispute, a list of organizations using Klecker's work was submitted to the Swiss trademark office. While written in Italian, the names of these companies are clear. They all assert they are using Debian. Did they know there has been unpaid youth labor, terminally ill teenagers, bed-ridden, writing and testing the packages for them?
The names of the companies are copied below. Remember, Mark Shuttleworth sold his first business Thawte for $700 million about eight months before Klecker died.
While Klecker was bed-ridden, here is that jet-ridden picture:
Is it really fair that Klecker, his family and many other volunteers get nothing at all from Debian? Or is that modern slavery? The US State Dept definition of Modern Slavery is extremely broad and includes all kinds of deceptive work practices.
Please see the chronological history of how the Debian harassment and abuse culture evolved.
Come già riportato nell’incipit del presente paragrafo, la nascita del progetto DEBIAN risale al 1997, ed ogni paese ha al proprio interno una comunità attiva di sviluppatori volontari che si occupano di progettare, testare e distribuire programmi basati sul sistema operativo multipiattaforma DEBIAN per i più svariati usi; ogni anno, a partire dal 2004, come visibile qui https://www.debconf.org/ è stata organizzata una conferenza internazionale alla quale partecipano tutti gli sviluppatori volontari che fanno parte delle diverse comunità nazionali attive in ciascun paese del mondo (la lista dei paesi è visibile nel macrogruppo Entries by region ed annovera, inter alia, Svizzera, Francia, Germania, Italia, Regno Unito, Polonia, Austria, Spagna, Norvegia, Belgio, USA ecc, oltre ad estendersi a Sud est asiatico, all’Africa e all’America Latina). Consultando, ad esempio, il documento DebConf13 - Sponsors relativo alla conferenza per l’anno 2013 visibile nella Section Swiss Debian Community, si nota che fra i numerosi sponsor spiccano Google, la arcinota società creatrice del famosissimo ed omonimo motore di ricerca, e HP, il noto produttore di hardware; allo stesso modo, consultando il documento DebConf15 - Sponsors visibile nella Section European Debian Community, fra gli sponsor della conferenza per l’anno 2015, è possibile annoverare nuovamente Google, HP oltre a IBM, il noto produttore di software e hardware, VALVE, il noto distributore di videogiochi online, Fujitsu, il noto produttore di hardware, nonché BMW GROUP, il noto produttore di autoveicoli. 
As for the most recent conference, held in 2022, the document DebConf22 sponsorship brochure visible in the Section European Debian Community lists among the sponsors, besides Google, also Lenovo, the well-known hardware manufacturer, and Infomaniak, the largest website hosting provider in Switzerland (a company that, besides sponsoring numerous editions of the annual conference, also offers streaming and video-on-demand services, hosting more than 200,000 domains, 150,000 websites and 350 radio/TV stations). The same type of information, at the European level, can be found under the entry Entries in section European Debian Community in the Entries by section section. Other sponsors of the various annual conferences include the University of Zurich Department of Informatics, the ETH Zurich Department of Electrical Engineering, PricewaterhouseCoopers (the very well-known auditing firm), Amazon Web Services (the cloud computing and web services platform owned by Amazon, the well-known US e-commerce company), Roche (the well-known pharmaceutical company), Univention Corporate Server (a well-known German producer of open-source software for managing complex IT infrastructures), Hitachi (a well-known hardware manufacturer), the Canton of Neuchâtel, etc., along with a substantial number of other private companies and other bodies; for an overview of the annual conferences held over the last ten years, the related promotional documentation can be viewed by filtering the Categories and searching for the entry Community – DebConf.
Looking further at the macro-group Entries by year, material has also been collected covering the last decade of activity of the DEBIAN project, 2012/2022, enclosing documents from various sources in the specialized and general press, attestations from national and foreign universities and research centers, attestations from users of the DEBIAN software in their own business/commercial activities, etc. The DEBIAN software of SPI is in fact used both by numerous private companies in Switzerland and the European Union and by numerous institutional and research bodies active in the most varied fields. The documents available in Entries by section under the entry Cooperation with private companies in Switzerland show that LIIP www.liip.ch, a well-known Swiss company (with offices in Lausanne, Fribourg, Bern, Basel, Zurich and St. Gallen) providing Internet-related services such as domain registration for websites, server configuration, hosting and website creation services, online advertising campaigns, and social media management, is itself a Debian user and, among other things, also one of the sponsors of the annual volunteer developer conferences. Another document included under the same entry, Debian Training Courses in Switzerland, shows that training courses on the DEBIAN software are held; again under the same entry, the document Microsoft Azure available from new cloud regions in Switzerland for all customers shows that Microsoft Azure, the cloud computing service of Microsoft (the very well-known software maker), offers the DEBIAN software among its selection of available software.
Still under the same entry, to show how the DEBIAN software cuts across and permeates all sectors of the relevant circles, there are documents attesting, for example, that an osteopathic center in Lausanne, https://osteo7-7.ch/, operates servers running the DEBIAN software, that the publishing house Ringier AG of Zurich, active in the newspaper, magazine, television, web and advertising markets, has worked with the DEBIAN software, and that Lenovo (a well-known hardware manufacturer) has likewise taken an interest in the DEBIAN software. The documents available in Entries by section under the entry Cooperation with private companies in the European Union show the renown of the DEBIAN software among various companies located in different European countries, for example Logikon labs http://www.logikonlabs.com/ in Greece, Servicio Técnico, Open Tech S.L https://www.opentech.es/ in Spain, ALTISCENE http://www.altiscene.fr/, Logilab https://www.logilab.fr/ and Bureau d'études et valorisations archéologiques Éveha https://www.eveha.fr/ in France, 99ideas https://99ideas.pl/ and Roan Agencja Interaktywna https://roan24.pl/ in Poland, and Mendix Technology https://www.mendix.com/ in the Netherlands; given the particularly high number of documents, 135 in all, readers are invited to review the large number of small, medium and large companies that base their IT systems on the DEBIAN software.
Consulting the entries Research & Papers, Institutional/Governmental cooperation and Miscellaneous in the Entries by section section, one can find a series of documents consisting of popular and scientific articles, research essays, thesis abstracts, monographs, short guides, etc., concerning the DEBIAN software, produced, inter alia, by ETH Zurich, the University of Edinburgh, the University of Oxford, EPFL Lausanne, the University of Geneva, the University of Rome Tor Vergata, the European Synchrotron Radiation Facility in Grenoble, the WSL Institute for Snow and Avalanche Research SLF in Davos, the Universidad Politécnica de Madrid, the Scuola Specializzata Superiore di Economia - Sezione di Informatica di Gestione of Canton Ticino, the International Telecommunication Union in Geneva, the BBC in the United Kingdom, CERN in Geneva, the University of Glasgow, Durham University, etc. The entries Swiss press coverage and European press coverage in the Entries by section section include a press review, at both the Swiss and European level, of articles about the DEBIAN software by, inter alia, www.netzwoche.ch, Swiss IT Magazine, RTS Info, Corriere della Sera, Linux Magazine, www.heise.de, www.gamestar.de, The Guardian, the BBC, L'Espresso, Il Disinformatico (a blog run by the well-known Ticino journalist Paolo Attivissimo), Linux User, www.computerbase.de, www.derstandard.at, https://blog.programster.org/, www.digi.no, Linux Magazine https://www.linux-magazine.com/, etc.
Checking the entries Attestation & Statements by third parties, Switzerland, in the Entries by section column, one notes that various actors from the relevant commercial circles, made up of consumers, distribution channels and traders, have given explicit attestations of the renown and recognition of the DEBIAN trademark in Switzerland for the goods claimed in class 9 (“Logiciels de système d'exploitation et centres publics de traitement de l'information.”):
- the WSL Institute for Snow and Avalanche Research SLF in Davos;
- the Department of Informatics of the University of Zurich;
- the Internet service provider www.oriented.net of Basel;
- the osteopathic center Osteo 7/7 www.osteo7-7.ch, with locations in Lausanne and Geneva;
- CERN www.home.web.cern.ch in Geneva, through Eng. Javier Serrano, BE-CEM - Electronics Design and Low-level software (EDL) Section Leader at CERN;
- www.infomaniak.com, the largest website hosting provider in Switzerland (the company also offers streaming and video-on-demand services, hosting more than 200,000 domains, 150,000 websites and 350 radio/TV stations), through Infomaniak.com CEO Boris Siegenthaler;
- www.liip.ch, a well-known Swiss company (with offices in Lausanne, Fribourg, Bern, Basel, Zurich and St. Gallen) providing Internet-related services such as domain registration for websites, server configuration, hosting and website creation services, online advertising campaigns and social media management, through LIIP co-founder and partner Gerhard Andrey;
- www.microsoft.com, the very well-known maker of software and cloud services (Windows, Microsoft Azure, etc.), through Sarah Novotny, Director of Open Source Strategy at Microsoft;
- www.microsoft.com, through Eng. KY Srinivasan, Distinguished Engineer at Microsoft;
- www.microsoft.com, through Eng. Joshua Poulson, Program Manager at Microsoft;
- CERN www.home.web.cern.ch in Geneva, through Dr. Axel Nauman, Senior applied physicist and ROOT Project Leader at CERN;
- www.univention.com, one of the leading providers of open-source software for identity management and application integration and distribution in Europe and Switzerland, with thousands of users and partner organizations, through Univention CEO Peter H. Ganten.
Under the next entry down, checking Attestation & Statements by third parties, European Union, in the Entries by section column, one notes that various actors from the relevant commercial circles, made up of consumers, distribution channels and traders, have given explicit attestations of the renown and recognition of the DEBIAN trademark in Europe for the goods claimed in class 9 (“Logiciels de système d'exploitation et centres publics de traitement de l'information.”), for a total of no fewer than 146 records (the first 25 of which are listed below):
- the Rost-Lab Bioinformatics Group of the Technical University of Munich, Germany;
- the Greek company Logikon Labs of Athens;
- the Department of Engineering of the University of Rome Tor Vergata, Italy;
- the Spanish company Servicio Técnico, Open Tech SL of Las Palmas;
- the French company ALTISCENE of Toulouse;
- the Polish entity Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. Warszawy of Warsaw;
- the French company Logilab of Paris;
- the Swedish company www.Bayour.com of Gothenburg;
- the French institution ESRF (European Synchrotron Radiation Facility) of Grenoble;
- the Austrian organization www.mur.at of Graz;
- the Polish company www.Dictionaries24.com of Poznan;
- the French non-profit organization TuxFamily;
- the German organization LINKES FORUM of Oberbergischer Kreis;
- the Polish company www.99ideas.com of Gliwice;
- the Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática) of the Universidad Politécnica de Madrid, Spain;
- the Italian company Reware Soc. Coop of Rome;
- the Polish company Roan Agencja Interaktywna of Gorzów;
- the Slovak company RoDi of Zilina;
- the Dutch company Mendix Technology of Rotterdam;
- the French organization Bureau d'études et valorisations archéologiques Éveha of Limoges;
- the Dutch company AlterWeb;
- the Electronics Research Group of the University of Aberdeen, UK;
- the Dutch company MrHostman of Montfoort;
- the Polish company System rezerwacji online Nakiedy of Gdansk;
along with, as mentioned, the remaining testimonials and attestations from the most varied companies and various public and private bodies based in Switzerland, Italy, Germany, the United Kingdom, France, Poland, Austria, Spain, the Netherlands, Norway, Belgium, the Czech Republic, Sweden, Bulgaria, Greece, Finland, Kosovo, Slovakia, Bosnia, Denmark, Hungary, Lithuania and Romania, which readers are invited to review.
To demonstrate the spread of DEBIAN software users both by name and in sheer numbers, an extract from the DEBIAN project website https://www.debian.org/users/index.it.html is reproduced below; through it one can browse all the attestations voluntarily left on the project site www.debian.org by end users of the DEBIAN software from a variety of backgrounds (each name is an interactive link on https://www.debian.org/users/index.it.html): Istituzioni educative (educational) Commerciali (commercial) Organizzazioni non-profit (non-profit) Enti statali (government) Istituzioni educative (educational) Electronics Research Group, University of Aberdeen, Aberdeen, Scotland Department of Informatics, University of Zurich, Zurich, Switzerland General Students' Committee (AStA), Saarland University, Saarbrücken, Germany Athénée Royal de Gembloux, Gembloux, Belgium Computer Science, Brown University, Providence, RI, USA Sidney Sussex College, University of Cambridge, UK CEIC, Scuola Normale Superiore di Pisa, Italy Mexican Space Weather Service (SCiESMEX), Geophysics Institute campus Morelia (IGUM), National University of Mexico (UNAM), Mexico COC Araraquara, Brazil Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática), Universidad Politécnica de Madrid, Madrid, Spain Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University, Czech Republic Swiss Federal Institute of Technology Zurich, Department of Physics, ETH Zurich, Switzerland Genomics Research Group, CRIBI - Università di Padova, Italy Dipartimento di Geoscienze, Università degli Studi di Padova, Italy Nucleo Lab, Universidad Mayor de San Andrés, Bolivia Department of Physics, Harvard University, USA Infowebhosting, Perugia, Italy Medical Information System Laboratory, Doshisha University, Kyoto, Japan Bioinformatics & Theo.
Biology Group, Dept. of Biology, Technical University Darmstadt, Germany Center for Climate Risk and Opportunity Management in Southeast Asia and Pacific, Indonesia Laboratorio de Comunicaciones Digitales, Universidad Nac. de Cordoba, Argentina Laboratorio di Calcolo e Multimedia, Università degli Studi di Milano, Italy Department of Engineering, University of Rome Tor Vergata, Italy Lycée Molière, Belgium Max Planck Institute for Informatics, Saarbrücken, Germany Computer Department, Model Engineering College, Cochin, India Medicina - Facultad de Ciencias Médicas, Universidad Nacional del Comahue, Cipolletti, Río Negro, Argentina Artificial Intelligence Lab, Massachusetts Institute of Technology, USA Montana Tech, Butte, Montana, USA Mittelschule, Montessoriverein Chemnitz, Chemnitz, Germany Laboratory GQE-Le Moulon / CNRS / INRAE, Gif-sur-Yvette, France Department of Measurement and Control Technology MRT (Department of Mechanical Engineering), University of Kassel, Germany Department of Computer Science & Engineering, Muthayammal Engineering College, Rasipuram, Tamilnadu, India Spanish Bioinformatics Institute, Spanish National Cancer Research Centre, Madrid, Spain NI, Núcleo de Informática, Brazil Software & Networking Lab, National University of Oil and Gas, Ivano-Frankivsk, Ukraine Parallel Processing Group, Department of Computer Science and Engineering, University of Ioannina, Ioannina, Greece Departamento de Matemática -- Universidade Federal do Paraná, Brazil Departamento de Informática -- Universidade Federal do Paraná, Brazil Protein Design Group, National Center for Biotechnology, Spain Rost Lab/Bioinformatics Group, Technical University of Munich, Germany Department of Computer Science, University of Salzburg, Salzburg, Austria Don Bosco Technical Institute, Sunyani, Ghana Instituto de Robótica y Automática, Escuela Superior de Ingenieros, University of Sevilla, Spain Computer Engineering Department, Sharif University of Technology, Iran Dipartimento 
di Scienze Statistiche, Università di Padova, Italy School of Mathematics, Tata Institute of Fundamental Research, Bombay, India Department of Computer and Engineering, Thiagarajar College of Engineering, Madurai, India Library and IT Services, Tilburg University, Tilburg, the Netherlands Computer Science Department, Trinity College, Hartford Connecticut, USA Turnkey IT Training Institute, Colombo, Sri Lanka. System Department, University of Santander, Cúcuta, Colombia Academic Administration, Universidad de El Salvador, El Salvador Universitas Indonesia (UI), Depok, Indonesia Laboratoire de Chimie physique, CNRS UMR 8000, Université Paris-Sud, Orsay, France Dirección de Tecnología e Informática, Universidad Nacional Experimental de Guayana, Puerto Ordaz, Venezuela School of Computer Science and Engineering, University of New South Wales, Sydney, Australia International Arctic Research Center, University of Alaska Fairbanks, USA Laboratoire VERIMAG, CNRS/Grenoble INP/Université Joseph Fourier, France Centre for Information Technology, University of West Bohemia, Pilsen, Czech Republic Game Development Club, Worcester Polytechnic Institute, Worcester MA, USA Commerciali (commercial) IT Department, 100ASA Srl, Dragoni, Italy 99ideas, Gliwice, Poland Tech Dept, ABC Startsiden AS, Oslo, Norway Admins.CZ, Prague, Czech Republic AdvertSolutions.com, United Kingdom Kancelaria Adwokacka Adwokat Wiktor Gamracki, Rzeszów, Poland Adwokat radca prawny, Poznan Lodz, Poland AFR@NET, Tehran, Iran African Lottery, Cape Town, South Africa AKAOMA Consulting, France Alfabet Sukcesu, Lubliniec, Poland AlterWeb Altiria, Spain ALTISCENE, Toulouse, France Anykey Solutions, Sweden JSC VS, Russia Apache Auto Parts Incorporated, Parma USA Applied Business Solutions, São Paulo, Brazil Archiwwwe, Stockholm, Sweden Computational Archaeology Division, Arc-Team, Cles, Italy Articulate Labs, Inc., Dallas, TX, US Athena Capital Research, USA Atrium 21 Sp. z o.o. Warsaw, Poland Co. 
AUSA, Almacenes Universales SA, Cuba Agencja interaktywna Avangardo, Szczecin, Poland Axigent Technologies Group, Inc., Amarillo, Texas, USA Ayonix, Inc., Japan AZ Imballaggi S.r.l., Pontedera, Italy Backblaze Inc, USA Baraco Compañia Anónima, Venezuela Big Rig Tax, USA BioDec, Italy bitName, Italy BMR Genomics, Padova, Italy B-Open Solutions srl, Italy Braithwaite Technology Consultants Inc., Canada BrandLive, Warsaw, Poland calbasi.net web developers, Catalonia, Spain Camping Porticciolo, Bracciano (Rome), Italy CAROL - Cooperativa dos Agricultores da Região de Orlândia, Orlândia, São Paulo, Brazil Centros de Desintoxicación 10, Grupo Dropalia, Alicante, Spain Charles Retina Institute, Tennessee, USA Chrysanthou & Chrysanthou LLC, Nicosia, Cyprus CIE ADEMUR, Spain CLICKPRESS Internet agency, Iserlohn, Germany Code Enigma Companion Travel LLC, Tula, Russia Computación Integral, Chile Computerisms, Yukon, Canada CRX LTDA, Santiago, Chile CyberCartes, Marseilles, France DataPath Inc. - Software Solutions for Employee Benefit Plans, USA Datasul Paranaense, Curitiba PR, Brazil Internal IT, Dawan, France DEQX, Australia Diciannove Soc. 
Coop., Italy DigitalLinx, Kansas City, MO, USA Directory Wizards Inc, Delaware, USA IT / Sales Department, Diversicom Corp of Riverview, USA Dubiel Vitrum, Rabka, Rabka, Poland Eactive, Wroclaw, Poland eCompute Corporation, Japan Agencja Interaktywna Empressia, Poznan, Poland enbuenosaires.com, Buenos Aires, Argentina Eniverse, Warsaw, Poland Epigenomics, Berlin, Germany Essential Systems, UK Ethan Clark Air Conditioning, Houston, Texas, USA EuroNetics Operation KB, Sweden Bureau d'études et valorisations archéologiques Éveha, Limoges, France Fahrwerk Kurierkollektiv UG, Berlin, Germany Faunalia, AP, Italy Flamingo Agency, Chicago, IL, USA Freeside Internet Services, Inc., USA Frogfoot Networks, South Africa French Travel Organisation, Nantes, France Fusion Marketing, Cracow, Poland IT, Geodata Danmark, Denmark GigaTux, London, UK Globalways AG, Germany GNUtransfer - GNU Hosting, Mar del Plata, Argentina G.O.D. Gesellschaft für Organisation und Datenverarbeitung mbH, Germany Goodwin Technology, Springvale, Maine, USA GPLHost LLC, Wilmington, Delaware, USA; GPLHost UK LTD, London, United Kingdom; GPLHost Networks PTE LTD, Singapore, Singapore Hermes IT, Romania HeureKA -- Der EDV Dienstleister, Austria HostingChecker, Varna, Bulgaria Hostsharing eG (Cooperation), Germany Hotel in Rome, Foggia, Italy Huevo Vibrador, Madrid, Spain ICNS, X-tec GmbH, Germany Instasent, Madrid, Spain IT outsourcing department, InTerra Ltd., Russian Federation IreneMilito.it, Cosenza, Italy Iskon Internet d.d., Croatia IT Lab, Foggia, Italy Keliweb SRL, Cosenza, Italy Kosmetyczny Outlet, KosmetycznyOutlet, Wroclaw, Poland Kulturystyka.sklep.pl sp. z o.o, Kleszczow, Poland Linden Lab, San Francisco, California, USA Linode, USA LinuxCareer.com, Rendek Online Media, Australia Linuxlabs, Krakow, Poland IT services, Lixper S.r.L., Italy Logikon Labs, Athens, Greece Logilab, Paris, France Madkom Ltd. (Madkom Sp. 
z o.o.), Poland Inmobiliaria Mar Menuda SA, Tossa de Mar, Spain IT Services, Medhurst Communications Ltd, UK Media Design, The Netherlands Mediasecure, London, United Kingdom Megaserwis S.C. Serwis laptopów i odzyskiwanie danych, Warsaw, Poland Mendix Technology, Rotterdam, the Netherlands Mobusi Mobile Performance Advertising, Los Angeles, California, USA Molino Harinero Sula, S.A., Honduras MrHostman, Montfoort, The Netherlands MTTM La Fraternelle, France System rezerwacji online Nakiedy, Gdansk, Poland New England Ski Areas Council, USA IT Ops, NG Communications bvba, Kortenberg, Belgium nPulse Technologies, LLC, Charlottesville, VA, USA Oktet Labs, Saint Petersburg, Russia One-Eighty Out, Incorporated, Colorado Springs, Colorado, USA Servicio Técnico, Open Tech S.L., Las Palmas, Spain oriented.net Web Hosting, Basel, Switzerland Osteo 7/7, Lausanne and Geneva, Switzerland IT Department, OutletPC, Henderson, NV, USA Parkyeri, Istanbul, Turkey Pelagicore AB, Gothenburg, Sweden Development and programming, www.perfumesyregalos.com, Spain PingvinBolt webshop, Hungary DeliveryHero Holding GmbH, IT System Operations, Berlin, Germany Portantier Information Security, Buenos Aires, Argentina Pouyasazan, Isfahan-Iran PR International Ltd, Kings Langley, Hertfordshire, UK PROBESYS, Grenoble, France Agencja interaktywna Prodesigner, Szczecin, Poland Questia Media RatujLaptopa, Warsaw, Poland The Register, Situation Publishing, UK NOC, RG3.Net, Brazil RHX Studio Associato, Italy Roan Agencja Interaktywna, Gorzów Wielkopolski, Poland RoDi, Zilina, Slovakia Rubbettino Editore, Soveria Mannelli (CZ), Italy Industrial Router Group, RuggedCom, Canada RV-studio, Zielonka, Poland S4 Hosting, Lithuania Salt Edge Inc., Toronto, Canada Ing. Salvatore Capolupo, Cosenza, Italy Santiago Engenharia LTDA, Brazil SCA Packaging Deutschland Stiftung & Co. 
KG, IS Department (HO-IS), Germany Overstep SRL, Via Marco Simone, 80 00012 Guidonia Montecelio, Rome, Italy ServerHost, Bucharest, Romania Seznam.cz, a.s., Czech Republic Shellrent Srl, Vicenza, Italy Siemens Information Technology Dep., SIITE SRLS, Lodi / Milano, Italy SilverStorm Technologies, Pennsylvania, USA Sinaf Seguros, Brazil Skroutz S.A., Athens, Greece SMS Masivos, Mexico Auto Service Cavaliere, Rome, Italy soLNet, s.r.o., Czech Republic Som Tecnologia, Girona, Spain Software Development, SOURCEPARK GmbH, Berlin, Germany Computer Division, Stabilys Ltd, London, United Kingdom Departamento de administración y servicios, SW Computacion, Argentina Taxon Estudios Ambientales, SL, Murcia, Spain ITW TechSpray, Amarillo, TX, USA Tehran Raymand Co., Tehran, Iran Telsystem, Telecomunicacoes e Sistemas, Brazil The Story, Poland TI Consultores, consulting technologies of information and businesses, Nicaragua CA, Telegraaf Media ICT, Amsterdam, the Netherlands T-Mobile Czech Republic a. s. Nomura Technical Management Office Ltd., Kobe, Japan TomasiL stone engravings, Italy Tri-Art Manufacturing, Canada EDI Team, Hewlett Packard do Brasil, São Paulo, Brazil Trovalost, Cosenza, Italy Taiwan uCRobotics Technology, Inc., Taoyuan, Taiwan (ROC) United Drug plc, Ireland Koodiviidakko Oy, Finland Departamento de Sistemas, La Voz de Galicia, A Coruña, Spain VPSLink, USA Wavecon GmbH, Fürth, Germany WTC Communications, Canada Wyniki Lotto Web Page, Poznan, Poland Software Development, XSoft Ltd., Bulgaria Zomerlust Systems Design (ZSD), Cape Town, South Africa Organizzazioni non-profit (non-profit) Bayour.com, Gothenburg, Sweden Eye Of The Beholder BBS, Fidonet Technology Network, Catalonia/Spain Dictionaries24.com, Poznan, Poland Beyond Disability Inc., Pearcedale, Australia E.O. 
Ospedali Galliera, Italy ESRF (European Synchrotron Radiation Facility), Grenoble, France F-Droid - the definitive source for free software Android apps GreenNet Ltd., UK GREFA, Grupo para la rehabilitación de la fauna autóctona y su hábitat, Majadahonda, Madrid, Spain GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany LINKES FORUM in Oberberg e.V., Gummersbach, Oberbergischer Kreis, Germany MAG4 Piemonte, Torino, Italy Mur.at - Verein zur Förderung von Netzwerkkunst, Graz, Austria High School Technology Services, Washington DC USA PRINT, Espace autogéré des Tanneries, France Reware Soc. Coop - Impresa Sociale, Rome, Italy Systems Support Group, The Wellcome Trust Sanger Institute, Cambridge, UK SARA, Netherlands Institute for Snow and Avalanche Research (SLF), Swiss Federal Institute for Forest, Snow and Landscape Research (WSL), Davos, Switzerland SRON: Netherlands Institute for Space Research TuxFamily, France Enti statali (government) Agência Nacional de Vigilância Sanitária - ANVISA (Health Surveillance National Agency) - Gerência de Infra-estrutura e Tecnologia (GITEC), Brazil Directorate of Information Technology, Council of Europe, Strasbourg, France Gerencia de Redes, Eletronorte S/A, Brazil European Audiovisual Observatory, Strasbourg, France Informatique, Financière agricole du Québec, Canada Informática de Municípios Associados - IMA, Governo Municipal, Campinas/SP, Brazil Bureau of Immigration, Philippines Institute of Mathematical Sciences, Chennai, India INSEE (National Institute for Statistics and Economic Studies), France London Health Sciences Centre, Ontario, Canada Lorient Agglomération, Lorient France Ministry of Foreign Affairs, Dominican Republic Procempa, Porto Alegre, RS, Brazil SEMAD, Secretaria de Estado de Meio Ambiente e Desenvolvimento Sustentável, Goiânia/GO, Brasil Servizio Informativo Comunale, Comune di Riva del Garda, ITALY St. 
Joseph's Health Care London, Ontario, Canada State Nature Conservation Agency, Slovakia Servicio de Prevencion y Lucha Contra Incendios Forestales, Ministerio de Produccion Provincia de Rio Negro, Argentina Vermont Department of Taxes, State of Vermont, USA Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. Warszawy, Warsaw, Poland
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections (for parallel installation, the perfect solution for such tests) and also as base packages.
RPMs of PHP version 8.3.14RC1 are available
RPMs of PHP version 8.2.26RC1 are available
The packages are available for x86_64 and aarch64.
PHP version 8.1 is now in security mode only, so no more RC will be released.
Installation: follow the wizard instructions.
Announcements:
Parallel installation of version 8.3 as Software Collection:
yum --enablerepo=remi-test install php83
Parallel installation of version 8.2 as Software Collection:
yum --enablerepo=remi-test install php82
Update of system version 8.3:
dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*
Update of system version 8.2:
dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*
Notice:
Software Collections (php82, php83)
Base packages (php)
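After installing or switching streams, it helps to confirm which interpreter answers on `php`. A minimal sketch: the `php_version` helper and the sample version string below are illustrative, not from the announcement.

```shell
# Extract "X.Y.ZRCn" from the first line of `php -v` output.
php_version() {
  sed -n 's/^PHP \([^ ]*\).*/\1/p' | head -n1
}

# With the base packages updated via dnf:        php -v | php_version
# With the parallel Software Collection active:  scl enable php83 'php -v' | php_version
# Sample of what an 8.3.14RC1 build prints on its first line:
echo 'PHP 8.3.14RC1 (cli) (built: Nov  5 2024 10:00:00)' | php_version
```

If the base packages report 8.2.x while the collection reports 8.3.14RC1, both installs coexist as intended.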
Last week, I submitted syslog-ng to openSUSE Leap 16.0. While the distro is still in a pre-alpha stage, everything already works for me as expected. Well, except for syslog-ng, where I found a number of smaller problems.
As such, this blog is a call for testing, both for syslog-ng on openSUSE Leap 16.0 and also for the distribution itself. It is (finally) a brand-new Linux distro version from the openSUSE project, instead of just some minor update for an ancient major version. And it is also a chance to make some changes to syslog-ng packaging on openSUSE.
What I already see is that it is time to do the long-postponed renaming of the sub-package for http() support. This sub-package was originally called curl, as it is based on the popular curl library. However, the related syslog-ng module was renamed from libcurl.so to libhttp.so many years ago, and the package naming did not follow this change, which can be confusing for users. While Debian and Fedora packaging followed up on the rename with only a minor delay, in the case of openSUSE I was advised to wait until the next major release. Needless to say, I forgot about it…
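The rename only matters in configurations that load the module explicitly; a minimal sketch of such a config, where the version line and URL are placeholders rather than values from the post:

```
@version: 4.8
# The http() destination now lives in the "http" module (formerly "curl").
@module http
destination d_http {
  http(url("http://localhost:8080/events") method("POST"));
};
```

Configurations that rely on automatic module loading are unaffected; only the sub-package name users install changes.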
Some dependencies of syslog-ng were missing when I submitted it to Leap 16.0. I will also need to check regularly if those arrived and re-enable the related features.
If you find any problems in openSUSE Leap 16.0, report them at https://bugzilla.opensuse.org/
If you have any suggestions related to syslog-ng packaging, report them upstream at https://github.com/syslog-ng/syslog-ng/issues and mention me in the report (@czanik).
Thank you for your time testing openSUSE Leap 16.0 and the syslog-ng package on it!
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from cockpit-machines 323 and cockpit-files 11:
Existing VMs that use EFI but do not have a TPM now have a menu action to add one. This is especially useful for upgrading older VMs to an OS that requires a TPM, such as Windows 11.
New VMs created with EFI will automatically get a TPM created and configured.
When setting the permissions for a directory, there is now an action to also recursively change all files and directories it contains.
The “Edit permissions” dialog now shows a read-only view of the SELinux context.
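On the command line, the new recursive action corresponds to a `chmod -R` over the tree; a sketch, where the temp-dir layout is purely illustrative:

```shell
# Make a small tree in a temp dir, then add group-write recursively,
# mirroring the recursive option in the "Edit permissions" dialog.
demo=$(mktemp -d)
mkdir -p "$demo/sub"
touch "$demo/sub/file"
chmod -R g+w "$demo"
# The nested file now carries the group-write bit (mode -020):
find "$demo/sub/file" -perm -020
```

As in the dialog, this applies the change to every file and directory under the starting point, not just the directory itself.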
cockpit-machines 323 and cockpit-files 11 are available now:
Hello testers, starting today we are discontinuing the ability to register user accounts on https://public.tenant.kiwitcms.org! New accounts can still be created by logging in via the existing OAuth integrations - GitHub, GitLab and Google!
PLUGINS -> Tenant -> Invite users
will create user accounts automatically; reset your password before first login.
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities please help us!
The first piece of anti-semitic writing attributed to Adolf Hitler is the Gemlich letter.
After World War I, Hitler remained in the German army. He was posted to an intelligence role in Munich. Adolf Gemlich wrote a letter about the Jewish question. Hitler's superior, Karl Mayr, asked Hitler to write the response.
The Gemlich letter was written on 16 September 1919, while Hitler was still an army officer, well before Hitler became Fuhrer.
One of the key points in the letter states that there should be a Code of Conduct (CoC) for Jewish people:
legally fight and remove the privileges enjoyed by the Jews as opposed to other foreigners living among us
So there would be one set of laws for everybody else and a second set of laws, or a CoC, for the Jews.
The other key point in the Gemlich letter is "behavior":
there lives amongst us a non-German, alien race, unwilling and indeed unable to shed its racial characteristics, its particular feelings, thoughts and ambitions
On 16 September 2018 Linus Torvalds posted the email announcing he has to submit himself to the code of conduct on the Linux Kernel Mailing List and mind his behavior.
Linus tells us he is taking a break, in other words, some of his privileges are on hold for a while.
Could the date of the email be a secret hint from Linus that he doesn't approve of the phenomena of CoC gaslighting?
We saw the same thing in Afghanistan. When the Taliban took back control of the country, women had to change their behavior and become better at listening to the demands from their masters.
From: Linus Torvalds
Date: Sun, 16 Sep 2018 12:22:43 -0700
Subject: Linux 4.19-rc4 released, an apology, and a maintainership note

[ So this email got a lot longer than I initially thought it would get, but let's start out with the "regular Sunday release" part ]

Another week, another rc. Nothing particularly odd stands out on the technical side in the kernel updates for last week - rc4 looks fairly average in size for this stage in the release cycle, and all the other statistics look pretty normal too. We've got roughly two thirds driver fixes (gpu and networking look to be the bulk of it, but there's smaller changes all over in various driver subsystems), with the rest being the usual mix: core networking, perf tooling updates, arch updates, Documentation, some filesystem, vm and minor core kernel fixes. So it's all fairly small and normal for this stage. As usual, I'm appending the shortlog at the bottom for people who want to get an overview of the details without actually having to go dig in the git tree. The one change that stands out and merits mention is the code of conduct addition...

[ And here comes the other, much longer, part... ]

Which brings me to the *NOT* normal part of the last week: the discussions (both in public mainly on the kernel summit discussion lists and then a lot in various private communications) about maintainership and the kernel community. Some of that discussion came about because of me screwing up my scheduling for the maintainer summit where these things are supposed to be discussed. And don't get me wrong. It's not like that discussion itself is in any way new to this week - we've been discussing maintainership and community for years. We've had lots of discussions both in private and on mailing lists. We have regular talks at conferences - again, both the "public speaking" kind and the "private hallway track" kind. No, what was new last week is really my reaction to it, and me being perhaps introspective (you be the judge).
There were two parts to that. One was simply my own reaction to having screwed up my scheduling of the maintainership summit: yes, I was somewhat embarrassed about having screwed up my calendar, but honestly, I was mostly hopeful that I wouldn't have to go to the kernel summit that I have gone to every year for just about the last two decades. Yes, we got it rescheduled, and no, my "maybe you can just do it without me there" got overruled. But that whole situation then started a whole different kind of discussion. And kind of incidentally to that one, the second part was that I realized that I had completely mis-read some of the people involved. This is where the "look yourself in the mirror" moment comes in. So here we are, me finally on the one hand realizing that it wasn't actually funny or a good sign that I was hoping to just skip the yearly kernel summit entirely, and on the other hand realizing that I really had been ignoring some fairly deep-seated feelings in the community. It's one thing when you can ignore these issues. Usually it’s just something I didn't want to deal with. This is my reality. I am not an emotionally empathetic kind of person and that probably doesn't come as a big surprise to anybody. Least of all me. The fact that I then misread people and don't realize (for years) how badly I've judged a situation and contributed to an unprofessional environment is not good. This week people in our community confronted me about my lifetime of not understanding emotions. My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry. 
The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely. I am going to take time off and get some assistance on how to understand people’s emotions and respond appropriately. Put another way: When asked at conferences, I occasionally talk about how the pain-points in kernel development have generally not been about the _technical_ issues, but about the inflection points where development flow and behavior changed. These pain points have been about managing the flow of patches, and often been associated with big tooling changes - moving from making releases with "patches and tar-balls" (and the _very_ painful discussions about how "Linus doesn't scale" back 15+ years ago) to using BitKeeper, and then to having to write git in order to get past the point of that no longer working for us. We haven't had that kind of pain-point in about a decade. But this week felt like that kind of pain point to me. To tie this all back to the actual 4.19-rc4 release (no, really, this _is_ related!) I actually think that 4.19 is looking fairly good, things have gotten to the "calm" period of the release cycle, and I've talked to Greg to ask him if he'd mind finishing up 4.19 for me, so that I can take a break, and try to at least fix my own behavior. This is not some kind of "I'm burnt out, I need to just go away" break. I'm not feeling like I don't want to continue maintaining Linux. Quite the reverse. I very much *do* want to continue to do this project that I've been working on for almost three decades. This is more like the time I got out of kernel development for a while because I needed to write a little tool called "git". I need to take a break to get help on how to behave differently and fix some issues in my tooling and workflow. 
And yes, some of it might be "just" tooling. Maybe I can get an email filter in place so at when I send email with curse-words, they just won't go out. Because hey, I'm a big believer in tools, and at least _some_ problems going forward might be improved with simple automation. I know when I really look “myself in the mirror” it will be clear it's not the only change that has to happen, but hey... You can send me suggestions in email. I look forward to seeing you at the Maintainer Summit. Linus
With the final release of Fedora Linux 41 now available, users running older releases are advised to upgrade their systems to the latest stable version. In this post, we will upgrade Fedora Linux 40 to Fedora 41. Before performing the upgrade […]
The post آموزش آپگرید لینوکس فدورا ۴۰ به فدورا ۴۱ ("Upgrading Fedora Linux 40 to Fedora 41") first appeared on طرفداران فدورا (Fedora Fans).

Josh and Kurt talk about three government activities happening around security: CISA has a request for comment, and there is an international strategic plan around cybersecurity. These are both good ideas and will hopefully help drive change. They also discuss an EU proposal that brings liability rules to software, which sounds like a great way to force change to happen.
And is it soon time for the M4 Extreme? :O (:
~
Edit: Some prediction calculations
M2 Max Multi Score: 14678
M2 Ultra Multi Score: 21352 (~46% increase between max & ultra)
M4 Max Multi Score: 26675
M4 Ultra Multi Prediction: 38945 (using roughly the same 46% increase from M4 Max)
M4 Extreme Multi Prediction: 77891 (multiplying the M4 Ultra prediction by 2?)
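The same extrapolation can be scripted. This sketch uses the exact M2 Max→Ultra ratio rather than the rounded 46%, so its numbers come out slightly lower than the predictions above:

```shell
# Extrapolate M4 Ultra/Extreme multi-core scores from the M2 Max->Ultra scaling
awk 'BEGIN {
  m2_max = 14678; m2_ultra = 21352; m4_max = 26675   # Geekbench multi scores above
  ratio   = m2_ultra / m2_max                        # ~1.4547 Max -> Ultra scaling
  ultra   = m4_max * ratio                           # predicted M4 Ultra
  extreme = ultra * 2                                # predicted M4 Extreme (2x Ultra)
  printf "M4 Ultra: %d, M4 Extreme: %d\n", ultra, extreme
}'
```

Whether an "Extreme" would really scale as two perfect Ultras is, of course, the biggest assumption here.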
With the 5th gen AMD EPYC CPU, formerly codenamed “Turin,” AMD introduced a new feature in its processors: ERAPS - Enhanced RAP Security. For Linux, this feature allows software mitigations to be relaxed that were put in place after the SpectreRSB vulnerability was disclosed in hardware back in 2018.
This feature makes software invulnerable to certain speculative attacks by providing the protection in hardware, so the corresponding software mitigations are no longer required.
This article goes into which software mitigations are not required anymore when using processors with this feature. But before that, let’s catch up on the CPU vulnerabilities and the software mitigations applied to protect against them. Note that the APM release will contain official documentation. This writeup is just my notes - admittedly with a bunch of mistakes that I’m slowly correcting as the patches go through the upstreaming process. I’ll continue clarifying language and making corrections.
RAP: Return Address Predictor
RSB: Return Stack Buffer
RAS: Return Address Stack
The three acronyms – RAP, RSB, RAS – all refer to the same structure in the CPU. AMD manuals use RAS and RAP; Linux uses RSB. For consistency, I use RSB in this post.
The RSB is not directly used by software. It is an internal CPU buffer that speeds up speculative execution. Whenever a CALL instruction executes, the address of the instruction following the CALL is pushed onto the RSB. These addresses are then used to predict return targets and speculate past RET instructions.
SpectreRSB, a Spectre v2 variant, exploited the addresses in the RSB. One userspace process could issue a bunch of CALL instructions, then yield control of the CPU and allow another process to be scheduled in. Any speculative operations on the entries in the RSB from the other userspace process would then use the addresses stuffed by the first, malicious process. With this technique, user->user and user->kernel attack vectors are possible. When running virtual machines, a similar technique makes guest->guest, guest->user, and guest->hypervisor attacks possible.
As a result of the SpectreRSB disclosure, Linux added code to mitigate RSB poisoning scenarios – e.g., a guest placing malicious entries in the RSB, and then hardware using those entries in hypervisor or host or other guest contexts.
Software-based mitigations can take two forms:
RSB flushing: clearing the contents of the RSB
RSB stuffing: populating RSB entries via bogus CALL instructions
Both RSB flushing and RSB stuffing address RSB poisoning. RSB stuffing additionally addresses RSB underflow vectors (not applicable to AMD CPUs), whereas RSB flushing does not. Since RSB stuffing covers both scenarios, it is currently the only method in use in the Linux kernel.
RSB stuffing means the kernel issues 32 (a hard-coded value) CALL instructions. The instruction following each CALL (i.e., the instruction a RET would return to) causes a speculation trap. Any speculative operation on any of these RSB entries is hence benign.
The Linux mitigation for these RSB vulnerabilities is to stuff the RSB for these events:
Context switches
VMEXITs
Another form of mitigation - with hardware assistance - is the Indirect Branch Prediction Barrier (IBPB): this mitigation also clears the RSB, but it is not performed on every context switch, as it is more expensive than stuffing the RSB. The focus of this article and patchset is the RSB stuffing performed on context switches and VMEXITs.
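On a running Linux system, you can see which Spectre v2 mitigations the kernel currently applies; the standard sysfs vulnerabilities interface reports the status (output varies by CPU and kernel, and the file may be absent on very old kernels):

```shell
# Report the kernel's active Spectre v2 mitigation, if the interface exists
f=/sys/devices/system/cpu/vulnerabilities/spectre_v2
if [ -r "$f" ]; then
  cat "$f"
else
  echo "spectre_v2 sysfs entry not available"
fi
```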
The Enhanced Return Address Predictor (ERAPS) feature debuted in the newly-released 5th Gen AMD EPYC processors. This feature addresses the RSB poisoning attack scenario in hardware, making software mitigations unnecessary.
There are several enhancements in the hardware for this feature:
Auto-flush the RSB
Flush the RSB on INVPCID, all context switches (triggered by writing to the CR3), and even some writes to CR4. Even if those operations cause a TLB flush that does not flush the entire TLB, the RSB is flushed.
Caveat 1: if a context switch does not invoke any hardware TLB invalidation instruction or update CR3 – as in the case of running in a virtual machine with nested paging (NPT) disabled – the RSB will not be cleared automatically.
Tag addresses in the RSB as host or guest, and prevent cross-predictions
This ensures a malicious guest cannot influence the speculations made in host/hypervisor context
Caveat 2: the RSB entries are not tagged with the ASID of the guest, and hence cannot differentiate between a guest and its nested guests
This caveat only affects L1 guests attacked by malicious L2 guests in nested VM configurations. To address this situation, the feature provides the hypervisor with a VMCB bit to instruct the hardware to flush the RSB on the next VMRUN when switching from an L2 guest to an L1 guest, ensuring the L1 guest is protected against L2 RSB poisoning attacks.
This caveat does not apply in non-nested scenarios, and the hypervisor does not need to set the VMCB bit in that case
More notes about caveat 2 below.
Increase the size of the RSB to a new default (64 entries in 5th Gen EPYC CPUs, up from the old 32)
Increase is automatic for bare metal
Software can probe the value of the ERAPS CPUID bit to confirm its presence
Additional CPUID bits indicate the size of the RSB
Hypervisor sets a new VMCB bit added with this feature to allow the larger RSB size to be usable by the guest
Otherwise, virtual machines default to the older 32 entries
This is to preserve backwards compatibility with existing hypervisors
To address caveat 2 above, the hypervisor also needs to set another VMCB bit to clear the RSB before a qualifying VMRUN if this feature has been advertised to the guest.
This new default size of the RSB can change in future CPU generations without needing any software changes
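Once kernel support lands, the feature should be visible as a CPU flag. A quick check (the flag name eraps follows the Linux patches going through upstreaming and is an assumption here; older kernels and CPUs will simply not report it):

```shell
# Look for the ERAPS flag in the kernel's CPU feature list
# (flag name per the proposed Linux patches; absence is the common case today)
if grep -qw eraps /proc/cpuinfo 2>/dev/null; then
  echo "ERAPS reported by the kernel"
else
  echo "ERAPS not reported"
fi
```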
For bare-metal kernels and bare-metal hypervisor hosts, the increase in the RSB size is automatic – no software support is necessary. The hardware will always use the larger RSB and auto-flush it, irrespective of software configuration or the mitigations in place.
On the other hand, software enlightenment for this feature helps remove the software mitigations in favor of the hardware mitigations – effectively removing the tax of double RSB flushing or stuffing in case old software is run on newer hardware.
When executing in a guest context, the increased RSB size is only used by the hardware if the hypervisor sets the new ALLOW_LARGER_RAP VMCB bit. By setting this bit, the hypervisor also commits to setting the new FLUSH_RAP_ON_VMRUN VMCB bit when necessary, as identified in Caveat 2.
This feature has been designed with backwards compatibility and security in mind. No software changes are required for this feature to work in host, hypervisor, or guest mode. The following matrix shows the effect of the feature on “old”/unenlightened hosts, guests, and “new”/enlightened hosts and guests.
| | Old guest software | New guest software |
|---|---|---|
| Old host/hypervisor software | | |
| New host software | | |
| Attack from | User (Before) | User (After) | Kernel/Hypervisor (Before) | Kernel/Hypervisor (After) | Guest (Before) | Guest (After) |
|---|---|---|---|---|---|---|
| User | RSB Stuffing | ERAPS (RSB cleared on context switch) | SMEP | SMEP | RSB Stuffing | ERAPS (host/guest tags) |
| Guest | RSB Stuffing | ERAPS (host/guest tags) | RSB Stuffing after VMEXIT | ERAPS (host/guest tags) | RSB Stuffing | ERAPS (RSB cleared on context switch) |
With NPT disabled, the KVM hypervisor uses the “shadow paging” technique. This method does not cause a change in CR3 when the guest scheduler switches guest processes within the virtual machine. In this case, the auto-RSB flush does not happen on guest context switch despite ERAPS being present. To enable ERAPS for such guests, both these will need to happen:
The hypervisor needs to set FLUSH_RAP_ON_VMRUN for every guest event that would cause a context switch or TLB flush on real hardware
Guest software will have to change the hard-coded "32" value used for RSB stuffing to the default RSB size if ERAPS is exposed to the guest. A guest has to do this to preserve its operational integrity in case a hypervisor bug fails to clear the RSB on one of the qualifying conditions.
It is important to note that having NPT disabled is a theoretical case – almost no production host that has the NPT capability disables it.
Due to these shortcomings, the Linux implementation will not expose the larger RSB or the ERAPS CPUID feature bit to virtual machines when NPT is disabled.
When running nested virtual machines, the RSB entries marked “guest” may correspond to either of the L1 or L2 guests. To protect the L1 guest from a malicious L2 guest wanting to poison the guest RSB entries, the hypervisor, on a VMEXIT from an L2 guest, will set the FLUSH_RAP_ON_VMRUN VMCB bit. This ensures that when the control goes from an L2 guest to an L1 guest, the RSB entries are cleared in the L1 guest’s context. This is a case where the L1 guest relies on the hypervisor to do the right thing.
For Confidential Computing, software running in the guest does not trust the hypervisor to be bug-free or non-malicious. For both caveats 1 and 2, the guest software must rely on the hypervisor for secure and correct operation, which can run against confidential-computing trust boundaries.
However, both those cases for Caveats 1 and 2 – NPT disabled and nested virtualization – are disallowed in SEV-SNP Confidential Computing.
All other modes of operation for this feature do not rely on any hypervisor implementation details, and hence are compatible with Confidential Computing trust boundaries.
A final note on confidential computing: the SEV-SNP hardware checks that the CPUID bits presented to the guest are allowed on the particular host. That prevents a malicious or buggy hypervisor from exposing the ERAPS feature when the hardware does not have it, ensuring that guest software can safely drop the software mitigations when it discovers the ERAPS CPUID bit.
In Log Detective, we’re struggling with scalability right now. We run an LLM serving service in the background using llama-cpp. Since users will interact with it, we need to make sure they get a solid experience and won’t wait minutes for an answer - or, even worse, see nasty errors.
What’s going to happen when 5, 15 or 10000 people try the Log Detective service at the same time?
Let’s start the research.
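As a first step, a crude probe can show how the service behaves under parallel requests. This is only a sketch: the localhost URL and the endpoint path are placeholders, and curl prints status 000 when it cannot connect at all:

```shell
# Fire 5 requests in parallel against a hypothetical service endpoint and
# print one HTTP status code per request (000 = connection failed).
URL="http://localhost:8080/analyze"   # placeholder endpoint
for i in 1 2 3 4 5; do
  ( curl -s -o /dev/null -w '%{http_code}\n' --max-time 2 "$URL" || true ) &
done
wait
```

Real load testing would use a dedicated tool, but even a loop like this quickly reveals whether concurrent requests queue up or fail outright.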
Assuming PRINCIPAL is your Kerberos principal and $IPASERVER is the FQDN of your server, you can query your identity on the IPA server via curl:
kinit $PRINCIPAL
curl -k -H referer:https://$IPASERVER/ipa -H "Content-Type:application/json" -H "Accept:application/json" --negotiate -u : --cacert /etc/ipa/ca.crt -d '{"method":"whoami","params":[[],{"version": "2.220"}],"id":0}' -X POST https://$IPASERVER/ipa/json
{"result": {"object": "user", "command": "user_show/1", "arguments": ["ayoung"]}, "version": "4.5.4", "error": null, "id": 0, "principal": "ayoung@YOUNGLOGIC.COM"}
This is handy if your system is not registered as an IPA client.
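The JSON response is easy to post-process. For example, pulling the principal and username out of the whoami reply shown above (a sketch that uses python3 for parsing; jq would work equally well):

```shell
# Sample whoami response from above, parsed for the interesting fields
response='{"result": {"object": "user", "command": "user_show/1", "arguments": ["ayoung"]}, "version": "4.5.4", "error": null, "id": 0, "principal": "ayoung@YOUNGLOGIC.COM"}'
echo "$response" | python3 -c '
import json, sys
d = json.load(sys.stdin)
print(d["principal"], d["result"]["arguments"][0])
'
```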
To fetch by username:
curl -k -H referer:https://$IPASERVER/ipa -H "Content-Type:application/json" -H "Accept:application/json" --negotiate -u : --cacert /etc/ipa/ca.crt -d '{"method": "user_show", "params": [[ "ayoung" ], { "all": true, "rights": true } ]}' -X POST https://$IPASERVER/ipa/json
The Fedora Project development team has announced the release of the final version of Fedora Linux 41. Below we look at some of the most important changes and new features in Fedora Linux 41. DNF 5: Fedora Linux 41 uses a new version of the DNF command-line package management tool by default. This version is faster and […]
The post نسخه نهایی لینوکس فدورا ۴۱ منتشر شد ("Fedora Linux 41 final released") first appeared on طرفداران فدورا (Fedora Fans).

Apple just introduced Thunderbolt 5, whose higher bandwidth will finally allow for the true, properly designed displays I think Apple has been waiting for. I predict a possible 27/32/36-inch lineup with 5/6/8K resolutions and a 120 Hz refresh rate. The only remaining question is: will it be Mini-LED for multiple dimming zones, or possibly OLED?
Things I use the Mac Mini for:
This Tuesday, 29 October, users of the Fedora Project will be delighted to learn of the availability of Fedora Linux 41.
Fedora Linux is a community distribution developed by the Fedora Project and sponsored by Red Hat, which provides it with developers as well as financial and logistical resources. Fedora Linux can be seen as a kind of technology showcase for the free software world, which is why it is quick to include new features.
Update to GNOME 47. This new version of Fedora's flagship desktop environment brings many improvements. First, it introduces a customizable accent color that influences the color of many UI elements such as buttons, adopting a change Ubuntu has shipped for several years. For those with small screens, certain buttons and icons are enlarged to make them easier to interact with in that context.
Dialog boxes have been partly redesigned to make them easier to use, especially on small screens, with larger and better-spaced buttons that, of course, respect the accent color described above. The interface for opening or saving a file is now based on the code of the Files file manager rather than a separate implementation. This simplifies maintenance and, above all, brings the file manager's full feature set to that task: for example, you can rename files from this dialog, change the sort order in icon view, preview files without opening them, and so on. The file manager itself also improves. Network devices are now classified so you can tell apart resources you are currently connected to, those you have used before, and the rest. All internal hard drives are shown in the sidebar and grouped together to make them more accessible and easier to use. Default folders can also be removed from the sidebar to free up space if desired. And there are a few other, more minor changes.
In the interface settings, the Accessibility menu now lets you enable automatic focus switching from one window to another simply by hovering with the mouse (disabled by default). Likewise, when adding new keyboard layouts, you can preview a layout before selecting it to make sure it is the one you want. More generally, the preferences UI is more consistent in its choice of widgets across the interface.
Online accounts also make progress: IMAP and SMTP settings are prefilled based on the email address. Calendar, email, and contact synchronization has been added for Microsoft 365 accounts, while setting up a new WebDAV account now discovers the services available from that account to ease the user experience.
The in-house web browser is not left behind and offers several improvements, including form autofill based on previous entries, as found in many browsers; the option can be disabled in the preferences if necessary. Bookmarks have been reworked: they are now shown in a side panel with an integrated search bar to find the one you want. The browser can also display the number of ad trackers that have been blocked. Unfortunately, syncing via Firefox Sync is currently unavailable because of a change Mozilla made to the authentication procedure.
The calendar application has also been improved, for instance with a padlock icon shown for events that are read-only. The layout is more consistent, notably in the spacing between visual elements. Importing or editing events handles hidden or read-only calendars better. The maps application has also seen small improvements, using vector maps by default and offering public-transport routes through the Transitous service rather than a commercial solution.
For those who record their screen to video, this can now be done with hardware acceleration where possible, reducing power consumption and improving system performance while recording. In the same vein, rendering in the GTK graphics library now goes through Vulkan, improving performance, particularly on older machines, with fewer visual glitches caused by slow operations. Similarly, the video and photo applications and the in-house web browser perform better thanks to a reduction, where possible, in the number of in-memory copies of video or image data.
For those who access their session remotely, the session can now be made persistent: if you are disconnected, you can come back later and find the session in the state you left it.
For advanced users, some experimental changes are on offer. If you want fractional scaling of the interface for applications using X11 via XWayland, you can enable it with the following command:
$ gsettings set org.gnome.mutter experimental-features '["scale-monitor-framebuffer", "xwayland-native-scaling"]'
The lightweight desktop environment LXQt moves to version 2.0. This major update is mostly technical, with a full port to the Qt 6 graphics library instead of Qt 5, which will soon no longer be maintained. Wayland support is available experimentally and should be stabilized in the upcoming 2.1 release.
The GIMP image editor now uses the development branch that will become version 3. This decision was made because GIMP had become the main reason to keep Python 2.7, unmaintained for several years, in the distribution. With GIMP 3 expected to be released shortly, it was decided to get slightly ahead in order to drop this rather heavy and complex dependency from Fedora.
Beyond that decision, this version of the application offers, among other things, better color management, including viewing, importing, and exporting images in CMYK. Graphics tablets get an improved user experience, including the ability to customize the actions of hardware buttons under Wayland, and support for HiDPI displays is also improved. Non-destructive editing is now possible, keeping layer effects separate from the image so they can be revisited later. If desired, a layer can resize automatically as it is edited, for example while drawing. And many other changes.
The task-list manager Taskwarrior moves to version 3. This version mainly changes how saved data is stored and is not backward compatible with the old method. It is therefore necessary to export your tasks with the old version using the command task export, and to import them with the new version using the command task import rc.hooks=0. Storage is now handled by a new TaskChampion backend written in Rust.
Updating the core of the atomic desktop systems can now be done without administrator rights, but not release upgrades, i.e., going from, say, Fedora Linux Silverblue 40 to Fedora Linux Silverblue 41. This was already the case for Fedora Silverblue via GNOME Software and has now effectively been generalized. The goal is to simplify the system update procedure, which on an atomic system is considered safer than on a traditional one: by design it is easy to roll back to the previous state, and only a small amount of software is installed in the core system.
Other operations are not covered at this stage, being considered too risky to entrust to an ordinary user. Some operations will always require the administrator password, such as installing a new local package, a full system rebase (switching to another working branch), or changing kernel parameters. Others, such as installing a package from a repository, updating, rolling back to a previous state, or undoing a command, can proceed without systematically asking for the password, much like sudo behaves when commands are not too far apart in time.
KDE Plasma Mobile Spin and Fedora Kinoite Mobile images are now available. The goal is to provide a native image with this environment that works on phones as well as tablets and small 2-in-1 laptops whose touchscreen can be detached from the keyboard.
Likewise, the Wayland-based tiling window manager Miracle is now packaged in Fedora and gets its own Spin. This modern interface also supports floating windows, supports the latest Wayland protocols, and allows the use of Nvidia's proprietary drivers. It is also light on resources, making it attractive on older or low-powered machines while still using a very modern and flexible graphics stack.
Fedora Workstation now installs with the Wayland display protocol only; the GNOME X11 sessions remain available and installable afterwards. This follows the long-running effort to make Wayland Fedora's default display protocol, and GNOME's gradual abandonment of X11. The current state of the system allows this default to be crossed, which also slightly slims down the installation media. For those who still want to use GNOME with X11 after installation, for whatever reason, the packages gnome-session-xsession and gnome-classic-session-xsession can still be installed from the official repositories.
Installing the proprietary Nvidia driver through GNOME Software is now compatible with systems using Secure Boot. This security mode ensures that every element of the machine's boot chain is signed with one of the authorized cryptographic keys. The goal is to prevent a third party from tampering with one of these components behind the user's back in order to mount an attack later. The GRUB bootloader, the Linux kernel, and its drivers are obviously covered, and installing the unsigned proprietary Nvidia driver could previously leave the machine unable to boot.
Even though Fedora does not ship this driver because it is non-free, the goal remains a functional and easy-to-use system. In this context, GNOME Software works around the limitation by using the mokutil tool to self-sign the Nvidia driver. The user enters a password when the package is installed, and on the next reboot the tool is shown to confirm the security key, thereby allowing the driver to load without trouble.
Support for MIPI cameras on systems using Intel IPU6, which covers many current laptops. Indeed, many models use the MIPI CSI-2 bus instead of the traditional USB UVC that was the norm until now, since this protocol allows higher bandwidth, consumes less power, and is easier to integrate. But support for this bus was not fully in place, because the images it delivers are fairly raw and require processing, notably white balance, demosaicing, and exposure and gain control. This is complex because each camera has its own characteristics that require a case-by-case approach in user space. Integration work spanning the Linux kernel, libcamera, PipeWire, and Firefox makes this possible: the kernel provides the base API and a driver for each camera model, along with a common driver for the protocol itself; libcamera captures the video stream and applies processing such as demosaicing appropriate to the model in question, then sends the resulting stream via PipeWire to the Firefox browser.
The Anaconda installer now supports hardware disk encryption through the TCG OPAL2 standard available on some SATA and NVMe devices, though this requires going through a kickstart file to customize the installation. The cryptsetup tool only gained support for this standard very recently; the idea is to pass the --hw-opal-only or --hw-opal argument to this utility in the kickstart file. The first argument enables hardware encryption only, which is recommended solely for devices where using the CPU for this task would badly hurt performance, while the second combines hardware and software encryption. There are no plans to enable this feature by default; it will remain an option for advanced users for a while, because the security of the whole setup depends on the quality of the firmware of these storage devices, which must be kept up to date over time, and that is not guaranteed.
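For reference, here is roughly what the underlying cryptsetup invocations look like. This is an illustrative sketch only: the device path is a placeholder, the command destroys the data on the target, and it requires an OPAL2-capable drive and cryptsetup ≥ 2.7.

```
# Hardware-only OPAL2 encryption (recommended only where CPU
# encryption would badly hurt performance); /dev/nvme0n1p3 is a placeholder
cryptsetup luksFormat --hw-opal-only /dev/nvme0n1p3

# Combined hardware and software encryption
cryptsetup luksFormat --hw-opal /dev/nvme0n1p3
```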
The tuned tool is now used by default instead of power-profiles-daemon for managing the machine's power. This is the tool that lets you switch between power-saving and performance modes to modulate CPU power according to the desired energy consumption, which is particularly appreciated on laptops. However, power-profiles-daemon is very simple: beyond these very generic modes, applied to the supported CPUs and hardware platforms, it allowed neither finer configuration nor the addition of custom modes, so advanced users were forced to install an additional utility such as tuned. A tuned-ppd package has been added that provides a D-Bus API compatible with the power-profiles-daemon interface, so applications such as the GNOME, Plasma or Budgie settings panels can use it directly as a drop-in replacement without regressions, while letting advanced users go further if they wish by editing /etc/tuned/ppd.conf, for instance to change settings device by device.
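The mapping file is a plain INI-style configuration. As a sketch of what /etc/tuned/ppd.conf may contain (the exact key names and tuned profile names here are assumptions and can vary between tuned-ppd versions):

```
[main]
# Profile selected when no specific request is active (assumed key name)
default=balanced

[profiles]
# power-profiles-daemon profile -> tuned profile
power-saver=powersave
balanced=balanced
performance=throughput-performance
```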
ROCm is updated to 6.2, improving AI and high-performance-computing support for AMD graphics cards and accelerators. Among other things it ships new components such as Omniperf for performance study and analysis, Omnitrace for tracing function execution on the CPU or GPU, rocPyDecode as a Python implementation of the rocDecode API for analyzing profiling data produced with that C/C++ tool, and ROCprofiler-SDK for identifying performance bottlenecks. It also supports the latest versions of PyTorch and TensorFlow.
The ACPI table development and debugging tool acpica-tools no longer supports big-endian architectures such as s390x. This standard, designed for little-endian machines, makes little sense on that architecture; the packages that needed it on s390x are gradually dropping the dependency, and since usage of this architecture remains low, especially for this purpose, it was decided to drop support. 49 of the 69 patches carried by this package were related to big-endian support, which upstream never wanted to adopt for lack of interest; this meant a lot of testing and development work that slowed down package updates. Those patches have now been removed.
PHP no longer supports 32-bit x86 processors. There were already no 32-bit PHP packages in the repositories, but PHP was still being built so that other packages depending on it could be built for that architecture. Restrictions have been added to those dependencies so that this is no longer blocking. PHP was often needed there only for tests or for plugins and extensions that could be disabled. As a reminder, the 32-bit x86 architecture has not been supported by Fedora for several years now; these packages are only usable on 64-bit x86 machines, for compatibility reasons. In return, this cleanup saves machine and developer time, since this case no longer has to be handled.
The default IBus input method for Traditional Chinese as used in Taiwan switches from ibus-libzhuyin to ibus-chewing. The underlying chewing library appears to have a dynamic community providing good maintenance, unlike libzhuyin, which is currently not maintained by a speaker of the language, causing some difficulties. The code also seems better organized and more maintainable.
The dnf package manager is updated to its 5th version. This version, written in C++ instead of Python, is much faster in use, consumes less disk space and requires fewer dependencies to run; overall it is 60% lighter on disk. In addition, dnf5daemon replaces PackageKit as the compatibility layer for dnf in GNOME Software, which notably allows the cache to be shared between the console and graphical interfaces, avoiding wasted disk space and bandwidth. Performance-wise, some operations are now parallelized, such as downloading and processing repository data, which should be up to twice as fast. Plugins are also better integrated, simplifying their installation and maintenance. However, some plugins have not been ported yet; you can follow the progress for those still missing, though this should affect few users. Some command-line options no longer exist, and you will be reminded of this if you invoke them. The history of past package transactions, such as updates or installations, is not compatible between the old and new versions, so you will not be able to see your old transactions, for example to roll them back.
Meanwhile the rpm command moves to version 4.20. This version can list or remove package-signing keys through the rpmkeys command, while the rpmsign tool can sign packages with the ECDSA algorithm. The rpm command itself can display output in JSON format, alongside the XML format that has long been supported. A new plugin, rpm-plugin-unshare, appears to prevent installation scripts from performing certain operations on the filesystem or over the network, for security reasons. On the packaging side, the introduction of the BuildSystem directive is probably the most important change: it makes it possible to define, in a single generic way, how to build packages based on common tools such as autotools or cmake. For these common tools the packager no longer needs to spell out each build step, except where something unusual is required, which allows better maintenance and consistency across the distribution.
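As a sketch of the idea, a spec file using the new declarative directive might look roughly like this. The package name, version, and source are placeholders, and the exact set of generated sections depends on the rpm 4.20 macros for the chosen build system.

```
Name:        example
Version:     1.0
Release:     1%{?dist}
Summary:     Example package built declaratively
License:     MIT
Source0:     example-1.0.tar.gz

# BuildSystem generates the default %prep/%build/%install steps
# for the named build system, here cmake.
BuildSystem: cmake

%description
Example of an rpm 4.20 declarative build.
```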
Fedora's atomic desktop systems and Fedora IoT now ship bootupd to update the bootloader. Updating the bootloader within an atomic system is not trivial, because it is not an easy operation to make reliable. Consequently rpm-ostree did not handle it, which is why bootupd was created, and it is now integrated into these editions. It had already been present for some time in the CoreOS edition, which provided real-world feedback. It can handle both UEFI and BIOS systems, but the update remains a manual step, to be automated in the future, notably once the shim component is updated to make the operation less risky on UEFI systems if the update is interrupted midway, for example by a power cut or a crash. It also makes it possible to block the use of older bootloader versions with known vulnerabilities, through the Secure Boot dbx, and the ostree-grub2 package can be progressively retired, which will notably put an end to the bug where each deployment is shown twice in the GRUB selection menu, and should reduce the risk of certain problems during system updates.
Fedora's atomic images now offer the dnf and bootc tools: the former is usable in a development context for now, while the latter can already be used to deploy bootable system images. Later on, dnf is expected to replace rpm-ostree for certain actions. In the meantime, when dnf is used on such systems, the error message is more explicit about which tools to use for these actions. The goal is to give system administrators more familiar tools for these different actions, while having a clearly identified tool for each kind of task.
Introduction of the fedora-repoquery tool for querying repositories: finding the exact version of a specific package in another Fedora release, the date a repository was last updated, or which packages depend on a given package (reverse dependencies), and so on. It works on top of dnf for this functionality, but makes it easy to retrieve information from the Fedora, CentOS or EPEL repositories.
The OpenSSL security library no longer accepts cryptographic signatures made with the SHA-1 algorithm. This algorithm is no longer considered safe, as it is becoming easier and easier to generate collisions on demand. If you want to allow such signatures again for legitimate reasons, despite the security risk, it is still possible with the command
# update-crypto-policies --set FEDORA40
This command should remain supported for a few more releases.
The NetworkManager network manager no longer supports configuration in the ifcfg format, which had already been obsolete for years. This follows the gradual push toward the keyfile format: Fedora Linux 33 made it the default format for new connection profiles, Fedora Linux 36 moved support for the old format into a dedicated package not installed by default, NetworkManager-initscripts-ifcfg-rh, and Fedora Linux 39 began automatically converting profiles to the new format. NetworkManager has long done no more than maintain the old format, and many options and connection types are simply not possible with it. This paves the way for removing support for this file format from NetworkManager itself in the future.
In the same vein, the network-scripts package has been removed, ending network management through the ifup and ifdown scripts. These tools have been considered obsolete since 2018 and slated for eventual removal, and upstream no longer maintains them very actively.
Network interfaces in the Cloud editions will use the modern default names (for example enp2s0f0), as adopted by the other editions years ago, instead of keeping the traditional names (such as eth0). This means the kernel will no longer receive the net.ifnames=0 parameter on these systems to preserve the old behavior. The rest of the ecosystem adopted the new naming scheme with Fedora... 15, back in 2011! The delay is attributable to problems that certain tools such as cloud-init had with this naming convention, which were only resolved in the late 2010s. Interface names will now map to physical locations, their role should be easier to identify, and the risk of problems following dynamic interface changes is reduced.
The libvirt virtualization manager now uses the nftables firewall by default instead of iptables for its virbr0 network interface. Fedora now uses nftables by default, and using iptables meant creating nftables rules under the hood anyway. This transition improves performance and reduces the risk of rules being accidentally deleted by a third-party application, since everything is placed in rules attached to the libvirt_network table. iptables will still be used if nftables is not present on the system, and the behavior can be changed in the /etc/libvirt/network.conf configuration file.
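If you need to force the old behavior, the relevant setting lives in that file. A sketch (the key name follows current libvirt documentation; the automatic backend selection order may vary between versions):

```
# /etc/libvirt/network.conf
# Force the firewall backend used for virtual networks.
# Accepted values include "nftables" and "iptables".
firewall_backend = "iptables"
```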
Netavark, the tool managing the container network stack, notably with podman, also now uses the nftables firewall by default instead of iptables. The benefits are much the same as in the previous point: the tool's rules are placed in a dedicated netavark table. The ability to submit rules in batches can also slightly improve container startup time.
The Kubernetes container orchestrator gets new versioned packages, allowing several versions to be installed in parallel. Versions 1.29, 1.30 and 1.31 are offered, under names such as kubernetes1.31. This had become necessary because Kubernetes maintains three versions at a time, with a new release roughly every four months, which makes such an arrangement unavoidable. It also decouples the Kubernetes version from the Fedora Linux version, which makes life easier for administrators.
CRI-O, the OCI implementation of the Kubernetes runtime interfaces, gets its own cri-o and cri-tools packages, likewise versioned so they can track Kubernetes releases.
Update of the GNU compiler suite: binutils 2.42, glibc 2.40 and gdb 15.
For binutils, the work focuses mostly on broader support for Aarch64, RISC-V and x86_64 instructions. It notably handles the extra registers and associated instructions introduced by Intel APX, the latest evolution of the x86 architecture. The BPF assembler improves its interoperability with the LLVM tools by following the same conventions.
The C standard library begins experimental support for the C23 standard. The ability to harden programs built with the Clang compiler has also been improved, getting closer to what is possible with GCC. Many math functions gain a vectorized version on Aarch64, which can improve performance on that architecture.
Finally, the debugger significantly improves its Python API, making it easier to drive from a program or script written in that language. Support for the Debug Adapter Protocol keeps improving, making it easier for the various IDEs that rely on it to integrate gdb. The target program's DWARF debug information is now read in a dedicated thread to improve load time.
The LLVM compiler suite is upgraded to version 19. Versioned packages of the previous releases remain available for those who need compatibility with the older libraries. The clang, compiler-rt, lld and libomp packages are now generated from the llvm package's spec file, which was not the case before. Among other things this simplifies their maintenance, and also allows Profile-Guided Optimization to be applied to these binaries to improve performance. Fedora packages built with Clang also benefit from compilation with the -ffat-lto-objects option, producing libraries that carry the LTO bitcode alongside the ELF binary, which reduces link time when these libraries are involved. All this is achieved without resorting to macros to post-process the packages after compilation, and without giving up compatibility for software not built with this mode enabled.
Python 2.7 is removed from the repositories; only the 3.x branch is maintained from now on. Well, that is true for the reference implementation: it remains possible to use Python 2.7 via PyPy, which still supports it through the pypy package. As a reminder, Python 2.7 has been unmaintained since early 2020, but keeping it was necessary for some packages that had still not finished porting, in particular GIMP, discussed above. The other affected packages were no longer really maintained anyway and have been removed. This had become necessary because with RHEL 7 reaching end of support soon, no more Python 2 fixes will be developed at all, making the situation even more critical.
Python, meanwhile, moves to version 3.13. This version provides a new interactive interpreter with coloring enabled by default for the prompt and for error messages. It offers multi-line editing that is preserved in history. The F1, F2 and F3 keys respectively give access to interactive help, history navigation, and a simpler paste mode for copying and pasting large blocks of code. Error messages are also clearer.
Beyond that, Python gains the long-awaited mode without the global interpreter lock (GIL), which improves performance and enables truly parallel threads within a program. As this mode is experimental, you must install the python3.13-freethreading package and run Python with the python3.13t command to benefit from it.
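As a sketch (everything except the standard threading API is my own illustrative code), here is the kind of CPU-bound workload whose threads can only truly run in parallel under the free-threaded interpreter; on a regular build the GIL serializes them, and the result is the same either way:

```python
import sys
import threading

def gil_enabled():
    # sys._is_gil_enabled() only exists on Python 3.13+;
    # on older interpreters the GIL is always enabled.
    probe = getattr(sys, "_is_gil_enabled", None)
    return probe() if probe is not None else True

def count_primes(lo, hi):
    # Naive CPU-bound work: count primes in [lo, hi).
    total = 0
    for n in range(lo, hi):
        if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_prime_count(limit, workers=4):
    # Split the range across threads. Under python3.13t these
    # threads can run in parallel on several cores.
    step = limit // workers
    results = [0] * workers

    def job(i):
        hi = (i + 1) * step if i < workers - 1 else limit
        results[i] = count_primes(i * step, hi)

    threads = [threading.Thread(target=job, args=(i,)) for i in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

print(gil_enabled(), parallel_prime_count(1000))
```

Run with `python3.13t` to exercise the free-threaded build; the same script works, more slowly for CPU-bound threads, on a regular interpreter.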
The just-in-time compiler, for its part, is not shipped in any form, as that feature is also experimental.
Python is also now compiled with the -O3 optimization enabled, in line with what the upstream project does, improving performance. According to the pyperformance benchmark, the gain is on average a 1.04× speedup from this option alone. Python was previously compiled with the less aggressive -O2 optimization; the new option, however, increases the size of the affected binaries by about 1.2% (that is, 489 KiB).
The Python test framework pytest moves to version 8. This version is not compatible with the previous one: many deprecated constructs are now treated as errors, and likewise the way tests are collected from a source tree has changed, which can cause various problems.
In terms of improvements, it offers a better display of diffs when a test fails, making them more readable and closer to the output of the diff command.
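For example, a minimal pytest-style test looks like this (the function and test names are illustrative); when the assertion fails, pytest 8 renders the string comparison as a readable, diff-like report:

```python
# A tiny function and a pytest-style test for it. Run under pytest,
# a failing assertion here produces the improved diff output.
def normalize(name):
    return name.strip().title()

def test_normalize():
    assert normalize("  ada lovelace ") == "Ada Lovelace"

if __name__ == "__main__":
    test_normalize()
    print("ok")
```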
The Go language is updated to version 1.23. This version introduces telemetry, collecting data about Go toolchain usage for the project's developers; in Fedora, telemetry is enabled by default but stays local to your machine, and nothing is sent to the project's servers. This behavior can be changed in the settings.
Otherwise, build time is improved when an optimization profile is used, going from an overhead of up to double the normal build time down to at most 10% now. Go applications use slightly less stack, and on x86_64, at the cost of a slight increase in binary size, loops can see a performance improvement of about 1-1.5%.
In the Haskell ecosystem, updates to GHC 9.6 and Stackage LTS 22. The compiler itself can now build code to run as a WebAssembly or JavaScript program; both backends are still considered in development, however, and may be subject to bugs. All error messages now carry a unique code, making it easier to search for an explanation and a solution.
The Perl language moves to version 5.40. A new CLASS keyword gives the actual run-time class of which the object instance is a member, which is useful in constructors of child classes since access to $self is not allowed in that context. Another new keyword, :reader, can be added to a class field to automatically define a method of the same name that returns the field's value. A new ^^ operator is available, the equivalent of && and || but for logical exclusive or.
Node.js 22 becomes the reference version, while versions 20 and 18 remain available in parallel. Among other things this version offers a native WebSocket client with no additional dependencies, and the usual update of the V8 JavaScript engine, to version 12.4, which notably brings a WebAssembly garbage collector. Streams move from a default buffer of 16 KiB to 64 KiB, which increases performance at the cost of memory consumption. Finally, the Maglev JIT compiler provided by V8 is enabled by default, improving performance especially for small command-line programs.
For licensing reasons, the Redis key-value database is replaced by Valkey. Redis adopted RSALv2/SSPL licensing in place of the BSD license; these are not free licenses, which conflicts with Fedora's rules on the licensing of software shipped in its repositories. Valkey is a fork of Redis that keeps the original license. To date no incompatibility is expected for users of this software, but a valkey-compat package is provided to migrate configuration and data from Redis. The switch is performed automatically for these users when upgrading Fedora.
The Python deep-learning library PyTorch lights up with version 2.4. The major change in this release is ROCm support, to take advantage of AMD's hardware acceleration for artificial intelligence. There are also performance improvements for those using GenAI on a CPU, or running on AWS Graviton3 processors based on the Aarch64 architecture.
OpenSSL's engine API is deprecated because it is unmaintained, while the ABI remains stable. This API does not conform to the FIPS standards and has not been maintained since OpenSSL 3.0. To ease the transition until its final removal, no new package may depend on it. The code for this API is provided by the separate openssl-engine-devel package for those who need it. The long-term goal is to simplify maintenance while reducing the attack surface.
The Fedora KDE edition for the AArch64 architecture is now release-blocking: the edition must be sufficiently stable for a new Fedora Linux release to go out. This was already the case for Fedora Workstation on that architecture and for Fedora KDE on x86_64. The goal is to guarantee a certain level of reliability for its users.
Phase 4 of the general switch to the short license identifiers from the SPDX project for package licenses, rather than Fedora's own names. This was meant to be the final phase, but a few setbacks have pushed the deadline back again. This step and the next are in fact the mass conversion of packages to the new format; as the tracking document reports, progress remains fast and nearly 98.5% of the licenses mentioned in packages have already been converted.
Java libraries no longer have an explicit dependency on the Java runtime, to simplify maintenance; nothing changes for applications. The goal is to avoid pinning a specific Java version for code that is not ultimately executed directly, that dependency belonging instead to the applications. This can make it easier for users and maintainers to use different JDKs with these libraries. It also considerably simplifies the maintenance of Java packages in Fedora, since the required JRE version no longer needs to be kept up to date.
The systemtap-sdt-devel package no longer ships the dtrace tool, which has been moved to the systemtap-sdt-dtrace package. The goal is to remove the Python dependency from this package, which is part of the image used to build Fedora packages. With this dependency gone, several hundred packages can be built faster.
A cleanup pass is added to RPM package builds to improve package reproducibility. For several years Fedora has been working to make its packages reproducible: a user should be able to rebuild a package on their side, from Fedora's RPM spec file and additional sources, and obtain exactly the same package, bit for bit, guaranteeing that the package was built from those elements without malicious alteration. It can also help development, since comparing two versions of a package becomes easier to analyze: only the changes in the code differ, not incidental build artifacts.
Recent work relies notably on the add-determinism program, which removes non-deterministic elements, such as the build date, from build output. The program is invoked at the end of the package build. Fedora did not reuse Debian's work based on the strip-nondeterminism script, a Perl script that would have added a relatively heavy dependency to the build of every Fedora package.
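The principle can be shown with a toy analogue (this is my own sketch, not the real logic of add-determinism): replace an embedded build timestamp with a fixed value, so that two builds of the same source produce byte-identical output.

```python
import re

# Fixed stamp substituted for the real build date, in the spirit of
# SOURCE_DATE_EPOCH-style reproducible builds.
FIXED_STAMP = "1970-01-01T00:00:00Z"

def normalize_build_stamp(text):
    # Rewrite any BUILD_DATE="..." attribute to the fixed value.
    return re.sub(r'BUILD_DATE="[^"]*"',
                  'BUILD_DATE="%s"' % FIXED_STAMP, text)

# Two builds of the same source, differing only in when they ran:
build_a = 'version="1.0" BUILD_DATE="2024-10-29T10:15:00Z"'
build_b = 'version="1.0" BUILD_DATE="2024-10-30T08:02:11Z"'
print(normalize_build_stamp(build_a) == normalize_build_stamp(build_b))  # prints True
```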
createrepo_c, which generates the metadata for Fedora's repositories, is updated to version 1.0. The stable and Rawhide releases of Fedora will now share the same metadata configuration, making the infrastructure side simpler and more consistent to maintain. All metadata is now compressed; previously, for the stable Fedora releases for example, only the primary metadata was. Some data and metadata used to be compressed with a mix of different algorithms; now everything uses the zstd algorithm, which should somewhat improve bandwidth and storage consumption. A future switch to zlib-ng for this purpose is not ruled out.
The sqlite files describing the contents of the repositories were only useful to the YUM package manager; since its replacement by DNF years ago, generating them was pointless and had a storage cost.
Borsalinux-fr is the association that promotes Fedora in the French-speaking world. For several years we have seen a gradual decline in members with up-to-date dues and in volunteers to take on the association's activities.
We are therefore issuing a call to join us and help out.
The association owns the official website of the French-speaking Fedora community, regularly organizes promotional events such as the Rencontres Fedora, and takes part in most major free-software events, mainly across France.
If you like Fedora and want our work to continue, you can:
We would be delighted to welcome you and help you get started. Any contribution, however small, is appreciated.
If you want a glimpse of our activity, you can join our monthly meetings, held on the first Monday evening of each month at 20:30 (Paris time). To make them friendlier, we hold them by videoconference on Jitsi.
Since June 2017, a major cleanup effort has been under way on the French-language Fedora documentation, to catch up on the five years of backlog accumulated on the subject.
The least we can say is that a lot of work has been done: nearly 90 articles corrected and brought up to date. Many thanks to Charles-Antoine Couret, Nicolas Berrehouc, Édouard Duliège and the other contributors and reviewers for their contributions.
Coordination of the work happens on the forum.
If you have ideas for articles or corrections to make, or technical knowledge to pass on, do not hesitate to take part.
If you already have Fedora Linux 40 or 39 on your machine, you can upgrade to Fedora Linux 41. This amounts to one big update; your applications and data are preserved.
Otherwise, no panic: you can download Fedora Linux and then install it. The procedure takes only a few minutes.
In both cases we recommend backing up your data first.
Also, to avoid unpleasant surprises, we recommend reading the important known bugs for Fedora Linux 41 beforehand.
Recently, someone suggested I should check out Alpine Linux and prepare a syslog-ng container image based on it. While not supported by the syslog-ng project, an Alpine-based syslog-ng container image already exists as part of the Linuxserver project.
The syslog-ng project already provides a Debian-based syslog-ng image. It has well over 50 million downloads. It is available on the Docker Hub. You can find more information about it at https://hub.docker.com/r/balabit/syslog-ng
The majority of syslog-ng users run syslog-ng on RHEL or a compatible operating system. Talking to these users, I found they were not so happy that we use a different OS as a base. I am now working on an RHEL-based syslog-ng container image. Well, almost :-) RHEL UBI (universal base image) based images can only be built on RHEL. CentOS, Alma Linux and Rocky Linux users would not be able to rebuild the image themselves. So, instead of RHEL UBI, I will use a compatible image for a wider reach. Most likely, you will be able to switch it to RHEL UBI with minimal changes, but I have not tested it yet.
If you use Alpine as a base OS, the resulting container image is usually a lot smaller than with other operating systems, and Alpine has a strong focus on security. So why do many enterprises avoid Alpine Linux? First, there is no commercial entity behind Alpine (Debian has the same “problem”). Second, it is a new OS to learn, both from the user and from the package-building perspective. And last but not least, there were some security problems in the container base images that people still remember after years.
The small size of the base image seems to outweigh the perceived problems for admins who do not need to run RHEL and its clones on their infrastructure. These users are now looking for Alpine Linux-based images, as they require a lot less storage and RAM.
The Linuxserver project provides many container images based on Alpine Linux. One of those is for syslog-ng. You can find it, along with a lot of useful information, at https://github.com/linuxserver/docker-syslog-ng. I tested the image and ran into a few problems. In this blog, I would like to share some of the workarounds I found while testing the image.
The problems also depend on whether you use Docker or Podman. Within the container, syslog-ng runs as a non-root user. You can set the UID/GID on the command line, and it works fine with Docker. If you set the ownership of the log directory to this on the host, syslog-ng inside the container can use it.
Podman is normally a drop-in replacement for Docker, but in this case it is a bit more difficult. There is no one-to-one user mapping, so the values set on the command line do not help. You can work around this by starting a shell (bash) within the running container. The output of “ps” shows you the username running syslog-ng. Change to the /var/ directory and change the ownership of the log directory accordingly.
Running as a regular user also means that only ports above 1024 can be used by syslog-ng. You can use port mapping to work around this problem, so from the outside, it looks like syslog-ng is running on a port below 1024.
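To make the workarounds above concrete, here is a hedged sketch as shell commands. The UID/GID values, port numbers, volume path, and the in-container username “abc” are illustrative assumptions, not values taken from the project's README:

```shell
# Docker: pick a UID/GID for the container user, and map privileged port 514
# on the host to an unprivileged port inside the container
# (IDs, ports and paths are illustrative assumptions).
docker run -d --name syslog-ng \
  -e PUID=1000 -e PGID=1000 \
  -p 514:5514/udp \
  -v /srv/syslog-ng/config:/config \
  lscr.io/linuxserver/syslog-ng

# On the host, give the same UID/GID ownership of the log directory:
sudo chown -R 1000:1000 /srv/syslog-ng/config/log

# Podman: check which user runs syslog-ng inside the container,
# then fix ownership from within the container itself:
podman exec -it syslog-ng ps -ef
podman exec -it syslog-ng chown -R abc:abc /var/log   # "abc" is a placeholder username
```

With the port mapping in place, clients can keep sending to the standard syslog port 514 while syslog-ng listens on an unprivileged port inside the container.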
Check the README on GitHub for the command line options and where to map the configuration or log directories.
Share your experiences with this syslog-ng container image with us on GitHub. Open a new discussion at: https://github.com/syslog-ng/syslog-ng/discussions
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
Attention testers! On behalf of the Testing and Continuous Delivery devroom, we'd like to announce that the call for participation is now open. This room is about building better software through a focus on testing and continuous delivery practices across all layers of the stack. The purpose of this devroom is to share good and bad examples around the question “how do we improve the quality of our software by automating tests, deliveries or deployments?” and to showcase new open source tools and practices.
Note: for FOSDEM 2025 this devroom is a merger of the former Testing and Automation and Continuous Integration and Continuous Deployment devrooms, and is jointly organized by the devroom managers from previous FOSDEM editions! Kiwi TCMS is proud to be part of the team hosting this devroom!
Important: the devroom will take place on Saturday, February 1st 2025 at the ULB Solbosch Campus, Brussels, Belgium! Presentations will be streamed online, but all accepted speakers are required to deliver their talks in person!
Here are some ideas for topics that are a good fit for this devroom:
Testing in the real, open source world
Cool Tools (good candidates for lightning talks)
In the past, the devroom has seen both newbies/students and experienced professionals and past speakers in the audience, with talks ranging from beginner/practical to advanced/abstract topics. If you have doubts, submit your proposal anyway and leave a comment for the devroom managers, and we'll get back to you.
To submit a talk proposal (you can submit multiple proposals if you'd like) use Pretalx, the FOSDEM paper submission system. Be sure to select Testing and Continuous delivery!
Check out https://fosdem-testingautomation.github.io/ for more information!
Happy Testing!
If you like what we're doing and how Kiwi TCMS supports various communities please help us!
Released on 2024-10-26.
This is a feature release that needs configuration file adjustments. See the following notes for the details.
Bodhi's update status checking has been overhauled, and some configuration options have changed:

- critpath.num_admin_approvals is removed. This backed the old Fedora "proventesters" concept, which has not been used for some years.
- critpath.min_karma is removed and is replaced by a new setting just called min_karma. This applies to all updates, not just critical path ones.
- critpath.stable_after_days_without_negative_karma is renamed to critpath.mandatory_days_in_testing and its behavior has changed: there is no longer any check for 'no negative karma'. Critical path updates, like non-critical path updates, can now be manually pushed stable after reaching this time threshold even if they have negative karma.

As before, these settings can be specified with prefixes to apply only to particular releases and milestones.

min_karma and (critpath.)mandatory_days_in_testing now act strictly and consistently as minimum requirements for a stable push. Any update may be pushed stable once it reaches either of those thresholds (and passes gating requirements, if gating is enabled). The update's stable_karma value is no longer ever considered in determining whether it may be pushed stable. stable_karma and stable_days are only used as triggers for automatic stable pushes (but for an update to be automatically pushed, it must also reach either min_karma or (critpath.)mandatory_days_in_testing).
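Pulling these settings together, an adjusted Bodhi configuration fragment might look like this. Note this is only a sketch: the values shown are illustrative assumptions, not taken from the release notes.

```ini
# production.ini fragment (sketch; values are illustrative assumptions)
min_karma = 2
critpath.mandatory_days_in_testing = 14
```

As described in the notes, both keys can also be given release/milestone prefixes to scope them to particular releases.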
The most obvious practical result of this change for Fedora is that, during phases where the policy minimum karma requirement is +2, you will no longer be able to make non-critical path updates pushable with +1 karma by setting that as their stable_karma value. Additionally:
- The date_approved property of updates is more consistently set as the date the update first became eligible for stable push (#5630).
- python-openid and pyramid-fas-openid are EOL, and we moved to OIDC authentication (#5601).
- Uses of datetime.datetime.utcnow() have been changed to datetime.datetime.now(datetime.timezone.utc). We previously assumed all datetimes were UTC-based; now this is explicit by using timezone-aware datetimes (#5702).

The following developers contributed to this release of Bodhi:
RPMs of PHP version 8.3.13 are available in the remi-modular repository for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
RPMs of PHP version 8.2.25 are available in the remi-modular repository for Fedora ≥ 39 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...).
The packages are available for x86_64 and aarch64.
There is no security fix this month, so no update for version 8.1.30.
PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.
These versions are also available as Software Collections in the remi-safe repository.
Version announcements:
Installation: use the Configuration Wizard and choose your version and installation mode.
Replacement of default PHP by version 8.3 installation (simplest):
dnf module switch-to php:remi-8.3/common
Parallel installation of version 8.3 as Software Collection
yum install php83
Replacement of default PHP by version 8.2 installation (simplest):
dnf module switch-to php:remi-8.2/common
Parallel installation of version 8.2 as Software Collection
yum install php82
And soon in the official updates:
To be noted:
Information:
Base packages (php)
Software Collections (php81 / php82 / php83)
Recently I was asked the same question both at my workplace and at EuroBSDCon, the conference where I was presenting: where do you talk next? I had no definite answer. Of course, I am looking forward to the FOSDEM CfP, but I am also looking for new conferences to present syslog-ng and sudo. Do you have any recommendations?
I have been presenting syslog-ng at various events for almost 15 years now, and sudo for 6 years. I have given many talks and held tutorials in Hungary and across Europe, and even some in the US.
To me, giving a talk is a two-way process: not just sharing information, but also learning from the users how they use our software, what their experiences are and what features they would like to see. You can read more about it in my OpenSource.com article at: https://opensource.com/article/21/1/open-source-evangelist
Over the years some of my favorite events disappeared, like LOADays, a small Linux / DevOps conference in Belgium. Other events have shifted focus, so talking about syslog-ng is not relevant anymore. Instead of going back to an ever-shorter list of events, I want to present syslog-ng at new places.
I can talk about syslog-ng, sudo, and their combinations.
When it comes to sudo, I usually talk about some of the lesser-known and/or latest features. Most people only know sudo as a prefix to run commands as root, but I can show that sudo can do a lot more than simply make you root. You can record sessions, extend it in Python, log sub-commands, and a lot more: https://opensource.com/article/22/2/new-sudo-features-2022
On the syslog-ng side there are also many possibilities, from why central logging is important to deep technical details, or configuring for various use cases: https://www.syslog-ng.com/community/b/blog/posts/developing-a-syslog-ng-configuration
I can also combine these two topics as you can work on sudo logs in syslog-ng. See my blog about the topic on the sudo website at https://www.sudo.ws/posts/2022/05/sudo-for-blue-teams-how-to-control-and-log-better/
I have some short, introductory tutorials for both syslog-ng and sudo, 2.5-3 hours each. The syslog-ng one helps you to get from the absolute beginner level to a good mid-level, allowing you to collect log messages, process them, filter them, and finally to store them to files and to Elasticsearch. The sudo tutorial lets you try some of the latest sudo features, like session recording, extending sudo in Python, or logging sub-commands.
So, returning to the original question: do you have any recommendations on where I should present syslog-ng and sudo? There are way too many events, so here are a few pointers that could narrow my preferences down:
Open source events, as both sudo and syslog-ng are open source software. FOSDEM, All Things Open, and many others belong in this category.
Security events, as sudo is definitely a security tool, and syslog-ng is also often used by security professionals. Bsides events or Pass the SALT belong in this category.
Obviously, the combination of the two is the best, like Pass the SALT.
Small events with a strong focus on networking both for speakers and participants. Some examples are the previously mentioned Pass the SALT, and LOADays. Discussions at these events had a huge impact on both sudo and syslog-ng development.
Large events, like FOSDEM, or All Things Open, as they allow reaching many users at once from all around the world. In-depth personal discussions are often impossible; however, I often hear months or even years later that my talks have induced some great changes, like adding syslog-ng to an appliance, creating Python-based drivers, and so on.
Operating system-specific events, like the openSUSE conference, or EuroBSDCon, which have larger syslog-ng communities and/or use sudo.
Where? The main focus is the EU simply because that is the easiest to reach for me. Not even a passport is needed. However, I also gave talks at some of the largest US conferences, like All Things Open, or the RSA Conference, and in Croatia before it became an EU member.
Yes, I am aware that some of these points contradict each other. Still, I need recommendations, as the number of events is huge. Personal experience and recommendations as a participant, speaker, or organizer could be really helpful in finding a few new places to present syslog-ng and/or sudo.
You can reach me in many ways:
LinkedIn: https://www.linkedin.com/in/peterczanik/
E-mail: peter.czanik at oneidentity.com
Twitter: https://x.com/PCzanik
Mastodon: https://fosstodon.org/@PCzanik
Thank you! Your help is highly appreciated!
The latest version of iOS brought some nice updates to allow you to customize the look and feel even further than before. It has allowed me to set 2 shortcuts on the home screen (one for my switch bot app and one for my custom MacBook controller web app). I also like the larger dark mode icons without the text labels below them. Here are my current screenshots and setup:
~
When Steve Jobs came back to Apple, he drew a 2×2 grid to reorganize the Mac product line up and offerings. I believe the iPhone needs a similar grid in 2×3 form, for example:
| iPhone | Small (< 6.0”) | Medium (> 6.0”) | Large (< 7.0”) |
| --- | --- | --- | --- |
| Consumer | 5.5” OLED screen, 2 cameras, embedded 120Hz display, medium storage | 6.1” OLED screen, 2 cameras, embedded 120Hz display, medium storage | 6.7” OLED screen, 2 cameras, embedded 120Hz display, medium storage |
| Prosumer | 5.7” OLED screen, 3 cameras, embedded 120Hz AOD, large storage | 6.3” OLED screen, 3 cameras, embedded 120Hz AOD, large storage | 6.9” OLED screen, 3 cameras, embedded 120Hz AOD, large storage |
Kamal’s configuration comes with one primary proxy role to accept HTTP traffic. Here’s how to think about proxy roles and how to configure others.
Roles are Kamal's way of splitting application containers by their assigned role. A web role runs the application server, a job role runs a job queue, and an api role might run an API. They all run the same Docker image but can be started with different commands:
# config/deploy.yml
...
servers:
web:
hosts:
- 161.232.112.197
job:
hosts:
- 161.232.112.197
cmd: bin/jobs
The web role is special, since it's a primary role that also runs Kamal Proxy:
# config/deploy.yml
...
servers:
web:
hosts:
- 161.232.112.197
proxy: true
primary_role: web
# proxy:
# ssl:
# host:
A primary role is the role that's booted first. Other roles boot only once at least one container of the primary role has passed its health check. We can change which role is primary with the primary_role directive.
A proxy role is a role that uses Kamal Proxy to accept requests. By default, Kamal sets up the web role to accept requests, as we have seen. But we can have more roles accepting requests.
Let’s say we want to run an API server alongside the main application:
# config/deploy.yml
...
servers:
web:
hosts:
- 161.232.112.197
api:
hosts:
- 161.232.112.197
env:
MODE: "api"
proxy:
host: api.example.com
cmd: bin/rails s
proxy:
ssl: true
host: example.com
Now Kamal will boot our application and shortly after start the API server. Since it's a proxy role, it also comes with the same health check run by Kamal Proxy (checking the /up path on port 80).
Similarly, we could split the application into a backend and a frontend:
# config/deploy.yml
...
servers:
frontend:
hosts:
- 161.232.112.197
proxy:
host: example.com
cmd: npm start
backend:
hosts:
- 161.232.112.197
proxy:
host: api.example.com
cmd: bin/rails s
primary_role: backend
proxy:
ssl: true
Still, only one role can be the primary role, and both roles have to share the same Docker image.
The new usb_set_wireless_status() driver API function can be used by drivers of USB devices to export whether the wireless device associated with that USB dongle is turned on or not.
To quote the commit message:
This will be used by user-space OS components to determine whether the battery-powered part of the device is wirelessly connected or not, allowing, for example: - upower to hide the battery for devices where the device is turned off but the receiver plugged in, rather than showing 0%, or other values that could be confusing to users - Pipewire to hide a headset from the list of possible inputs or outputs or route audio appropriately if the headset is suddenly turned off, or turned on - libinput to determine whether a keyboard or mouse is present when its receiver is plugged in.
This is not an attribute that is meant to replace protocol specific APIs [...] but solely for wireless devices with an ad-hoc “lose it and your device is e-waste” receiver dongle.
Currently, only two drivers use this: those for the Logitech G935 headset and the SteelSeries Arctis 1 headset. Adding support for other Logitech headsets would be possible if they export battery information (the protocols are usually well documented); support for more SteelSeries headsets should be feasible if the protocol has already been reverse-engineered.
As for consumers of this sysfs attribute, I filed a bug against Pipewire (link) to use it to consider the receiver dongle as good as unplugged when the headset is turned off, which would avoid audio being sent to headsets that won't hear it.
UPower supports this feature since version 1.90.1 (although it had a bug that makes 1.90.2 the first viable release to include it), and batteries will appear and disappear when the device is turned on/off.
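From user space, the new attribute can be inspected via sysfs. Here is a small sketch, assuming the attribute is exposed as a wireless_status file on the USB interface device (on systems without a supported dongle the loop simply prints nothing):

```shell
# Print the wireless_status attribute of every USB interface that exposes it.
# The exact sysfs path is an assumption based on the API name.
for f in /sys/bus/usb/devices/*/wireless_status; do
    [ -e "$f" ] || continue          # skip the unexpanded glob
    printf '%s: %s\n' "$f" "$(cat "$f")"
done
```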
A turned-on headset
Our team has put a lot of effort into supporting module builds in Copr. This feature went through many iterations and rewrites from scratch as the concepts, requirements, and goals of Fedora Modularity kept changing. This will be my last article on the topic, because we are planning to drop Modularity and all of its functionality from Copr. The only exceptions are Module hotfixes and module dependencies, which are staying for as long as they are supported by Mock and DNF.
The Fedora Modularity project never really took off, and building modules in Copr even less so. We’ve had only 14 builds in the last two years. It’s not feasible to maintain the code for so few users. Modularity has also been retired since Fedora 39 and will die with RHEL 9.
Additionally, one of our larger goals for the upcoming years is to start using Pulp as the storage for all Copr build results. This requires rewriting several parts of the backend code. Factoring in a reimplementation of module builds would mean many development hours wasted for very little benefit. All projects with modules will remain in the current storage until Modularity is finally dropped.
In the ideal world, we would keep the feature available as long as RHEL 9 is supported, but we cannot wait until 2032.
It was me who introduced all the Modularity code into Copr, so it should also be me who decommissions it. Feel free to ping me directly if you have any questions or concerns, but you are also welcome to reach out on the Copr Matrix channel, mailing list, or in the form of GitHub issues. In the meantime, I will contact everybody who submitted a module build in Copr in the past two years and make sure they don’t rely on this feature.
Cockpit is the modern Linux admin interface. We release regularly.
Here are the release notes from Cockpit 327:
Support for connecting to remote machines without Cockpit installed now extends to the standard Linux distribution packages.
The Cockpit Client flatpak has been able to connect to remote machines without Cockpit installed since its introduction in release 295, and the cockpit/ws container recently gained this functionality in version 326.
To ensure compatibility and safety, package-installed versions of Cockpit only allow connections to remote machines without Cockpit that run the same operating system version. Future releases may relax this limitation.
Cockpit 327 is available now: