This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 13 – 17 January 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
This is the 127th issue of syslog-ng Insider, a monthly newsletter that brings you syslog-ng-related news.
NEWS
A syslog-ng container image based on Alpine Linux
Recently, someone suggested I should check out Alpine Linux and prepare a syslog-ng container image based on it. While not supported by the syslog-ng project, an Alpine-based syslog-ng container image already exists as part of the Linuxserver project.
Last week, I submitted syslog-ng to openSUSE Leap 16.0. While the distro is still in a pre-alpha stage, everything already works for me as expected. Well, except for syslog-ng, where I found a number of smaller problems. As such, this blog is a call for testing, both for syslog-ng on openSUSE Leap 16.0 and also for the distribution itself.
https://www.syslog-ng.com/community/b/blog/posts/call-for-testing-syslog-ng-in-opensuse-leap-16-0
Experimental syslog-ng container image based on Alma Linux
The official syslog-ng container image is based on Debian Stable. However, we’ve been getting requests for an RPM-based image for many years. So, I made an initial version available based on Alma Linux and now I need your feedback about it!
Turbo is a great way to build user interfaces, but most Turbo forms have to wait for the server response. Here’s how I am adding a small loading spinner to the submit buttons to improve the UX.
Submit feedback
Whenever we submit a Turbo form, we are waiting for a response from the server without any big visual changes. This is especially noticeable inside modals and on slow connections.
To improve the situation we’ll create a small Stimulus controller that can be attached to any such form and suggests everything is working despite the wait:
Here’s the implementation of our small controller for lazy forms:
import { Controller } from "@hotwired/stimulus"

// Connects to data-controller="lazy-form"
export default class extends Controller {
  static targets = ["button"]

  submit(event) {
    const button = this.buttonTarget

    // Preserve the button's current width and height
    const buttonWidth = button.offsetWidth + "px"
    const buttonHeight = button.offsetHeight + "px"
    button.style.width = buttonWidth
    button.style.height = buttonHeight

    // Change the button text and disable it
    button.innerHTML = '<span class="small-loader"></span>'
    button.disabled = true
  }
}
The controller’s only job is to update the button without any interference to the submission process. I find it best when the button’s width doesn’t change, but you can change it to whatever’s needed.
You might also want to style the disabled state, e.g. with at least cursor: wait;.
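To illustrate how it might be wired up, here is a minimal sketch of a form using this controller; the form action and button label are placeholders, only the controller and target names come from the code above:

<form action="/posts" method="post"
      data-controller="lazy-form"
      data-action="submit->lazy-form#submit">
  <!-- the submit button the controller swaps for a spinner -->
  <button type="submit" data-lazy-form-target="button">Save</button>
</form>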
We’re thrilled to announce the new functionality of Log
Detective. The service can now explain in detail a
build log of your choice. Click on
logdetective.com/explain, paste a
link to your log, wait one minute and you should see a description of important
lines with the overall explanation.
Disclaimer
This is our initial prototype of this functionality. We appreciate all your
feedback. Please be patient, as there will be service disruptions, instability,
and usability issues.
1️⃣ The service can process only one request at a time. We understand this
is very limiting, especially when multiple people will use the service at the
same time. You may need to wait several minutes to get a reply in such a case.
We are thankful to the Fedora Project for sponsoring our infrastructure needs.
Please, reach out to us if you have experience in running LLM inference in
parallel on a single GPU. We will scale the service later based on the demand
in usage.
We are using a general-purpose LLM in the background. The data we are
collecting are not being utilized just yet in the answers. We still don’t have
enough data to fine-tune a model that would provide high-quality answers. Our
goal for 2025 is to research and train such a model.
🔬 The functionality is highly experimental. Our team has worked
diligently to deliver this prototype. However, this represents only a fraction
of the service’s full potential. We eagerly welcome all feedback and would
greatly appreciate it if you could report any unusual behavior, errors, or
tracebacks that you encounter.
Step-by-step guide
The new functionality is available under “Explain logs”.
The service only accepts links to raw logs; all of these are always available
for Copr and Koji builds.
Upon submission, Log Detective retrieves the log file and initiates the
processing phase. During this process, the service interacts with the
underlying LLM through multiple requests. It usually takes one minute to return
an answer.
The result is split into two sections:
The left side contains an overall explanation for the whole log.
Log Detective uses Drain to select
important lines from the log. You can see descriptions of them on the right
side after you expand them by clicking on a line.
In our example here, we can see the build failed because a required library
libtree-sitter.so.0.23()(64bit) is not available and cannot be installed. Log
Detective attempts to explain this issue, though it fails with the solution:
it’s not as easy as just installing the library with dnf. There is definitely
room for improvement in the final answer.
This is where you can help. If you are not satisfied with the answer and know a
better one, head over to Log Detective website homepage, enter the log link and
annotate it for us. Once we fine-tune our own model, the service can provide
much better explanations thanks to you.
Figured I'd post an update on how things are going with the new laptop (HP Omnibook Ultra 14, AMD Ryzen AI 9 365 "Strix Point", for the searchers) and with Silverblue.
I managed to work around the hub issue by swapping out the fancy $300 Thunderbolt hub for a $40 USB-C hub off Amazon. This comes with limitations - you're only going to get a single 4k 60Hz external display, and limited bandwidth for anything else - but it's sufficient for my needs, and makes me regret buying the fancy hub in the first place. It seems to work 100% reliably on startup, reboot and across suspend/resume. There's still clearly something wrong with Thunderbolt handling in the kernel, but it's not my problem any more.
The poor performance of some sites in Firefox turned out to be tied to the hanging problem - I'd disabled graphics acceleration in Firefox, which helped with the hanging, but was causing the appalling performance on Google sites and others. I've now cargo-culted a set of kernel args - amdgpu.dcdebugmask=0x800 amdgpu.lockup_timeout=100000 drm.vblankoffdelay=0 - which seem to be helping; I turned graphics acceleration back on in Firefox and it hasn't started hanging again. At least, I haven't had random hangs for the last few days, and this morning I played a video on youtube and the system has not hung since then. I've no idea how bad they are for battery life, but hey, they seem to be keeping things stable. So, the system is pretty workable at this point. I've been using it full-time, haven't had to go back to the old one.
I'm also feeling better about Silverblue as a main OS this time. A lot of things seem to have got better. The toolbox container experience is pretty smooth now. I managed to get adb working inside a container by putting these udev rules in /etc/udev/rules.d. It seems like I have to kill and re-start the adb server any time the phone disconnects or reboots - usually adb would keep seeing the phone just fine across those events - but it's a minor inconvenience. I had to print something yesterday, was worried for a moment that I'd have to figure out how to get hp-setup to do its thing, but then...Silverblue saw my ancient HP printer on the network, let me print to it, and it worked, all without any manual setup at all. It seems to be working over IPP, but I'm a bit surprised, as the printer is from 2010 or 2011 and I don't think it worked before. But I'm not complaining!
I haven't had any real issues with app availability so far. All the desktop apps I need to use are available as flatpaks, and the toolbox container handles CLI stuff. I'm running Firefox (baked-in version), Evolution, gedit, ptyxis (built-in), liferea, nheko, slack and vesktop (for discord) without any trouble. LibreOffice and GIMP flatpaks also work fine. Everything's really been pretty smooth.
I do have a couple of tweaks in my bashrc (I put them in a file in ~/.bashrc.d, which is a neat invention) that other Atomic users might find useful...
the gedit aliases let me do gedit somefile either inside or outside a container, and the file just opens in my existing gedit instance. Can't really live without that. You can adapt it for anything that's a flatpak app on the host. The xdg-open alias within containers similarly makes xdg-open somefile within the container do the same as it would outside the container.
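As a rough sketch of what such aliases could look like (the exact originals are not shown here; the flatpak-spawn --host forwarding and the container check are assumptions):

# Inside a toolbox/podman container, forward the command to the host;
# otherwise run the flatpak directly.
if [ -e /run/.containerenv ]; then
    alias gedit='flatpak-spawn --host flatpak run org.gnome.gedit'
    alias xdg-open='flatpak-spawn --host xdg-open'
else
    alias gedit='flatpak run org.gnome.gedit'
fi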
So it's still early days, but I'm optimistic I'll keep this setup this time. I might try rebasing to the bootc build soon.
The Open Source AI e-book explores the compelling combination of open source and artificial intelligence (AI), showcasing the benefits of open-source-licensed models and tools for developers. Open source AI for developers introduces and examines the key features of Red Hat OpenShift AI. These features include Jupyter Notebooks, PyTorch, advanced monitoring tools and […]
IBus 1.5.32 beta 1 will be released soon and it will support the Wayland input-method protocol version 2. Here I will summarize the configuration of Wayland desktop sessions.
I use the dnf command to install each desktop environment in Fedora. You may need a different method on other distros.
Weston desktop environment
I don’t remember the details, but I think Weston supports the Wayland input-method protocol version 1; the weston.ini file allows one command in the “input-method” section, and IBus provides ibus-wayland in a libexec directory.
GNOME Shell connects to ibus-daemon with D-Bus methods without using the Wayland input-method protocols. GNOME Shell also provides the panel feature itself instead of using the IBus panel component. It executes ibus-daemon with the "--panel disable" option via systemd.
Plasma Wayland supports the Wayland input-method protocol version 1 and IBus has supported it since version 1.5.29. IBus provides the /usr/share/applications/org.freedesktop.IBus.Panel.Wayland.Gtk3.desktop file, which can be loaded by the Plasma configuration. You can follow the setup below. Alternatively, run the ibus start --type kde-wayland command manually. (You may also want to add the --verbose option to the ibus start command.)
Run systemsettings command to launch the desktop setting application
Select “Keyboard” -> “Virtual Keyboard” section
Select “IBus Wayland” icon and click “Apply” button
Sway desktop environment
Sway supports the Wayland input-method protocol version 2 and IBus will support it in 1.5.32. You can follow the setup below or run the ibus start --type wayland command manually. I use waybar to show the IBus panel icon, since swaybar does not support StatusNotifier yet.
With the input-method protocol version 2, unlike version 1, users can enable IBus without any desktop-specific configuration (i.e. the --type kde-wayland option executes KDE-specific operations internally, while the --type wayland option only runs the IBus components). Version 2 also supports different serial timing, customizing the key repeat rate, and multiple seats, but it dropped support for customizing preedit colors, which was supported in the text-input protocol version 1 and input-method protocol version 1.
% sudo dnf group list --hidden
% sudo dnf group install sway-desktop
% cat ~/.config/sway/config
include "/etc/sway/config"
# IBus uses Super-space to show the IME switcher popup.
unbindsym $mod+Shift+space
bindsym $mod+Shift+backslash floating toggle
unbindsym $mod+space
bindsym $mod+backslash focus mode_toggle
input * {
xkb_layout "us"
xkb_variant ","
xkb_options "ctrl:swapcaps"
}
bar {
swaybar_command waybar
}
exec_always ibus start --type wayland
Hyprland desktop environment
Hyprland supports the Wayland input-method protocol version 2 and IBus will support it in 1.5.32. You can follow the setup below or run the ibus start --type wayland command manually. Since Hyprland locks the ~/.config/hypr/hyprland.conf file, users seemingly cannot override the configuration file. To unlock it, you might need to comment out the “autogenerated=1” line in hyprland.conf until you finish modifying the file for the IBus configuration.
% sudo dnf install hyprland
< Log into the Hyprland desktop session to generate the default hyprland.conf file >
% ls ~/.config/hypr/hyprland.conf
~/.config/hypr/hyprland.conf
% vi ~/.config/hypr/hyprland.conf
# autogenerated=1
exec-once = waybar
exec-once = ibus start --type wayland
input {
kb_layout = us
kb_variant =
kb_model =
kb_options = ctrl:swapcaps
kb_rules =
follow_mouse = 1
sensitivity = 0 # -1.0 - 1.0, 0 means no modification.
touchpad {
natural_scroll = false
}
}
% cat ~/.config/kitty/kitty.conf
font_family family="Source Code Pro"
COSMIC desktop environment
COSMIC supports the Wayland input-method protocol version 2 and IBus will support it in 1.5.32. You can follow the setup below or run the ibus start --type wayland command manually.
% sudo dnf group install cosmic-desktop
% cat ~/.config/cosmic/com.system76.CosmicComp/v1/xkb_config
(
rules: "",
model: "pc105",
layout: "us",
variant: "",
options: Some(",ctrl:swapcaps"),
repeat_delay: 600,
repeat_rate: 25,
)
% cat ~/.config/autostart/org.freedesktop.IBus.Panel.Wayland.Gtk3.desktop
[Desktop Entry]
# IBus Wayland is a branding name but not translatable.
Name=IBus Wayland
Exec=ibus start --type wayland
Type=Application
Icon=ibus
NoDisplay=true
X-Desktop-File-Install-Version=0.27
OnlyShowIn=COSMIC
Welcome to 2025. It's going to be a super busy year in Fedora land,
all kinds of things are going to be worked on and hopefully rollout
in 2025. Back from the holiday break things started out somewhat slow
(which is great) but things have already started to ramp up.
First up I fixed a few issues that were reported during the break:
Our infrastructure.fedoraproject.org host wasn't redirecting http to https
(as all our other hosts do). Turns out we disabled this many years
ago because virt-install couldn't handle https for some reason. I think
this was in the RHEL6 days, and we just never went back and fixed it.
This did end up breaking some provisioning kickstarts that had http
links in them, but easy (and good!) to fix.
Some redirects we had for sites were redirecting to just {{ target }}
variables, but if that was examplea.fedoraproject.org////exampleb.com
it would redirect to examplea.fedoraproject.org.exampleb.com.
A totally different domain. Fixed that by making sure there was
a / at the end of the redirect. Unfortunately, I also broke the codecs
site for basically everyone. ;( It was broken for a few hours, but easy
to fix up after I was made aware.
There's been a ton of f42 changes flowing before/during/after the holidays.
Lots of exciting things. Hopefully it can all land and work out ok.
I finally started in on work for a riscv-koji secondary arch hub.
It would have been really easy, but we dropped all the secondary things
from ansible after the last secondary arch hub went primary, so I am
having to go through and adjust a lot of things. Making good progress
however. Hopefully something to show next week. This secondary hub
will allow maintainers to login/do scratch builds and get things
one step closer to primary. There's still a long road for it though, as
we need actual local builders and proof of keeping up with primary.
Next I cleaned up some space on our netapp (because see below).
I archived some iot composes and marked a bunch more for deletion.
As soon as I get the ack on those, that should free up around 5TB.
I noticed our dist-repos were really pretty large. This turns out to
be two issues: First, we were keeping 6 months of them. We did that
because we use these for deployments and before if all repos were
older than the last change to them, they just would be missing.
I am pretty sure now that kojira keeps the latest one, so this is no
longer a factor. I set them to 1 week (as default). This should free
up many TBs. Secondly, the flatpak apps repos were not using the
latest (they were pulling everything). Adjusted that and it should
save us some.
Finally, I nuked about 35TB of old db backups. There's no reason
to keep daily database dumps since 2022. I kept one from each
month, but we have never had to go back years for data. In particular
the datanommer and koji db's are... gigantic, even compressed.
Unfortunately it will be a while before this is actually freed
as we have a lengthy snapshot retention policy on the backup
volume. Still should help a lot down the road.
With some freed up space, I now could make another iscsi volume
and move our ppc64le buildvm's off to it. It took far longer than
I would like to admit to get things working (turns out I had
logged in on the client before setting up everything on the netapp
and it just didn't see the lun until I ran a refresh).
I expected there to be a pretty vast speed improvement, and the
vm's are indeed a lot more responsive and install much faster, but
I am not sure builds are really that much faster. Will need some
more poking around to find out. The local disks on those are
7200 rpm spinning sata drives. The iscsi is a ssd backed netapp over
10G network. Unfortunately we have also seen instability in the hosts
in the last week, which is likely a kernel issue since I updated them
to 6.12. Hopefully I can get everything sorted out before the
f42 mass rebuild, which is next Wednesday.
I’ve been trying to get things in order before the end of last year and into this new year of 2025 so I can lessen any worries that might come up. I was doing a complete re-write of the core network-wide proxy service, and this time I have also tried to write it in both Python and C so the code bases mostly match and line up with each other. The main problem, I think, is that I previously tried to generalize and condense the code base too much, and this can cause problems, as packet tracking for UDP and connection state for TCP differ from each other. This time I have separated out each component into its own dedicated section; the downside is that it’s a much larger code base with potentially duplicated code snippets. It’s a strange kind of project and hard to describe, but the best way I can think of it is as a transparent Layer 4 MITM/proxy service.
The core components now have the following layouts:
Protocol   Mode     Operation
--------   ------   ---------
UDP        Client   Send
UDP        Client   Read
UDP        Server   Send
UDP        Server   Read
TCP        Client   Send
TCP        Client   Read
TCP        Server   Send
TCP        Server   Read
I’m still fine tuning & adjusting it but the source code links can be found here:
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 06 – 10 January 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Not quite as succinct as I would have liked, but this does work. This is a one-liner in Bash, and I had hoped to do it with a single python call once I had the repo object established. In general, diffs are not simple to do on blob objects. I had trouble using the difflib and set approaches to diffing the two files, possibly because I was originally not decoding the streams. Thus all the splitlines work can probably be simplified.
#!/bin/python3
import re
import io
import difflib
from git import Repo
from git import TagReference

path_to_dir = "./linux"


def check_topic_diff():
    repo = Repo.init(path_to_dir)
    last = None
    previous = repo.head

    # Find the last tag matching the pattern
    for tagref in TagReference.list_items(repo):
        match = re.search("^your_tag_pattern_here.*$", tagref.name)
        if match is None:
            continue
        last = tagref

    if previous is None:
        print("no previous, exiting")
        return
    if last is None:
        print("No last, exiting")
        return

    prev_tree = previous.commit.tree
    last_tree = last.commit.tree

    # Read both versions of the file (e.g. HEAD:PATCHES_BY_TOPIC) as text
    prev_contents = prev_tree["PATCHES_BY_TOPIC"].data_stream.read().decode('utf-8')
    last_contents = last_tree["PATCHES_BY_TOPIC"].data_stream.read().decode('utf-8')

    prev_lines = prev_contents.splitlines()
    last_lines = last_contents.splitlines()

    # Compare line by line
    are_diffs = len(prev_lines) != len(last_lines)
    for prev_line in prev_lines:
        if not last_lines:
            break
        last_line = last_lines.pop(0)
        if last_line == prev_line:
            continue
        are_diffs = True

    if are_diffs:
        print("files differ")
    else:
        print("files are the same")
    # print(last_contents)
    # print(prev_contents)
    # print(set(prev_contents).difference(set(last_contents)))


check_topic_diff()
Here’s how to implement autosaving for inline input fields the Hotwire way.
Autosaved forms
What’s autosave? Autosaving means saving user input automatically on changes, on lost focus, or after an interval of inactivity, without any explicit user action. It's typically used in inline forms.
To make things straightforward, let’s say we want to save a post’s title while reusing an existing update action that can save the title or perhaps all the post’s attributes.
There are a couple of options to go about it, but here’s how I do it. You just need Turbo and Stimulus installed.
Stimulus autosave
Since we’ll remove the usual ‘Save’ button from the form, we’ll need an auto-submission done in a different way. We can create a small Stimulus autosave controller that will be able to autosave anything by submitting the model form:
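A minimal sketch of what such a controller could look like (the controller name and action method are illustrative; the form target and requestSubmit call are described just below):

import { Controller } from "@hotwired/stimulus"

// Connects to data-controller="autosave"
export default class extends Controller {
  static targets = ["form"]

  save() {
    // requestSubmit() (not submit()) so that Turbo can intercept the submission
    this.formTarget.requestSubmit()
  }
}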
The controller has one form target to determine which form to submit. You could even avoid a target and find the form element, but this makes it explicit.
We need to call requestSubmit instead of submit, otherwise we wouldn’t get a Turbo request.
Tip: There is an Auto Submit controller available as a package.
The form
The form to submit the action stays exactly the same: we keep our form_with, and we keep the same url if we specified one. We just wrap it with data-controller set to our Stimulus controller name and provide data attributes on the form and the field in question:
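As a rough sketch (assuming a Post model; the status element ID and exact attribute names are illustrative and need to match your own controller):

<div data-controller="autosave">
  <%= form_with model: @post, data: { autosave_target: "form" } do |form| %>
    <%= form.text_field :title, data: { action: "blur->autosave#save" } %>
  <% end %>

  <%# element whose ID the Turbo Stream will target; the ID is an assumption %>
  <div id="autosave_status"></div>
</div>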
I used the blur event to autosave on lost focus but you could also use change to immediately save any progress (at the expense of many HTTP calls).
I also prepared a small <div> with an ID that will inform the user about the saving status. The type of the element is irrelevant, it can be a <turbo-frame> too. The important thing is the ID.
Note that Turbo could also replace the whole form or in a different approach you could even morph the whole screen.
Turbo stream
Given that our update action remains the same as with traditional forms, the form would already be submitted but we would get the usual redirection. Instead, we want to return a Turbo stream:
class PostsController < ApplicationController
  ...

  def update
    if @post.update(post_params)
      respond_to do |format|
        format.turbo_stream
        format.html { redirect_to post_path(@post), notice: "Post was successfully updated." }
      end
    else
      respond_to do |format|
        format.turbo_stream
        format.html { render :edit, status: :unprocessable_entity }
      end
    end
  end
end
Now we have an autosave, but without any feedback for the user. To add it, we’ll instruct Turbo to replace our <turbo-frame> with either the message Saved. or the error in the view:
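A minimal sketch of such a Turbo Stream view, assuming the status element ID from the form above (the template path and the error handling are assumptions):

<%# app/views/posts/update.turbo_stream.erb %>
<%= turbo_stream.replace "autosave_status" do %>
  <div id="autosave_status">
    <% if @post.errors.any? %>
      <%= @post.errors.full_messages.to_sentence %>
    <% elsif @post.title_previously_changed? %>
      Saved.
    <% end %>
  </div>
<% end %>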
Asking on title_previously_changed? will only output the message if any actual changes happened.
And that’s it! You have an inline form with autosaving.
Warning
I spent a considerable amount of time chasing an issue with wrong field focus. If that happens to you, go through all of your forms and make sure each input field has a unique ID. If you have two forms for the same model on the page, they will have different form IDs but the same input IDs.
A little over three months ago, I undertook a migration from Jeedom to Home Assistant. I discussed and explained this choice in a previous article. Today, a bit more than two months after this switch and the deactivation of my Jeedom instance, it is time for a first assessment. Is it […]
New year, new blog post! Fedora's going great...41 came out and seems to be getting good reviews, there's exciting stuff going on with atomic/bootc, we're getting a new forge, it's an exciting time to be alive...
But also! I bought myself a new laptop. For the last couple of years I've been using a Dell XPS 13 9315, the Alder Lake generation. I've been using various generations of XPS 13 ever since Sony stopped making laptops (previously I used a 2010 Vaio Z) - I always found it to be the best thin-and-light design, and this one was definitely that. But over time it felt really underpowered. Some of this is the fault of modern apps. I have to run a dumb amount of modern chat apps, and while they're much nicer than IRC, they sure use a lot more resources than hexchat. Of course I have a browser with about 50 tabs open at all times, Evolution uses quite a lot of memory for my email setup for some reason, and I have to run VMs quite often for my work obviously. Put all that together, and...I was often running out of RAM despite having 16GB, which is pretty ridiculous. But even aside from that, you could tell the CPU was just struggling with everything. Just being in a video chat was hard work for it (if I switched apps too much while in a meeting, my audio would start chopping up for others on the call). Running more than two VMs tends to hang the system irretrievably. Just normal use often caused the fan to spin up pretty high. And the battery life wasn't great. It got better with kernel updates over time, but still only 3-4 hours probably.
So I figured I'd throw some hardware at the problem. I've been following all the chipset releases over the last couple of years, and decided I wanted to get something with AMD's latest silicon, codenamed "Strix Point", the Ryzen AI 3xx chips. They're not massively higher-performing than the previous gen, but the battery life seems to be improved, and they have somewhat better GPUs. That pretty much brought it down to the Asus Vivobook S 14, HP Omnibook Ultra 14, and Lenovo T14S gen 6 AMD. The Asus is stuck with 24GB of RAM max and I'm not a huge Asus fan in general, and the HP came in like $600 cheaper than the Thinkpad with equivalent specs, and had a 3 year warranty included. So I went with the HP, with 1TB of storage and 32GB of RAM.
I really like the system as a whole. It's heavier than the XPS 13, obviously, the bezels are a little bigger, and the screen is glossier. But the screen is pretty nice, I like the keyboard, and the overall build quality feels pretty solid. The trackpad seems fine.
As for running Fedora (and Linux in general) on it...well, it's almost great. Everything more or less works out of the box, except the fingerprint reader. I don't care about that because I set up the reader on the XPS 13 and kinda hated it; it's nowhere near as nice as a fingerprint reader on a phone. Even if it worked on the HP I'd leave it off. The performance is fantastic (except that Google office sites perform weirdly terribly on Firefox, haven't tried on Chromium yet).
But...after using it for a while, the issues become apparent. The first one I hit is that the system seems to hang pretty reproducibly playing video in browsers. This seems to be affecting pretty much everyone with a Strix Point system, and the only 'fix' is to turn off hardware video acceleration in the browser, which isn't really great (it means playing video will use the CPU, hurting battery life and performance). Then I found even with that workaround applied the system would hang occasionally. Looking at the list of Strix Point issues on the AMD issue tracker, I found a few that recommended kernel parameters to disable various features of the GPU to work around this; I'm running with amdgpu.dcdebugmask=0x800, which disables idle power states for the GPU, which probably hurts battery life pretty bad. Haven't had a hang with that yet, but we'll see. But aside from that, I'm also having issues with docks. I have a Caldigit TS3+, which was probably overpowered for what I really need, but worked great with the XPS 13. I have a keyboard, camera, headset, ethernet and monitor connected to it. With the HP, I find that at encryption passphrase entry during boot (so, in the initramfs) the keyboard works fine, but once I reach the OS proper, only the monitor works. Nothing else attached to the dock works at all. A couple of times, suspending the system and resuming it seemed to make it start working - but then I tried that a couple more times and it didn't work. I have another Caldigit dock in the basement; tried that, same deal. Then I tried my cheap no-name travel hub (which just has power pass-through, an HDMI port, and one USB-A port on it) with a USB-A hub, and...at first it worked fine! But then I suspended and resumed and the camera and headset stopped working. Keyboard still worked. Sigh. I've ordered a mid-range hub with HDMI, ethernet, a card reader and four USB-A ports on it off Amazon, so I won't need the USB-A hub daisychain any more...I'm hoping that'll work well enough. If not, it's a bit awkward.
So, so far it's a bit of a frustrating experience. It could clearly be a fantastic Linux laptop, but it isn't quite one yet. I'd probably recommend holding off for a bit while the upstream devs (hopefully) shake out all the bugs...
Once I saw in my unofficial syslog-ng repo that syslog-ng compiles fine on EPEL 10, I also started to work on the official package. I hit a roadblock immediately: ivykis (a mandatory dependency of syslog-ng) was missing from EPEL 10. So, right before the Christmas holidays, I submitted two missing dependencies I maintain (ivykis and riemann-c-client) to EPEL 10. As of today, all mandatory and most optional syslog-ng dependencies are available either in the base OS or in EPEL 10.
Last week, I submitted syslog-ng 4.8.1 to EPEL 10. Three dependencies are missing, thus the related features are disabled. These missing dependencies are SQL support, MQTT support and SMTP support. I suspect that SQL support will stay missing, while MQTT and SMTP might arrive later on. At least these packages arrived with some delay to EPEL 9.
Testing
Right now, syslog-ng is not yet in EPEL 10, but available in the testing repository. First, you have to enable EPEL, then you can install syslog-ng using the following command:
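For example, something along these lines should work (the exact repository setup may vary on your system):

sudo dnf install epel-release
sudo dnf install --enablerepo=epel-testing syslog-ng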
Once syslog-ng is migrated to the stable repository from testing, you can stop using the --enablerepo=epel-testing option.
What is next?
RHEL 10 and compatible operating systems are not yet available, but you can get a preview if you install CentOS Stream or Alma Linux Kitten. Using these, you can have a well-tested EPEL 10 environment with syslog-ng by the time of the official release. Over time, I expect some of the missing dependencies to appear in EPEL 10, so I can add them to the package.
-
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
I recently moved a home server from Fedora Linux 41 to CentOS Stream 10. The reason
for the move was not related to Fedora, but due to a series of hardware
failures on a 4-year-old server. The internal NVMe died and the 2 SSD
drives I had as a raid array for running virtual guests, mock builds and
other things started reporting SMART errors. With the recent release of
CentOS Stream 10, and most of my day work going to be related to it, I
figured it was time to buy some replacement drives and install a new
operating system.
While doing this reinstall, I saw other people were running into
issues and posting on mailing lists and forums for help. I figured I
would try to collect all the info in one place and see if it would help
people.
Problem 1: Old Hardware
Problem Description
You have tried to install CentOS Stream 10 on your system which ran
previous versions of CentOS without problems. However now the system
will crash at different points in the install or not boot at all from
the install media. Re-cutting the ISO or other items does not fix the issue.
Why is this happening?
The issue is that CentOS Stream 10 only supports hardware that has
the x86-64-v3 instruction
set. This is an instruction set which Intel introduced in 2013 with
the Haswell chipset, and AMD added similar instructions in 2015. Sadly,
Intel did not bring the full instruction set to all the CPUs they sold
after 2013, so there are various ‘low end’ Atom and similar CPUs built
from 2013 to maybe 2020 which will not work.
The reason for the change in instruction sets comes down to Red Hat
Enterprise Linux being aimed at data center hardware which
‘theoretically’ gets replaced on an 8 year life cycle. Owners of this
newer hardware usually want to use the hardware as much as possible, so they
want libraries and tools to use the newer instructions. There is also
the fact that supporting older hardware with newer code becomes harder
and harder, ranging from manufacturers no longer giving driver
updates to the general problem of newer software using more memory and
CPU cycles. People with older hardware are generally going to need to
stick with CentOS Stream 9 until it reaches its EOL in early 2027 or
move to another operating system which aims to support older
systems.
What can I do?
This is a fundamental instruction set change. The only fixes are to
buy newer hardware, stay on an EL clone of 8 or 9, or move to a
different operating system.
Problem 2: Server Going to Sleep
Problem Description
The system was installed with the default ‘Server with GUI’ and after
the install completes the system goes into ‘deep sleep’ mode after no
activity on the GUI console. This may only occur after a user has logged
in one time to the server console, but may also happen without any user
interaction.
Why is this happening?
My first worry was that the system had died, but when pressing a key,
the system powered back up from a low-power state. Doing a search for
bugs, I ran into problems with Red Hat’s https://issues.redhat.com
search capabilities and ended up looking through stack overflow and
other places. The issue seems to be that the default GNOME settings
assume that this is either a ‘desktop’ or ‘laptop’ which when not in
constant use will want to sleep to save power. This is not a useful
setting for a 24x7 server. It also did not help that the GUI
settings to turn this off mentioned on the GNOME pages do not seem to
exist in the version of GNOME shipped in the current CentOS Stream 10
(2024-12-30).
What can I do?
Doing some other searches I was able to cobble together some command
line options which turned off the settings.
## Set the inactive sleep to 0 when there is power to the
## system.
$ sudo -u gdm dbus-run-session gsettings \
set org.gnome.settings-daemon.plugins.power \
sleep-inactive-ac-timeout 0
## Do the same for the main user
$ dbus-run-session gsettings \
set org.gnome.settings-daemon.plugins.power \
sleep-inactive-ac-timeout 0
I am expecting that this will be reported to Red Hat at some point
and can be fixed in the install settings so new users won’t have this in
the future.
Problem 3: Continuous Prompt Error Mentioning EPEL
Problem Description
After enabling an external repository like the EPEL-10, the system
will continually pause during shell entries and give an error like:
Failed to search for file: Failed to download gpg key for
repo 'epel': Curl error (37):
Could not read a file:// file for
file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever_major
[Couldn't open file
/etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-$releasever_major]
Why is this happening?
This problem occurs because of two helpful changes coming into
contact with each other.
The first helpful tool is that the various shells in CentOS Stream 10
come with helpers which will catch ‘command not found’ and try to figure
out if you mistype it:
vm
bash: vm: command not found...
Similar command is: 'mv'
or if you need to install an additional package set from a
repository. This can be helpful even on systems which are controlled by
a tool like puppet or ansible as sometimes
something expected was just missed on the system.
The second helpful change is that a much used additional repository
EPEL is trying
to fix a recurring problem with EL releases. Since the release of EL8
in 2019, Red Hat has been doing a major release every 3 years and around
10 minor releases per major release, one every 6 months. In a change from
earlier Enterprise Linux, where major re-bases did not happen that much,
each minor release may see one or more subsystems upgraded or ‘aged out’
as what Red Hat calls an application stream is no longer supported.
This is useful for systems which find they need a newer python for some
reason but still need to be on an older base operating system. However, this is
also hard for people using external software which may no longer have
the base libraries it needs.
In order to fix this, EPEL has decided that for EL10 they would build
software against CentOS Stream 10, and then branch every minor Red Hat
Enterprise Linux 10.x release with updates and changes which match what
is in a particular minor release. If a system administrator has decided
that a system needs to stay on 10.2 even after 10.3 is released, they
will be able to set a variable which will keep EPEL working without having
to mirror a version from the Fedora archives. Other repositories like
rpmfusion should be able to do this also.
The problem is that the helper is not able to parse the new format of
the EPEL release files and so isn’t filling out the variable
$releasever_major. Because it is failing, the package cache
looks to be invalid so it tries to renew the cache EVERY time. This
becomes maddening if you have a shell prompt which is looking for a
‘command’ which hasn’t been installed on the system yet. My shell has an
additional helper looking for git and such. It manipulates the
PS1 variable and so every command was giving me an error,
then a long pause, and the repeated line about being unable to download the gpg
key.
What can I do?
The command lookup helper in question comes with GNOME
PackageKit. It may take a while for it to be updated to understand the
new repository variable $releasever_major, so at the
moment I suggest getting rid of the helper.
$ sudo dnf remove PackageKit-command-not-found
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server.
You can use subscription-manager to register.
Dependencies resolved.
Any shells currently in use may need to be restarted or re-source their
.profile in order to stop erroring.
Problem 4: Unwanted Subscription Manager Messages
Problem Description
Whenever a dnf command is given, there is a warning
printed out saying:
[root@xenadu etc]# dnf update
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server.
You can use subscription-manager to register.
Why is this happening?
This warning comes from the fact that CentOS Stream is the ‘upstream’
of Red Hat Enterprise Linux and, because of that, there is a desire to make
the two install as similar a set of packages as possible for testing.
However, in the case of ‘subscription-manager’, this is a tool which isn’t
useful on a CentOS system since subscribing it to the Red Hat
entitlement systems is not needed.
What can I do?
This problem is solved by removing a set of subscription manager
packages from the box:
[root@xenadu etc]# dnf remove subscription-manager*
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server.
You can use subscription-manager to register.
Dependencies resolved.
===============================================================
Package Arch Version Repository Size
===============================================================
Removing:
subscription-manager x86_64 1.30.2-2.el10 @anaconda 3.7 M
subscription-manager-rhsm-certificates
noarch 20220623-6.el10 @anaconda 27 k
Removing dependent packages:
insights-client noarch 3.2.8-2.el10 @AppStream 1.4 M
Removing unused dependencies:
libdnf-plugin-subscription-manager
x86_64 1.30.2-2.el10 @anaconda 44 k
python3-cloud-what x86_64 1.30.2-2.el10 @anaconda 156 k
python3-decorator noarch 5.1.1-12.el10 @anaconda 77 k
python3-iniparse noarch 0.5-10.el10 @anaconda 124 k
python3-inotify noarch 0.9.6-36.el10 @anaconda 302 k
python3-librepo x86_64 1.18.0-3.el10 @anaconda 187 k
python3-subscription-manager-rhsm
x86_64 1.30.2-2.el10 @anaconda 548 k
Transaction Summary
===============================================================
Remove 10 Packages
Freed space: 6.6 M
Is this ok [y/N]:
I’ve had many people reach out to me to ask how to get started using Linux as their default operating system for laptops/desktops. Some of them are just curious as to why I keep using only Linux (Fedora actually, for the last 20 years, and 33 years counting from SLS) as my main driver, as well as on all of the systems that I need to run – my virtual machines, my servers, my VMs at hosting sites etc. All of my servers run Fedora, Red Hat Enterprise Linux or Debian.
Since about 2001, every new year, the tech press would run stories that headline something like “This is the year of Linux” or “This is the year of Linux on the Desktop” or some variation of that.
Those statements assume that people would just switch en masse and that would be it.
The reality is that, just as Linux started at the edge (print servers, file servers) in the mid-to-late 1990s before taking over the data centre when the “cloud” became a thing, that was probably over a decade and a half’s worth of slow, considered adoption with zero marketing being done. It was adopted because it empowered people to inspect, check and see if it worked, and it was found to be reliable, dependable and just plain fun.
Whatever the percentages are today, the vast majority of mobile phone users are running Linux, rebranded as Android. The power of innovation that Linux (and open source) brings is tremendous and, as this study by the Harvard Business Review about the Value of Open Source Software in 2024 reveals, without open source software along with the code creation ecosystems, firms would have had to pay an estimated 3.5 times more to build the systems, software and platforms to run their businesses, which comes to roughly US$8.8 trillion.
Yes, that’s US$8.8 TRILLION more that they did not have to pay.
Do spend some time reading the report above and even though it is a year old, the numbers are still valid.
Having said all that, there are still people (and some really large number of clueless CIOs) who insist that they will never ever run Linux anywhere, even though they are running it in their cloud deployments, webservers, email systems, security & network devices, and don’t know enough to care about it.
To all of them, while I am happy to spend time to show and tell and then have you take the dive in, I’d encourage everyone to explore this: https://distrosea.com/. It has a whole lot of Linux distributions online for you to test drive.
To guide you, I’d suggest starting with Fedora and then Debian. Do explore the rest, and once you think you have a good sense of what’s possible and that whichever distribution you pick does pretty much what you need, you can then take the next step of installing it on your system.
You don’t have to, but then again, you can and do have the option to install and use.
I know a lot of people are being forced to “upgrade” their windows systems because the vendor says so. And then they are told that their hardware “can’t install” the latest incarnation.
Don’t fret. Don’t let the crutch of “familiarity” hold you back. Don’t be driven just by logos of applications “for familiarity”. Lack of familiarity never held you or your child or your grandchild back from learning to walk. Enjoy the journey of discovery and learning. If the brain hurts, it is the clearest sign that you are learning, and it is joyful.
Linux distributions are one click away from giving you and your systems a renewed sense of purpose and long years of productivity with full on security and privacy.
I thought I would do a quick post about apps/software that I use.
Of course my requirements may be wildly different from your own,
but perhaps you will see something here that you might want to
also investigate, or in turn leads you to something you do want
to use.
On the server side, I want to use things that are open source and
run on Fedora (my main server at home is Fedora of course).
I prefer things packaged up and available in the main Fedora repos,
but of course that is not practical for everything sadly.
nextcloud: nextcloud continues to be a great solution for
a lot of things due to it's library of plugins. I use it
for files, uploading photos/videos (see below), organizing
photos, recipes, calendar, contacts, deck lists (kanban)
and more. It's not packaged (anymore) in Fedora, but it's
pretty easy to install and upgrade on the fly. I've also
been impressed lately that things like files are... files
on the server, not some weird db format that are difficult
to add/delete/refresh if you need to from the server side.
I also do phone backups to nextcloud (see below)
postfix/dovecot/opendkim/opendmarc/spamassassin/sqlgrey/saslauthd for
email. This setup has worked for years and years and just works fine.
znc for irc bouncer. I still connect to a number of IRC networks
(although I am usually much more active on matrix).
matrix-synapse for matrix server. I'm just using the packaged
Fedora version and it works fine. Someday I will probably
move it to use one of the setups that has more bells and whistles
but for now it's fine.
postgres for database server for everything that needs one.
I was running also a mariadb instance, but I moved the last
things off it over the holidays and didn't setup one on the
new server.
miniflux for RSS. ( https://miniflux.app/ ). miniflux isn't
packaged directly in Fedora, but the upstream folks provide
a repo and rpms and they work just fine. This allows me to
manage/read RSS feeds via web interface, or (usually) just
manage/read via an app (see below).
I was running calibre in headless server mode to serve ebooks
but it's really vastly overkill to do that and it pulls in
about 233 rpms which I otherwise do not need. So, I switched
to a simple OPDS app: cops ( https://github.com/mikespub-org/seblucas-cops )
(well, a fork of it that works with recent php). It seems to
do the job just fine. I can manage things with calibre on my
laptop and rsync them to the server where cops serves them
to my phone (or whatever reader I want).
On the Linux client side (my laptop):
firefox as main web browser, occasionally having to use chrome
or chromium. Mozilla hasn't been making good choices of late,
and I really hope they find their way again, but I really really
dislike the idea of using web browsers or engines that are
made by super large companies for their own gain. I'll probably
stick with firefox until it becomes untenable. Perhaps servo
will be ready by then?
hexchat for IRC. Been using it for ages and ages.
discord (flatpak app). I have some friends that I have known since
college and we have a discord server to chit-chat on, so reluctantly
I connect to that to keep in touch.
nheko for matrix. I use the Fedora packaged version and it's the best
of the matrix clients most of the time. It still has flaws of course,
but day to day it's the best one for me.
newsflash (flatpak) for rss client reading. Newsflash used to be packaged
in Fedora, but it just became too difficult, so I use the flatpak now.
newsflash looks great, works well, and is a very pleasant reading
experience. It connects easily to miniflux (see above). Doing things that
way allows me to read things on the laptop (Newsflash), web (miniflux)
or phone (see below) and keep all of them in sync so I don't re-read
things.
foot for terminals. I've used... a lot of terminals over the years
and almost all of them are just fine if needed, but I've really
taken to foot over the last few years. It's super quick, it allows
me to have italic fonts (call me crazy, but I find oblique/italic
to be easier on my eyes and vte based terminals no longer allow that).
I do use xfce4-terminal on Xfce, because of course foot is a wayland
only terminal.
calibre for ebook / library management. There's a number of new ebook
library managers up and coming, but calibre is still far ahead of all
of them in my opinion.
Finally on my phone (a google pixel 8a) running https://grapheneos.org/
Probably too many apps to note, but ones that interact with my laptop
and main server:
firefox here as well. The mobile version has gotten much better over
recent years. Sadly, installed from google play store.
librera ( https://tusky.app/ ) for ebook reading. While making this
post I happened to see I had it installed from google play store,
I think because in the distant past there was something that didn't
work in the f-droid version, but no longer. I switched it over to
the f-droid version and it's working fine. It's a nice reader,
it hooks into OPDS on the server just fine. I've read so many
books with this thing.
KISS launcher. I ran across this a while back somewhere and
it's still my main application launcher. It provides your apps
as a searchable list, with the ones you use the most at the bottom
so you can easily find them. This is so much nicer than paging through
a bunch of virtual desktops looking for some icon. Highly recommended.
Nextcloud android apps: nextcloud, deck, memories, cookbook all integrate nicely
with nextcloud. Deck is nice for shopping lists or organizing things.
The main nextcloud app lets you sync things back and forth and setup
autosync. I have it syncing my photos and movies up right after I take
them. Memories then lets you look at and organize them. cookbook is handy
for using in the kitchen when you want to follow a recipe.
Element X for matrix. Works fine, has the new 'fast' sync, which seems
to work reasonably well.
Paseo for step counting. I installed this last year and it's been nice
to be able to see that I need to get up and walk around more. It hooks into
the android steps stuff so walking on the elliptical or the like will
show up as steps even though you didn't go anywhere.
A bunch of junk non free apps I need for various things sadly,
but it's nice to have the option of installing from play store if you
absolutely need some app and can't avoid it.
I'm sure there are more things I didn't remember or see while looking here
but hopefully the list inspires you somehow.
I'm still figuring out comments to blog posts, but if you want
to reply, I will be making a mastodon post pointing to this blog
post you can reply to: https://fosstodon.org/@nirik/113766175760197132
If you can read this entry, DNS has done its work, and you are connected to this new server.
The low-cost server hosting my repository was 4 years old and had performance issues (permanently very high load, especially after an update, during mirror syncs).
Performance seems really better, thanks to CPU, memory, and faster disks.
The "Donate" button (in the upper right corner) is for those who want to contribute to its financing (~520€/year is slightly more expensive than the previous one).
Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests, and also as base packages.
RPMs of PHP version 8.4.3RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
RPMs of PHP version 8.3.16RC1 are available
as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
as SCL in remi-test repository
The packages are available for x86_64 and aarch64.
PHP version 8.2 is now in security mode only, so no more RC will be released.
Software Development flow is an elusive beast, easily frightened away. The easiest way to lose the flow is to have a long development cycle. Building the entire Linux Kernel takes time. If you are developing a single driver that can be built as a stand alone module, you do not want to have to rebuild the entire Kernel each time you make a change to that module. If that module crashes the machine when it runs into an error, you do not want to have to wait for that machine to reboot before digging in to the debugging process.
These two guidelines lead to a pattern of building on one machine and deploying on another. Here is one way to go about this process.
When I build and deploy on the same machine, I typically have a series of command linked together that build all of the steps and that error out if any of them fail. Typically, it looks like this:
make -j `nproc` && make -j `nproc` modules && make -j `nproc` modules_install && make -j `nproc` install
What I want to do is adapt this build to a format acceptable to transport to another machine. Our first choice is to build a package suitable for the remote machine. Since I am deploying to a Fedora base system, that would be an RPM. That command looks like this:
make -j `nproc` rpm-pkg
However, this takes an incredibly long time to complete, which breaks the development cycle. Part of the reason for this delay is that it builds the entire kernel from start every time. It also builds other packages, including both the production and debug packages. We likely do not need both for our work.
Instead, I want to manually copy the files over to the remote machine once, install, and then update only those files that get changed on a recompilation. I can use the tar target to get all of the binaries I need and copy them to the remote system. A build command for that looks like this:
make -j `nproc` && make -j `nproc` modules && make -j `nproc` tar-pkg
I chose not to zip the final package as the next step is to copy it across the network, and that happens pretty fast.
scp linux-6.12.1+-arm64.tar root@$TEST_HOST:
I tried untarring directly in the / directory, but it made my remote system unstable. I did not investigate why, and instead chose to continue with making my steps work manually. This is a one time cost, although it could be trivially automated. SSH in to the test system and
mkdir /root/tmp
cd /root/tmp
tar -xf ../linux-6.12.1+-arm64.tar
mv lib/modules/6.12.1+/ /lib/modules
kernel-install add 6.12.1+ boot/vmlinuz-6.12.1+
At this point I reboot and confirm that my newly built kernel is running.
# uname -a
Linux sut12sys-r112.scc-lab.amperecomputing.com 6.12.1+ #1 SMP PREEMPT_DYNAMIC Mon Dec 30 10:47:11 PST 2024 aarch64 GNU/Linux
Now if I want to redeploy a single module, I can do so by copying the file over and putting it into the appropriate place under /lib/modules. If my module crashes the system, I might need a rescue image to boot without getting an oops.
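As a hypothetical example (the module name and paths are placeholders, not from the original workflow):

# copy a single rebuilt module into the matching location under /lib/modules
scp drivers/net/dummy.ko root@$TEST_HOST:/lib/modules/6.12.1+/kernel/drivers/net/
# refresh module dependencies and reload the module on the test host
ssh root@$TEST_HOST "depmod 6.12.1+ && modprobe -r dummy && modprobe dummy"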
As part of their "Defective by Design" anti-DRM campaign, the FSF recently made the following claim: Today, most of the major streaming media platforms utilize the TPM to decrypt media streams, forcefully placing the decryption out of the user's control (from here). This is part of an overall argument that Microsoft's insistence that only hardware with a TPM can run Windows 11 is with the goal of aiding streaming companies in their attempt to ensure media can only be played in tightly constrained environments.
I'm going to be honest here and say that I don't know what Microsoft's actual motivation for requiring a TPM in Windows 11 is. I've been talking about TPM stuff for a long time. My job involves writing a lot of TPM code. I think having a TPM enables a number of worthwhile security features. Given the choice, I'd certainly pick a computer with a TPM. But in terms of whether it's of sufficient value to lock out Windows 11 on hardware with no TPM that would otherwise be able to run it? I'm not sure that's a worthwhile tradeoff.
What I can say is that the FSF's claim is just 100% wrong, and since this seems to be the sole basis of their overall claim about Microsoft's strategy here, the argument is pretty significantly undermined. I'm not aware of any streaming media platforms making use of TPMs in any way whatsoever. There is hardware DRM that the media companies use to restrict users, but it's not in the TPM - it's in the GPU.
Let's back up for a moment. There's multiple different DRM implementations, but the big three are Widevine (owned by Google, used on Android, Chromebooks, and some other embedded devices), Fairplay (Apple implementation, used for Mac and iOS), and Playready (Microsoft's implementation, used in Windows and some other hardware streaming devices and TVs). These generally implement several levels of functionality, depending on the capabilities of the device they're running on - this will range from all the DRM functionality being implemented in software up to the hardware path that will be discussed shortly. Streaming providers can choose what level of functionality and quality to provide based on the level implemented on the client device, and it's common for 4K and HDR content to be tied to hardware DRM. In any scenario, they stream encrypted content to the client and the DRM stack decrypts it before the compressed data can be decoded and played.
The "problem" with software DRM implementations is that the decrypted material is going to exist somewhere the OS can get at it at some point, making it possible for users to simply grab the decrypted stream, somewhat defeating the entire point. Vendors try to make this difficult by obfuscating their code as much as possible (and in some cases putting some of it in-kernel), but pretty much all software DRM is at least somewhat broken and copies of any new streaming media end up being available via Bittorrent pretty quickly after release. This is why higher quality media tends to be restricted to clients that implement hardware-based DRM.
The implementation of hardware-based DRM varies. On devices in the ARM world this is usually handled by performing the cryptography in a Trusted Execution Environment, or TEE. A TEE is an area where code can be executed without the OS having any insight into it at all, with ARM's TrustZone being an example of this. By putting the DRM code in TrustZone, the cryptography can be performed in RAM that the OS has no access to, making the scraping described earlier impossible. x86 has no well-specified TEE (Intel's SGX is an example, but is no longer implemented in consumer parts), so instead this tends to be handed off to the GPU. The exact details of this implementation are somewhat opaque - of the previously mentioned DRM implementations, only Playready does hardware DRM on x86, and I haven't found any public documentation of what drivers need to expose for this to work.
In any case, as part of the DRM handshake between the client and the streaming platform, encryption keys are negotiated with the key material being stored in the GPU or the TEE, inaccessible from the OS. Once decrypted, the material is decoded (again either on the GPU or in the TEE - even in implementations that use the TEE for the cryptography, the actual media decoding may happen on the GPU) and displayed. One key point is that the decoded video material is still stored in RAM that the OS has no access to, and the GPU composites it onto the outbound video stream (which is why if you take a screenshot of a browser playing a stream using hardware-based DRM you'll just see a black window - as far as the OS can see, there is only a black window there).
Now, TPMs are sometimes referred to as a TEE, and in a way they are. However, they're fixed function - you can't run arbitrary code on the TPM, you only have whatever functionality it provides. But TPMs do have the ability to decrypt data using keys that are tied to the TPM, so isn't this sufficient? Well, no. First, the TPM can't communicate with the GPU. The OS could push encrypted material to it, and it would get plaintext material back. But the entire point of this exercise was to prevent the decrypted version of the stream from ever being visible to the OS, so this would be pointless. And rather more fundamentally, TPMs are slow. I don't think there's a TPM on the market that could decrypt a 1080p stream in realtime, let alone a 4K one.
The FSF's focus on TPMs here is not only technically wrong, it's indicative of a failure to understand what's actually happening in the industry. While the FSF has been focusing on TPMs, GPU vendors have quietly deployed all of this technology without the FSF complaining at all. Microsoft has enthusiastically participated in making hardware DRM on Windows possible, and user freedoms have suffered as a result, but Playready hardware-based DRM works just fine on hardware that doesn't have a TPM and will continue to do so.
The Software Freedom Conservancy Fundraiser runs for another 2 weeks. Please become a Sustainer, renew your existing membership or donate before January 15th to maximize your contribution to furthering the goals of software freedom!
They have been a great partner to Sourceware, putting users, developers and community first.
Josh and Kurt talk about the new NIST password guidance. There’s some really good stuff in this new document. Ideas like usability and equity show up (which is amazing). There’s stricter guidance against password rotation and complexity requirements. This new guidance gives us a lot to look forward to.
The Ruby on Rails template Business Class gets a whole new edition: Rails 8, new licensing, and an improved CRUD generator.
Business Class 2.0
The new Business Class is built on top of Rails 8, Pay 8, the Solid Trifecta libraries, and Kamal 2. Solid Trifecta is now the default and the Redis dependency was removed, simplifying everything. Also, Action Policy got added to finally refactor authorization. An overall update of dependencies, but that’s not all!
CRUD Generator
The CRUD generator got much better. It will now give you bulk actions both for team views (as grid) and admin views (as table). All destructive actions load a confirmation HTML dialog view with Hotwire.
I was also able to fix the previous limitation to simple models, so you can now generate models like my_shop/product or garage/fancy_car, giving you a beautiful start for your well-architected domain model.
The refreshed generator now also generates policy files so you literally get a whole CRUD for a team-resource without much work.
Kamal 2
Kamal configuration got updated to its second major version and to handle the new Solid libraries. Business Class now also explicitly provides a custom production.conf for any necessary tuning, and provisioning sets up unattended upgrades.
Note that you can use Business Class with a managed database or even without Kamal. It’s just Kamal-ready if you need it to be.
Design and demo
The default design got improved in its details and the screens are now much more coherent.
The new license gives you lifetime access to Business Class, with no need for any kind of renewals. There is a more expensive version for agencies, but even that one now comes with lifetime access.
Like many other folks, over the holidays I like to read books,
watch movies and tv shows, visit family, bake and eat too
much food, drink too many good beers and ciders and meads,
and catch up some on around the house projects.
Also, of course I like to catch up on my hacking on things.
Often during the normal times I am busy at work and don't have
the time or energy to play around with things, or just improve
my home infrastructure (cue up the joke about the cobbler's
children never having new shoes).
So, I thought I would put together a recap of things I looked
at/setup in case they inspire others.
There may well be a part two of this, but we will see
how much more I get done before the holidays end.
Got sound working again on my media PC via HDMI.
Sadly, I didn't actually fix it so much as swapped a
usb-c dock with HDMI in for the frame.work HDMI module that
was not working. I'm really not fully sure why it no longer
lets me have sound, but I am suspecting a kernel/firmware
issue. Might play with it again someday, but it all works
fine now.
Unpacked and setup the OpenWrt One (see previous blog post
for a review). So far it's quite fast and working great.
Installed a SNO (Single Node Openshift) in a vm to play
with. Found that it's not covered under the Developer
subscription (it gives you a 60-day trial). So I decided to
ponder more on what k8s thing I want to play with at home.
Contenders: k3s, microshift, OKD of some flavor.
Moved this blog from wordpress to nikola.
I'm sure there's still rough edges, but overall it
seems working. Spent longer on trying to get apache
redirects working than importing and setting up nikola.
(It was an ordering issue: there was a redirect that was
basically overriding all the new ones I was trying to put
in place).
Cleaned up my restic backups as the partition was
starting to get full.
Updated my Nextcloud instance to 30.0. Added some new apps.
The cookbook app is brilliant! You can (usually) just point it
to a url and it will scrape the recipe in with all info, media
and such. Really nice. The memories app is a nice improvement
over the default photos app, I think. Cleaned up the old trashbin
and old versions of files, which saved a bunch of space.
Looked at my piwigo install. I replaced an old gallery2 instance
a long time ago with this, but I haven't really done much
with it in recent years. It's also the last app I have
that uses mysql/mariadb. So, I imported all the files into
nextcloud for now and I think I will retire piwigo.
Nextcloud memories still isn't great for public sharing, but
you can share albums with a public link.
Decided it was time to move my main server over to a
fresh install. It's working just fine, but /boot is only
185MB, so it can only hold 2 kernels, which is annoying.
It's also ext3 (yes, ext3!). The / fs is ext4, and I
would like to get compression at least from btrfs.
This vm was installed with f32 and upgraded to f41 with
almost no issues. Did an f41 install and synced data over to it:
309GB becomes 189GB on the new vm. I'm still organizing
and moving things. Probably will cut over around new years
with a hopefully short downtime.
Got a calendar app on my phone and via the magic of DAVx
managed to get my nextcloud calendar, my work google calendar
and my home google calendar all syncing in one place.
The fossify calendar is not too bad, installable via fdroid
and seems reasonably active upstream.
Back to hacking on things and trying to relax some.
Rails 8 comes with a built-in authentication generator. However, it doesn’t yet come with registrations. Here’s how to add them.
Rails auth generator
To generate authentication scaffold, run:
$ rails generate authentication
This will create User and Session models as well as sign-in forms. But there is little to do without registrations.
Registrations
To add registrations, we’ll add a new RegistrationsController, route, and view.
The registration controller can look like this:
# app/controllers/registrations_controller.rb
class RegistrationsController < ApplicationController
  # Include Authentication module unless it's already in ApplicationController
  include Authentication

  allow_unauthenticated_access
  rate_limit to: 10, within: 3.minutes, only: :create,
             with: -> { redirect_to new_registration_url, alert: "Try again later." }

  def new
    @user = User.new
  end

  def create
    @user = User.new(safe_params)

    if @user.save
      start_new_session_for @user
      redirect_to after_authentication_url, notice: "Welcome!"
    else
      flash[:alert] = "Email or password confirmation invalid."
      render :new, status: :unprocessable_entity
    end
  end

  private

  def safe_params
    params.require(:user).permit(:email_address, :password, :password_confirmation)
  end
end
The code is surprisingly simple. We rely on the generator’s Authentication module and start_new_session_for method to sign people in. And we rely on has_secure_password to do the right thing when providing users with password and password_confirmation.
Note that it doesn’t come with any constraints for passwords or email addresses, but we can add them:
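The example that presumably followed here is missing, so below is only a sketch of what such validations could look like on the generated User model; the concrete rules (minimum length, email format) are assumptions to adapt to your application:

# app/models/user.rb
class User < ApplicationRecord
  has_secure_password

  # Hypothetical constraints; adjust to taste.
  validates :email_address, presence: true, uniqueness: true,
                            format: { with: URI::MailTo::EMAIL_REGEXP }
  # allow_nil so updates that do not touch the password still pass validation
  validates :password, length: { minimum: 8 }, allow_nil: true
end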
pastewindow is a neovim plugin
written in Lua to help paste text from a buffer to a different window in
Neovim. This is my first attempt at writing a plugin.
We can select a window (in the GIF below I am using a bash terminal as target)
and send any text to that window. This will be helpful in my teaching sessions.
Especially when modifying larger Python functions, etc.
I am yet to go through all the Advent of Neovim
videos
from TJ DeVries. I am hoping to improve (and add more features to) the plugin after
I learn about plugin development from the videos.
With the inclusion of the Generators, the main functions seem to be in the right shape. What seems off is the naming of things. For example, we have a couple of functions that take a parameter named “board”, but it turns out they mean different things. The names of classes do not seem to align with what they are doing.
Let’s continue with the renaming of things and see where it leads us.
But first: we can remove the exceptions to the flake8 testing, and, with the removal of some white space, we can run with all the pep8 rules intact. Now, on to renaming.
I am not going to go through each commit, but will try to provide enough context so you can follow the logic in the commits on GitHub.
Let’s start with a simple one: since the collection is called board_strings, let’s make the singular match:
diff --git a/tree_sudoku.py b/tree_sudoku.py
index 95d1026..b4fa5bf 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -62,8 +62,8 @@ class SudokuSolver:
def strings_to_board_dict(self, board_strings):
return_dict = {}
- for index, board in enumerate(board_strings):
- return_dict[str(index)] = build_board(board)
+ for index, board_string in enumerate(board_strings):
+ return_dict[str(index)] = build_board(board_string)
return return_dict
@@ -80,8 +80,8 @@ def print_board(board):
print('-' * 21)
-def build_board(board):
- rows = re.findall(r"\d{9}", board)
+def build_board(board_string):
+ rows = re.findall(r"\d{9}", board_string)
board_list = []
for row in rows:
row_list = []
This lets us distinguish between the strings pulled out of the file via the csv parser and the board we generate later as a list of lists…the Pythonic way of making a multidimensional array.
Here I made a decision that I later changed: I decided I wanted a Board class, and I introduced it. I did that by just creating the class in one commit, and in the next started moving code to that class. This is because I identified that the SudokuSolver class was really managing the list of puzzles, and I wanted to separate the logic for solving a single puzzle from the puzzle-list management. The build_board function became the __init__ function for the Board class.
Here is where my refactoring took an interesting turn: I decided that, as part of determining what the different parameters meant, I should annotate them with types. To do this, I turned to the MyPy utility and the type annotations supported by Python. First I added a type checker to the tox.ini managed environment:
diff --git a/tox.ini b/tox.ini
index 1252034..a69ac26 100644
--- a/tox.ini
+++ b/tox.ini
@@ -4,13 +4,16 @@
# and then run "tox" from this directory.
[tox]
-envlist = pep8,py312
-
+envlist = pep8,type,py312
skipsdist = True
[testenv:pep8]
commands = flake8
+
+[testenv:type]
+deps = mypy
+commands = mypy tree_sudoku.py
[flake8]
filename= *.py
With this change, I could start annotating type information:
diff --git a/tree_sudoku.py b/tree_sudoku.py
index 4b2ac5b..b79e782 100755
--- a/tree_sudoku.py
+++ b/tree_sudoku.py
@@ -10,14 +10,15 @@ import re
import copy
import time
+from typing import List
BASIS = 3
DIM = BASIS * BASIS
MAX = DIM * DIM
-def import_csv():
- list_of_boards = []
+def import_csv()->List[str]:
+ list_of_boards: List[str] = []
with open('sample_sudoku_board_inputs.csv', 'r') as file:
reader = csv.reader(file)
for row in reader:
@@ -26,11 +27,11 @@ def import_csv():
class Board:
- def __init__(self, board_string):
+ def __init__(self, board_string: str):
rows = re.findall(r"\d{9}", board_string)
self.board_list = []
for row in rows:
- row_list = []
+ row_list: List[str] = []
row_list[:0] = row
self.board_list.append(row_list)
@@ -77,7 +78,7 @@ class SudokuSolver:
return return_dict
-def print_board(board):
+def print_board(board: Board):
for index1, row in enumerate(board.board_list):
if index1 == 0 or index1 == 3 or index1 == 6:
print('-' * 21)
Continuing on with this change required reordering class definitions in the file so that classes were defined before they were referenced, something the type-checking required, even if Python does not.
I also finally did the Pythonic thing and put the main behavior in a function. I could have removed the timing functions at this point, but instead I just made their output readable.
I ended up pulling all of the logic out of the SudokuSolver class, and removing the class. Logic I wanted to keep, like the solve method, I moved to the Board class. I also moved build_solution_string to Tree_Node. Not all of these moves were to their final locations. I later moved build_solution_string from Tree_Node to Board. In the process, the parameter list suggested a couple of things to me: the Tree_Node should take a pointer to the Board.board_list in its constructor, and the two classes were named wrong. I eventually renamed Board to Solver after I had removed the original SudokuSolver class. The Tree_Node class really was representing a cell in the sudoku puzzle, so I renamed Tree_Node to Cell.
With that rename done, I added a board parameter to the Cell constructor. This was actually one of the largest refactorings I performed that was not merely reordering functions. This change required the modification of the Cell constructor, and implied that the functions that were later setting the board member variable should no longer do so, and that no members of Cell should then take board in as a parameter. This required modifying the test_advance code as well. Even still, I think it is an understandable commit. You could make the argument that this one should have been split into two commits, and I probably would concede.
I also noticed that the next method was poorly named, as it implied movement, the way that advance and retreat did. Since all that function does is pop items off a list, I named it pop.
To complete the refactoring, I moved the two functions is_value_valid and is_set_valid to the Cell class.
I will include the final state of the python file below. I don’t think I will spend any more time refactoring this, as I think it is in an understandable state now.
The class model now has two classes: a Solver takes a puzzle and solves it by iterating Cell by Cell through the board, advancing when the state is potentially solvable, and retreating when it detects a violation of the constraints. Is it still a tree? I am not certain it really ever was, but it still follows the rules of descent and backtracking.
#!/usr/bin/python
# turn each csv row into a board
# find what values can go into what spot
# create a tree trying to put in each value
# if value can not be possible, end stem, go back up the tree
# return the branch when tree is 81 layers deep, the board is filled
import csv
import re
import copy
import time
from typing import List
BASIS = 3
DIM = BASIS * BASIS
MAX = DIM * DIM
def import_csv() -> List[str]:
list_of_boards: List[str] = []
with open('sample_sudoku_board_inputs.csv', 'r') as file:
reader = csv.reader(file)
for row in reader:
list_of_boards.append(str(*row))
return list_of_boards
class Solver:
def __init__(self, board_string: str):
rows = re.findall(r"\d{9}", board_string)
self.board_list = []
for row in rows:
row_list: List[str] = []
row_list[:0] = row
self.board_list.append(row_list)
def build_solution_string(self, head_cell):
return_string = ''
curr_cell = head_cell
return_string += str(curr_cell.value)
while (curr_cell.next_cell):
curr_cell = curr_cell.next_cell
return_string += str(curr_cell.value)
return return_string
def solve(self):
test_board = copy.deepcopy(self.board_list)
head_cell = Cell(test_board, None, 0)
curr_cell = head_cell
while True:
curr_cell.write()
if curr_cell.is_value_valid():
if curr_cell.index + 1 >= MAX:
break
curr_cell = curr_cell.advance()
else:
# backtrack
while len(curr_cell.possible_values) == 0:
curr_cell = curr_cell.retreat()
curr_cell.pop()
return self.build_solution_string(head_cell)
class Cell:
def __init__(self, board, last_cell, index):
self.board = board
self.possible_values = possible_values()
self.value = self.possible_values.pop()
(self.row, self.col) = index_to_row_col(index)
self.last_cell = last_cell
self.next_cell = None
self.index = index
self.old_value = None
def advance(self):
new_cell = Cell(self.board, self, self.index + 1)
new_cell.check_solved()
self.next_cell = new_cell
return new_cell
def retreat(self):
self.board[self.row][self.col] = self.old_value
cell = self.last_cell
cell.next_cell = None
return cell
def pop(self):
self.value = self.possible_values.pop()
def __str__(self):
return self.value
def write(self):
if self.old_value is None:
self.old_value = self.board[self.row][self.col]
self.board[self.row][self.col] = self.value
def check_solved(self):
if self.board[self.row][self.col] != '0':
self.value = self.board[self.row][self.col]
self.possible_values = []
def is_set_valid(self, generator):
box = possible_values()
for (row, column) in generator(self.row, self.col):
number = self.board[row][column]
if number == '0':
continue
if number in box:
box.remove(number)
else:
return False
return True
def is_value_valid(self):
if not self.is_set_valid(column_generator):
return False
if not self.is_set_valid(row_generator):
return False
return self.is_set_valid(box_generator)
def strings_to_board_dict(board_strings):
return_dict = {}
for index, board_string in enumerate(board_strings):
return_dict[str(index)] = Solver(board_string)
return return_dict
def print_board(solver: Solver):
for index1, row in enumerate(solver.board_list):
if index1 == 0 or index1 == 3 or index1 == 6:
print('-' * 21)
for index, char in enumerate(row):
print(char, '', end='')
if index == 2 or index == 5:
print('| ', end='')
print('')
if index1 == 8:
print('-' * 21)
def possible_values():
values = []
for index in range(1, DIM + 1):
values.append('%d' % index)
return values
def column_generator(row, col):
for i in range(0, DIM):
yield (i, col)
def row_generator(row, col):
for i in range(0, DIM):
yield (row, i)
def box_generator(row, col):
row_mod = row % BASIS
start_row = row - row_mod
col_mod = col % BASIS
start_col = col - col_mod
for i in range(0, BASIS):
for j in range(0, BASIS):
yield (start_row + i, start_col + j)
def index_to_row_col(index):
col = int(index % DIM)
row = int((index - col) / DIM)
return (row, col)
def main():
start = time.time()
board_strings = import_csv()
boards_dict = strings_to_board_dict(board_strings)
solved_board_strings = dict()
for key, value in boards_dict.items():
return_string = value.solve()
solved_board_strings[key] = return_string
for key, solution in solved_board_strings.items():
print(f"Board: {key}")
print_board(strings_to_board_dict([solution])['0'])
end = time.time()
print("start time = %f" % start)
print("end time = %f" % end)
print("duration = %f" % (end - start))
if __name__ == '__main__':
main()
According to NASA, the Parker Solar Probe yesterday, Tuesday, December 24, 2024 (4 Dey 1403), reached an unprecedented distance of 6.1 million kilometers from the Sun, breaking the record for the closest-ever encounter with our solar system's star. The Parker probe also flew past the Sun at an incredible speed of 690,000 kilometers per hour and […]
The Fedora Council is pleased to announce that we have chosen Forgejo as the replacement for our git forge! That means you’ll see Forgejo powering our package sources (src.fedoraproject.org) as well as our general git forge (what pagure.io is today). It has been a long road to get here, and we cannot thank the Fedora community enough for your patience and support throughout.
For deeper context into what went into this decision, we will walk you through the last few months from the council’s perspective. You may want to grab a tea, coffee, or other beverage — this might be a few paragraphs long.
In the beginning
If you’d like to read more about the early stages of this saga, please visit the announcement from two weeks ago. There are lots of helpful links to more documentation about the investigation of both GitLab and Forgejo, as well as more detail about the history of this change.
We have liftoff!
This all officially started after the Council’s face-to-face meeting in February 2024. We made a list of all potential replacements for Pagure, including Pagure itself. Over the course of several hours, we ruled out almost all of the candidates, for various reasons, and in the end we were left with Forgejo and GitLab CE (self-hosted). We published a blog post about this, and then got to work figuring out requirements to hand to the team doing this investigation. As Fedora Council is a main governing body of the entire project, we tried to keep the requirements to the project’s holistic needs, and not weigh in on too many technical aspects — those would be better reviewed by our community and FESCo. So, with two options to choose from and a list of requirements to investigate them against, we began the formation of the git forge investigation 2024. We just needed the right folks to be part of the investigation team. Enter CLE!
The council made an official request to the Community Linux Engineering team (CLE, which includes the former CPE) to drive an investigation into the two options. The investigation team, ARC (“Advance Recon Crew”), launched the investigation in May 2024 with a call to all folks who would be impacted by a change to our current dist-git setup. They asked for use-cases in the form of user stories so they had a better picture of what Fedora needs. Since dist-git has more specific technical demands than general project hosting, the investigation centred around dist-git — we started with the hardest part first!
Throughout 2024, the investigation team promoted awareness of the upcoming change by speaking at Fedora release parties and at Flock, and by the summer they had managed to deploy a test instance of each forge option, to evaluate each use case. The results of that can be read at the ARC team’s read-the-docs page.
Options, so many options!
The Council requested a write up comparing each option. It was sweet to hear that there are no insurmountable ‘blockers’ for Fedora in terms of technical effort, but also bittersweet as it was going to be harder to make a final decision.
Accessibility
Both projects prioritize accessibility as a core value and implement UI best practices. However, Forgejo faces challenges, as it heavily relies on Fomantic UI, a framework not inherently accessibility-friendly. While Forgejo’s upstream, Gitea, applies patches to improve accessibility, significant issues remain that require time and effort to resolve. Accessibility is important to Fedora, and a strategic priority for the Council, and Forgejo lists it as one of their core values, so we hope to collaborate on making this better.
Functionality
In terms of issue tracking, GitLab limits some critical features to premium tiers, while Forgejo offers all functionality without restrictions or fees.
An interesting distinction is syntax highlighting: both solutions leverage the same JavaScript library, but Forgejo uniquely highlights RPM spec files by default.
GitLab supports auto-merging of merge requests, allowing users to define rules for automatic merges once conditions are met. Forgejo lacks this feature. However, its implementation of “actions” is API-compatible with GitHub Actions, enabling potential integration with third-party services or custom pipelines—albeit with significant development effort required.
Maintenance
Forgejo’s deployment documentation is relatively sparse, but Codeberg, a reference deployment, successfully supports over 143,000 registered users. The deployment is documented and available in Codeberg Infrastructure repositories (They even use Ansible!). GitLab offers more comprehensive versioned documentation, covering deployment, security, and maintenance. Additionally, GitLab provides an official OpenShift operator for GitLab CE, ensuring seamless container-based deployment.
Contribution
GitLab CE operates under the MIT license, while Forgejo uses GPLv3 or later.
GitLab CE: Actively synchronizes with its proprietary upstream, which is highly active (over 30,000 commits to the main branch in 2024, from 1435 distinct contributors). However, contributions are only accepted for the Enterprise Edition (EE). The official “community fork” has only 5 accepted merge requests ever.
Forgejo: While less active upstream (about 3500 commits in 2024, from 250 contributors), Forgejo and its base, Gitea, accept contributions openly from all users, clearly a more inclusive development process.
Quality (Fedora QA)
By the time Fedora QA were in a position to look into each option for their use cases, the Council had already been trending towards choosing Forgejo. So, to save time, we asked the QA team to look only at Forgejo when validating their use cases. QA found the majority of their use cases to be either working or likely working (meaning that even though the Forgejo configuration and deployment details are not completely finalized at the moment, they see a path forward for satisfying their needs). However, they did identify several concerning shortcomings in Forgejo’s issue tracking features. None of them seem to be a complete blocker, at least from the QA perspective, but they are highly visible regressions that might affect other parties as well, and reconciling them will need to be prioritized before a migration can happen. Priority will be given to the ability to move issues between repos to help with debugging, to searching all of dist-git combined in order to avoid duplicate bug reports, and to creating private issues in public repositories.
Reaching the decision
After tracking the investigation for months, the Fedora Council had multiple meetings, both on Matrix chat and in video conferences, to ask questions about each forge and to improve our understanding of just how big this change will be. In one meeting, the council needed to refresh our requirements list to make sure we were asking for the right information at the end of the report. We had initially wanted no recommendation from the investigation team in the report; however, as the work continued, it became necessary to change that and ask for one. The investigation team could find no technical blockers to recommend one forge over the other; both will require work. This brought about several conversations amongst council members as to whether they already had a preference for one option or the other. It was a great conversation, and it appeared that once again Forgejo was taking the lead by virtue of its open nature. Some of us on the council liked the well-documented and somewhat familiar option of GitLab. But faced with the reality that sometime in the near future Fedora may find itself needing to make changes to our git forge, and that one option might require money we don’t have or not allow the changes we might need to make, we did not want to limit the project in any way. And so, Forgejo galloped on down to the finish line!
One pivotal conversation was with the ARC team lead, where they walked the council through a mapping of use cases. This included complexity estimates for the sysadmin and developer work needed to deploy either option.
The diagram shows personas in the project, like packagers, developers, etc., and how they interact with a git forge in Fedora. The numbers indicate how complex the interaction with the forge is, with 1 being low and 10 the highest. These complexity numbers were driven largely by the lack of documentation on how the services and features work in Fedora. Added caution was used where the team was unfamiliar with how some personas work, and with no documentation to work from… well, best put 10s across the board for some things. For others, like packaging, being a general user of Fedora, and other similar personas, good documentation on those workflows meant the team felt they could follow the logic and did not add any complexity number. That is not to say that there won’t be any difficulties to overcome when we deploy and begin to use Forgejo for these personas, but I think we can all agree that the complexity can certainly lessen a bit when you have good (or any!) documentation on how something is set up and you feel more confident you can make it work!
After we announced the Council’s lean towards Forgejo, we asked the community for one last sanity check on this big decision. Two weeks have passed, and the feedback has been largely positive with no technical objections to this choice. A Council ticket captured: APPROVED by unanimous consensus with no abstentions (+9, 0, 0). Fedora will move to Forgejo!
State of Play
We have made the decision to choose Forgejo, and will look to our community to help plan and execute the migration. The CLE team has already done some investigation into how our dist-git setup works, so if you would like to learn about it you can read the overview here. Please be prepared to see this work happening slowly, and likely across multiple releases. We will continue to focus on dist-git functionality first, and eventually finish with project hosting. This will take time and your patience will be invaluable and greatly appreciated. For right now, a lot of folks are away from their computers to enjoy some time with friends and family so we will start this change in January.
The next chapter
There is a lot still unwritten for Fedora’s git forge change. We are only scratching the surface, but we have taken an important step forward – choosing our path. We will have to take our time and break this down into smaller phases so we don’t get overwhelmed, so if you’re wondering about bugzilla and the project’s bug tracking future, don’t worry – we are too! We will be exploring that as well, not exactly as part of this effort per se, but as a logical next step and something we will be keeping in mind. Our focus is to deploy Forgejo, develop the features the project needs to build and release Fedora Linux and function as a project, and replace the existing forges with the new solution.
In January 2025, there will be open meetings happening every Wednesday, starting January 15th, on Matrix. If you would like to be involved with this work, or have any questions, please do join that meeting. The details can be found on the releases calendar in Fedocal. We will also use the tag #git-forge-future when posting on discussion.fpo, and we intend to see this plan take the form of an overall change request not tied to any release, with several smaller change requests tied to it, which we hope will allow us to make this project-wide change in controlled and careful increments.
On behalf of the Fedora Council, I would like to once again thank you all for your contribution to this change, and most importantly to the investigation team who took this on and have done an excellent job reviewing Forgejo for Fedora. I look forward to 2025, it will be a big year!
Recently the OpenWrt One was announced for sale.
This is a wireless access point/router powered by
Banana Pi and designed by the OpenWrt project.
Additionally, $10 from every device sold go to
the Software Freedom Conservancy to help
fund OpenWrt efforts.
The device was available on aliexpress, which is a bit
weird for us here in the west, but I had no trouble ordering
it there and the cost was pretty reasonable. It arrived
last week.
The design is pretty nice. There's a NAND/NOR switch.
In normal operation the switch is in the NAND setting.
If something goes wrong, you can hold down the button
on the front while powering on and you should get a rescue
image. If somehow even that image doesn't work, you can switch
the switch to NOR mode and it will boot a full recovery from
a USB drive. So, pretty unbrickable.
Initial setup was easy. Just screw on the 3 antennas, connect
ethernet and usb-c power and everything came up fine.
I was a bit confused on what password to use, but then
I realized just hitting return would take me to the
'hey, please set a password' screen. A small note might
be nice there.
Since I was using OpenWrt on my existing linksys
e8450 it was pretty simple to configure the new accesspoint
in a similar manner. Upgrade was pretty easy as soon as
I realized that I needed to pick 24.10.0-rcN or snapshot
on the firmware selector
as there are no stable images for the One yet.
I then spent a lot of time playing with the channel_analysis
page. This page scans for other accesspoints and shows you
what channels are in heavy use or open. On 5ghz, there was
basically nothing else, so no problems there. However, on
2.4GHz there were an astonishing number of APs. I live out
pretty far from town, but there's still a LOT of them.
Of course some were coming from 'inside the house' like
some roku devices or the like. Finally I decided channel 9
was the best bet.
Switching things over was a bit of a dance. I connected to the
openwrt wireless network, logged in and changed the wired
network, then powered off the old ap and swapped the network
cable to the new one. Then, rejoined the wireless and changed
the name/password so all the existing devices would just keep
working.
I do notice faster connection rates on my main laptop at least.
The accesspoint is also really responsive either via web
(luci) or ssh. I may look at adding some more duties to this
device over time. It does have an NVMe slot so I could do some
caching or perhaps some other setup. I also want to play with
the usb-c console port and perhaps at some point upgrade
my home switch so I can power it via PoE.
All in all a pretty great device. It seems to currently be
sold out, but if you are looking for a nice, unbrickable ap
that is very open source, this might just be the ticket for you.
The Fedora Linux 41 election cycle has ended. We had one group eligible for an election campaign this cycle. Below are the results of the FESCo election and the Mindshare Committee election. Thank you to all who participated, both voters and especially our great candidates, and congratulations to the elected members!
FESCo
Five FESCo seats were open this election. A total of 166 voters participated and cast 729 ballots, meaning a candidate could accumulate up to 996 votes.
# Votes  Candidate
709      Kevin Fenzi
527      Zbigniew Jędrzejewski-Szmek
504      David Cantrell
465      Tomáš Hrčka
415      Fabio Alessandro Locati
359      Josh Stone
Mindshare
There were two candidates nominated for the Mindshare Committee; however, as Sumantro Mukherjee is already an elected member, there was only one eligible candidate nominated for election. Therefore, Luis Bazan has been automatically re-elected to the Mindshare committee. Congratulations Luis!
Josh and Kurt talk about the supply chain of Santa. Does he purchase all those things? Are they counterfeit goods? Are they acquired some other way? And once he has all the stuff, the logistics of getting it to the sleigh is mind boggling. It’s all very complex.
After using wordpress for more than 20 years, I finally decided it was time
to move off of it. I'm not really happy about the recent turmoil from the
upstream wordpress folks, and I didn't think it offered much value over
just moving to a static generator, as so many have done before me.
I did some looking around, and decided to just go with nikola.
It uses python and seems pretty well used. It also has a wordpress import
plugin which I hoped to use.
The first problem I ran into is that the 'nikola plugin' command didn't work.
I couldn't see that I had done anything to break it, and some poking around
let me see that this was a bug in 8.3.0 (which is what the current fedora
rpm version is), but was fixed in 8.3.1 (released early this year).
There is already a PR to update it:
So, I built the new version locally and plugin was back in business.
The wordpress_import plugin worked somewhat, but there were a few issues
I hit there too. It tracebacked if I passed '--one-file' to use the new
one file format (instead of a content and a metadata file). I looked at it a
bit, but couldn't figure out where it was failing. I did have to tweak
a bit of the wordpress export, mostly for mistakes that wordpress ignored,
like posts with multiple of the same tag on them, etc.
I looked a bit at comments. I have 81 comments on all my posts over the
last 21 years, but there are none in the last 5 years. There is a
'static_comments' plugin that lets you serve the old static comments,
which looked promising, but it was not very clear to me how to insert it
into the theme I picked to use ('hack'). The doc example has jinja2 examples,
and just an 'adjust accordingly for mako templates' note. I didn't want to spend
a bunch of time learning mako templates, so for now I am just going to
drop the comments. If I get time or someone wants to help me get static_comments
working, let me know.
Builds are quite fast and it's an easy rsync to my main server.
Hopefully this all will make me blog a bit more now.
This post will likely cause aggregators (like fedoraplanet.org) to see
all my recent posts again. Sorry about that.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide you both infographic and text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in depth details look below the infographic.
Week: 16 – 20 December 2024
Infrastructure & Release Engineering
The purpose of this team is to take care of day to day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.). List of planned/in-progress issues
Yalda night is the night when darkness lingers in the sky longer than on any other, but we Iranians, instead of fearing the darkness, light it up with the light of love and the warmth of family. This night is an opportunity to be together, to lose ourselves in ancient tales, and to gaze into the mirror of memories. Yalda is not just a night; it is a symbol […]
I don’t know whether it is too much even for a Master's Thesis, and
I have absolutely no idea who could be a mentor for it.
I have always been absolutely fascinated by the idea of
distributed network storage. My wife was a PhD student at MIT,
and it was fascinating to experience actually functional network
storage. Any Athena desktop you sit at, you had your files in
front of you. We both were at the time at humanities, so I have
never had an experience to work with demanding applications
(never compiled gcc on the top of that), but it seemed to work
surprisingly well. I have never seen anything working that
well. Yes, it was only campus-wide, I didn’t have an Athena
computer at home, but our networking improved a bit since then.
So, it is 2024, we are talking about mixed home/office-workers
(me being one of them), and I am still fiddling with rsync
to synchronize my computer at home and the computer in the
office. I have ADSL which works reasonably well (50/10 Mb/s,
ping 14ms according to lupa.cz), but I wouldn’t dare to actually
work over NFS/Samba here, and I don’t see any real alternative
available. Building the Python interpreter on an NFS drive? Never!
Obviously, whatever would work for such a situation would require
a heavy caching, and some kind of synchronization between the
local cache and the remote storage, but I am surprised that
I don’t see anything like AFS, Coda, InterMezzo, or heck
perhaps even some caching on the top of NFS (or, whatever,
WebDAV), widely distributed and used. We are still fiddling with
rsync, git, IMAP, CalDAV, SyncThing, and similar synchronization
stuff we all know is wrong, and we should not be doing that.
/var/mail/$LOGIN should just work, wherever its content
actually is.
What I would expect from that Thesis would be at least an
overview of available technology and some suggestion how to do
that with the openSUSE distribution (to have at least some coding
done). What do you think?
journalctl is a great tool to read and filter logs. Since it is context aware, it is much easier to use than just running tail on text files. It can be super helpful, for example, to filter the output to a specific service. But how do you combine filters with a logical OR?
Today I needed to filter the output to kernel messages and NetworkManager. Kernel messages are -k, and NetworkManager is -u NetworkManager in short. But the combination doesn’t reveal anything:
$ journalctl -u NetworkManager -k
-- No entries --
The reason is that the (implied) logical connection here is “AND”. Only output is shown that is both a kernel message and from NetworkManager – which doesn’t exist.
journalctl knows about the + operator to combine output – but that doesn’t help with the short options used above:
$ journalctl -u NetworkManager -k
-- No entries --
Instead, we need to use the long form, call out the systemd unit as a flag explicitly, and the transport for kernel:
$ journalctl _TRANSPORT=kernel + _SYSTEMD_UNIT=NetworkManager.service --since today
...
Dez 18 21:26:52 russel NetworkManager[1510]: <info> [1734553612.1558] dhcp4 (wlp9s0): state changed new lease, address=192.168.1.2
Dez 18 21:26:52 russel kernel: wlp9s0: Limiting TX power to 30 (30 - 0) dBm as advertised by 12:34:56:78:90:12
Of course, this can be combined with -f, --since today and other typical journalctl flags. It can even be extended by more services:
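The extended example seems to have been cut off here; following the same pattern, additional services are simply chained with further + match groups (firewalld below is only an illustrative unit):
$ journalctl _TRANSPORT=kernel + _SYSTEMD_UNIT=NetworkManager.service + _SYSTEMD_UNIT=firewalld.service --since today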
However, I have not been able to wrap my head around how to do that. The problem is that the DI framework, which both creates and shares the instances, will come into conflict with the borrow checker.
Let’s say I have three objects: Query1, Query2, and Database. A Query object requires a Database object. Q1 and Q2 are different queries, and both use the same Database object. Thus we have a dependency graph like this:
Query1->Database<-Query2
So a super simplistic code base would look like this:
use std::string::String;
struct Database {
url: String
}
struct Query<'a> {
db: &'a Database,
sql: String,
}
impl Database {
fn execute(&self) {
println!("connect to {}",self.url);
}
}
impl Query<'_> {
fn execute(&self) {
self.db.execute();
println!("execute {}",self.sql);
}
}
fn main() {
println!("Hello, world!");
let db = Database{url: "postgresql:server:port".to_string()};
db.execute();
{
let q1 = Query{db: &db, sql: "select * from users;".to_string()};
q1.execute();
}
{
let q2 = Query{db: &db, sql: "select * from projects;".to_string()};
q2.execute();
}
}
This seems to point the way to doing IoC, so long as you only have immutable objects. Mutable objects are going to be tougher: for a given workflow, you need to get the mutable object from the container, mutate it, and return it to the container. Unfortunately, this means that the IoC container is going to inject itself into your workflow, not just your construction flow.
My current suspicion is that we are going to need to hold on to mutable objects for a single function call of a method from an external object. This implies a couple of usage patterns. The object could be explicitly fetched from the IoC container by the caller, passed to the method, and returned to the container after the method call. This means that the caller knows about the object, which breaks encapsulation. To maintain encapsulation, this implies that the called object fetches the mutable object from the IoC container on demand, and returns it before the end of the function call. This seems both do-able and tricky at the same time.
Since an object is supposed to register its dependencies up front, and know that the IoC container is going to fulfill them, the dependent object should register a dependency for the mutable object in the form of a smart pointer or some other type of indirection. The smart pointer will act as a lazy load proxy. Suppose we have a mutable object with a method called change. The chain of operations would look like this: fetch the mutable object from the container (get_mut), call change on it, and return it to the container (release_mut).
The get_mut and release_mut calls should have an immutable reference to the mutable object passed as a parameter.
There is a cascading impact of this change: the IoC container itself must now be a mutable reference that the smart pointer holds. Something inside of it has to be unsafe in order to provide the same object to multiple requestors.
We would thus fall back to more traditional approaches of using a monitor, a semaphore, or some other access control mechanism to protect the shared mutable resource.
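As a minimal sketch of that last idea (and not of the DI container itself), the shared Database from the earlier example can be handed out behind a reference-counted lock, so each Query keeps a handle and locks it only for the duration of a single call; Arc and Mutex here are stand-ins for whatever smart pointer the container would really return:

use std::sync::{Arc, Mutex};

struct Database {
    url: String,
    query_count: u32, // shared mutable state
}

struct Query {
    db: Arc<Mutex<Database>>, // shared handle instead of a borrowed &Database
    sql: String,
}

impl Query {
    fn execute(&self) {
        // Lock only for the duration of this call: fetch, mutate, release.
        let mut db = self.db.lock().unwrap();
        db.query_count += 1;
        println!("connect to {}", db.url);
        println!("execute {}", self.sql);
    }
}

fn main() {
    let db = Arc::new(Mutex::new(Database {
        url: "postgresql:server:port".to_string(),
        query_count: 0,
    }));
    let q1 = Query { db: Arc::clone(&db), sql: "select * from users;".to_string() };
    let q2 = Query { db: Arc::clone(&db), sql: "select * from projects;".to_string() };
    q1.execute();
    q2.execute();
    println!("queries executed: {}", db.lock().unwrap().query_count);
}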
The repository is available for x86_64 (Intel/AMD) and aarch64 (ARM).
Repositories configuration:
On Fedora, the standard repositories are enough; on Enterprise Linux (RHEL, CentOS) the Extra Packages for Enterprise Linux (EPEL) and Code Ready Builder (CRB) repositories must be configured.
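The concrete configuration commands appear to have been dropped from this post; on an EL 9 system the setup is roughly the following (the release-package URLs and the remi-8.4 module stream are my assumptions here, so double-check them against the repository's configuration wizard):
dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
dnf install https://rpms.remirepo.net/enterprise/remi-release-9.rpm
dnf config-manager --set-enabled crb
dnf module reset php
dnf module enable php:remi-8.4
dnf update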
If these extensions are not mandatory, you can remove them before the upgrade, otherwise, you must be patient.
Warning: some extensions are still under development, but it seems useful to provide them to upgrade more people and allow users to give feedback to the authors.
More information
If you prefer to install PHP 8.4 beside the default PHP version, this can be achieved using the php84 prefixed packages, see the PHP 8.4 as Software Collection post.
The packages available in the repository were used as sources for Fedora 42.
By providing a full-featured PHP stack, with about 150 available extensions, 11 PHP versions, as base and SCL packages, for Fedora and Enterprise Linux, and with 300,000 downloads per day, the remi repository has over the last 19 years become a reference for PHP users on RPM-based distributions, maintained by an active contributor to the projects (Fedora, PHP, PECL...).
This is a heads up that if you file an issue in the libinput issue tracker, it's very likely this issue will be closed. And this post explains why that's a good thing, why it doesn't mean what you want, and most importantly why you shouldn't get angry about it.
Unfixed issues have, roughly, two states: they're either waiting for someone who can triage and ideally fix it (let's call those someones "maintainers") or they're waiting on the reporter to provide some more info or test something. Let's call the former state "actionable" and the second state "needinfo". The first state is typically not explicitly communicated but the latter can be via different means, most commonly via a "needinfo" label. Labels are of course great because you can be explicit about what is needed and with our bugbot you can automate much of this.
Alas, using labels has one disadvantage: GitLab does not allow the typical bug reporter to set or remove labels - you need to have at least the Planner role in the project (or group) and, well, surprisingly, reporting an issue doesn't mean you get immediately added to the project. So once a "needinfo" label is set, it takes a maintainer to remove it again. And until that happens you have an open bug that has needinfo set and looks like it's still needing info. Not a good look, that is.
So how about we use something other than labels, so the reporter can communicate that the bug has changed to actionable? Well, as it turns out there is exactly one thing a reporter can do on their own bugs other than post comments: close it and re-open it. That's it [1]. So given this vast array of options (one button!), we shall use them (click it!).
So for the foreseeable future libinput will follow the following pattern:
Reporter files an issue
Maintainer looks at it, posts a comment requesting some information, closes the bug
Reporter attaches information, re-opens bug
Maintainer looks at it and either: files a PR to fix the issue or closes the bug with the wontfix/notourbug/cantfix label
Obviously the close/reopen stage may happen a few times. For the final closing where the issue isn't fixed the labels actually work well: they preserve for posterity why the bug was closed and in this case they do not need to be changed by the reporter anyway. But until that final closing the result of this approach is that an open bug is a bug that is actionable for a maintainer.
This process should work (in libinput at least), all it requires is for reporters to not get grumpy about issue being closed. And that's where this blog post (and the comments bugbot will add when closing) come in. So here's hoping. And to stave off the first question: yes, I too wish there was a better (and equally simple) way to go about this.
[1] we shall ignore magic comments that are parsed by language-understanding bots because that future isn't yet the present
Here are the release notes from Cockpit 331, cockpit-machines 326, and cockpit-files 14:
Files: Allow uploading files as administrator
When logged in with administrator privileges, you can now upload files to directories other than your home. The default ownership of these files is the user/group of the current directory. This can be changed after the upload completes.
Files: Drag and drop file upload
It is now possible to upload files by simply dragging them from your desktop or a file explorer and dropping them directly into Cockpit Files.
Files: Allow editing of multiple file permissions
The permissions of multiple selected files can now be changed if they have the same owner and group.
Machines: Download and install unsupported and older operating systems
Cockpit now offers to download and install unsupported and older operating systems, while still promoting supported and newer ones.
ws container: Support sharing host ssh-agent
For SSH key authentication, the cockpit/ws container has supported bind-mounting private SSH keys into the container for a long time. That mode is appropriate for server system containers or deploying in e.g. Kubernetes.
For desktop use cases similar to Cockpit Client it is preferable to instead run the ws container as your own user, and share your user session’s SSH agent. This provides a more comfortable login experience as you don’t have to unlock private keys with your passphrases on the Cockpit login page again, and this also avoids exposing the private key to the web server.
Please see the “SSH authentication: Share SSH agent with container” section in the container documentation for details.
ws: Prevent search engine indexing with robots.txt
Public Cockpit instances are no longer indexed by default.
Try it out
Cockpit 331, cockpit-machines 326, and cockpit-files 14 are available now:
Reflections on GNOME.Asia Summit 2024: A Memorable Experience
The GNOME.Asia Summit is the highlight of the year for the GNOME community in Asia, bringing together users, developers, leaders, governments, and businesses to discuss the present and future of GNOME technology. This year, I had the privilege of attending the summit hosted at Red Hat India Pvt. Ltd., Bengaluru, from December 6 to December 8, 2024, as a hybrid event—welcoming both in-person attendees and virtual participants via the Big Blue Button platform.
With two distinct tracks, the event featured an incredible lineup of sessions and discussions. Here’s a glimpse into my personal highlights and takeaways from this remarkable gathering.
Valuable Conversations
While the talks were insightful, the magic of the event lay in the conversations I had with people across the community. One discussion that stood out was with Jona Azizaj, Fedora’s Diversity, Equity, and Inclusion (DEI) Advisor. We talked about the continuity of the Fedora DEI team—a topic close to my heart.
Recent concerns about the team’s future had me reflecting, and this face-to-face discussion with Jona was exactly what I needed. Hearing her share the same passion and commitment to sustaining DEI efforts in Fedora was both validating and energizing. It reinforced that I’m not alone in believing in the value of this team—there are others equally invested in ensuring these efforts thrive.
Topics That Stood Out
Every session brought unique value, but the mentorship-focused presentations truly resonated with me. The session “Making Mentorship a Key Part of Your Open Source Community” by Jona Azizaj and Smera Goel set the tone, sharing powerful real-life experiences about building impactful mentorship programs.
Other noteworthy sessions included:
“Open Source Mentorship: Crafting Communities, Creating Leaders” by Samyak Jain, who engaged the audience with thought-provoking questions, interactive sticky-note exercises, and structured solutions.
“Fedora × Outreachy: Mentee/Mentor Retrospective + Making a Career out of FOSS” by Justin W. Flory and Nikita Tripathi, who are masters of their craft and delivered invaluable insights on mentorship and career-building through open source.
Additionally, I had the opportunity to present the AI-InstructLab demo on Fedora, which sparked significant interest among the audience. It was exciting to see the enthusiasm for AI in the open source community—it truly feels like we’re riding the wave of innovation together.
Overall Impressions
My biggest takeaway from the summit was the strong presence of Fedora throughout the event. Fedora contributors not only took center stage as presenters but were also instrumental as event organizers. Their efforts embody Fedora’s core values: Freedom, Friends, Features, and First.
This event wasn’t just about talks or presentations—it was about community connections. Fedora contributors had meaningful face-to-face interactions, discussing plans to drive success across the APAC region. These conversations underscored the importance of fostering collaboration and building on our shared vision for Fedora’s future.
Looking Ahead
Events like GNOME.Asia Summit remind us of the power of community. They provide a platform for contributors to connect, learn, and grow together. I’m optimistic about the future and believe that continuing our active participation in such events will lead to even more success stories for Fedora in APAC and beyond.
Acknowledging Sudhir Dharanendraiah – No reflection on this event would be complete without recognizing the significant contributions of Sudhir Dharanendraiah. His hard work and dedication were instrumental in organizing this event and ensuring it ran smoothly. From planning to execution, Sudhir played a pivotal role in making GNOME.Asia Summit 2024 a success. A heartfelt thank you to Sudhir for his commitment and effort—it truly made a difference!
A heartfelt thank you to everyone who made this summit a success—your hard work and dedication reflect the very best of what open source is all about.
CentOS Stream 10 and EPEL 10 just became available, and as usual, I tried to build syslog-ng as soon as possible. For now it is available in my git snapshot repository, but I am also planning to make it available in EPEL 10 soon.
Before you begin
First, a big warning: RHEL 10 has not been released yet, so you might see some changes in CentOS Stream and thus also in EPEL 10. Syslog-ng is also built from a git snapshot, even though it only contains bug fixes.
I added almost all missing dependencies to my git snapshot repository, so the syslog-ng package for EPEL 10 is almost identical to the EPEL 9 one. The only difference is that some Python dependencies are missing, so if you want to use the drivers written in Python (Kubernetes, etc.), the dependencies are not installed automatically. You must use the syslog-ng-update-virtualenv script to download dependencies to a dedicated directory outside of package management.
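As an illustration, a possible sequence for trying this out could look like the lines below. The copr project name is a placeholder and the script is assumed to be on the PATH, so adjust both to your setup.
# Hypothetical sketch: enable the snapshot repository (replace the placeholder
# with the real copr project), install syslog-ng with the Python drivers,
# then download their Python dependencies outside of package management.
dnf copr enable <snapshot-repo-placeholder>
dnf install syslog-ng syslog-ng-python-modules
syslog-ng-update-virtualenv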
List of syslog-ng packages:
root@localhost:/etc/yum.repos.d# dnf search syslog-ng | grep -v debuginfo
Updating Subscription Management repositories.
Unable to read consumer identity
This system is not registered with an entitlement server. You can use subscription-manager to register.
Last metadata expiration check: 0:09:33 ago on Mon 16 Dec 2024 11:37:23 AM CET.
======================= Name Exactly Matched: syslog-ng ========================
syslog-ng.x86_64 : Next-generation syslog server
syslog-ng.src : Next-generation syslog server
====================== Name & Summary Matched: syslog-ng =======================
syslog-ng-afsnmp.x86_64 : SNMP support for syslog-ng
syslog-ng-amqp.x86_64 : AMQP support for syslog-ng
syslog-ng-bigquery.x86_64 : Google BigQuery support for syslog-ng
syslog-ng-bpf.x86_64 : Faster UDP log collection for syslog-ng
syslog-ng-cloudauth.x86_64 : cloud authentication support for syslog-ng: pubsub
syslog-ng-debugsource.x86_64 : Debug sources for package syslog-ng
syslog-ng-devel.x86_64 : Development files for syslog-ng
syslog-ng-geoip.x86_64 : geoip support for syslog-ng
syslog-ng-grpc.x86_64 : GRPC support for syslog-ng
syslog-ng-http.x86_64 : HTTP support for syslog-ng
syslog-ng-java.x86_64 : Java destination support for syslog-ng
syslog-ng-kafka.x86_64 : kafka support for syslog-ng
syslog-ng-logrotate.x86_64 : Logrotate script for syslog-ng
syslog-ng-loki.x86_64 : Loki support for syslog-ng
syslog-ng-mongodb.x86_64 : mongodb support for syslog-ng
syslog-ng-mqtt.x86_64 : mqtt support for syslog-ng
syslog-ng-opentelemetry.x86_64 : OpenTelemetry support for syslog-ng
syslog-ng-python.x86_64 : Python support for syslog-ng
syslog-ng-python-modules.x86_64 : Python-based drivers for syslog-ng
syslog-ng-redis.x86_64 : redis support for syslog-ng
syslog-ng-riemann.x86_64 : riemann support for syslog-ng
syslog-ng-slog.x86_64 : $(slog) support for syslog-ng
syslog-ng-smtp.x86_64 : smtp support for syslog-ng
List of features, when using a basic syslog-ng installation:
root@localhost:/etc/yum.repos.d# syslog-ng -V
syslog-ng 4 (4.8.1.15.g2d06795)
Config version: 4.2
Installer-Version: 4.8.1.15.g2d06795
Revision:
Compile-Date: Oct 3 2024 00:00:00
Module-Directory: /usr/lib64/syslog-ng
Module-Path: /usr/lib64/syslog-ng
Include-Path: /usr/share/syslog-ng/include
Available-Modules: timestamp,xml,http,add-contextual-data,affile,afprog,afsocket,afstomp,afuser,appmodel,basicfuncs,cef,confgen,correlation,cryptofuncs,csvparser,disk-buffer,examples,graphite,hook-commands,json-plugin,kvformat,linux-kmsg-format,map-value-pairs,metrics-probe,pacctformat,pseudofile,rate-limit-filter,regexp-parser,sdjournal,stardate,syslogformat,system-source,tags-parser,tfgetent,azure-auth-header
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: off
Enable-Linux-Caps: on
Enable-Systemd: on
What is next?
Let us know your experiences with syslog-ng on CentOS Stream 10! Bug reports are very welcome, but we are also happy to hear success stories! :-)
If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik, on Mastodon as @Pczanik@fosstodon.org.
In the weeks leading up to this release (and the week after) I have
posted a series of serieses of posts to Mastodon about key new
features in this release, under the
#systemd257 hash tag. In
case you aren't using Mastodon, but would like to read up, here's a
list of all 37 posts:
I intend to do a similar series of serieses of posts for the next systemd
release (v258), hence if you haven't left tech Twitter for Mastodon yet, now is
the opportunity.
Josh and Kurt talk about a CWE Top 25 list from MITRE. The list itself is fine, but we discuss why the list looks the way it does (it’s because of WordPress). We also discuss why Josh hates lists like this (because they never create any actions). We finish up running through the whole list with a few comments about the findings.
Strawberry is a music player and music collection management tool designed for music collectors and audiophiles. With Strawberry, you can manage and play your digital music collection or listen to your favorite radio stations. Strawberry is free software licensed under […]
Last Tuesday, during lunch hours I had a talk at KTH computer science students'
organization. The topic was Open Source and careers. My main goal was to tell the
attendees that contribution size does not matter, but that continuing to contribute
to various projects can change someone's life and career in a positive way. I
talked about the history of the Free Software movement and Open Source. I also
talked a bit about Aaron Swartz
and asked the participants to watch the documentary The Internet's Own
Boy. Some were surprised to hear
about Sunet's Open Source work.
There were around 70 people, and a few of them later messaged me about how they
think about contributing after my talk. The best part was one student who messaged
me the next day and said that he had contributed a small patch to a project.
I also told them about PyLadies Stockholm and other local efforts from various communities.
There was also a surprise visit to the #curl channel on IRC, thanks to bagder and icing :)