The Datacenter move has been completed.
Almost all services are back online and processing.
Follow-ups and known minor issues are being tracked in the above ticket.
If you notice anything amiss, please use our usual issue reporting path.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 30 June – 04 July 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues
Fedora Data Center Move: “It’s Move Time!” and Successful Progress!
This week was “move time” for the Fedora Data Center migration from IAD2 to RDU3, and thanks to the collective effort of the entire team, it’s been a significant success! We officially closed off the IAD2 datacenter, with core applications, databases, and the build pipeline successfully migrated to RDU3. This involved meticulously scaling down IAD2 OpenShift apps, migrating critical databases, and updating DNS, followed by the deployment and activation of numerous OpenShift applications in RDU3. While challenges arose, especially with networking and various service configurations, our dedicated team worked tirelessly to address them, ensuring most services are now operational in the new environment. We’ll continue validating and refining everything, but we’re thrilled with the progress made in establishing Fedora’s new home!
If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.
Understanding how to use dracut is critical for kernel upgrades, troubleshooting boot issues, disk migration, encryption, and even kernel debugging.
Introduction: What is dracut?
dracut is a powerful tool used in Fedora, RHEL, and other distributions to create and manage initramfs images—the initial RAM filesystem used during system boot. Unlike older tools like mkinitrd, dracut uses a modular approach, allowing you to build minimal or specialized initramfs tailored to your system.
Installing dracut (if not already available)
dracut comes pre-installed in Fedora and RHEL. If it is missing, install it with:
$ sudo dnf install dracut
Verify the version:
$ dracut --version
Basic usage
Regenerate the current initramfs
$ sudo dracut --force
This regenerates the initramfs for the currently running kernel.
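By default the image is built for the running kernel. To target another installed kernel, pass its version explicitly; a minimal sketch (the version string is illustrative):

$ sudo dracut --force --kver 6.15.3-200.fc42.x86_64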
Note: Always include a space at the beginning and end of the value when using += in dracut's configuration files (/etc/dracut.conf and the drop-ins under /etc/dracut.conf.d/). These files are sourced as Bash scripts, so
add_dracutmodules+=" crypt lvm "
ensures proper spacing when multiple config files are concatenated. Without the spaces, the resulting string could concatenate improperly (e.g., mod2mod3) and cause module loading failures.
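For example, a drop-in file could pin extra modules across rebuilds (the file name is illustrative):

$ cat /etc/dracut.conf.d/50-extra-modules.conf
# Always include the crypt and lvm dracut modules; note the surrounding spaces
add_dracutmodules+=" crypt lvm "

$ sudo dracut --force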
Deep dive: /usr/lib/dracut/modules.d/ – the heart of dracut
The directory /usr/lib/dracut/modules.d includes all module definitions, in numbered subdirectories such as 90crypt or 90lvm. Each contains a module-setup.sh script that tells dracut whether the module should be included (check), which other modules it needs (depends), and which files to install into the initramfs (install).
You can also create custom modules at this location for specialized logic.
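For instance, here is a minimal sketch of a custom module, assuming the standard module-setup.sh layout (the module name, config file, and logic are illustrative):

$ sudo mkdir /usr/lib/dracut/modules.d/95mymodule
$ sudo tee /usr/lib/dracut/modules.d/95mymodule/module-setup.sh <<'EOF'
#!/bin/bash
# Hypothetical custom dracut module.

# check(): return 255 so the module is included only when explicitly requested
check() {
    return 255
}

# depends(): list other dracut modules this one needs (none here)
depends() {
    echo ""
    return 0
}

# install(): copy binaries and files into the initramfs image
install() {
    inst_multiple cat
    inst_simple /etc/mymodule.conf
}
EOF
$ sudo dracut --force --add mymodule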
Final thoughts
dracut is more than a utility—it’s your boot-time engineer. From creating lightweight images to resolving boot failures, it offers unparalleled flexibility.
Explore man dracut, read through /usr/lib/dracut/modules.d/, and start customizing.
This article is dedicated to my wife, Rupali Suraj Patil, for her continuous support and encouragement.
It's already been a month. I can't imagine how time flies so fast. A busy time? Flock and the Fedora DEI and Documentation workshop, all in one month!
As a Fedora Outreachy intern, my first month has been packed with learning and contributions. This blog shares what I worked on and how I learned to navigate open source communities.
First, I would like to give a shoutout to my amazing Mentor, Jona Azizaj for all the effort she has put into supporting me. Thank You, Jona!
Highlights from June
Fedora DEI & Docs Workshop
One of the biggest milestones this month was planning and hosting my first Fedora DEI & Docs Workshop. This virtual event introduced new contributors to Fedora documentation, showed them how to submit changes, and gave a live demo of fixing an issue – definitely a learning experience in event organizing!
You can check the Discourse post; all information is in the post itself, including slides and comments.
Flock 2025 recap
I wrote a detailed Flock to Fedora recap article, covering the first two days of talks streamed from Prague. From big announcements about Fedora’s future to deep dives into mentorship, the sessions were both inspiring and practical. Read the Fedora Magazine recap.
Documentation contributions
This month, I have contributed to multiple docs areas, including:
DEI team docs – Updated all the broken links in the docs.
Outreachy DEI page and Outreachy mentored projects pages (under review) – I updated the content and added examples of past interns and how Outreachy shaped their journeys even beyond the internship.
Past events section – Documented successful Fedora DEI activities. It serves as an archive for our past events.
Collaboration and learning
The good part? It’s great to work closely with others, and I’m learning this in the open source space. I spent some time working with other teams as well:
Mindshare Committee – Learned how to request funding for events
Design team – Had amazing postcards prepared, thanks to the Design team
Marketing – Got the Docs workshop promoted to different Fedora social accounts
Documentation team – Especially with Petr Bokoc, who shared a detailed guide on how you can easily contribute to the Docs pages.
A great learning experience. One thing I can say about people in open source (in Fedora): they’re super amazing and gentle. Cheers, I’m enjoying my journey.
My role in Join Fedora SIG
Oh, I thought it would be good to mention this as well: I am also part of the Join SIG, which helps newcomers find their place in Fedora. Through it, I’ve been able to understand how the community works, from onboarding to mentorship.
What I’ve learned
How to collaborate asynchronously – video calls and chats.
How to chair meetings – I chaired two DEI Team meetings this month. The first one was challenging, but by the second I felt confident and even enjoyed it. Before this, I didn’t know how meetings could be run entirely over text chat.
How open source works – From budgeting to marketing, I’m learning how many moving pieces make Fedora possible.
What’s next
I plan to revisit the event checklist and revamp it, working with my mentor Jona to make it meaningful and useful for future events.
I also plan to continue improving the DEI docs and promoting Fedora’s DEI work.
Last word
This month has already been full of learning and growth. If you’re also interested in helping out with the DEI work, reach out to us in our Matrix room.
Hi everyone, I’m working on building a service to make it easier for packagers to submit new packages to Fedora, improving upon and staying in line with the current submission process. My main focus is to automate away trivial tasks, provide fast and clear feedback, and tightly integrate with Git-based workflows that developers are familiar with.
This month
I focused on presenting a high-level architecture of the project’s service to the Fedora community and collecting early feedback. These discussions were incredibly helpful in shaping the design of the project. In particular, they helped surface early concerns and identify important edge cases that we will need to support.
The key decision is to go with a monorepo model: Each new package submission will be a Pull Request to a central repository where contributors submit their spec files and related metadata.
The service will focus on:
Running a series of automated checks on the package (e.g., rpmlint).
Detecting common issues early.
Reporting the feedback and results in the same PR thread for fast feedback loops.
Keeping the logic abstract and forge-agnostic by reusing packit-service’s code and layering new handlers on top of it (see the sketch below).
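To illustrate the kind of automated check involved, here is a hypothetical sketch (not the service’s actual code; the file name and script are illustrative, using standard Fedora packaging tools):

#!/usr/bin/env bash
# Hypothetical sketch of the checks a submission PR might trigger.
set -euo pipefail

spec="$1"   # path to the submitted spec file

# Static analysis of the spec file; the output becomes part of the PR feedback
rpmlint "$spec" | tee rpmlint-report.txt

# Verify that the sources declared in the spec can actually be downloaded
spectool -g "$spec"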
Currently, I am working on setting up the local development environment and testing for the project with packit-service.
What’s next?
I’ll be working on getting a reliable testing environment ready and writing code for COPR integration for builds and the next series of post-build checks. All the code can be found at avant.
Thanks to my mentor Frantisek Lachman and the community for the great feedback and support.
We will be moving services and applications from our IAD2 datacenter to a new RDU3 one.
End user services such as: docs, mirrorlists, dns, pagure.io, torrent, fedorapeople, fedoraproject.org website, and tier0 download server will be unaffected and should continue to work normally through the outage window.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 23 June – 27 June 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues
Performance Co-Pilot (PCP) is a robust framework for collecting, monitoring, and analyzing system performance metrics. Available in the repos for Fedora and RHEL, it allows administrators to gather a wide array of data with minimal configuration. This guide walks you through tuning PCP’s pmlogger service to better fit your needs—whether you’re debugging performance issues or running on constrained hardware.
Is the default setup of PCP right for your use case? Often, it’s not. While PCP’s defaults strike a balance between data granularity and overhead, production workloads vary widely. Later in this article, two scenarios will be used to demonstrate some useful configurations.
PCP is built around two core services:
pmcd: collects live performance metrics from various agents.
pmlogger: archives these metrics over time for analysis.
The behavior of pmlogger is controlled by files in /etc/pcp/pmlogger/control.d/. The most relevant is local, which contains command-line options for how logging should behave.
Sample configuration:
$ cat /etc/pcp/pmlogger/control.d/local
You’ll see a line like:
localhost y y /usr/bin/pmlogger -h localhost ... -t 10s -m note
The -t 10s flag defines the logging interval—every 10 seconds in this case.
Scenario 1: High-frequency monitoring for deep analysis
Use case: Debugging a transient issue on a production server. Goal: Change the logging interval from 10 seconds to 1 second.
Edit the file (the nano editor is used in these examples; please use your editor of choice):
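A minimal sketch of the change, based on the default line shown earlier (restart pmlogger afterwards so the new interval takes effect):

$ sudo nano /etc/pcp/pmlogger/control.d/local

# change -t 10s to -t 1s on the localhost line:
localhost y y /usr/bin/pmlogger -h localhost ... -t 1s -m note

$ sudo systemctl restart pmlogger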
Scenario 2: Long-term resource tracking
PCP archives are rotated daily by a cron-like service. Configuration lives in:
$ cat /etc/sysconfig/pmlogger
Default values:
PCP_MAX_LOG_SIZE=100
PCP_MAX_LOG_VERSIONS=14
PCP_MAX_LOG_SIZE: total archive size (in MB).
PCP_MAX_LOG_VERSIONS: number of daily logs to keep.
Goal: Keep logs for 30 days.
Edit the file:
$ sudo nano /etc/sysconfig/pmlogger
Change:
PCP_MAX_LOG_VERSIONS=30
No service restart is required. Changes apply during the next cleanup cycle.
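To confirm the new retention policy is being applied, you can list the daily archives after the next cleanup cycle (the path assumes the default pmlogger layout):

$ ls /var/log/pcp/pmlogger/$(hostname)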
Final thoughts
PCP is a flexible powerhouse. With just a few changes, you can transform it from a general-purpose monitor into a specialized tool tailored to your workload. Whether you need precision diagnostics or long-term resource tracking, tuning pmlogger gives you control and confidence.
So go ahead—open that config file and start customizing your system’s performance story.
Note: This article is dedicated to my wife, Rupali Suraj Patil, who inspires me every day.
This article describes the content and structure of the sosreport output. The aim is to improve its usefulness through a better understanding of its contents.
What is sosreport?
sosreport is a powerful command-line utility available on Fedora, Red Hat Enterprise Linux (RHEL), CentOS, and other RHEL-based systems to collect a comprehensive snapshot of the system’s configuration, logs, services, and state. The primary use is for diagnosing issues, especially during support cases with Red Hat or other vendors.
When executed, sosreport runs a series of modular plugins that collect relevant data from various subsystems like networking, storage, SELinux, Docker, and more. The resulting report is packaged into a compressed tarball, which can be securely shared with support teams to expedite troubleshooting.
In essence, sosreport acts as a black box recorder for Linux — capturing everything from system logs and kernel messages to active configurations and command outputs — helping support engineers trace problems without needing direct access to the system.
How to Generate a sosreport
To use sosreport on Fedora, RHEL, or CentOS, run the following command as root or with sudo:
sudo sosreport
This command collects system configuration, logs, and command outputs using various plugins. After a few minutes, it generates a compressed tarball in /var/tmp/ (or a similar location), typically named like:
sosreport-hostname-20250623-123456.tar.xz
You may be prompted to enter a case ID or other metadata, depending on your system configuration or support workflow.
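For example, to run non-interactively and record a support case number up front (the case ID is illustrative):

$ sudo sosreport --batch --case-id 01234567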
The sosreport generated tarball contains a detailed snapshot of the system’s health and configuration. It has a well-organized structure which reflects the data collected from the myriad Linux subsystems.
Exploring sosreport output is challenging due to the sheer volume of logs, configuration files, and system command outputs it contains. However, understanding its layout is key for support engineers and sysadmins to quickly locate and interpret crucial diagnostic information.
sosreport directory layout
When the tarball is unpacked, the directory structure typically resembles this:
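A typical, abbreviated layout (the exact top-level entries vary with the sos version and the plugins that ran):

sosreport-hostname-20250623-123456/
├── etc/            copied configuration files
├── proc/, sys/     kernel and hardware state
├── var/log/        system log files
├── sos_commands/   per-plugin command output
├── sos_logs/       logs from the sosreport run itself
├── sos_reports/    indexes and summaries (JSON, HTML)
└── sos_strings/    large extracted data such as journal logs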
sos_commands/
Each file name matches the Linux command used, with all options. The contents are the actual command output, making the plugin behavior transparent.
sos_reports/
This directory contains multiple formats that index and summarize the entire sosreport:
sos.json: A machine-readable index of all collected files and commands.
manifest.json: Describes how sosreport executed – timestamps, plugins used, obfuscation done, errors, etc.
An HTML report for easy browsing in a browser.
sos_logs/
Contains logs from the execution of sosreport itself.
sos.log: Primary log file that highlights any errors or issues during data collection.
sos_strings/
Contains journal logs for up to 30 days, extracted using journalctl.
Can be quite large, especially on heavily used systems.
Structured into subdirectories like logs/ or networkmanager/.
EXTRAS/
This is not a default part of a sosreport. It is created by the sos_extras plugin and used to collect any custom user-defined files.
Why this layout matters
Speed: Logical grouping of directories helps engineers drill down without manually parsing gigabytes of log files.
Traceability: Knowing where each file came from and what command produced it enhances reproducibility.
Automation: Tools like soscleaner or sos-analyzer rely on this structure for automated diagnostics.
Final thoughts
While sosreport is a powerful diagnostic tool, its effectiveness hinges on understanding its structure. With familiarity, engineers can isolate root causes of failures, uncover misconfigurations, and collaborate more efficiently with support teams. If you haven’t yet opened one up manually, try it — there’s a lot to learn from the insides!
This is my first Fedora Magazine article, dedicated to my wife Rupali Suraj Patil — my constant source of inspiration.
In this fifth article of the “System insights with command-line tools” series we explore free and vmstat, two small utilities that reveal a surprising amount about your Linux system’s health. free gives you an instant snapshot of how RAM and swap are being used. vmstat (the virtual memory statistics reporter) reports a real-time view of memory, CPU, and I/O activity.
By the end of this article you will be able to translate buffers and cache into “breathing room”, read the mysterious available column with confidence, and spot memory leaks or I/O saturation.
A quick tour of free
Basic usage
$ free -h
               total        used        free      shared  buff/cache   available
Mem:            23Gi        14Gi       575Mi       3,3Gi        12Gi       8,8Gi
Swap:          8,0Gi       6,6Gi       1,4Gi
free parses /proc/meminfo and prints totals for physical memory and swap, along with kernel buffers and cache. Use -h for human-readable units, -s 1 to refresh every second, and -c N to stop after N samples, which is handy for capturing a trend while something runs in parallel. For example, free -s 60 -c 1440 gives a 24-hour record, one sample per minute, that is easy to post-process without installing extra monitoring daemons.
Free memory refers to RAM that is entirely unoccupied. It isn’t being used by any process or for caching. On server systems, I tend to view this as wasted since unused memory isn’t contributing to performance. Ideally, after a system has been running for some time, this number should remain low.
Available memory, on the other hand, represents an estimate of how much memory can be used by new or running processes without resorting to swap. It includes free memory plus parts of the cache and buffers that the system can reclaim quickly if needed.
In essence, the distinction in Linux lies here: free memory is idle and unused, while available memory includes both truly free space and memory that can be readily freed up to keep the system responsive without swapping. Low free memory is not a problem in itself; available memory is usually what to be concerned about.
A healthy system might even show used ≈ total yet available remains large; that mostly reflects cache at work. Fedora’s kernel will automatically drop clean cache pages whenever an application needs the space, so cached memory is not wasted. Think of it as a working set that just hasn’t been reassigned yet.
Spotting problems with free
Rapidly shrinking available combined with rising swap used indicates real memory pressure.
Large swap-in/out spikes point to thrashing workloads or runaway memory consumers.
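A simple way to watch for these patterns over time, using nothing but free and standard shell tools (the one-minute interval is illustrative):

$ while true; do date '+%F %T'; free -h; sleep 60; done | tee free.log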
vmstat – Report virtual memory statistics
vmstat (virtual memory statistics) displays processes, memory, paging, block I/O, interrupts, context switches, and CPU utilization in a single line. Run it with an interval and count to watch trends; for example, vmstat 5 3 prints three samples five seconds apart. The output shown below has been split into sections for better readability:
---swap-- -----io----
  si   so    bi    bo
   8   21   130   724
   0    0     0     0
   0    0     8    48
-system-- -------cpu-------
   in    cs us sy id wa st gu
 2851    19 15  7 77  0  0  0
 5779  7246 14 10 77  0  0  0
 5141  6525 12  9 79  0  0  0
Anatomy of the output
From the vmstat(8) manpage:
Procs
  r: The number of runnable processes (running or waiting for run time).
  b: The number of processes blocked waiting for I/O to complete.

Memory (affected by the --unit option)
  swpd: the amount of swap memory used.
  free: the amount of idle memory.
  buff: the amount of memory used as buffers.
  cache: the amount of memory used as cache.
  inact: the amount of inactive memory. (-a option)
  active: the amount of active memory. (-a option)

Swap (affected by the --unit option)
  si: Amount of memory swapped in from disk (/s).
  so: Amount of memory swapped to disk (/s).

IO
  bi: Kibibytes received from a block device (KiB/s).
  bo: Kibibytes sent to a block device (KiB/s).

System
  in: The number of interrupts per second, including the clock.
  cs: The number of context switches per second.

CPU (percentages of total CPU time)
  us: Time spent running non-kernel code. (user time, including nice time)
  sy: Time spent running kernel code. (system time)
  id: Time spent idle. Prior to Linux 2.5.41, this includes IO-wait time.
  wa: Time spent waiting for IO. Prior to Linux 2.5.41, included in idle.
  st: Time stolen from a virtual machine. Prior to Linux 2.6.11, unknown.
  gu: Time spent running KVM guest code (guest time, including guest nice).
Practical diagnostics
Section   Key fields                   What to watch
Procs     r (run-queue), b (blocked)   r > CPU cores = contention
Memory    swpd, free, buff, cache      Rising swpd with falling free = pressure
Swap      si, so                       Non-zero so means the kernel is swapping out
IO        bi, bo                       High bo + high wa hints at write-heavy workloads
System    in, cs                       Sudden spikes may indicate interrupt storms
CPU       us, sy, id, wa, st           High wa (I/O wait) = storage bottleneck
Catching a memory leak
Run vmstat 5 in one terminal while your suspect application runs in another. If free keeps falling and si/so climb over successive samples, physical RAM is being exhausted and the kernel has started swapping, which is classic leak behavior.
Finding I/O saturation
When wa (CPU wait) and bo (blocks out) soar while r remains modest, the CPU is idle but stuck waiting for the disk. Consider adding faster storage or tuning I/O scheduler parameters.
Detecting CPU over-commit
A sustained r that is double the number of logical cores with low wa and plenty of free means CPU is the bottleneck, not memory or I/O. Use top or htop to locate the busiest processes, or scale out workloads accordingly.
Conclusion
Mastering free and vmstat gives you a lens into memory usage, swap activity, I/O latency, and CPU load. For everyday debugging: start with free to check if your system is truly out of memory, then use vmstat to reveal the reason, whether it’s memory leaks, disk bottlenecks, or CPU saturation.
Stay tuned for the next piece in our “System insights with command-line tools” series and happy Fedora troubleshooting!
This is my recap of Flock to Fedora 2025, streamed live from Kenya! I would really like to thank the amazing team of speakers and volunteers who made Flock possible this year!
This recap is from a virtual attendee’s viewpoint, tuning in live from Kenya for June 5–6. Massive appreciation to everyone behind the scenes!
Day 1: Big Announcements, Bold Ideas
“10 years in the making,” as Justin W. put it – this year’s Flock kicked off with energy. The central track brought everyone together for keynotes and deep conversations on Fedora’s future.
The day opened with reflections from Matthew Miller, the outgoing Fedora Project Leader (FPL), who spoke candidly about the passion and challenges of the role – even nearly missing his daughter’s graduation! He handed the baton to the new FPL, Jef Spaleta, who stepped on stage with a vision.
Jef Spaleta laid out Fedora’s Strategy 2028 with a focus on mentorship, contributor growth, and aligning community efforts. He emphasized the need to support people better, build transparent processes, and push forward with long-term thinking.
The “Big Elephant” in the room – AI – was among the topics discussed, along with others.
One community member asked: “What’s the plan for those greying folks who’ve contributed for years – how do we keep them engaged?” That led to an eye-opening chat about FESCo nominations. Turns out, you don’t have to be a genius to participate. You just have to show up, say something, even if it’s “I didn’t have time to review.” It’s about honesty and growth. Not perfection. (Whew!)
And yes – apparently, it’s been the same Friends (Friends of Fedora) getting elected every time. Or maybe people don’t really know how the process works? Now, should we expect a guide on that? Many would need it, I guess. Let’s push them for it. Bookmark that.
PS: You can watch all this goodness on YouTube. Trust me, it’s worth it.
Sponsor time… And a Milo break
During the break, while the stream showed sponsor slides (thank you, sponsors – you keep the lights on!), I took a real break. Picture this:
Fresh hot milk (fresh of course; I never told you I used to milk cows when I was a kid) + Milo + a tiny bit of sugar = Happiness. (Sugar in Milo? That’s crazy!! Milo is mostly sugar. – Mat H)
So, take a breather… oh, I mean take a break, ’cause we’re not yet halfway through our article.
Welcome back from your break! I know you didn’t really take a break; I was just humoring you.
Yes, my Flock snack game was elite.
Back to Tech: Forgejo & Fedora Gaming?!
Otto Richter from Codeberg walked us through why Fedora moved to Forgejo for hosting – and what Codeberg is all about (all the goodies it has). He even gave a quick demo, plus a bonus pronunciation lesson. You can find the slides to the talk here on Pretalx.
Then came a surprise: Fedora’s Downstream Gaming Variant!
Noel Miller and Antheas from the Bazzite team introduced their work on making a variant of Fedora that is more gaming-friendly. I hadn’t even heard of Bazzite before this. Maybe I should try it? Yes, sometime. If you play games and love Fedora, these folks want you – whether you’re a dev, tester, or gamer who just clicks buttons and wins.
After lunch: SIGs, Mentorship, and Inspiration
After lunch, sessions split across rooms. I tried to be in two rooms at once (classic virtual-attendee problem). I joined the Topaz Room for the Fedora Join SIG session.
I’m actually a member of the Join SIG, so it felt like home. Our job? Help newcomers feel welcome, direct them to the right teams, and support their journey into Fedora. It’s possibly the easiest SIG to “join”—”if you hang around, you’re one of us.” That was simple, true and beautiful. Thanks to Ankur and AkashDeep Dhar, and yet another Friend who helped with the slides that didn’t make it – Mat H (theprogram).
But we need your help: we need you to help us help newcomers navigate the community and find their place in it. There are probably only a few of us helping out!
Fedora Docs: Where words matter
Somewhere between hopping between Topaz and Opal rooms, I landed in one of my favorite sessions – Fedora Documentation.
Yep, mark your calendars! You can accept the invite here; also say hi in the Discourse channel and keep the conversation going (we would love to hear from you, really).
We often forget that docs aren’t “extra.” They are the user’s first experience, the contributor’s first clue, and often the only map through the maze. And Fedora takes this seriously.
Later, in the Opal Room, I caught talks on mentorship programs like GSoC and Outreachy, led by Sumantro M. and Fernando.
And yes – drumroll please – this was especially meaningful for me – I’m currently an Outreachy Intern (June–August 2025)!
One standout talk: “Open Source Mentorship: Crafting Community Leaders” by Nikita Tripathi (an Outreachy alum still contributing to Fedora) and Samyak Jain. The session explored what mentorship is and isn’t – not just teaching, but growing together. As Samyak Jain shared: “Communities thrive on engagement. Your voice matters.”
Wrapping up Day 2: Scaling, Designing, and… T-shirts?
Quick Hits (a.k.a. Can’t-Miss Moments)
Before logging off, I caught “Scaling Fedora Ready through Community Contributions” by Roseline Bassey, presented by Justin W. The talk highlighted how Fedora Ready can grow through community testers, reviewers, and ambassadors, helping users find hardware that works well with Fedora.
Quick talks followed: “The Role of Designers in Open Source” by Smera reminded us that design matters just as much as code, and that designers should be involved from the very beginning of a project. Then came “How Do You Open Source a T-shirt?” by Troy Dawson. This was a cool one, cool shirts. And yes, I’m still thinking about it.
Both were refreshing takes on open source beyond code.
If you’re curious about the “hidden rooms” I missed or the full Day 2 content, check out the recordings on YouTube and the checklist on Pretalx. There’s only so much a human can digest in 48 hours – even with Milo.
Before we recap… have you filled out the Fedora Contributor & User Survey?
If you haven’t already, please take a moment to fill out the Fedora Contributor and User Survey. Your feedback helps shape the future of the Fedora Project.
Okay, let’s close now.
Final thoughts
Flock (2025 edition!) reminded me that Fedora is more than software. It’s people, mentorship, storytelling, experimentation, and community care. I’m glad I could stream the experience and share it – I’m hoping to join the next FLOCK.
As a remote attendee, I felt seen, included, and inspired. (Inclusion)
Looking forward to more conversations and continued impact.
Your Friend in Open Source, Cornelius – Open Source Freedom Fighter
Among the many details developers juggle, software licensing is often treated as an afterthought. We know we need it. However, faced with choosing the right license, tracking inherited code, and keeping things consistent, license management can feel like a bureaucratic burden.
That challenge is what makes the REUSE project, maintained by the Free Software Foundation Europe (FSFE), such an interesting and important effort. It does not try to replace the legal work involved in choosing a license or deciphering obligations. Instead, REUSE focuses on the mechanics of software licensing. It addresses how we communicate licensing clearly, unambiguously, and reliably in the code itself. REUSE has been adopted by many projects already, including SAP, Nextcloud, and numerous Ansible community roles and collections.
I recently went down the licensing and supply chain rabbit hole myself. I had to figure out how to apply REUSE to the open source projects I work on and explain it to others. Thus, I had the unique experience of learning it from scratch while also teaching it. That process gave me insight into what makes REUSE helpful. I learned where the roadblocks are, and how you can start using it in your own open source work. So this article aims to give additional reasoning and insights for everyday usage, beyond the scope of a quick-start tutorial.
Why licensing still feels broken
If you’ve ever tried to make sense of licensing in a codebase with contributions from half a dozen sources, or tried to package software only to find ambiguous or conflicting license declarations, you’ve seen the brokenness first hand. It’s a common pain point.
You start coding. A LICENSE file goes in the root. Maybe it’s MIT, maybe Apache 2.0, maybe GPLv3-or-later. We figure that’s enough. For the most part, tools like Licensee (which GitHub uses) will scan that file and report the project as single-licensed under whatever it finds.
But that is only part of the picture.
Real-world projects grow messy over time. Files come in from various places. Pull requests, upstream forks, old backups. Someone pastes in a script from Stack Overflow. Someone else uploads a code generator output. Over time, the repository becomes a tangle of files with unclear origins. The top-level LICENSE file can’t speak for all of it any more. But tools like Licensee don’t know that, and often neither do the maintainers.
If you provide code without clear licensing information, you make it hard for the open source ecosystem to collaborate with or consume your work. Unfortunately, approaches to automatic license detection can’t deliver the needed certainty. They rely on fuzzy matching, heuristics, and assumptions (like there is “one license for the project”). This just does not cut it when legal clarity is required. Automatic license heuristics are complicated and will never deliver reliable results for all the possible use cases.
FSFE REUSE to the rescue
Rather than trying to detect or infer licensing, REUSE asks developers to be explicit in a machine-readable, auditable way:
Every file carries (or is explicitly covered by) copyright and licensing information using SPDX identifiers.
The full text of every license used is stored in a LICENSES/ directory.
Compliance can be verified automatically with the reuse lint tool.
Again: this matters. It means anyone—an auditor, a packager, a contributor, or a compliance team—can look at any file in your repository and immediately understand its legal status. There is no guessing, no cross-referencing, no “well, maybe this falls under the MIT license because the rest of the project does.” It’s explicit. It’s standardized. And it is quickly lintable, which is great for teams using continuous integration.
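On Fedora, the reuse helper tool is packaged, so verifying a repository takes two commands (reuse lint exits non-zero on violations, which makes it easy to wire into CI):

$ sudo dnf install reuse
$ reuse lint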
Following REUSE, adding machine-readable copyright and licensing information can be done in the following ways:
Comment headers or <filename>.license for uncommentable files.
REUSE.toml, a machine-readable copyright file to address file and directory names. This is especially handy to define:
a default license for your project.
deviating licenses for third-party artefacts residing in a sub-directory.
You can be flexible with the format; just make sure that the line starts with “SPDX-License-Identifier:” and/or “SPDX-FileCopyrightText:”.
Comment headers
REUSE, and many organizations like GNU, recommend including license header comments in source files, as this helps prevent confusion or errors. Even if the REUSE.toml copyright file exists as the central place for licensing information, files sometimes get copied or forked into new projects, and third parties might not have a well-organized repository bureaucracy. Without a statement of what their license is, moving single files into another context might eliminate all trace of that information.
Example of a header comment:
# SPDX-FileCopyrightText: Andreas Haerter, ACME Corp (https://example.com)
# SPDX-License-Identifier: CC-BY-SA-4.0
/*
 * SPDX-FileCopyrightText: Jane Doe <j.doe@example.com>
 * SPDX-License-Identifier: Apache-2.0 OR LGPL-2.1-or-later
 */
REUSE.toml
You might come to the conclusion that skipping headers in every file and using only a REUSE.toml is better for your project… fair enough, and that will still be compliant. It is also possible to bulk-license whole directories using this technique. The file format is specified, but a simple example helps to get started:
version = 1
SPDX-PackageName = "Foo bar project"
SPDX-PackageDownloadLocation = "https://git.example.com/foobar"
SPDX-PackageSupplier = "ACME Inc. (https://example.com)"
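Below the package-level keys, [[annotations]] entries map path patterns to licensing information. A sketch continuing the example above (the paths and licenses are illustrative):

[[annotations]]
path = "**"
precedence = "aggregate"
SPDX-FileCopyrightText = "ACME Inc. (https://example.com)"
SPDX-License-Identifier = "GPL-3.0-or-later"

[[annotations]]
path = "vendor/foolib/**"
precedence = "aggregate"
SPDX-FileCopyrightText = "The fooLib contributors"
SPDX-License-Identifier = "MIT"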
Glossary
Software Bill of Materials (SBOM): A structured list of all software components and their licenses in a project. It helps with transparency, security audits, and legal compliance.
Copyleft (license): A type of open source license that ensures derivative works remain under the same license. It protects user freedoms by requiring shared modifications. The GPL is a well-known example.
Permissive (license): A license that allows code to be reused with minimal conditions, including in proprietary software, without giving back modifications. Common examples include MIT, BSD, and Apache 2.0.
TOML: A configuration file format. REUSE.toml (a machine-readable file in your project’s root directory) uses it to declare licensing information based on filename patterns.
DEP5: A machine-readable debian/copyright file which was used before REUSE.toml. DEP5, while still supported, has been deprecated since the introduction of REUSE.toml. This is important to know when hitting older documentation or tutorials.
My personal killer feature: Additional comments in REUSE.toml
It might sound trivial, but it was always cumbersome for me to keep track of the originally used download URLs and other common data around simple third-party files, like “this small icon there”. From my point of view, the REUSE.toml file is the ideal place to keep additional data on third-party files by using SPDX-FileComment, without cluttering the repository or the end-user documentation. If there is at least one example, in my experience, maintaining source information and reasoning for third-party files is quickly adopted even in teams without many regulations.
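A hypothetical example entry (the file name, origin URL, and comment text are illustrative):

[[annotations]]
path = "assets/icons/arrow.svg"
precedence = "aggregate"
SPDX-FileCopyrightText = "Jane Doe (https://example.org)"
SPDX-License-Identifier = "CC-BY-4.0"
SPDX-FileComment = "Downloaded from https://example.org/icons/arrow.svg on 2024-05-02; recolored to match the project palette."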
README section template about licensing and copyright for humans
I find it useful to have a generic, easy-to-adapt text snippet for the README.md or a comparable central place that is easy for humans to notice and read. I created and use the following template, taking advantage of the existing REUSE information to make the section basically maintenance-free without being useless:
## Licensing, copyright
<!--REUSE-IgnoreStart-->
Copyright (c) YYYY, ACME Inc.
This project is licensed under the GNU General Public License v3.0 or later (SPDX-License-Identifier: `GPL-3.0-or-later`), see [`LICENSES/GPL-3.0-or-later.txt`](LICENSES/GPL-3.0-or-later.txt) for the full text.
The [`REUSE.toml`](REUSE.toml) file provides detailed licensing and copyright information in a human- and machine-readable format. This includes parts that may be subject to different licensing or usage terms, such as third-party components. The repository conforms to the [REUSE specification](https://reuse.software/spec/). You can use [`reuse spdx`](https://reuse.readthedocs.io/en/latest/readme.html#cli) to create a [SPDX software bill of materials (SBOM)](https://en.wikipedia.org/wiki/Software_Package_Data_Exchange).
<!--REUSE-IgnoreEnd-->
Replace YYYY with the year of the first release or code contribution and adapt the mentioned license, filenames and links as needed. The HTML comments prevent REUSE linting errors when e.g. listing multiple licenses.
The wording already points to the copyright file (REUSE.toml) and mentions that parts of the project might be subject to different licensing than the main one. If this is not good enough, feel free to adapt the wording of the main “licensed under” sentence to highlight the main licensing rules without having to maintain every single bit outside of the copyright file. Examples (adapt as needed):
The project is dual-licensed under the
* GNU General Public License v3.0 or later (SPDX-License-Identifier: `GPL-3.0-or-later`), see [`LICENSES/GPL-3.0-or-later.txt`](./LICENSES/GPL-3.0-or-later.txt) for the full text.
* Apache License 2.0 (SPDX-License-Identifier: `Apache-2.0`), see [`LICENSES/Apache-2.0.txt`](./LICENSES/Apache-2.0.txt) for the full text.
[... usual template follows ...]
License detection on Github or Gitlab
If you follow REUSE, you will notice that GitHub and GitLab are no longer able to detect licensing information for your repository.
Even though automatic license detection is broken by design for the reasons outlined above, it is understandable that people want the detection tools to spit out something, even if unreliable, for indexes and searches, if only so your project is not at a disadvantage when inexperienced users search for projects and filter by often-broken metadata.
If you need this, a workaround is to place an additional LICENSE or COPYING file in the root directory of your project, containing the license with the strongest freedom protections, purely for search indexes and GitHub. This is allowed by REUSE: these files are explicitly ignored by the toolset and do not need an additional .license file or header.
If you want to prevent duplication of license texts, beware of another issue with Licensee: you can place a symlink at LICENSES/<your license>.txt pointing to the LICENSE or COPYING file in the project’s root directory, and reuse lint will follow that link. Licensee sadly does not support symlinks at all, so the more logical symlink from LICENSE or COPYING pointing to LICENSES/<your license>.txt does not solve the issue. I therefore recommend a real copy instead of a symlink to keep things accessible when using the workaround.
I, for myself, would use this workaround only if a single license is used for all of the project’s files. This prevents misunderstandings or conflicts. In all other cases, I would simply ignore GitHub’s limited behavior.
Years in copyright texts
This is not exactly a REUSE topic, but I noticed it gets discussed quite a lot when a project starts adopting REUSE. IANAL, but it is not necessary to update the copyright year, since the main legal intention is to state the year of the first public release or code contribution. It is common to do so anyway, though, especially since it shows third parties that a project is still alive.
I usually propose the following which might also be a useful technique for your project:
Update the copyright data, but maintain the copyright year only at central places like a project’s README.md to reduce the maintenance effort.
Simply add each year with a release or updates, separated by commas. You can use a timespan (yearX-yearY) for multiple subsequent years.
Example:
The first release and copyright statement was Copyright (c) 2013.
There were releases or updates in several but not all years afterwards:
2015 → Copyright (c) 2013, 2015.
2018 → Copyright (c) 2013, 2015, 2018.
2019 → Copyright (c) 2013, 2015, 2018, 2019.
2020 → Copyright (c) 2013, 2015, 2018-2020.
2021 → Copyright (c) 2013, 2015, 2018-2021.
2023 → Copyright (c) 2013, 2015, 2018-2021, 2023.
Conclusion
Licensing clarity is needed for sustainable collaboration in open source. The REUSE specification doesn’t try to replace legal frameworks or licensing decisions, but it makes the messy practicalities of license management predictable, explicit, and automatable.
Adopting REUSE can feel like extra effort at first, especially for existing codebases. But once in place, it pays off by making your project easier to understand, maintain, package, and… reuse. REUSE helps you express the legal structure of your project in a way that machines and humans can agree on. And that’s worth a lot.
It’s less than two weeks until the switch of Fedora Project services to our new datacenter, so I thought I would provide a reminder and status update.
Currently we are still on track to switch to the new datacenter the week of June 30th. As mentioned in previous posts:
End users hopefully will not be affected (mirrorlists, docs, etc should all be up and working all the time)
Contributors should expect applications and services to be down or not fully working on Monday the 30th and Tuesday the 1st. Contributors are advised to hold their work until later in the week and not report problems for those days while we work to migrate things.
Starting Wednesday the 2nd things should be up in the new datacenter and we will start fixing issues that are reported as we can do so.
We ask for your patience over the next few weeks as we work to make a smooth transfer of resources.
This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.
Week: 9 June – 13 June 2025
Infrastructure & Release Engineering
The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work. It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues
This blog post is a brief documentation of my journey through Google Summer of Code 2025 with the Fedora Community.
About Me:
Name: Tanvi Ruhika
e-Mail: tanviruhika1217@gmail.com
A 1st-year Computer Science (Core) student at GITAM University, India. I’ve always loved building things that feel futuristic yet genuinely useful, whether it’s a gesture-controlled robot, a voice-activated smart house, or an AI tool that speaks human. My core interests lie in artificial intelligence, automation, and developing tools that make technology more intuitive and accessible for developers.
I’m also drawn to creativity and design, and I’m always excited by projects that blend technology with a touch of personality. I’ve always looked for ways to expose myself to new opportunities and technologies, and Google Summer of Code felt like the perfect chance to do just that. When I got selected, I knew I wanted to give it my all, not just to build something meaningful, but to truly dive deeper into the world of open source.
Project Abstract
ExplainMyLogs is an innovative tool designed to transform complex system and application logs into clear, concise natural language explanations. This project aims to leverage large language models and machine learning techniques to help developers and DevOps engineers quickly identify, understand, and resolve issues within their infrastructure. By translating cryptic log entries into human-readable explanations and actionable insights, ExplainMyLogs will significantly reduce debugging time and lower the barrier to entry for infrastructure troubleshooting.
Project Goals
Enable progressive learning from user feedback to improve analysis accuracy.
Develop a log parser capable of handling various log formats from common services.
Create an AI-powered analysis engine that identifies patterns, anomalies, and potential issues in log data.
Build a natural language generator that produces clear explanations of detected issues.
Implement a command-line interface for easy integration into existing workflows.
Design a simple web interface for interactive log analysis and visualization.
Provide actionable recommendations for resolving identified issues.
The kernel team is working on final integration for Linux kernel 6.15. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, June 08, 2025 to Sunday, June 15, 2025. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.
How does a test week work?
A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
Download test materials, which include some large files
Read and follow directions step by step
The wiki page for the kernel test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.
Happy testing, and we hope to see you on one of the test days.
As we head into Flock, it’s time again to talk about #strategy2028 — our high-level plan for the next few years.
Since it’s been a while since I’ve given an update, I’m going to start at the top. That way, if this is new to you, or if you’ve forgotten all about it, you don’t need to go sifting through history for a refresher. If you’ve been following along for a while, you may want to skip down to “The Process” section, or if you just want to get to the practical stuff, all the way down to “Right Now”.
The Strategic Framework and High Level Stuff
Fedora’s Goals
Vision
The ultimate goal of the Fedora Project is expressed in our Vision Statement:
The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.
Mission
Our Mission Statement describes how we do that — we make a software platform that people can use to build tailored solutions. That includes offerings from our own community (like the Fedora Editions or Atomic Desktops) and those from our “downstreams” (like RHEL, Amazon Linux, Bazzite, and many more).
Strategy 2028
We also have a medium-term goal — the target of Strategy 2028. We have a “guiding star” metric for this:
Guiding Star
By the end of 2028, double the number of contributors[1] active every week.
But this isn’t really the goal. It’s a “proximate measure” — something simple we can count and look at to tell if we’re on track.[2]
The Goal of Strategy 2028
The goal itself is this:
The Fedora Project is healthy, growing, relevant, and ready to take on the next quarter-century.
But, goals aren’t strategy — they describe the world we want, and Fedora’s overall work, but not the path we’ll take to get there.
The Actual Strategy
During our Council Hackfest session, I realized that we haven’t really put this into writing — instead, we’ve jumped straight to other levels of the process. So, here it is:
1. Identify areas of community interest and effort which we believe will advance Fedora towards our goal.
The computing world changes quickly, and Fedora is a community-driven project. We can’t pick things out of thin air or wishful thinking. We also need to pick things that really, actually, practically will make a difference, and that’s a hard call. Making these calls is the fundamental job of the Fedora Council.[3]
2. Invest in those areas.
A strategy needs to have focus to be meaningful. The Council will devote time, energy, publicity, and community funding towards the selected areas. This necessarily means that other things won’t get the same investment. At least, not right now.
3. Check if the things we picked are working.
The “guiding star” metric is one way, of course, but we’ll need specific metrics, too. At the meeting, we agreed that we have been lazy on this in the past. It’s hard work, and when something isn’t working, can lead to hard conversations. We need to do better — keep reading for how we plan to do that.
4. When things are working, double down. When things aren’t, stop, change, or switch direction.
If we’re on the right track in one area, we should consider what we can do next to build on that. When something isn’t working, we need to take decisive action. That might be re-scoping an initiative, relaunching in the same area but with a different approach, or simply wrapping up. What we won’t do is let things linger on uncertainly.
5. Rinse, repeat!
Some of what we choose will be smaller bites, and some will be more ambitious. That means we expect to be choosing new initiatives several times a year.
The Process
Practically speaking, for each area we choose, we’ll launch a new Community Initiative. We know these haven’t always been a smashing success in Fedora, but the general concept is sound. We’re going to do a few things differently, driven by our Fedora Operations Architect. (Thanks, @amoloney.)
Better Community Initiatives
First, we will require better initial proposals. We need to see concrete milestones with dates and deliverables. There needs to be a specific plan of action — for example, if the Initiative intends to progress its technical work through a series of Changes, the plan should include a list of expected proposals with a brief description for each.[4]
Second, we will hold initiatives accountable. Each Initiative Lead should produce a monthly or weekly status report, and we will actively review each initiative every quarter.
Third, we will create “playbooks” for the roles of Initiative Lead and Executive Sponsor. The Lead is responsible for the work, and the Sponsor is accountable for its success. We’re working on written guidance and onboarding material so that when we start an Initiative, the people involved at the Council level know what they actually need to do.
Finally, we will provide better support. We’ll help develop the Initiative’s Logic Model rather than requiring it as part of the submission. We will be better at broadcasting the leadership of each Initiative, so community members (and the leaders themselves!) know that they’re empowered to do the work. We’ll make sure Initiatives are promoted at Fedora events, and in other ways throughout the year. We will prioritize Initiatives for in-person Hackfests and other funding. And, we will provide some program management support.[5]
Previously on Strategy 2028…
Our Themes
We started all of this a few years ago by asking for community input. Then, we grouped ideas we heard into Themes. These will be stable until the end of 2028 (when it’ll be time to do this whole thing over again). Under each theme, we have several Focus Areas. In bold, areas where we have a recently completed project, or something big in progress already. (See the footnotes.)
We spent the bulk of our time getting more specific about our immediate future. Under each theme, Council members identified potential Initiatives that we believe are important to work on next. We came up with a list of thirteen — which is way more than we can handle at once. We previously set a limit of four Initiatives at a time. We decided to keep to that rule, and are planning to launch four initiatives in the next months:
1. Editions block on a11y
Accessibility
This one is simple. We have release criteria for accessibility issues in Fedora Editions… but we don’t block on them. Sumantro will lead an effort to get all of our Editions in shape so that we can make these tests “must-pass” for release.
2. GitOps Experiment
Communications/Collaboration Tools
This is Aleksandra’s project to demonstrate how we could use a “GitOps” workflow to improve the packager experience from beginning to end. Matthew is the Executive Sponsor (for now!). Read more about this here: [RFC] New Community Initiative – GitOps for Fedora Packaging.
3. Gitforge Migration
Communications/Collaboration Tools
We’re moving to Forgejo. That’s going to be a long project with a lot to keep track of. Aoife is sponsoring the effort overall and will work with others on specific initiatives.
4. AI Devtools Out-of-Box
Tech Innovation
This is about making sure Fedora Linux is ready for people who want to work on machine learning and AI development. It isn’t about adding any specific AI or LLM technology. David is taking the lead here, with details in the works.
Next up
We can only focus on so much at once, but as current and near-future initiatives wrap up, these are the things we expect to tackle next, and an associated Council member. (That person may be either an Initiative Lead or an Executive Sponsor when the time comes.)
Bugzilla Archive (David) Red Hat is winding down bugzilla.redhat.com. There’s no planned shutoff date, but we should be ready. We are likely to move most issue tracking to Forgejo — it’d be nice to have packaging issues right next to pull requests. But the current bugzilla database is a treasure-trove of Fedora history which we don’t want to lose.
Discussions to Discourse (Matthew, for now) This is part of our overall effort to reduce Fedora’s collaboration sprawl — and to set us up for the future. It’s time to move our primary discussion centers from the devel and test mailing lists.
Get our containers story straight (Jason) The previous system we used to build containers was called “OSBS”, and was a hot mess of a hacked-up OpenShift, and not even the current kind of OpenShift. I know people are pretty skeptical about Konflux as a Koji replacement … but it can build containers in a better way.
Formal, repeatable plan for release marketing (Justin) We have a great Marketing team, but don’t do a great job of getting feature and focus information from Edition working groups to that team. We should build a better process.
More Fedora Ready (Matthew/Jef) Fedora Ready is a branding initiative for hardware vendors who want to signal that their product works well with our OS. Let’s expand this — and bring on more vendors with preinstalled Fedora Linux.
Mindshare funding for regional Ambassador planning events (Jona) This is the first step towards rebuilding our worldwide local community Ambassadors.
Silverblue & Kinoite are ready to be our desktop Editions, with bootc (Jason) We think image-based operating systems are the future — let’s commit.
CoreOS, IoT, and Atomic Desktops share one base image (Jason) Right now, we’ve got too many base images — can we get it down to one?
Fedora, CentOS, RHEL conversation (Matthew/Jef) See What everyone wants for more on this one.
See you all at Flock!
So, that’s where we are now, and our near-future plans. After Flock, look forward to more updates from Jef!
[1] For this purpose, we are using a broad definition of contributor. That is: A Fedora Project contributor is anyone who: 1) undertakes activities 2) which sustain or advance the project towards our mission and vision 3) intentionally as part of the Project, 4) and as part of our community in line with our shared values. A contribution is any product of such activities. So, active contributors for a week is the count of people who have made at least one contribution during that time.
[2] Um, yeah, I know that we don’t have a public dashboard with our estimate of this number yet. That’s because when we started, we quickly realized we need data scientist help — we need to make sure we’re measuring meaningfully.
[3] The Fedora Council has two elected positions, representatives from Mindshare and FESCo, and Leads for each Community Initiative. If you care about where we are going as a project, you could be the person in one of those seats!
Of course, this plan can evolve, but any major changes should be brought back to the Council. ︎
The Fedora Linux 42 election results are in! After one of our most hotly contested elections in recent memory, we can now share the results. Thank you to all of our candidates, and congratulations to our newly elected members of the Fedora Council, Fedora Mindshare, FESCo, and the EPEL Steering Committee.
Results
Council
Two Council seats were open this election. More detailed information on the voting breakdown is available from the Elections app in the ‘results’ tab.
Votes   Candidate
1089    Miro Hrončok
906     Aleksandra Fedorova
593     Akashdeep Dhar
586     Jared Smith
554     Shaun McCance
490     Fernando F. Mancera
447     Eduard Lucena
FESCo
Four FESCo seats were open this election. More detailed information on the voting breakdown is available from the Elections app in the ‘results’ tab.
Votes   Candidate
1036    Neal Gompa
995     Stephen Gallagher
868     Fabio Valentini
835     Michel Lind
625     Debarshi Ray
607     Jeremy Cline
559     Tim Flink
Mindshare Committee
Four Mindshare Committee seats were open this election. More detailed information on the voting breakdown is available from the Elections app in the ‘results’ tab.
Votes   Candidate
774     Emma Kidney
750     Sumantro Mukherjee
702     Akashdeep Dhar
670     Luis Bazan
623     Samyak Jain
587     Shaun McCance
529     Greg Sutcliffe
500     Eduard Lucena
EPEL Steering Committee
As we had the same number of open seats as we had candidates, the following candidates are elected to the EPEL Steering Committee by default:
Davide Cavalca
Robbie Callicotte
Neal Gompa
Once again thank you to all of our candidates this election. The caliber was truly amazing! Also thank you to all of our voters, and finally – congratulations to our newly elected representatives!
We are currently working on the Fedora 43 Wallpaper and wanted to update the community while also looking for contributors!
Each wallpaper is inspired by a historical figure in STEM whose name matches the letter of the alphabet we’re on. We are currently on the letter R, and we voted here, with Sally Ride as the winner.
Who is Sally Ride?
Sally Ride (May 26, 1951 – July 23, 2012) was a physicist and astronaut who became the first American woman in space on June 18, 1983. She was only the third woman in space ever!
Once her training at NASA was finished, she served as the ground-based CapCom for the second and third Space Shuttle flights. She also helped develop the Space Shuttle’s robotic arm, which helped her earn a spot on the STS-7 mission in June 1983. On that mission, two communication satellites were deployed, as was the first Shuttle pallet satellite (SPAS-1). Ride operated the robotic arm to deploy and retrieve SPAS-1, which carried ten experiments to study the formation of metal alloys in microgravity.
Ride later became the president and CEO of ‘Sally Ride Science’, which created entertaining science programs and publications for upper elementary and middle school students, focusing largely on female students.
Ride and her lifelong partner O’Shaughnessy co-wrote six books on space aimed at children, to encourage them to study science. Ride remarked, “Everywhere I go I meet girls and boys who want to be astronauts and explore space, or they love the ocean and want to be oceanographers, or they love animals and want to be zoologists, or they love designing things and want to be engineers. I want to see those same stars in their eyes in 10 years and know they are on their way.” It was only after her death that it was revealed she had been the first LGBT astronaut in space.
Brainstorming
The design team held a separate meeting from our usual time to dedicate an hour to gathering visuals related in some way to Ride’s work, such as the imagery of space used in the books she created.
Possible Themes to Develop:
Space Mid-Century Modern Graphics
This is probably my personal preference! Mid-century modern is characterized by clean lines, bold saturated colors, and organic forms from nature. It was most popular from the late 1940s to the 1960s, extending into the years when the space race first laid its roots.
Going down this route would result in a colorful wallpaper, though not an overwhelming one, since it would be limited to a small color palette. The idea was sparked by Ride’s dedication to education and teaching, as these types of graphics often popped up in schools as informative posters.
Blueprint of Space
A dark background with planets and white details presenting information, just as a blueprint would. This was also sparked by the type of graphics you would find in a school. The only problem that might arise is too much detail: wallpapers on the whole are supposed to be quite simple so the user can have a calm experience, and enough detail to make it look like a blueprint might also make it too busy. However, I’m sure a balance of both could be found.
Colorful Space
We have several space-themed wallpapers that show the stars or planets, so this would be a nod to them (see F33, F24, F10, and F9) as well as to the most well-known part of Ride’s career. Including some of the colors from Fedora’s color palette, like Freedom Purple, Friends Magenta, Features Orange, and First Green, in the galaxy or planetary visuals would be a great option, though nothing so bright and electric that it irritates the viewer.
As a Python developer, you work hard to ensure your code works correctly across different Python versions. Testing against Python 3.11, 3.12, 3.13, and beyond can be tedious. But what if your continuous integration (CI) pipeline could handle it automatically? This is where GitHub Actions and tox come in: a powerful combination for seamless CI and multi-version testing.
Introduction
Imagine you are a developer at a small real estate company, racing against the clock to deliver a groundbreaking fizz-buzz feature in the company’s Python app. You make a last-minute fix and commit. After testing locally, you merge with confidence, push to GitHub, and ship. Then disaster strikes: the change breaks an existing feature, and the new feature you added does not work. Management is furious, and the only excuse you can give is “it worked on my machine”.
If only you had set up GitHub Actions for continuous integration (CI), you could have made sure it worked on different versions of Python.
In this article, you will learn how to set up GitHub Actions to manage continuous integration for your Python projects. You will also learn how to test your code on different versions of Python using tox on Fedora.
Prerequisites
A GitHub account.
A Fedora, CentOS, or RHEL server. This guide uses Fedora 41 Server edition; for the purposes of this guide, Fedora, CentOS, and RHEL servers are interchangeable.
A user account with sudo privileges on the server.
Command-line competency.
A Python 3.13 environment with the poetry, tox, and pytest packages installed.
What is Continuous Integration?
Continuous Integration (CI) is a practice where you merge your code into the shared repository several times a day. CI reduces software defects because every change you push to the repository is built and verified by automated tests. Generally, CI refers to the server side of the build process, where you run unit tests and build your application, although it can also be done locally. To carry out CI, you define a build pipeline using YAML. The build pipeline runs a set of automated tools for unit tests, security checks, documentation generation, or code quality checks.
Why use CI?
Central to CI is code stability. By running unit tests on your code whenever you make a change, you can be confident that those changes do not introduce defects into your codebase. This way you know your code is stable, commit after commit.
There are two important aspects of CI that ensure code stability:
CI checks that code compiles or builds successfully.
CI checks that all unit tests pass successfully.
What is Tox?
Tox is a tool that automates Python unit tests across multiple Python environments. According to the tox documentation, you can use tox for (a minimal configuration sketch follows this list):
checking that your package builds and installs correctly in different environments
running your tests in each of those environments with the test tool of your choice
acting as a frontend to continuous integration servers
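The sketch below shows what such a configuration might look like for the calculator project later in this article. The environment list and the pytest dependency are illustrative assumptions, not the tutorial repository’s actual tox.ini:

# tox.ini (illustrative sketch)
[tox]
envlist = py312, py313

[testenv]
deps = pytest
commands = pytest

When you run tox, it creates a virtual environment for each listed Python version, installs your package and pytest into it, and runs the tests in each one.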
What are GitHub Actions?
GitHub Actions is a feature of GitHub.com that serves as an automation engine for CI. It lets you automate tasks directly within your GitHub repository using workflows.
To use GitHub Actions effectively, you need to understand how the system works, from the top down.
An event is a specified activity that triggers a workflow, such as a commit being pushed or a pull request being opened. In this tutorial, the event is you pushing a commit to your GitHub repository.
A workflow is an automated process that runs when an event occurs. It defines how code is tested, built, or compiled using actions. A YAML file in the .github/workflows directory of your repository defines the steps in the workflow.
A job runs the actions you specify. There is no limit on the number of actions you can run in a job, but a job has a maximum execution time of 6 hours. Jobs execute in runners (containers or virtual machines); you can choose Linux, Windows, or macOS runners for your CI jobs.
An action is the smallest building block of a workflow. According to the GitHub documentation, “an action is a custom application for the GitHub Actions platform that performs a complex but frequently repeated task”. You can write custom actions as Node.js scripts or use actions from the GitHub Marketplace.
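To see how events, workflows, jobs, and actions fit together, here is a minimal workflow sketch. It uses the standard actions/checkout and actions/setup-python actions; the file name and version matrix are illustrative assumptions, and this is not the exact workflow from the tutorial repository mentioned below:

# .github/workflows/ci.yml (illustrative file name)
name: CI                     # the workflow
on: [push]                   # the event that triggers it
jobs:
  test:                      # a job, executed in a runner
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.12", "3.13"]
    steps:
      - uses: actions/checkout@v4        # an action from the GitHub Marketplace
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install tox
      - run: tox -e py                   # run tox against the interpreter selected by the matrix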
Test a Python project
This Python project is a calculator with functions for adding and multiplying numbers only. You will use pytest and tox to test the code on Python 3.12 and 3.13. You will also push the code to GitHub and use GitHub Actions for CI.
HEADS UP: Remember that GitHub Actions runs CI jobs in runners, and runners can be Windows, Ubuntu, or macOS. Did you notice that Fedora is not on the list?
Check out this repository on GitHub. It contains working code for the calculator, the tests, and the workflow for this tutorial. It uses the tox-github-action from Fedora Python to run the tests in CI. The tests run in a Fedora container, which is hosted on an Ubuntu runner.
This is the file structure you will work with:
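(The layout below is reconstructed from the paths mentioned in this article; the workflow file name, pyproject.toml, and tox.ini at the root are assumptions.)

ciwithfedora/
├── .github/
│   └── workflows/
│       └── ci.yml
├── ciwithfedora/
│   ├── calculator.py
│   └── test_calculator.py
├── pyproject.toml
└── tox.ini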
Step 1: Write the Code
Here is the calculator code at /ciwithfedora/calculator.py
def add(a, b):
    """Returns the sum of two numbers."""
    return a + b

def multiply(a, b):
    """Returns the product of two numbers."""
    return a * b
Step 2: Write unit tests
Here is the unit test code at /ciwithfedora/test_calculator.py
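(The listing below is a minimal sketch of such tests, assuming plain pytest functions over the add() and multiply() defined above; the repository’s actual tests may differ.)

# Import path assumes the package layout shown earlier.
from ciwithfedora.calculator import add, multiply

def test_add():
    """add() returns the sum of its arguments."""
    assert add(2, 3) == 5

def test_multiply():
    """multiply() returns the product of its arguments."""
    assert multiply(2, 3) == 6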
Here’s another update on the upcoming Fedora Project Datacenter move.
Summary: there have been some delays; the current target week for switching to the new Datacenter is now the week of 2025-06-30 (formerly 2025-06-16).
The plans we mentioned last month are all still in place, just moved out two weeks.
Why the delay? There were some delays in getting networking set up in the new datacenter, but that has now been overcome and we are back on track, just a bit later than planned.
Here’s a rundown of the current plan:
We now have access to all the new hardware; its firmware has been updated and configured.
We have a small number of servers installed, and this week we are installing the OS on more servers as well as building out VMs for various services.
Next week is Flock, so we will probably not make too much progress, but we might do some more installs/configuration if time permits.
The week after Flock, we hope to get the OpenShift clusters all set up and configured.
The week after that, we will start moving applications that aren’t closely tied to the old datacenter. If they don’t have storage or databases, they are good candidates to move.
The next week will be for any other applications we can move.
The week before the switch will be spent getting things ready (making sure data is synced, plans are reviewed, etc.).
Finally, the switch week (the week of June 30th): Fedora Project users should not notice much during this change. Mirrorlists, mirrors, docs, and other user-facing applications should continue working as always. Update pushes may be delayed a few days while the switch happens. Our goal is to keep any end-user impact to a minimum.
For Fedora contributors, on Monday and Tuesday we plan to “move” the bulk of applications and services. Contributors should avoid doing much on those days, as services may be moving around or syncing in various ways. Starting Wednesday, we will make sure everything is switched over and fix problems or issues as they are found. Thursday and Friday will continue stabilization work.
The week after the switch, some newer hardware in our old datacenter will be shipped down to the new one. This hardware will be added to increase capacity (more builders, more openQA workers, etc.).
This move should put us in a nicer place, with faster, newer, and better hardware.