Regardless of how one might feel about patches-over-email software development, the reality is that a lot of exciting open source projects are developed on mailing lists. Configuring a pleasant plaintext-oriented email environment may not be obvious for those of us who come from a primarily Git-forge background. At least, it wasn’t for me. In any case, I’ve finally arrived at a productive setup, and I’d like to write it down here for posterity.
For everything but the most trivial of patches, check the change out locally. Not only is this a technical prerequisite for some of the other tips in this article, but I’ve found it easier to remain focused on the review when it takes place outside of my email inbox, GitHub, GitLab, etc.
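As a concrete sketch of what “checking the change out locally” can look like, `git am` applies a mailbox-format patch as a real commit on a dedicated review branch. The demo below simulates both sides of the exchange in a throwaway directory (all names and paths here are made up for illustration):

```shell
set -e
# Throwaway demo: the contributor exports a patch with git format-patch
# (the same format a patch email carries), and the reviewer applies it
# locally with git am.
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=d@example.com commit -q --allow-empty -m "base"

# Contributor side: make a change and export it as a mailbox-format patch.
echo "hello" > greeting.txt
git add greeting.txt
git -c user.name=demo -c user.email=d@example.com commit -qm "Add greeting"
git format-patch -1 -o ..    # writes ../0001-Add-greeting.patch

# Reviewer side: rewind to the base and apply the patch as a local commit
# on a review branch, where it can be built, tested, and browsed in context.
git checkout -q -b review HEAD~1
git -c user.name=rev -c user.email=r@example.com am ../0001-Add-greeting.patch
git log --oneline -1
```

In a real workflow, the file passed to `git am` would be the patch series saved from your mail client rather than one generated locally.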
The default context for a diff is rather narrow: it shows the lines added and removed alongside only a few of the surrounding lines of code. This can reduce the efficacy of the review unless you are already intimately familiar enough with the surrounding codebase to picture it without having it in front of you.
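One way to widen that context is the `-U<n>` flag, which `git diff`, `git show`, and `git log -p` all accept; it asks for `n` lines of context instead of the default 3. A small self-contained demonstration:

```shell
set -e
# Throwaway repo: a 30-line file with one line changed, viewed with
# default context and then with widened context.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
seq 1 30 > file.txt
git add file.txt
git -c user.name=demo -c user.email=d@example.com commit -qm "Add file"
sed -i 's/^15$/fifteen/' file.txt
git -c user.name=demo -c user.email=d@example.com commit -qam "Change line 15"

git show          # default: only 3 lines of context around the change
git show -U10     # widened: 10 lines of context on each side
```

Git also supports `-W` (`--function-context`) to expand each hunk to the entire enclosing function, which is often exactly the granularity you want for review.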
I am pleased to announce that sevctl will be available in the Fedora repositories starting with Fedora 34. Fedora is the first distribution to include sevctl in its repositories 🎉.
sevctl is an administrative utility for managing the AMD Secure Encrypted Virtualization (SEV) platform, which is available on AMD’s EPYC processors. It makes many routine AMD SEV tasks quite easy, such as:
Generating, exporting, and verifying a certificate chain
Displaying information about the SEV platform
Resetting the platform’s persistent state
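The tasks above map onto sevctl subcommands roughly as follows. Treat this as a sketch from the tool’s CLI help: subcommand names and arguments may differ between versions, and these commands only do something useful on an SEV-capable EPYC host.

```shell
# Hypothetical session on an SEV-capable host (names may vary by version).
sevctl export /tmp/cert-chain   # export the platform's certificate chain
sevctl verify                   # verify the certificate chain
sevctl show flags               # display SEV platform information
sevctl reset                    # reset the platform's persistent state
```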
As of this writing, Fedora 34 is entering its final freeze, but sevctl is queued for inclusion once Fedora 34 thaws. sevctl is already available in Fedora Rawhide for immediate use.
Do you ever get frustrated with waiting for a heavy VM image to download or with installing operating systems onto virtual machines manually? It can start to feel cumbersome after a while, especially if you bring up and tear down lots of virtual machines as part of your workflow. It’d be nice if spawning a ready-to-use VM was as quick and as easy as it is when using a public cloud.
Don’t you love it when your compiler thinks hard so you don’t have to? Rust’s built-in static analysis is praised for providing all kinds of safety guarantees for your code. Today, it’s not about your code, or even my code; it’s about how calling Linux ioctls through a type-safe abstraction layer exposed a bug in an ioctl definition and Rust’s type-checker was the first one to bark about it!
iocuddle is a library for improving the safety of ioctl calls from Rust. But what’s so unsafe about ioctls that we need a crate for them in the first place? The Linux kernel’s ioctl mechanism is a minimal interface that allows Linux module developers to provide APIs to userspace that don’t necessarily fit the mold of the primary module classes: char, block, and net. To this end, the ioctl function definition must be broad enough to avoid constraining the interfaces that module authors can expose.
Hacktoberfest 2020 had a rocky start. I’m not here to argue against any of the criticisms brought up by other members of the community. Their feedback is not unfounded. However, I don’t believe it was all bad.
I signed one of my weekend projects up for Hacktoberfest to gain some more experience in a maintainer role rather than an individual contributor role. In this regard, I believe Hacktoberfest 2020 was a successful experience for me and for the contributors who spent their time and energy submitting patches to my project.
A while back, I participated in a software engineering capstone with a group of other computer science students to complete my degree. Our project was to create a from-scratch implementation of grsecurity’s “randstruct” GCC plugin for the Clang compiler. Long story short, we ended up sending out a request for comments (RFC) on the initial draft that we produced during the capstone. A number of Clang/LLVM contributors took the time to review what we made and kindly suggested some changes for a future revision.
Have you ever wondered how Linux knows what PCI devices are plugged in? How does Linux know what driver to associate with the device when it detects it?
In short, here’s what happens:
During the kernel’s init process (init/main.c), various subsystems are brought up according to their “init levels.” Among these early subsystems are the ACPI subsystem and the PCI bus driver.
The ACPI subsystem probes the system bus. This “probe” is actually a recursive scan since there can be other devices that act as “bridges” from that main system bus.
Each bus is probed, that is, asked to enumerate the devices connected to it. It’s at this point we’ll start seeing their sysfs entries.
For each device that the bus sees, it will attempt to associate a device driver with it. How does it know how to do this? Well, it’s actually up to the device driver: it’s the driver’s responsibility to export a table of the devices it supports when it registers itself with the PCI subsystem. This table is used by the hotplug system to map modules to the PCI devices they support. It’s basically a phone book of who to call when dealing with a given PCI device.
Assuming a match, the kernel will (eventually) call the driver’s probe() function, and the device driver can decide whether or not it claims the device. Yes, the kernel basically takes the device and walks up to the driver(s) that claim they can handle a certain device and then asks, “is this your kid?”
Remember, any device driver whose module_init equivalent has been called (built-in or module) is included in this roll call, so long as the device matches an entry in its supported device table. Built-in drivers are asked first, in the order they’re linked into the kernel image.
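The driver side of the steps above can be sketched as a minimal PCI driver skeleton (the vendor/device IDs and names here are hypothetical, and this is a simplified outline rather than a complete driver). `MODULE_DEVICE_TABLE` publishes the “phone book” entry that hotplug matching uses, and `.probe` is the “is this your kid?” callback the PCI core invokes on a match:

```c
#include <linux/module.h>
#include <linux/pci.h>

/* The "phone book" entry: the IDs this driver claims to support. */
static const struct pci_device_id demo_ids[] = {
	{ PCI_DEVICE(0x1234, 0x5678) },	/* hypothetical vendor:device */
	{ 0 }
};
MODULE_DEVICE_TABLE(pci, demo_ids);	/* exported for hotplug/modprobe matching */

/* Called by the PCI core when a device matches demo_ids. */
static int demo_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	/* Claim the device: enable it, map BARs, register interfaces, etc.
	 * Returning nonzero declines the device ("not my kid"). */
	return pci_enable_device(pdev);
}

static void demo_remove(struct pci_dev *pdev)
{
	pci_disable_device(pdev);
}

static struct pci_driver demo_driver = {
	.name     = "demo-pci",
	.id_table = demo_ids,
	.probe    = demo_probe,
	.remove   = demo_remove,
};
module_pci_driver(demo_driver);	/* registers the driver with the PCI core */

MODULE_LICENSE("GPL");
```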
Finally, let’s take a look at a stack trace from a kernel running in QEMU. We’ll start from the bottom and work our way up. (I’ve removed the function addresses from the stack trace to reduce clutter.)
It is sometimes hard to see past the thick haze of caution surrounding the use of C Preprocessor macros in your code. Indeed, the dangers of Macromancy are well-stated in many corners of the internet. You would do well to heed them.
However, there are times when a judicious use of macros can help reduce duplication in your code and make it tidier and easier to reason about. Every time a block of code is duplicated by hand, it becomes yet another maintenance burden to contend with. If it just so happens that one of these blocks of code contains a bug… well… you get to fix it in more than one place.