I came across the post about the Milk-V Titan, and there was a comment asking whether the lack of the V extension would hinder running Ubuntu 25.10, which targets a particular RISC-V configuration. That got me wondering whether there's an opportunity here for microkernels to exploit.
Now, up-front: it's been literally decades since I had an OS design class, and my knowledge of OS design is superficial; and while I've always been interested in RISC architectures, the depth of my knowledge there also dates back to the '90s. In particular, my knowledge of RISC-V's extension design approach is really, really shallow. It's all at a lower level than I've concerned myself with for years and years. So I'm hoping for an ELI-16 conversation.
What I was thinking is that a challenge of RISC-V's design is that operating systems can't rely on extensions being available, which (in my mind) means either a lot of really specific kernel builds -- potentially exponential in the number of extensions -- or a similar number of code paths in the kernel, making for more complicated and consequently more buggy kernels (per the McConnell rule). That made me wonder whether this is an opportunity for microkernels to shine, by exploiting an ability to load extension-specific modules based on a given CPU's capability set.
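To make that concrete (mostly to check my own understanding), here's roughly the mechanism I'm picturing, as a plain C sketch. The ISA-string parsing is loosely modelled on the "riscv,isa" strings that firmware reports; the capability flags, the module table, and the .srv names are all made up for illustration:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Invented capability flags for a few single-letter extensions. */
enum {
    CAP_M = 1 << 0,   /* integer multiply/divide */
    CAP_A = 1 << 1,   /* atomics */
    CAP_F = 1 << 2,   /* single-precision FP */
    CAP_D = 1 << 3,   /* double-precision FP */
    CAP_C = 1 << 4,   /* compressed instructions */
    CAP_V = 1 << 5,   /* vector */
};

/* Turn an "rv64imafdc"-style string into a capability bitmask.
 * Sketch only: assumes an rv64i prefix and ignores the multi-letter
 * Zxx/Sxx extensions a real parser would also have to handle. */
static uint32_t parse_isa(const char *isa)
{
    uint32_t caps = 0;
    const char *p = isa + strlen("rv64i");
    for (; *p && *p != '_'; p++) {
        switch (*p) {
        case 'm': caps |= CAP_M; break;
        case 'a': caps |= CAP_A; break;
        case 'f': caps |= CAP_F; break;
        case 'd': caps |= CAP_D; break;
        case 'c': caps |= CAP_C; break;
        case 'v': caps |= CAP_V; break;
        }
    }
    return caps;
}

/* Hypothetical module table: each module declares the capabilities it
 * needs, and the loader only starts the ones the CPU actually has. */
struct module {
    const char *name;
    uint32_t    needs;
};

static const struct module modules[] = {
    { "memcpy_vector.srv", CAP_V },
    { "memcpy_scalar.srv", 0 },
    { "fpu_ctx.srv",       CAP_F | CAP_D },
};

int main(void)
{
    /* e.g. a CPU without the V extension */
    uint32_t caps = parse_isa("rv64imafdc");

    for (size_t i = 0; i < sizeof(modules) / sizeof(modules[0]); i++) {
        if ((modules[i].needs & caps) == modules[i].needs)
            printf("load %s\n", modules[i].name);
        else
            printf("skip %s (missing extension)\n", modules[i].name);
    }
    return 0;
}
```

A real loader would presumably get the capability set from the device tree or something like Linux's riscv_hwprobe rather than a hard-coded string, but that's the shape of the idea.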
As I see it, the practicality of this depends on whether the extensions would be isolatable to kernel modules, or whether (like the FP extension) they'd be so intrinsic that even the core kernel would need to vary. Even so, wouldn't a set of core-kernel build permutations be smaller, more manageable, and less bug-prone than the same permutations of a full monolithic kernel?
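For what I mean by "intrinsic": my (possibly wrong) mental model is that the per-thread context the core kernel saves on every switch changes shape depending on whether the F/D extensions are present, so that variation leaks into the core kernel no matter how modular the rest is. The CONFIG_RISCV_FPU option and the struct layout here are invented for illustration:

```c
#include <stdint.h>

/* Per-thread register state the core kernel saves on a context switch.
 * The integer part is always there; the FP part only exists on F/D parts,
 * so both the struct's size and the switch path depend on a build option. */
struct cpu_context {
    uintptr_t ra;               /* return address */
    uintptr_t sp;               /* stack pointer */
    uintptr_t s[12];            /* callee-saved integer registers s0-s11 */
#ifdef CONFIG_RISCV_FPU         /* hypothetical build option */
    uint64_t  f[32];            /* f0-f31: only present with F/D */
    uint32_t  fcsr;             /* FP control/status register */
#endif
};
```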
Given the number of possible RISC-V extension combinations, wouldn't a microkernel design have an intrinsic advantage over a monolithic kernel, by being able to exploit the more modular nature of its design?
Edit (clarification):
Yeah, for me it's about more than just "produces correct output." I don't expect to see 5 pages of sequential if-statements (which, ironically, is pretty close to an LLM's internal design), but also no unnecessary nested loops. "Correct" means producing the right results, but also not being O(n²) (or worse) when that's avoidable.
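As a toy illustration of what I mean by "avoidable": both versions below give the right answer for a duplicate check, but the first hides an O(n²) that sorting (or a hash set) removes. The function names are just for this example:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Quadratic: compares every pair of elements. */
static bool has_duplicate_quadratic(const int *a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (a[i] == a[j])
                return true;
    return false;
}

static int cmp_int(const void *x, const void *y)
{
    int a = *(const int *)x, b = *(const int *)y;
    return (a > b) - (a < b);
}

/* O(n log n): sort a copy, then any duplicates are adjacent. */
static bool has_duplicate_sorted(const int *a, size_t n)
{
    int *copy = malloc(n * sizeof *copy);
    if (!copy)
        return has_duplicate_quadratic(a, n);   /* fall back if allocation fails */
    memcpy(copy, a, n * sizeof *copy);
    qsort(copy, n, sizeof *copy, cmp_int);

    bool dup = false;
    for (size_t i = 1; i < n && !dup; i++)
        dup = (copy[i] == copy[i - 1]);
    free(copy);
    return dup;
}

int main(void)
{
    int a[] = { 3, 1, 4, 1, 5 };
    assert(has_duplicate_quadratic(a, 5));
    assert(has_duplicate_sorted(a, 5));
    return 0;
}
```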
The thing that puts me off most, though, is how it usually expands code for clarified requirements in the worst possible way. Like, you start with a simple spec and make consecutive clarifications, and the code gets worse. And if you ask it to refactor for clarity, it'll often refactor the code to look better, but it'll no longer produce the correct output.
Several times I've asked it for code in a language where I don't know the libraries well, and it'll give me code using functions that don't exist. And when I point out they don't exist, I get an apology and sometimes a different function call that also doesn't exist.
It's really wack how people are using this in their jobs.