- cross-posted to:
- linux@sh.itjust.works
- linux@lemmy.zip
- linux@lemmy.world
Full text of the post by Asahi Lina (@lina@vt.social):
I regretfully completely understand Wedson’s frustrations.
A subset of C kernel developers just seem determined to make the lives of the Rust maintainers as difficult as possible. They don’t see Rust as having value and would rather it just go away.
When I tried to upstream the DRM abstractions last year, it was all blocked on basic support for the concept of a “Device” in Rust. Even just a stub wrapper for struct device would have been enough.
That simple concept only recently finally got merged, over one year later.
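A “stub wrapper” in this sense is just a refcounted handle around the raw pointer, with no device-model functionality at all. A minimal userspace sketch of the pattern (the names `device`, `get_device`, and `put_device` mirror the kernel’s C API, but everything here is mocked for illustration — this is not the actual upstream abstraction):

```rust
use std::sync::atomic::{AtomicU32, Ordering};

// Mock of the C `struct device` with its embedded refcount.
// In the real kernel this would come from bindgen-generated bindings.
#[allow(non_camel_case_types)]
#[repr(C)]
pub struct device {
    refcount: AtomicU32,
}

// Mocks of the C get_device()/put_device() refcounting helpers.
unsafe fn get_device(dev: *mut device) {
    unsafe { (*dev).refcount.fetch_add(1, Ordering::Relaxed) };
}
unsafe fn put_device(dev: *mut device) {
    unsafe { (*dev).refcount.fetch_sub(1, Ordering::Release) };
}

/// The stub wrapper: it adds nothing except ownership of one
/// reference, so the pointer can be passed around safely.
pub struct Device {
    ptr: *mut device,
}

impl Device {
    /// # Safety
    /// `ptr` must point to a live `struct device`; the wrapper takes
    /// over one reference already held by the caller.
    pub unsafe fn from_raw(ptr: *mut device) -> Self {
        Device { ptr }
    }
}

impl Clone for Device {
    fn clone(&self) -> Self {
        // Each clone holds its own reference.
        unsafe { get_device(self.ptr) };
        Device { ptr: self.ptr }
    }
}

impl Drop for Device {
    fn drop(&mut self) {
        unsafe { put_device(self.ptr) };
    }
}

fn main() {
    let mut raw = device { refcount: AtomicU32::new(1) };
    let dev = unsafe { Device::from_raw(&mut raw) };
    let dev2 = dev.clone();
    assert_eq!(raw.refcount.load(Ordering::Relaxed), 2);
    drop(dev2);
    drop(dev);
    assert_eq!(raw.refcount.load(Ordering::Relaxed), 0);
    println!("refcount balanced");
}
```

That is the entire concept: `Clone` takes a reference, `Drop` puts one back, and safe code can never leak or double-free the device.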
When I wrote the DRM scheduler abstractions, I ran into many memory safety issues caused by bad design of the underlying C code. The lifetime requirements were undocumented and boiled down to “design your driver like amdgpu to make it work, or else”.
My driver is not like amdgpu, it fundamentally can’t work the same way. When I tried to upstream minor fixes to the C code to make the behavior more robust and the lifetime requirements sensible, the maintainer blocked it and said I should just do “what other drivers do”.
Even when I pointed out that other C drivers also triggered the same bugs because the API is just bad and unintuitive and there are many secret hidden lifetime requirements, he wouldn’t budge.
One C driver works, so Rust drivers must work the same way.
Making the Rust bindings safe would have required duplicating much of the functionality of the C code just to track things to uphold the lifetime requirements. It made no sense. It would have been easier to just rewrite the whole thing in Rust (I might end up doing that).
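The “duplicating functionality just to track lifetimes” problem looks roughly like this (hypothetical names; a sketch of the general pattern, not the actual DRM scheduler bindings): the C side already keeps its own internal list of pending jobs, but because its lifetime rules are implicit, a safe wrapper has to keep a parallel collection of owned references so Rust can guarantee nothing is freed while C might still touch it.

```rust
use std::sync::Arc;

// Stand-in for an object the C scheduler may reference after submit().
struct Job {
    name: String,
}

/// Hypothetical safe wrapper. The C scheduler tracks pending jobs
/// internally, but its lifetime rules are undocumented, so the safe
/// abstraction must duplicate that bookkeeping: it holds an owned Arc
/// for every submitted job until teardown, guaranteeing each job
/// outlives any use by the (imagined) C side.
struct Scheduler {
    // Parallel bookkeeping that exists only to uphold lifetimes.
    pending: Vec<Arc<Job>>,
}

impl Scheduler {
    fn new() -> Self {
        Scheduler { pending: Vec::new() }
    }

    fn submit(&mut self, job: Arc<Job>) {
        // A real binding would also call into C here; we model only
        // the shadow tracking that makes the API safe.
        self.pending.push(job);
    }
}

impl Drop for Scheduler {
    fn drop(&mut self) {
        // Only at teardown is it provably safe to release the jobs.
        for job in self.pending.drain(..) {
            println!("releasing {}", job.name);
        }
    }
}

fn main() {
    let mut sched = Scheduler::new();
    let job = Arc::new(Job { name: "frame-0".into() });
    sched.submit(job.clone());
    // Even if the caller drops its handle, the wrapper's duplicate
    // tracking keeps the job alive for the scheduler.
    drop(job);
    assert_eq!(Arc::strong_count(&sched.pending[0]), 1);
}
```

Every job ends up tracked twice, once by C and once by Rust, purely because the C API’s lifetime contract is too vague to encode directly — which is the duplication being complained about above.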
To this day, bugs in the DRM scheduler have been the only causes of kernel panics triggered via my Apple GPU driver in production.
The design of that component is just bad. But because I come from the Rust world, the maintainer didn’t want to listen to my suggestions.
If it takes a whole year to get a concept as simple as a trivial “device” wrapper upstreamed (not any device model functionality, literally just an object wrapping a struct device so we can pass it around) then how is Rust for Linux ever going to take off?
Rust works. I’m pretty sure I’m the only person ever to single-handedly write a complex GPU kernel driver that has never had a memory safety kernel panic bug (itself) in production, running on thousands of users’ systems for 1.5 years now.
Because I wrote it in Rust.
But I get the feeling that some Linux kernel maintainers just don’t care about future code quality, or about stability or security any more. They just want to keep their C code and wish us Rust folks would go away. And that’s really sad… and isn’t helping make Linux better.
It’s because:
- They’re old and they don’t want to have to spend time learning something new.
- They spent a lot of time learning C and getting moderately good at it. They don’t want that knowledge to become obsolete.
- They currently don’t know Rust, and don’t want to feel like the thing they do know is no longer the best option.
- They aren’t the ones with the idea to use Rust, and they don’t want to lose face by accepting that someone other than them had a good idea. Especially not some young upstarts.
- Supporting Rust is extra work for them and they don’t care about memory safety or strong types etc.
To avoid losing face they’ll come up with endless plausible technical reasons why you can’t use Rust, hiding the real reasons. They may not even realise they’re doing it.
Some of the reasons might even be genuinely good reasons, but they’ll come up with them as an “aha! So that’s why it’s impossible” rather than a “hmm that’s an issue we’ll have to solve”.
It’s not just Rust vs C. This naysaying happens wherever there’s a new thing that’s better than the established old thing. It’s a basic human tendency.
Fortunately not everyone is like that. Linus seems in favour of Rust, which is a very good sign.
People figuratively told me to shut up about the Linux Foundation’s less than meager funding of the Linux kernel (~2%), but this is exactly what happens because of it. It’s stuck in the 90s because a few oldies earned well enough to be able to dedicate their time to it. Young blood has neither the time nor the funds to fight an uphill battle against the greybeards.
Imagine if the situation were reversed and the Linux Foundation spent 98% of its 268M on the Linux kernel. Imagine the number of developers that would be fighting to get an internship there and make a career as a kernel dev/maintainer/technical writer/manager/whatever… Rust, better hardware support, better code coverage, modern contribution methods (not a damn mailing list), CI/CD, automated testing, better fuzzing, bounties, and so much more would be possible. Instead they spent…20% or something on AI.
Well, I don’t want to pull the kernel-hacker card, but it sounds like you might not have experienced being yelled at by Linus during a kernel summit. It’s not fun and not worth the money. Also it’s well-known that LF can’t compete with e.g. Collabora or Red Hat on salary, so the only folks who stick around and focus on Linux infrastructure for the sake of Linux are bureaucrats, in the sense of Pournelle’s Iron Law of Bureaucracy.
I’m kinda under the impression that he doesn’t really do that anymore.
Watch the video. Wedson is being yelled at by Ted Ts’o. If the general doesn’t yell, but his lieutenants yell, is that really progress? I will say that last time I saw Linus, he was very quiet and courteous, but that likely was because it was early morning and the summit-goers were starting to eat breakfast and drink their coffee.
Maybe they should just ditch Linux and put all their efforts into a new thing like Redox or something, just out of spite.