@vividspecter - eviltoast
  • 12 Posts
  • 71 Comments
Joined 1 month ago
Cake day: June 21st, 2025


  • This is satire, I presume, from the jokiness of the whole thing, but being able to obtain a token in person by just showing an ID (without any storage of data) would be a less intrusive method than the “have your ID and/or face collected and stored for an undisclosed period of time” approach, the latter of which has very obvious privacy and security risks.

  • I believe the risks of silicosis from silica have been known since ancient times too, although historically they probably didn’t have any solutions or alternatives for it. More recently, there was the Hawk’s Nest tunnel disaster in the US during the 1930s, where around 100 mostly black workers died of silicosis developed from cutting and blasting quartz without any sort of protective measures.

    Then in the modern era, Australia banned the use of high-silica “engineered” stone in construction. You’d think, given the known health risks of silica, that this could have been predicted, although it’s not as clear cut (heh) as the risks of asbestos, since at least part of the problem was construction workers not using preventative measures such as wet drilling and PPE. But you can see how that goes over when the workers are often vulnerable in some way and do not feel comfortable saying no to their bosses.

  • I’m not really an expert, but I’ll try and answer your questions one by one.

    Don’t VMs have a virtual GPU with a driver for that GPU in the guest that, I imagine, forwards the graphics instructions and routines to the driver on the host?

    Yes, this is what VirGL (OGL) and Venus (Vulkan) do. The latter works pretty well because Vulkan is lower level and better represents the underlying hardware, so there is less performance overhead. However, this approach does mean you need to translate every API one by one: not just OGL and Vulkan, but also hardware video decoding and encoding, and compute, so it’s a fair amount of work.

    Native contexts, in contrast, are basically the “real” host driver used in the guest: they essentially pass everything through 1:1 to the host driver, where the actual work is carried out. They aren’t really like virtualisation extensions, as the hardware doesn’t need to support them AFAICT, just the drivers on both the host and the guest. There’s a presentation and slides on native contexts vs virgl/venus which may be helpful.

    Where in that does Magma come in? My guess is that magma sits in the guest as the graphics driver and on the host before Mesa, but I know little about virtualisation outside of containers.

    To be honest, I don’t fully understand the details either, but your interpretation seems more or less correct. From looking at the diagram on the MR, it seems that Magma is a layer between the userspace graphics driver and the native context (virtgpu) layer on the guest side, which in turn communicates with another Magma layer on the host, which finally passes the data to the host GPU driver. That driver may be Mesa, but it could also be another driver as long as it implements Magma.

    The broader idea is to abstract away implementation details: applications and userspace drivers don’t need to know how the native context is implemented (other than interfacing with Magma), and the native context layer doesn’t need to know which host GPU driver is being used; it just needs to interface with Magma.
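    To make that layering a bit more concrete, here’s a rough sketch in C of how I picture it. The names (vgpu_forward, magma_guest_submit, host_driver_submit) are invented for illustration and are not the real Magma or virtgpu APIs; the point is just that the guest-side shim forwards native command buffers unmodified over the virtgpu channel, rather than translating each API the way VirGL/Venus do.

```c
/* Illustrative sketch only: these names are made up for explanation and are
 * not the actual Magma or virtgpu interfaces. The idea is the layering:
 * the guest userspace driver talks to a thin, driver-agnostic "magma" shim,
 * the virtgpu native context forwards the bytes 1:1 to the host, and the
 * same interface on the host is backed by whichever real GPU driver
 * implements it (Mesa or otherwise). */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A command buffer exactly as the guest userspace driver produced it. */
struct cmd_buf {
    const void *data;
    size_t      size;
};

/* --- guest side ---------------------------------------------------------- */

/* Hypothetical transport: in reality this would be the virtio-gpu
 * (virtgpu) native-context channel into the host. */
static void vgpu_forward(uint32_t op, const void *payload, size_t size)
{
    printf("guest -> host: op=%u, %zu bytes (unmodified)\n", (unsigned)op, size);
    (void)payload;
}

/* The "magma-like" shim the guest userspace driver links against.
 * No API translation happens here: the buffer is passed through as-is. */
static int magma_guest_submit(struct cmd_buf *cb)
{
    vgpu_forward(/* op = submit */ 1, cb->data, cb->size);
    return 0;
}

/* --- host side ----------------------------------------------------------- */

/* On the host, the matching layer hands the same bytes to whichever
 * real driver implements the interface. */
static int host_driver_submit(const void *data, size_t size)
{
    printf("host driver: executing %zu bytes of native commands\n", size);
    (void)data;
    return 0;
}

int main(void)
{
    uint8_t native_cmds[64];
    memset(native_cmds, 0, sizeof(native_cmds));

    struct cmd_buf cb = { .data = native_cmds, .size = sizeof(native_cmds) };

    /* Contrast with VirGL/Venus, where each API (GL, Vulkan, video, compute)
     * would need its own translation step instead of this single pass-through. */
    magma_guest_submit(&cb);
    host_driver_submit(native_cmds, sizeof(native_cmds));
    return 0;
}
```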

  • The sandboxing sometimes breaks applications or requires additional configuration. And I don’t like that it’s a separate thing I need to maintain, although some package managers do pair flatpak updates with the main package updates.

    And as a NixOS user, I prefer to use Nix to handle as much of my system as possible, although flatpak at least is useful as a fallback in a pinch. Of course, this is a niche within a niche, and mainstream users, particularly those on immutable distros, can and do benefit from flatpak.