this post was submitted on 18 Jul 2025
85 points (97.8% liked)
Programming
I'm not really an expert, but I'll try to answer your questions one by one.
Yes, this is what VirGL (OpenGL) and Venus (Vulkan) do. The latter works pretty well because Vulkan is lower level and better represents the underlying hardware, so there is less performance overhead. However, this approach means every API has to be translated separately: not just OpenGL and Vulkan, but also hardware video decoding and encoding, and compute, so it's a fair amount of work.
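To make the contrast concrete, here is a toy sketch (all names are hypothetical, not the real virglrenderer or Venus API) of why per-API translation is work that scales with the number of APIs: a higher-level GL-style call has to be expanded into several host-side operations, while a Vulkan-style command maps almost 1:1.

```python
# Hypothetical illustration of API-translation overhead, not real driver code.

def translate_gl_draw(call):
    """Toy VirGL-style translation: one high-level GL-like call
    expands into several host-side operations."""
    return [("set_state", call["state"]), ("draw", call["vertices"])]

def translate_vk_submit(call):
    """Toy Venus-style translation: Vulkan is closer to the hardware,
    so the guest command maps almost 1:1 onto the host command."""
    return [("submit", call["command_buffer"])]

gl_cmds = translate_gl_draw({"state": "blend_on", "vertices": 3})
vk_cmds = translate_vk_submit({"command_buffer": b"\x01\x02"})
print(gl_cmds)  # two host operations for one guest call
print(vk_cmds)  # one host operation for one guest call
```

Every additional guest API (video decode/encode, compute, ...) would need its own `translate_*` path like these, which is the "fair amount of work" mentioned above.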
Native contexts, in contrast, essentially expose the "real" host driver inside the guest: they pass everything through 1:1 to the host driver, where the actual work is carried out. They aren't really like hardware virtualisation extensions, since AFAICT the hardware doesn't need to support anything special; only the drivers on both the host and the guest do. There's a presentation (with slides) on native contexts vs. VirGL/Venus which may be helpful.
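The pass-through idea can be sketched in a few lines (hypothetical names again; the real mechanism goes through virtio-gpu, not Python objects): the guest-side native context does no per-API translation at all, it just forwards each request to the host driver unchanged.

```python
# Hypothetical sketch of 1:1 pass-through, not a real native-context driver.

class HostDriver:
    """Stands in for the real host GPU driver that does the work."""
    def handle(self, request):
        return f"host handled {request}"

class NativeContext:
    """Guest-side proxy: forwards every request unchanged to the host.
    Unlike VirGL/Venus, there is no translation step here."""
    def __init__(self, host):
        self.host = host

    def handle(self, request):
        return self.host.handle(request)  # 1:1 pass-through

ctx = NativeContext(HostDriver())
print(ctx.handle("alloc_buffer"))  # → host handled alloc_buffer
```

This is why native contexts need matching driver support on both sides: the guest userspace driver must speak the host driver's own protocol, since nothing in between rewrites it.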
To be honest, I don't fully understand the details either, but your interpretation seems more or less correct. From the diagram in the MR, Magma appears to be a layer between the guest's userspace graphics driver and the native context (virtgpu) layer; that in turn communicates with another Magma layer on the host, which finally passes the data to the host GPU driver. The host driver may be Mesa, but it could also be any other driver, as long as it implements Magma.
The broader idea is to abstract away implementation details: applications and userspace drivers don't need to know how the native context is implemented (beyond interfacing with Magma), and the native context layer doesn't need to know which host GPU driver is in use; it just needs to interface with Magma.
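As I understand it, that layering amounts to programming against an interface rather than a concrete driver. A minimal sketch, with entirely hypothetical names (the real Magma interface is a C ABI, not Python classes):

```python
# Hypothetical sketch of the Magma abstraction idea, not the real API.
from abc import ABC, abstractmethod

class MagmaInterface(ABC):
    """Abstract interface the guest-side layer codes against."""
    @abstractmethod
    def execute(self, command): ...

class MesaBackend(MagmaInterface):
    """One possible host driver implementing the interface."""
    def execute(self, command):
        return f"mesa executed {command}"

class OtherBackend(MagmaInterface):
    """Any other host driver works too, as long as it implements Magma."""
    def execute(self, command):
        return f"other driver executed {command}"

def guest_submit(backend: MagmaInterface, command):
    # The guest layer never learns which concrete host driver runs this.
    return backend.execute(command)

print(guest_submit(MesaBackend(), "draw"))   # → mesa executed draw
print(guest_submit(OtherBackend(), "draw"))  # → other driver executed draw
```

Swapping the host GPU driver then only means providing a different implementation of the interface; nothing on the guest side has to change.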