My only experience is with gpu-side OpenGL, so here goes:
Your gpu is a separate device designed to run simple tasks with a staggering amount of parallelization. What does that mean? Basically every vertex and pixel on your screen needs to be processed before it can be displayed, and the gpu has a bunch of small cores that do all of that for every single frame your monitor outputs. A programmer defines all this using shaders. In OpenGL, the shader language is called GLSL.
In the OpenGL graphics pipeline, the cpu-side code defines which effects apply to which geometry in what order. For example, you may want to render every opaque object first, and then draw the translucent objects on top with semi-transparency (splitting the frame into passes like this is a very common technique; deferred rendering is a well-known multi-pass approach, though that term specifically means deferring the lighting calculations to a later pass). Maybe you'd want a different shadow map for each light-emitting object. Maybe you'd want a setting to define how much bloom to draw to the screen. Maybe you want to provide textures for the gpu to access. The possibilities are endless.
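To make the cpu-side part a bit more concrete, here's a rough C sketch of how a frame might be ordered. The names `draw_opaque_geometry`, `draw_translucent_geometry_back_to_front` and the `u_bloomAmount` uniform are made up for illustration; only the `gl*` calls are real OpenGL.

```c
#include <GL/glew.h>  /* any GL function loader works; GLEW assumed here */

/* Placeholder helpers, not real OpenGL calls - they'd issue the actual
   glDrawElements/glDrawArrays calls for your scene. */
void draw_opaque_geometry(void);
void draw_translucent_geometry_back_to_front(void);

/* One frame, split into an opaque pass and a translucent pass. */
void render_frame(GLuint opaque_program, GLuint translucent_program, float bloom_amount)
{
    /* Pass 1: opaque objects, depth testing on, no blending. */
    glEnable(GL_DEPTH_TEST);
    glDisable(GL_BLEND);
    glUseProgram(opaque_program);
    /* A knob the shaders can read, e.g. how much bloom to apply. */
    glUniform1f(glGetUniformLocation(opaque_program, "u_bloomAmount"), bloom_amount);
    draw_opaque_geometry();

    /* Pass 2: translucent objects blended on top, back to front. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);   /* still test against depth, but don't write it */
    glUseProgram(translucent_program);
    draw_translucent_geometry_back_to_front();
    glDepthMask(GL_TRUE);
}
```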
On the gpu side, we write code in shaders. The shaders, written in GLSL, get compiled by your device-specific drivers into the machine code your hardware actually runs. In OpenGL there are several types of shaders, but there are two main ones: vertex and fragment shaders.
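That compile step happens at runtime through the OpenGL API. A minimal sketch (error handling trimmed down; after this you'd still attach the shaders to a program and link it with glAttachShader/glLinkProgram):

```c
#include <GL/glew.h>
#include <stdio.h>

/* Compile one GLSL source string into a shader object of the given type
   (GL_VERTEX_SHADER or GL_FRAGMENT_SHADER). The driver does the actual
   translation into gpu machine code. */
GLuint compile_shader(GLenum type, const char *source)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &source, NULL);
    glCompileShader(shader);

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}
```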
Vertex shaders run first. They run on every vertex in the scene and do the math that puts each vertex in the correct on-screen location. You can also assign varying values specific to each vertex that get passed down the pipeline to the next shaders.
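A minimal vertex shader in GLSL might look like this (the attribute, uniform, and varying names are just examples):

```glsl
#version 330 core

layout(location = 0) in vec3 aPosition;   // per-vertex data supplied by the cpu side
layout(location = 1) in vec3 aColor;

uniform mat4 uModelViewProjection;        // set once per draw call on the cpu side

out vec3 vColor;                          // a "varying" passed down the pipeline

void main()
{
    // The math that puts this vertex in the correct on-screen location.
    gl_Position = uModelViewProjection * vec4(aPosition, 1.0);
    vColor = aColor;                      // will be interpolated per fragment
}
```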
Between the vertex and fragment shaders, the gpu automatically saves performance by clipping away geometry that ends up off-screen and removing any triangle that's definitely not visible to the camera (this is called culling), and then fills in each triangle with pixels called fragments (in a process called rasterization). Each fragment also has access to the varying values of its three vertices, interpolated across the face of the triangle (i.e. the closest vertex has the most influence).
After this, the fragment shaders are run on every pixel/"fragment" on screen - this is where you'd compute effects like lighting and shadows and apply textures. The fragment shaders determine the color of the pixel as it appears on your screen.
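And a matching fragment shader, which receives the vColor varying from the example above already interpolated across the triangle and writes the final pixel colour:

```glsl
#version 330 core

in vec3 vColor;        // the varying from the vertex shader, interpolated per fragment
out vec4 fragColor;    // final colour written to the framebuffer

void main()
{
    // This is where lighting, shadows, texture sampling etc. would go;
    // here we just output the interpolated vertex colour.
    fragColor = vec4(vColor, 1.0);
}
```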
There are other specialized shaders you can add too! But your gpu needs to be new enough to support them: