GPU Pipeline


Graphics Pipeline




Geometry and primitives

Typically, the application is where we define the geometry that we want to render to the screen. This geometry can be described with points, lines, triangles, quads, triangle strips and so on; these are the so-called geometric primitives. The vertices that make up these primitives initially reside in system memory. The application then uses the 3D API to transfer them from system memory into GPU memory.
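As a rough sketch of this transfer step (assuming an OpenGL ES 2.0 context; the vertex positions are made up for illustration), the application fills an array in system memory and asks the driver to copy it into a buffer object that lives in GPU memory:

#include <GLES2/gl2.h>

/* Three vertices of a triangle (x, y, z), defined in system memory. */
static const GLfloat triangle[] = {
    -0.5f, -0.5f, 0.0f,   /* P1 */
     0.5f, -0.5f, 0.0f,   /* P2 */
     0.0f,  0.5f, 0.0f    /* P3 */
};

static GLuint upload_triangle(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);                        /* create a buffer object      */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);           /* bind it as a vertex buffer  */
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle),
                 triangle, GL_STATIC_DRAW);       /* copy vertices to GPU memory */
    return vbo;
}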

Vertices

So we'll have a vertex that contains a position (location), as well as additional attributes such as color information.
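A minimal sketch of such a vertex as a C struct, together with the calls that describe its interleaved layout to the pipeline (the attribute locations 0 and 1 are assumptions for illustration, not fixed by the API):

#include <stddef.h>        /* offsetof */
#include <GLES2/gl2.h>

/* One vertex: a position plus a color. */
typedef struct {
    GLfloat position[3];   /* x, y, z    */
    GLfloat color[4];      /* r, g, b, a */
} Vertex;

/* Describe the interleaved layout of the currently bound vertex buffer:
 * attribute 0 = position, attribute 1 = color (assumed locations). */
static void describe_vertex_layout(void)
{
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, position));
    glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                          (const void *)offsetof(Vertex, color));
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
}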

Vertex shaders

A "pass-through" vertex shader will take the shader inputs and will pass these to its output without modifying these: the vertices P1, P2 and P3 from the triangle are fetched from memory, each individual vertex is fed to vertex shader instances which run in parallel. Vertex processing computes the normalized coordinate space position of vertices. The outputs from the vertex shaders are fed into the primitive assembly stage.

Primitive assembly

The primitive assembly stage groups the vertices coming out of the vertex shaders into the most elementary primitives, such as points, lines and triangles. For triangles it also determines whether the primitive is visible or not, based on the "winding" (the order) of its vertices, so that back-facing triangles can be culled at this point.
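This winding-based visibility test is normally just configured by the application; a sketch of the corresponding OpenGL ES state, assuming counter-clockwise triangles are the front-facing ones:

#include <GLES2/gl2.h>

/* Tell primitive assembly how to interpret triangle winding:
 * counter-clockwise triangles are front-facing, and back-facing
 * triangles are discarded (culled) before rasterization. */
static void configure_culling(void)
{
    glFrontFace(GL_CCW);       /* CCW winding = front face */
    glCullFace(GL_BACK);       /* discard back faces       */
    glEnable(GL_CULL_FACE);    /* turn culling on          */
}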

Rasterization 

After the visible primitives have been determined by the primitive assembly stage, it is up to the rasterization stage to determine which pixels of the viewport need to be lit: the primitive is broken down into its composing fragments. This can be seen in figure 4: the cells represent individual pixels, and the pixels marked in grey are the ones covered by the primitive; they indicate the fragments of the triangle. These fragments are passed on to the fragment shader stage.
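Conceptually, the rasterizer tests each candidate pixel against the three edges of the triangle to decide whether it is covered; a much-simplified software sketch of that idea using edge functions (real hardware is considerably more involved):

/* Edge function: positive when point (px, py) lies to the left of the
 * directed edge from (ax, ay) to (bx, by), in a y-up coordinate system. */
static float edge(float ax, float ay, float bx, float by, float px, float py)
{
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

/* A pixel sampled at (px, py) is covered by the counter-clockwise triangle
 * (x0,y0)-(x1,y1)-(x2,y2) when it lies on the inner side of all three
 * edges; every covered pixel becomes a fragment. */
static int covered(float px, float py,
                   float x0, float y0, float x1, float y1,
                   float x2, float y2)
{
    return edge(x0, y0, x1, y1, px, py) >= 0.0f &&
           edge(x1, y1, x2, y2, px, py) >= 0.0f &&
           edge(x2, y2, x0, y0, px, py) >= 0.0f;
}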

Fragment shaders

Each of the fragments generated by the rasterization stage is processed by a fragment shader. The general role of the fragment shader is to evaluate the shading function, a function that describes how light interacts with the surface at the fragment, resulting in the desired color for that fragment. Once the color has been determined, it is passed on to the framebuffer.
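A sketch of a very simple shading function in a GLSL ES fragment shader, again embedded as a C string: it scales the interpolated vertex color by a single Lambertian (N dot L) diffuse term. It assumes the vertex shader also provides an interpolated normal in v_normal, which the pass-through sketch above does not; the names and light direction are made up for illustration:

#include <GLES2/gl2.h>

/* GLSL ES source for a simple fragment shader: one Lambertian diffuse
 * term applied to the interpolated vertex color. */
static const char *fragment_src =
    "precision mediump float;                                               \n"
    "varying vec4 v_color;                                                  \n"
    "varying vec3 v_normal;                                                 \n"
    "const vec3 light_dir = vec3(0.0, 0.0, 1.0);                            \n"
    "void main() {                                                          \n"
    "    float diffuse = max(dot(normalize(v_normal), light_dir), 0.0);     \n"
    "    gl_FragColor  = vec4(v_color.rgb * diffuse, v_color.a);            \n"
    "}                                                                      \n";

static GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(shader, 1, &fragment_src, NULL);
    glCompileShader(shader);
    return shader;
}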




In this figure, the yellow boxes correspond to the application-programmable stages of the graphics pipeline. A programmer defines the behavior of these stages by writing shader programs in a graphics shading language such as GLSL (in OpenGL) or HLSL (in Direct3D).
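Continuing the sketches above, the compiled vertex and fragment shaders (for example, the handles returned by the hypothetical compile_vertex_shader and compile_fragment_shader helpers shown earlier) are linked into a program object that the application installs into the pipeline:

#include <GLES2/gl2.h>

/* Link the two programmable stages into one program object and make it
 * the active program for subsequent draw calls. */
static GLuint build_pipeline_program(GLuint vertex_shader, GLuint fragment_shader)
{
    GLuint program = glCreateProgram();
    glAttachShader(program, vertex_shader);
    glAttachShader(program, fragment_shader);
    glLinkProgram(program);     /* resolve the varyings between the stages */
    glUseProgram(program);      /* install it into the pipeline            */
    return program;
}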


http://stevendebock.blogspot.be/2013/08/the-graphics-pipeline.html
http://15462.courses.cs.cmu.edu/fall2015/lecture/pipeline/slide_046
http://www.g-truc.net/doc/OpenGL%204.3%20Pipeline%20Map.pdf
