OpenGL Stages of Vertex Transformation

Stages of Vertex Transformation
These stages are required to render the desired scene of a 3D world onto a 2D screen.

Object Coordinates: the local coordinate system of an object before any transformation is applied.
Eye coordinates: the result of multiplying the GL_MODELVIEW matrix and the object coordinates. Objects are transformed from object space to eye space.
M(ModelView) = M(view) · M(model)
where M(model) transforms object space to world space and M(view) transforms world space to eye space.
Clip coordinates: the result of multiplying the GL_PROJECTION matrix and the eye coordinates.
The GL_PROJECTION matrix is used to define the frustum; it determines how the 3D scene is projected onto the screen.
NDC: the result of the perspective division, which divides the clip coordinates by w.
Window coordinates: NDC are scaled and translated in order to fit into the rendering screen.

Analogy With a Camera
Transformations:
Viewing Transformation: Setting up the tripod and pointing the camera at the scene.
There are two ways to define the viewing transformation; both change the view while keeping the camera at (0,0,0) (see the sketch after this list).
  1. gluLookAt(eyeX, eyeY, eyeZ, centerX, centerY, centerZ, upX, upY, upZ) - positions the view from the perspective of the camera. eye is the camera position, center is the target point to look at, and up is the camera's up direction (for the +Y direction the value is (0, 1, 0)).
  2. Identity matrix - positions the view from the default position (0,0,0).
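A minimal sketch of both options, assuming a legacy (fixed-function) OpenGL context is current; the eye, center and up values are only illustrative:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_view(void)
    {
        glMatrixMode(GL_MODELVIEW);   /* the view is part of the MODELVIEW matrix */
        glLoadIdentity();             /* option 2: identity keeps the camera at (0,0,0) looking down -Z */

        /* option 1: place the eye at (0, 2, 5), look at the origin, with +Y as up */
        gluLookAt(0.0, 2.0, 5.0,      /* eyeX, eyeY, eyeZ */
                  0.0, 0.0, 0.0,      /* centerX, centerY, centerZ */
                  0.0, 1.0, 0.0);     /* upX, upY, upZ */
    }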


Projection Transformation: Choosing a camera lens or adjusting the zoom
  • Orthographic - parallel projection: parallel lines never meet. The assumptions made in this projection are that the camera sits at +infinity and acts as a plane rather than a point, while the view is at (0,0,0).
  • glOrtho(left, right, bottom, top, nearVal, farVal)
  • Perspective - perspective projection: parallel lines appear to meet at a point at infinity. There are two ways to generate a perspective projection matrix (see the sketch after this list).
  • gluPerspective(fovy, aspect, zNear, zFar)
  • glFrustum(left, right, bottom, top, nearVal, farVal)
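A minimal sketch of setting up either kind of projection on the GL_PROJECTION matrix; the field of view, aspect ratio, volume extents and clip planes are only illustrative:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_projection(int width, int height, int use_perspective)
    {
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();

        if (use_perspective)
            /* 60 degree vertical field of view, near/far clip planes at 0.1 and 100 */
            gluPerspective(60.0, (double)width / (double)height, 0.1, 100.0);
        else
            /* parallel projection of the box [-2,2] x [-2,2] x [0.1,100] */
            glOrtho(-2.0, 2.0, -2.0, 2.0, 0.1, 100.0);

        /* glFrustum(left, right, bottom, top, nearVal, farVal) builds the same kind of
           perspective matrix as gluPerspective, but from explicit near-plane extents */

        glMatrixMode(GL_MODELVIEW);   /* switch back for subsequent model/view transforms */
    }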

Viewport Transformation: Determine how large you want the final photograph to be (defining the render area using glViewport).
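A minimal sketch of a window-resize handler; the function name reshape is only illustrative and would be registered with whatever windowing toolkit is in use:

    #include <GL/gl.h>

    void reshape(int width, int height)
    {
        /* map NDC onto the whole window: lower-left corner (0,0), width x height pixels */
        glViewport(0, 0, (GLsizei)width, (GLsizei)height);
    }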

Coordinates:


Object Coordinates: This is the local coordinate system of an object: the initial position and orientation of objects before any transform is applied.


Eye Coordinates: These are obtained by multiplying the GL_MODELVIEW matrix and the object coordinates; objects are transformed from object space to eye space using the GL_MODELVIEW matrix in OpenGL. The GL_MODELVIEW matrix is a combination of the Model and View matrices: the model transform converts from object space to world space, and the view transform converts from world space to eye space.


Note that there is no separate camera (view) matrix in OpenGL. Therefore, in order to simulate transforming the camera or view, the scene (3D objects and lights) must be transformed with the inverse of the view transformation. In other words, OpenGL defines that the camera is always located at (0, 0, 0) and facing the -Z axis in eye-space coordinates, and it cannot be transformed.
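A minimal sketch of how the view and model transforms share the single GL_MODELVIEW stack; the camera and object placements are only illustrative. The view transform is issued first and the model transform second, so the model transform is applied to the vertices first:

    #include <GL/gl.h>
    #include <GL/glu.h>

    void setup_modelview(void)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        /* view: camera at (0, 0, 5) looking at the origin. Because the camera itself
           cannot move, this is equivalent to transforming the whole scene with the
           inverse of the camera transform, i.e. glTranslatef(0, 0, -5). */
        gluLookAt(0.0, 0.0, 5.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0);

        /* model: place the object at (1, 0, 0), rotated 45 degrees about +Y */
        glTranslatef(1.0f, 0.0f, 0.0f);
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);
    }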


Clip Coordinates: The eye coordinates are now multiplied with the GL_PROJECTION matrix and become the clip coordinates. The GL_PROJECTION matrix defines the viewing volume (frustum), i.e. how the vertex data are projected onto the screen (perspective or orthographic). They are called clip coordinates because the transformed vertex (x, y, z) is clipped by comparing it with ±w.
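A small sketch of the clip test that the comparison with ±w describes: a clip-space vertex lies inside the viewing volume only if each of x, y and z is between -w and +w (the type name Vec4 is only illustrative):

    typedef struct { float x, y, z, w; } Vec4;   /* clip coordinates */

    /* returns 1 if the clip-space vertex is inside the viewing volume */
    int inside_clip_volume(Vec4 c)
    {
        return -c.w <= c.x && c.x <= c.w &&
               -c.w <= c.y && c.y <= c.w &&
               -c.w <= c.z && c.z <= c.w;
    }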


Normalized Device Coordinates (NDC): These are obtained by dividing the clip coordinates by w; this is called the perspective division. NDC are close to window (screen) coordinates but have not yet been translated and scaled to screen pixels. The range of values is now normalized from -1 to 1 on all three axes.
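The perspective division itself is just a component-wise divide by w; a small sketch (the type names are only illustrative):

    typedef struct { float x, y, z, w; } Vec4;   /* clip coordinates */
    typedef struct { float x, y, z; } Vec3;      /* normalized device coordinates */

    /* clip -> NDC: each component ends up in [-1, 1] for vertices inside the viewing volume */
    Vec3 clip_to_ndc(Vec4 c)
    {
        Vec3 ndc = { c.x / c.w, c.y / c.w, c.z / c.w };
        return ndc;
    }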



Window Coordinates (Screen Coordinates): These are obtained by applying the viewport transformation to the normalized device coordinates (NDC): the NDC are scaled and translated in order to fit into the rendering screen. Finally, the window coordinates are passed to the rasterization stage of the OpenGL pipeline to become fragments.
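A sketch of the scale and translate that the viewport transformation performs, assuming the viewport was set with glViewport(vx, vy, vw, vh) and the default glDepthRange(0, 1):

    typedef struct { float x, y, z; } Vec3;

    /* NDC in [-1, 1] -> window coordinates in pixels (and a depth value in [0, 1]) */
    Vec3 ndc_to_window(Vec3 ndc, int vx, int vy, int vw, int vh)
    {
        Vec3 win;
        win.x = (ndc.x * 0.5f + 0.5f) * (float)vw + (float)vx;
        win.y = (ndc.y * 0.5f + 0.5f) * (float)vh + (float)vy;
        win.z =  ndc.z * 0.5f + 0.5f;   /* default depth range [0, 1] */
        return win;
    }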


Matrix Transformation: Matrices are used to transform the modelview easily; a matrix converts coordinates from one coordinate system to another. The basic transforms are listed below (see the sketch after the list).


  • Translate
  • Scale
  • Rotate
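A minimal sketch of the three calls applied to the MODELVIEW matrix; the amounts are only illustrative. Each call multiplies the current matrix on the right, so the call issued last is applied to the vertices first:

    #include <GL/gl.h>

    void place_object(void)
    {
        glMatrixMode(GL_MODELVIEW);
        glTranslatef(2.0f, 0.0f, 0.0f);       /* translate: move 2 units along +X */
        glRotatef(90.0f, 0.0f, 0.0f, 1.0f);   /* rotate: 90 degrees about the Z axis */
        glScalef(1.0f, 2.0f, 1.0f);           /* scale: double the size along Y */
    }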



Homogeneous Coordinates: Homogeneous coordinates are introduced to describe projective space by adding another dimension, the w-coordinate, which after the perspective projection corresponds to the distance of the vertex in front of the camera. For normalization w has to be 1, so all the components are divided by w: the homogeneous coordinate (x, y, z, w) with w ≠ 0 is equivalent to the three-dimensional Euclidean coordinate (x/w, y/w, z/w), and the Euclidean point (x, y, z) corresponds to the homogeneous point (x, y, z, 1).
If w = 0.0, it corresponds to no Euclidean point, but rather to an idealized "point at infinity." To understand this point at infinity, consider the point (1, 2, 0, 0), and note that the sequence of points (1, 2, 0, 1), (1, 2, 0, 0.01), and (1, 2, 0, 0.0001) corresponds to the Euclidean points (1, 2), (100, 200), and (10000, 20000). Thus, we can think of (1, 2, 0, 0) as the point at infinity in the direction of that line.
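A small sketch of the conversion in both directions; dividing by w is exactly the perspective division described above (the type names are only illustrative):

    typedef struct { float x, y, z; } Vec3;
    typedef struct { float x, y, z, w; } Vec4;

    /* Euclidean -> homogeneous: append w = 1 */
    Vec4 to_homogeneous(Vec3 p)
    {
        Vec4 h = { p.x, p.y, p.z, 1.0f };
        return h;
    }

    /* homogeneous -> Euclidean: divide by w (undefined for w == 0, the "point at infinity") */
    Vec3 to_euclidean(Vec4 h)
    {
        Vec3 p = { h.x / h.w, h.y / h.w, h.z / h.w };
        return p;
    }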
