Week 2: Shaders and the OpenGL Pipeline

A Qt/OpenGL application.

The core of an OpenGL program is to first set up the shaders (what do you want to do?) and the input data (what do you want the shaders to process?). Once we send the shader program and the geometry data to the GPU, we issue one small command, glDrawArrays, to actually do the drawing.

  • loading, compiling, linking shaders

    • We write shaders in GLSL, e.g., vshader.glsl and fshader.glsl.

    • We send the shader source to the GPU to be compiled and linked into a shader program. Each program must have one vertex shader and one fragment shader (see the initializeGL() sketch after this list).

  • set OpenGL flags, initial state

    • OpenGL is a state machine that provides scaffolding to support rendering images. To draw anything, it is essential that you provide shaders and models. But there is also additional state you can set in the initializeGL() method. One example is setting the background clear color.

      glClearColor(0.9,0.9,0.9,1.);
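
Putting these two steps together, a minimal initializeGL() might look like the sketch below. This is only a sketch: it assumes MyOGLWidget also inherits QOpenGLFunctions (so the raw gl* calls resolve) and it skips error checking on compile and link; the shaderProgram member and the shader file names come from the snippets above.

      void MyOGLWidget::initializeGL() {
        initializeOpenGLFunctions(); /* set up Qt's OpenGL function resolution */

        /* load, compile, and link the two shaders into one program */
        shaderProgram = new QOpenGLShaderProgram(this);
        shaderProgram->addShaderFromSourceFile(QOpenGLShader::Vertex, "vshader.glsl");
        shaderProgram->addShaderFromSourceFile(QOpenGLShader::Fragment, "fshader.glsl");
        shaderProgram->link();
        shaderProgram->bind();

        /* initial state: a light gray background clear color */
        glClearColor(0.9, 0.9, 0.9, 1.);
      }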

Most of OpenGL is just setting state. For most of the functions, none of the effects are directly visible. You are simply asking OpenGL to set some internal variable and remember it for later use. When it comes time to actually draw, the draw command will be relatively simple, but it will leverage all the current state you set with prior OpenGL function calls. This may be a new way of thinking about things, but we will continue to emphasize this point throughout the course.

paintGL

  • Adjust OpenGL context to fit current window

    • Occasionally, the user might resize the window and we’d like OpenGL to adjust the rendering context to fit the new size.

      void MyOGLWidget::resizeGL(int w, int h) {
        /* map the width/height of the widget to the OpenGL context */
        glViewport(0, 0, w, h);
        shaderProgram->bind();
        shaderProgram->setUniformValue("iResolution", QVector2D(w, h));
        update(); /* repaint */
      }
  • Erase old data

    • OpenGL will write output color fragments to a buffer and display this buffer in the current context. If you are rendering multiple frames, you must explicitly erase the old frame using glClear. This will fill the output buffer with the current clear color that you most recently set with the glClearColor function.

      glClear(GL_COLOR_BUFFER_BIT);
  • Make program, vao active

    • We are now ready to draw. It’s a good idea to ensure your program and buffers are active. Since there is only one program and one buffer here, this step could probably be done once in initializeGL(), but for complex scenes we may toggle between multiple programs and buffers, so keeping this step in paintGL() is a good idea.

      vao->bind(); /* recall all recorded bindings of geometry and attribute layouts */
      shaderProgram->bind();
  • Draw!

    • All the prep work is done. To actually get things going on the GPU and see an output image, we have to call a draw function.

      glDrawArrays(GL_TRIANGLES, 0, numVertices);

      The syntax is glDrawArrays(primitive, offset, count). This function instructs OpenGL to start the vertex shader and begin drawing the specified primitive, GL_TRIANGLES, using the data in the currently bound buffer, starting at the provided offset 0 and using count (here numVertices = 3) total vertices. In this example, we have just instructed OpenGL to draw one triangle. That seemed like a lot of work! A paintGL() sketch that puts these steps together follows below.
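
Here is a minimal paintGL() sketch that collects the steps above (clear, bind, draw) in one place. It reuses the member names from the snippets above (vao, shaderProgram, numVertices) and is a sketch of this week's example, not a complete program.

      void MyOGLWidget::paintGL() {
        /* erase the previous frame using the current clear color */
        glClear(GL_COLOR_BUFFER_BIT);

        /* make our program and geometry the currently active state */
        vao->bind();
        shaderProgram->bind();

        /* launch the pipeline: draw numVertices vertices as triangles */
        glDrawArrays(GL_TRIANGLES, 0, numVertices);
      }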

In, Out, Uniform

Your shader programs have their own syntax. They are written in a language called GLSL designed for running on the GPU. GLSL is not C/C++, though parts of it look similar. You cannot print from inside a shader, but we will see ways later this semester to use the shader output as a debugging tool.

While there are many C features you cannot use in GLSL, there are also GLSL features that are not common or supported by default in C/C++. One example is the vector and matrix types. These are built into GLSL, but once you get used to them, you tend to expect there to be a vec3 type built into C or C++. There is no such thing by default, though libraries like Qt provide similar abstractions.

Note that like C/C++/Java, but unlike JavaScript or Python, you must declare the type of your variables inside GLSL. Additionally, global variables in shaders have an in, out, or uniform qualifier.
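
Before looking at each qualifier, here is a minimal sketch of what a shader pair along the lines of this week's example might look like. The version directive and the variable names position and fragColor are assumptions for illustration; the iResolution uniform matches the one set from resizeGL() above.

      /* vshader.glsl (sketch) */
      #version 330 core
      in vec4 position;          /* filled from the vertex buffer via the VAO */
      void main() {
        gl_Position = position;
      }

      /* fshader.glsl (sketch) */
      #version 330 core
      uniform vec2 iResolution;  /* one value shared by every fragment in a draw call */
      out vec4 fragColor;        /* final color passed on to the output buffer */
      void main() {
        fragColor = vec4(gl_FragCoord.xy / iResolution, 0.0, 1.0);
      }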

in qualifier

An in variable is received as input from a previous stage of the OpenGL pipeline. In the vertex shader, in variables get their values from GPU buffers. The VAO maps the layout of the buffer to the shader input variables, and the drawArrays call instructs the GPU to call the vertex shader a given number of times using elements from the currently bound buffer. You will often see the phrase or function bind in OpenGL. It simply means make this thing (buffer, program, texture) the currently active thing.
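
For reference, here is a sketch of how that buffer-to-shader wiring is typically recorded once in initializeGL() using Qt's buffer and VAO classes. The attribute name "position" and the names vbo and vertexData are assumptions for illustration, and the shader program is assumed to be linked and bound already.

      /* record in the VAO how the buffer data feeds the shader's in variables */
      vao->create();
      vao->bind();
      vbo->create();
      vbo->bind();
      vbo->allocate(vertexData, numVertices * 4 * sizeof(float)); /* 4 floats per vertex */
      shaderProgram->enableAttributeArray("position");
      shaderProgram->setAttributeBuffer("position", GL_FLOAT, 0, 4); /* offset 0, vec4 per vertex */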

In the fragment shader, an in variable gets its value from the output of the rasterizer. Typically the rasterizer will interpolate a corresponding out variable across the primitive to compute the in variable. We don’t have any in variables in the fragment shader in this example, but we did in lab1. The vertex shader had texture coordinates as both in and out variables. The in texture coordinate came from a buffer and there was one texture coordinate for each corner of the geometry (one square). During the primitive assembly and rasterization step, these texture coordinates were interpolated between vertices to produce a new in texture coordinate for each fragment in the fragment shader. We will talk more about texturing and interpolation in the next few weeks.

out qualifier

An out qualifier passes the variable's final value to the next stage of the OpenGL pipeline. In our first example, only our fragment shader has an out variable: the final color of our image. The next stage after the fragment shader is updating the image on the output buffer/screen/canvas.

Lab1 had an out texture coordinate from the vertex shader (see above). Output variables from the vertex shader are interpolated between vertices during the primitive assembly and rasterization step before going on to the fragment shader.

uniform qualifier

Making one call to glDrawArrays starts many calls to the vertex shader program and fragment shader program in parallel. Each separate call will likely have different values for in variables and different values for out variables. But there are some variables that can stay the same across all calls to the shaders in a single glDrawArrays call. We assign the uniform qualifier to these values.

In lab1, we set the size of the image and the size of the canvas as uniforms, since these do not change at a per-vertex or per-fragment level.

Think of uniforms as applying to an entire shape or an entire scene instead of a single vertex or pixel.
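
On the C++ side, a uniform is set once on the bound program and is then read by every shader invocation in the next draw call, exactly as resizeGL() did above:

      shaderProgram->bind();
      shaderProgram->setUniformValue("iResolution", QVector2D(w, h)); /* shared by all fragments */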

Reading

Having completed an overview of a basic Qt/OpenGL program, we are almost ready to take the big jump to 3D and do bigger, more exciting things. So far, you have been manipulating 2D vector geometry and 3D colors, but before we get too far, it will be good to review some linear algebra. Before Friday’s class, please read the following sections from the Immersive Linear Algebra website:

  • Chapter 1: Introduction

  • Chapter 2: Vectors (Sections 2.1-2.5)

  • Chapter 3: Dot Product (Sections 3.1-3.3, Ex. 3.6, 3.7)

We may not get to the dot product stuff on Friday. We’ll eventually read parts of chapters 4 and 6. To guide your reading and some discussion on Friday, please know the mathematical/geometric definition of the following terms:

  • Point

  • Vector

  • Basis

The term vector gets overused in computer science, but for graphics, it will be important to distinguish at times between something like a generic type vec3 and the geometric concept of a vector. To help guide that conversation, think about the following operations and decide on the output type of the result. Some operations may not be valid geometrically.

  • point + vector

  • float * vector

  • point + point

  • vector + vector

  • vector - vector

  • point - point

Finally, think about the GLSL or Qt type you would use to store each of these geometric objects (assume you have a 3D basis). Are any of the operations you declared invalid above valid in GLSL?

In addition to the Immersive Linear Algebra site, the Learn OpenGL website has good summaries of Vector math, 3D Transform Matrices, and Coordinate Systems. The Learn OpenGL guide uses glm and C++ syntax for some of the code examples, but we will have access to similar features in Qt.

Transforms

In 3D, we will manipulate geometry including points and vectors primarily using 4x4 matrix multiplication. Some common transformations are listed below.

Translation

Translation in three dimensions can be expressed as a matrix-vector multiplication (a 4x4 matrix times a 4x1 vector) using 4D homogeneous coordinates. What happens if you apply a translation to a geometric vector with this transform?

\[T(t_x, t_y, t_z) \cdot P = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} p_x + t_x \\ p_y + t_y \\ p_z + t_z \\ 1 \\ \end{pmatrix}\]

Scale

\[S(s_x, s_y, s_z) \cdot P = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} s_x \cdot p_x \\ s_y \cdot p_y \\ s_z \cdot p_z \\ 1 \\ \end{pmatrix}\]

Rotation

There are several ways to do rotation. We will primarily focus on methods that rotate about one of the axes of the basis. For example, a rotation around the \(z\)-axis is helpful, even for 2D applications, as it only modifies the \(x\) and \(y\) coordinates.

\[R_z(\theta) \cdot P = \begin{bmatrix} \cos \theta & -\sin \theta & 0 & 0 \\ \sin \theta & \cos \theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \cdot \begin{pmatrix} p_x \\ p_y \\ p_z \\ 1 \\ \end{pmatrix} = \begin{pmatrix} p_x \cdot \cos \theta - p_y \cdot \sin \theta \\ p_x \cdot \sin \theta + p_y \cdot \cos \theta \\ p_z \\ 1 \\ \end{pmatrix}\]

Consider looking down the \(+z\)-axis. Is the rotation clockwise or counter-clockwise? How can you tell?

The matrices for \(R_x\) and \(R_y\) are given below.

\[R_x(\theta) = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos \theta & -\sin \theta & 0 \\ 0 & \sin \theta & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix} \ \ R_y (\theta) = \begin{bmatrix} \cos \theta & 0 & \sin \theta & 0 \\ 0 & 1 & 0 & 0 \\ -\sin \theta & 0 & \cos \theta & 0 \\ 0 & 0 & 0 & 1 \\ \end{bmatrix}\]

Using matrix transforms

The good news for this course is that you will almost never have to create one of these matrices by hand on either the CPU or the GPU. Qt provides helper functions to create and manipulate 4x4 matrices and common graphics transforms.
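
For example, here is a sketch of building and applying a combined transform with Qt's QMatrix4x4 and QVector3D classes. The specific transform values and the uniform name "modelMatrix" are arbitrary choices for illustration.

      QMatrix4x4 m;                       /* default-constructed as the identity */
      m.translate(1.0f, 2.0f, 0.0f);      /* T(1, 2, 0) */
      m.rotate(45.0f, 0.0f, 0.0f, 1.0f);  /* R_z: 45 degrees about the z-axis */
      m.scale(2.0f);                      /* uniform scale by 2 */

      QVector3D p(1.0f, 0.0f, 0.0f);
      QVector3D q = m.map(p);             /* transform a point (uses the full 4x4 with w = 1) */
      QVector3D v = m.mapVector(p);       /* transform a vector (uses only the upper 3x3) */

      /* the combined matrix can be sent to the vertex shader as a uniform */
      shaderProgram->setUniformValue("modelMatrix", m);

Because each of these calls post-multiplies, m ends up as T·R_z·S, so a point passed through map() is scaled first, then rotated, then translated; this is one place where the order in which you apply transforms shows up in practice.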

What you should be able to do in this course is:

  • Describe what effect a transform has on some geometry. Sketch a small scene before and after the transform.

  • Understand that the order in which you apply transforms matters. Matrix multiplication is not commutative, i.e., \(AB \neq BA\) in general for matrices \(A\) and \(B\).

  • Know what frame/basis you are working in and how to convert between frames/bases.