Week 1: Introduction
Welcome to Computer Graphics! This course serves as an introduction to the field of computer graphics. We will learn many of the basics of modeling and rendering using a modern OpenGL approach. The course features a lot of programming and a fair amount of math, though the math is not especially complex. We will build several interesting projects, and you will have a chance to explore your own final project. If you are expecting to design a full-fledged 3D game or the next CGI movie, this course will not meet your expectations. It is designed around understanding core concepts and preparing you to explore more advanced topics in the vast field of computer graphics.
A Bird's-eye View of Computer Graphics
Computer graphics aims to compute realistic-looking representations of geometric models and scenes. The view through our eyes or the lens of a camera is a complex interaction of material surfaces, the scattering of photons, and the optics of lenses. While the underlying physics is well understood, modeling these interactions exactly on a single computer is computationally infeasible. The field of computer graphics seeks efficient ways to approximate these interactions and create ever more realistic scenes.
For some state-of-the-art rendering, check out the short demo of the Marbles game unveiled during the 2020 GPU Technology Conference. Work like this requires teams of artists and engineers along with powerful hardware to produce. We'll have considerably more modest goals, but we'll be learning many of the core techniques used by professionals. For comparison, the first Pixar feature film, Toy Story, was released in 1995.
We will begin our journey using a modern OpenGL approach, which consists of the following key steps:
- Modeling: Defining 3D objects with geometric primitives, including points, lines, and triangles (see the sketch after this list).
- Scene Creation: Arranging objects in a virtual world using linear algebra transformations.
- Projections: Setting a viewpoint for visualizing the world and converting the scene into a standard view volume.
- Rasterizing: Transforming the projected 3D scene into a 2D image of pixels/fragments on the computer screen or other output device.
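As a small preview of the modeling step, here is a minimal C++ sketch of how a single triangle might be described as raw vertex data. The structure we use in lab will differ, and the variable names here are purely illustrative.

```cpp
// A single triangle, modeled as three vertices (x, y, z) in normalized
// device coordinates. In its simplest form, "modeling" just means producing
// an array of numbers that we will later hand to the GPU.
#include <array>

int main() {
    std::array<float, 9> triangle = {
        -0.5f, -0.5f, 0.0f,  // bottom-left vertex
         0.5f, -0.5f, 0.0f,  // bottom-right vertex
         0.0f,  0.5f, 0.0f   // top vertex
    };
    (void)triangle; // in later weeks: copy this into a VBO and draw it
    return 0;
}
```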
We will initially see how to perform these basic steps, and we will then gradually add more advanced features such as texturing, lighting, and noise to improve the realism.
Later we will explore Ray Tracing or Ray Marching as an entirely different way of modeling the interaction of light and surfaces to create a scene. Both the Triangle→Transform→Rasterize and Ray Tracing methods are widely used in modern graphics.
Connections to other Courses
Some courses explore topics related to those in this course, but approach various problems from a different perspective or starting point:
- Computer Vision: Starts with images, ends with models
- Animation/Game Design: Extends core concepts with advanced applications, models of motion, game AI, level design.
- Build OpenGL: Start with C and 2D arrays, build something like OpenGL from scratch.
Tools
See Remote Tools for many of the tools we'll be using for remote learning in this course, and the Resources section for technical tools.
- OpenGL: the primary graphics framework for understanding core low-level concepts
- GLSL: the shading language for OpenGL
- Qt6: A cross-platform application framework for developing graphical user interfaces. Qt also provides some OpenGL utility classes to make OpenGL development easier.
- C++: Our language for developing high-performance graphics applications on the desktop
- CMake: Helps find third-party dependencies to build our project. CMake makes Makefiles.
- Git: version control, sharing
- CUDA: GPGPU programming
Math
Trig/Algebra/Calc
- Sine
- Cosine
- Radians/Degrees
- Polynomials, mostly up to degree 3
- Occasional derivatives of polynomials or trig functions (see the examples after this list).
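As a rough gauge of the level of math involved, here are the kinds of calculations we will do, written out in LaTeX (the specific symbols are illustrative):

```latex
% Converting between degrees and radians, and typical derivatives:
\theta_{\text{rad}} = \theta_{\text{deg}} \cdot \frac{\pi}{180},
\qquad
\frac{d}{dt}\sin t = \cos t,
\qquad
\frac{d}{dt}\left(a t^{3} + b t^{2} + c t + d\right) = 3 a t^{2} + 2 b t + c
```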
Linear Algebra
- Matrices (4x4)
- Vectors (up to 4x1)
- Dot product
- Cross product
- Matrix/Vector product
- Affine transformations (see the examples after this list)
- Frames, orthogonal bases
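As a preview, here are the linear algebra operations we will use most often, written out for 3D vectors u and v, along with one affine transformation expressed as a 4x4 matrix in homogeneous coordinates (the translation shown is just an illustrative example):

```latex
% Dot and cross products of 3D vectors u and v:
\mathbf{u} \cdot \mathbf{v} = u_x v_x + u_y v_y + u_z v_z,
\qquad
\mathbf{u} \times \mathbf{v} =
  \left( u_y v_z - u_z v_y,\; u_z v_x - u_x v_z,\; u_x v_y - u_y v_x \right)

% An affine transformation (here, a translation by (t_x, t_y, t_z)) applied
% to a point in homogeneous coordinates via a 4x4 matrix/vector product:
\begin{pmatrix}
  1 & 0 & 0 & t_x \\
  0 & 1 & 0 & t_y \\
  0 & 0 & 1 & t_z \\
  0 & 0 & 0 & 1
\end{pmatrix}
\begin{pmatrix} x \\ y \\ z \\ 1 \end{pmatrix}
=
\begin{pmatrix} x + t_x \\ y + t_y \\ z + t_z \\ 1 \end{pmatrix}
```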
Hardware
This course will mostly focus on the software side of computer graphics, but realism has increased greatly since the advent of dedicated Graphics Processing Units (GPUs). These hardware devices have some notable properties compared to general-purpose CPUs:
- optimized for vector math
- highly parallel (1000-2000 cores)
- SIMD
- programmable via shaders or CUDA
Additionally, large-format displays, multi-panel displays, 3D displays, and 3D printers have connections to many computer graphics concepts.
High-level overview of an OpenGL pipeline
- Modeling - setting geometry
- CPU→GPU transfer (vertex buffer objects, VBOs)
- Vertex Shading - transform our view of the world into clip space
  - The vertex shader is programmable in GLSL
  - Highly parallel SIMD (thousands of cores)
  - Runs on the GPU with data resident on the GPU
  - The programmer is responsible for passing data to the GPU and for writing the vertex shader (see the Qt6 sketch after this list)
- Rasterization - convert geometry into (potential) pixels/fragments
- Fragment Shading - assign the final color of the fragments produced by the rasterization process
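Here is a minimal sketch of the CPU→GPU transfer and shader-compilation steps using Qt6's OpenGL helper classes. It assumes an OpenGL context is already current (for example, when called from QOpenGLWidget::initializeGL()); the function name setupTriangle and the GLSL version string are assumptions for illustration, not part of our lab code.

```cpp
// Sketch: upload triangle data to a VBO and compile a trivial shader pair.
// Error handling is omitted for brevity.
#include <QOpenGLBuffer>
#include <QOpenGLShaderProgram>

void setupTriangle(QOpenGLBuffer &vbo, QOpenGLShaderProgram &program) {
    // Modeling: one triangle as raw vertex positions.
    static const float verts[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f };

    // CPU -> GPU transfer: create a vertex buffer object and copy the data into it.
    vbo.create();
    vbo.bind();
    vbo.allocate(verts, sizeof(verts));

    // Vertex and fragment shaders, written in GLSL and compiled at runtime.
    program.addShaderFromSourceCode(QOpenGLShader::Vertex,
        "#version 410 core\n"
        "layout(location = 0) in vec3 pos;\n"
        "void main() { gl_Position = vec4(pos, 1.0); }\n");
    program.addShaderFromSourceCode(QOpenGLShader::Fragment,
        "#version 410 core\n"
        "out vec4 fragColor;\n"
        "void main() { fragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n");
    program.link();
}
```

Qt's QOpenGLBuffer and QOpenGLShaderProgram classes wrap the raw buffer-creation and shader-compilation calls you will see in the LearnOpenGL readings.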
OpenGL as a state machine
OpenGL is a state machine. You set various states, and then issue commands to draw geometry. Once a property is set in one function call, it remains set until changed. For example, if you set the clear color to red, then all subsequent calls to clear the color buffer will use red until you change it.
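For example, a minimal sketch (assuming a current OpenGL context and Qt's QOpenGLFunctions wrapper; the function name is illustrative):

```cpp
// Sketch of OpenGL's state-machine behavior. In our Qt6 code these calls
// typically go through QOpenGLFunctions, but the idea is identical with
// raw OpenGL.
#include <QOpenGLFunctions>

void demoStateMachine(QOpenGLFunctions *gl) {
    gl->glClearColor(1.0f, 0.0f, 0.0f, 1.0f); // set state: clear color = red
    gl->glClear(GL_COLOR_BUFFER_BIT);         // clears to red

    // ... later frames ...
    gl->glClear(GL_COLOR_BUFFER_BIT);         // still clears to red; the state persists

    gl->glClearColor(0.0f, 0.0f, 1.0f, 1.0f); // change state: clear color = blue
    gl->glClear(GL_COLOR_BUFFER_BIT);         // now clears to blue
}
```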
While this Online Demo is for WebGL, the same concepts apply to OpenGL on the desktop. The internal state is similar, but our syntax in C++/Qt6/OpenGL will be different.
Reading
- Skim the overview of Modern OpenGL.
- Read the LearnOpenGL sections on Hello Triangle and Shaders. You can skip the sections on Creating a Window and Hello Window, as that tutorial uses GLFW for the GUI and we are using Qt6. Qt6 also provides some shortcuts for creating VBOs and compiling shaders, so don't worry too much about the code in the tutorial. Instead, focus on the concepts, which are the same as in CS40.
- Read the Readme for the first OpenGL demo. We will build off of a similar structure for our first lab assignment. You can clone and build this demo to see how it works, but you don't need to understand all the details of the code yet. You also will be unable to push changes to this repository, as it is read-only for you.
Other Resources
Here are some tangents that may be of interest, but are not required reading or understanding for this course.
OpenGL History
Version 1.0 was released in 1992. The concept of programmable shaders was still a long way off. Dedicated commercial/consumer graphics hardware wasn't really available until 3dfx released the Voodoo Graphics chip in late 1995. NVIDIA's GPUs started around the same time, and ATI (later AMD) also developed GPUs in the mid 1990s. Other GPU vendors include Intel and Apple. GPUs supporting Direct3D and OpenGL 1.2 started appearing in the late 1990s (the NVIDIA GeForce 256 SDR launched in October 1999).
OpenGL 2.0 first introduced programmable shaders in 2004, but the fixed-function pipeline was still common. OpenGL 3.0 in 2008 started the push toward a fully programmable pipeline by establishing a deprecation plan for older fixed-function features and adding more support for programmable shaders.
OpenGL 4.0 (March 2010) further expanded the number and capability of programmable shaders. OpenGL continued to evolve until version 4.6 (July 2017), at which point the industry focus shifted to Vulkan for greater efficiency and better support for multi-core CPUs. Beyond the OpenGL/Vulkan APIs, Microsoft's Direct3D and Apple's Metal are common APIs on those specific platforms (MoltenVK layers Vulkan on top of Metal).
To illustrate core graphics concepts in a Linux environment, we will use OpenGL 4.x in this course. All of our hardware in the CS labs should support up to OpenGL 4.6, the last version of OpenGL.