Chapter 2. Overview of Commands and Routines

Many OpenGL commands pertain specifically to drawing objects such as points, lines, polygons, and bitmaps. Other commands control the way that some of this drawing occurs (such as those that enable antialiasing or texturing). Still other commands are specifically concerned with frame buffer manipulation. This chapter briefly describes how all the OpenGL commands work together to create the OpenGL processing pipeline. Brief overviews are also given of the routines comprising the OpenGL Utility Library (GLU) and the OpenGL extensions to the X Window System (GLX).

This chapter has the following main sections:

  • "OpenGL Processing Pipeline"

  • "Additional OpenGL Commands"

  • "OpenGL Utility Library"

  • "OpenGL Extension to the X Window System"

OpenGL Processing Pipeline

Now that you have a general idea of how OpenGL works from Chapter 1, let's take a closer look at the stages in which data is actually processed and tie these stages to OpenGL commands. Figure 2-1 is a more detailed block diagram of the OpenGL processing pipeline.

For most of the pipeline, you can see three vertical arrows between the major stages. These arrows represent vertices and the two primary types of data that can be associated with vertices: color values and texture coordinates. Also note that vertices are assembled into primitives, primitives are converted to fragments, and fragments are finally written as pixels in the frame buffer. This progression is discussed in more detail in the following sections.

As you continue reading, be aware that we've taken some liberties with command names. Many OpenGL commands are simple variations of each other, differing mostly in the data type of arguments; some commands differ in the number of related arguments and whether those arguments can be specified as a vector or whether they must be specified separately in a list. For example, if you use the glVertex2f() command, you need to supply x and y coordinates as 32-bit floating-point numbers; with glVertex3sv(), you must supply an array of three short (16-bit) integer values for x, y, and z. For simplicity, only the base name of the command is used in the discussion that follows, and an asterisk is included to indicate that there may be more to the actual command name than is being shown. For example, glVertex*() stands for all variations of the command you use to specify vertices.
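
For instance, the following minimal sketch draws two points, specifying one vertex with two 32-bit floating-point coordinates and the other with a vector of three 16-bit integers:

    GLshort coords[3] = { 1, 3, 0 };

    glBegin(GL_POINTS);
        glVertex2f(1.0f, 3.0f);   /* two 32-bit floats; z defaults to 0          */
        glVertex3sv(coords);      /* vector of three 16-bit integers for x, y, z */
    glEnd();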

Also keep in mind that the effect of an OpenGL command may vary depending on whether certain modes are enabled. For example, you need to enable lighting if the lighting-related commands are to have the desired effect of producing a properly lit object. To enable a particular mode, you use the glEnable() command and supply the appropriate constant to identify the mode (for example, GL_LIGHTING). The following sections don't discuss specific modes, but you can refer to the reference page for glEnable() for a complete list of the modes that can be enabled. Modes are disabled with glDisable().

Figure 2-1. OpenGL Pipeline


Vertices

This section relates the OpenGL commands that perform per-vertex operations to the processing stages shown in Figure 2-1.

Input Data

You must provide several types of input data to the OpenGL pipeline:

  • Vertices—Vertices describe the shape of the desired geometric object. To specify vertices, you use glVertex*() commands in conjunction with glBegin() and glEnd() to create a point, line, or polygon. You can also use glRect*() to describe an entire rectangle at once.

  • Edge flag—By default, all edges of polygons are boundary edges. Use the glEdgeFlag*() command to explicitly set the edge flag.

  • Current raster position—Specified with glRasterPos*(), the current raster position is used to determine raster coordinates for pixel and bitmap drawing operations.

  • Current normal—A normal vector associated with a particular vertex determines how a surface at that vertex is oriented in three-dimensional space; this in turn affects how much light that particular vertex receives. Use glNormal*() to specify a normal vector.

  • Current color—The color of a vertex, together with the lighting conditions, determines the final, lit color. Color is specified with glColor*() if in RGBA mode or with glIndex*() if in color index mode.

  • Current texture coordinates—Specified with glTexCoord*(), texture coordinates determine the location in a texture map that should be associated with a vertex of an object.

When glVertex*() is called, the resulting vertex inherits the current edge flag, normal, color, and texture coordinates. Therefore, glEdgeFlag*(), glNormal*(), glColor*(), and glTexCoord*() must be called before glVertex*() if they are to affect the resulting vertex.
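
A minimal sketch of this ordering: each vertex below inherits the most recently issued normal, color, and texture coordinates.

    glBegin(GL_TRIANGLES);
        glNormal3f(0.0f, 0.0f, 1.0f);     /* current normal, shared by all three vertices */
        glColor3f(1.0f, 0.0f, 0.0f);      /* red */
        glTexCoord2f(0.0f, 0.0f);
        glVertex3f(-1.0f, -1.0f, 0.0f);
        glColor3f(0.0f, 1.0f, 0.0f);      /* green */
        glTexCoord2f(1.0f, 0.0f);
        glVertex3f( 1.0f, -1.0f, 0.0f);
        glColor3f(0.0f, 0.0f, 1.0f);      /* blue */
        glTexCoord2f(0.5f, 1.0f);
        glVertex3f( 0.0f,  1.0f, 0.0f);
    glEnd();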

Matrix Transformations

Vertices and normals are transformed by the modelview and projection matrices before they're used to produce an image in the frame buffer. You can use commands such as glMatrixMode(), glMultMatrix*(), glRotate*(), glTranslate*(), and glScale*() to compose the desired transformations, or you can directly specify matrices with glLoadMatrix*() and glLoadIdentity(). Use glPushMatrix() and glPopMatrix() to save and restore modelview and projection matrices on their respective stacks.
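
A minimal sketch of composing transformations, assuming drawObject() is a hypothetical application routine that issues glVertex*() calls:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -5.0f);         /* move the scene away from the viewpoint */

    glPushMatrix();                          /* save the current modelview matrix      */
        glRotatef(45.0f, 0.0f, 1.0f, 0.0f);  /* rotate this object about the y axis    */
        glScalef(2.0f, 1.0f, 1.0f);
        drawObject();                        /* hypothetical application routine       */
    glPopMatrix();                           /* restore the saved matrix               */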

Lighting and Coloring

In addition to specifying colors and normal vectors, you may define the desired lighting conditions with glLight*() and glLightModel*(), and the desired material properties with glMaterial*(). Related commands you might use to control how lighting calculations are performed include glShadeModel(), glFrontFace(), and glColorMaterial().
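
A minimal sketch of setting up one light source and a diffuse material; the specific values shown are arbitrary.

    GLfloat light_position[] = { 1.0f, 1.0f, 1.0f, 0.0f };   /* directional light     */
    GLfloat mat_diffuse[]    = { 0.8f, 0.2f, 0.2f, 1.0f };   /* reddish diffuse color */

    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT0, GL_POSITION, light_position);
    glMaterialfv(GL_FRONT, GL_DIFFUSE, mat_diffuse);
    glShadeModel(GL_SMOOTH);          /* interpolate lit colors across each polygon */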

Generating Texture Coordinates

Rather than explicitly supplying texture coordinates, you can have OpenGL generate them as a function of other vertex data. This is what the glTexGen*() command does. After the texture coordinates have been specified or generated, they are transformed by the texture matrix. This matrix is controlled with the same commands mentioned earlier for matrix transformations.

Primitive Assembly

Once all these calculations have been performed, vertices are assembled into primitives—points, line segments, or polygons—together with the relevant edge flag, color, and texture information for each vertex.

Primitives

During the next stage of processing, primitives are converted to pixel fragments in several steps: primitives are clipped appropriately, any necessary adjustments are made to the color and texture data, and the relevant coordinates are transformed to window coordinates. Finally, rasterization converts the clipped primitives to pixel fragments.

Clipping

Points, line segments, and polygons are handled slightly differently during clipping. Points are either retained in their original state (if they're inside the clip volume) or discarded (if they're outside). If portions of line segments or polygons are outside the clip volume, new vertices are generated at the clip points. For polygons, an entire edge may need to be constructed between such new vertices. For both line segments and polygons that are clipped, the edge flag, color, and texture information is assigned to all new vertices.

Clipping actually happens in two steps:

  1. Application-specific clipping—Immediately after primitives are assembled, they're clipped in eye coordinates as necessary for any arbitrary clipping planes you've defined for your application with glClipPlane(). (OpenGL requires support for at least six such application-specific clipping planes.)

  2. View volume clipping—Next, primitives are transformed by the projection matrix (into clip coordinates) and clipped by the corresponding viewing volume. This matrix can be controlled by the previously mentioned matrix transformation commands but is most typically specified by glFrustum() or glOrtho().
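
A minimal sketch of both kinds of clipping control; the plane equation and the orthographic volume shown are arbitrary.

    /* Application-specific clipping: keep only the half-space where y >= 0. */
    GLdouble plane[] = { 0.0, 1.0, 0.0, 0.0 };     /* plane coefficients A, B, C, D */
    glClipPlane(GL_CLIP_PLANE0, plane);
    glEnable(GL_CLIP_PLANE0);

    /* View volume clipping: specify an orthographic projection matrix. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-2.0, 2.0, -2.0, 2.0, 1.0, 10.0);      /* left, right, bottom, top, near, far */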

Transforming to Window Coordinates

Before clip coordinates can be converted to window coordinates, they are normalized by dividing by the value of w to yield normalized device coordinates. After that, the viewport transformation applied to these normalized coordinates produces window coordinates. You control the viewport, which determines the area of the on-screen window that displays an image, with glDepthRange() and glViewport().
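
A minimal sketch: map normalized device coordinates to the lower-left 300 by 300 pixels of the window, using the full depth range (the sizes are arbitrary).

    glViewport(0, 0, 300, 300);    /* x, y, width, height in window coordinates */
    glDepthRange(0.0, 1.0);        /* map z from [-1, 1] in NDC into [0, 1]     */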

Rasterization

Rasterization is the process by which a primitive is converted to a two-dimensional image. Each point of this image contains such information as color, depth, and texture data. Together, a point and its associated information are called a fragment. The current raster position (as specified with glRasterPos*()) is used in various ways during this stage for pixel drawing and bitmaps. As discussed below, different issues arise when rasterizing the three different types of primitives; in addition, pixel rectangles and bitmaps need to be rasterized.

Primitives. You control how primitives are rasterized with commands that allow you to choose dimensions and stipple patterns: glPointSize(), glLineWidth(), glLineStipple(), and glPolygonStipple(). Additionally, you can control how the front and back faces of polygons are rasterized with glCullFace(), glFrontFace(), and glPolygonMode().
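
A minimal sketch of these rasterization controls; the sizes and patterns shown are arbitrary.

    glPointSize(4.0f);                            /* points 4 pixels wide               */
    glLineWidth(2.0f);
    glEnable(GL_LINE_STIPPLE);
    glLineStipple(1, 0x0F0F);                     /* repeat factor, 16-bit dash pattern */
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);    /* draw polygons as outlines          */
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);                          /* discard back-facing polygons       */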

Pixels. Several commands control pixel storage and transfer modes. The command glPixelStore*() controls the encoding of pixels in client memory, and glPixelTransfer*() and glPixelMap*() control how pixels are processed before being placed in the frame buffer. A pixel rectangle is specified with glDrawPixels(); its rasterization is controlled with glPixelZoom().

Bitmaps. Bitmaps are rectangles of zeros and ones specifying a particular pattern of fragments to be produced. Each of these fragments has the same associated data. A bitmap is specified using glBitmap().

Texture Memory. Texturing maps a portion of a specified texture image onto each primitive when texturing is enabled. This mapping is accomplished by using the color of the texture image at the location indicated by a fragment's texture coordinates to modify the fragment's RGBA color. A texture image is specified using glTexImage2D() or glTexImage1D(). The commands glTexParameter*() and glTexEnv*() control how texture values are interpreted and applied to a fragment.
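
A minimal sketch of specifying and enabling a two-dimensional texture; checkerboard is assumed to be an application-supplied 64-by-64 RGBA image in client memory.

    /* 'checkerboard' is a hypothetical 64 x 64 array of RGBA bytes. */
    glTexImage2D(GL_TEXTURE_2D, 0, 4, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, checkerboard);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  /* texture modifies fragment color */
    glEnable(GL_TEXTURE_2D);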

Fog. You can have OpenGL blend a fog color with a rasterized fragment's post-texturing color using a blending factor that depends on the distance between the eyepoint and the fragment. Use glFog*() to specify the fog color and blending factor.
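
A minimal sketch of enabling fog; the color and density shown are arbitrary.

    GLfloat fog_color[] = { 0.5f, 0.5f, 0.5f, 1.0f };

    glEnable(GL_FOG);
    glFogi(GL_FOG_MODE, GL_EXP);          /* exponential blending factor */
    glFogf(GL_FOG_DENSITY, 0.35f);
    glFogfv(GL_FOG_COLOR, fog_color);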

Fragments

OpenGL allows a fragment produced by rasterization to modify the corresponding pixel in the frame buffer only if it passes a series of tests. If it does pass, the fragment's data can be used directly to replace the existing frame buffer values, or it can be combined with existing data in the frame buffer, depending on the state of certain modes.

Pixel Ownership Test

The first test is to determine whether the pixel in the frame buffer corresponding to a particular fragment is owned by the current OpenGL context. If so, the fragment proceeds to the next test. If not, the window system determines whether the fragment is discarded or whether any further fragment operations will be performed with that fragment. This test allows the window system to control OpenGL's behavior when, for example, an OpenGL window is obscured.

Scissor Test

With the glScissor() command, you can specify an arbitrary screen-aligned rectangle outside of which fragments will be discarded.

Alpha Test

The alpha test (which is performed only in RGBA mode) discards a fragment depending on the outcome of a comparison between the fragment's alpha value and a constant reference value. The comparison function and reference value are specified with glAlphaFunc().

Stencil Test

The stencil test conditionally discards a fragment based on the outcome of a comparison between the value in the stencil buffer and a reference value. The command glStencilFunc() specifies the comparison function and the reference value. Whether the fragment passes or fails the stencil test, the value in the stencil buffer is modified according to the instructions specified with glStencilOp().

Depth Buffer Test

The depth buffer test discards a fragment if a depth comparison fails; glDepthFunc() specifies the comparison function. The result of the depth comparison also affects the stencil buffer update value if stenciling is enabled.

Blending

Blending combines a fragment's R, G, B, and A values with those stored in the frame buffer at the corresponding location. The blending, which is performed only in RGBA mode, depends on the alpha value of the fragment and that of the corresponding currently stored pixel; it might also depend on the RGB values. You control blending with glBlendFunc(), which allows you to indicate the source and destination blending factors.
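
A minimal sketch of typical transparency blending, in which the incoming fragment's alpha value weights it against the pixel already stored in the frame buffer:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);   /* source factor, destination factor */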

Dithering

If dithering is enabled, a dithering algorithm is applied to the fragment's color or color index value. This algorithm depends only on the fragment's value and its x and y window coordinates.

Logical Operations

Finally, a logical operation can be applied between the fragment and the value stored at the corresponding location in the frame buffer; the result replaces the current frame buffer value. You choose the desired logical operation with glLogicOp(). Logical operations are performed only on color indices, never on RGBA values.

Pixels

During the previous stage of the OpenGL pipeline, fragments are converted to pixels in the frame buffer. The frame buffer is actually organized into a set of logical buffers—the color, depth, stencil, and accumulation buffers. The color buffer itself consists of a front left, front right, back left, back right, and some number of auxiliary buffers. You can issue commands to control these buffers, and you can directly read or copy pixels from them. (Note that the particular OpenGL context you're using may not provide all of these buffers.)

Frame Buffer Operations

You can select into which buffer color values are written with glDrawBuffer(). In addition, four different commands are used to mask the writing of bits to each of the logical frame buffers after all per-fragment operations have been performed: glIndexMask(), glColorMask(), glDepthMask(), and glStencilMask(). The operation of the accumulation buffer is controlled with glAccum(). Finally, glClear() sets every pixel in a specified subset of the buffers to the value specified with glClearColor(), glClearIndex(), glClearDepth(), glClearStencil(), or glClearAccum().
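
A minimal sketch of typical frame buffer control at the start of a frame, assuming a double-buffered window; the clear values shown are arbitrary.

    glDrawBuffer(GL_BACK);                   /* direct color writes to the back buffer    */
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);    /* value used when clearing the color buffer */
    glClearDepth(1.0);                       /* value used when clearing the depth buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_FALSE);   /* mask off writes to alpha bits  */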

Reading or Copying Pixels

You can read pixels from the frame buffer into memory, encode them in various ways, and store the encoded result in memory with glReadPixels(). In addition, you can copy a rectangle of pixel values from one region of the frame buffer to another with glCopyPixels(). The command glReadBuffer() controls from which color buffer the pixels are read or copied.
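
A minimal sketch of reading back a block of pixels into client memory; the region and format shown are arbitrary.

    GLubyte pixels[100 * 100 * 4];           /* room for a 100 x 100 block of RGBA bytes */

    glReadBuffer(GL_FRONT);                  /* read from the front color buffer */
    glReadPixels(0, 0, 100, 100, GL_RGBA, GL_UNSIGNED_BYTE, pixels);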

Additional OpenGL Commands

This section briefly describes special groups of commands that weren't explicitly shown as part of OpenGL's processing pipeline. These commands accomplish such diverse tasks as evaluating polynomials, using display lists, and obtaining the values of OpenGL state variables.

Using Evaluators

OpenGL's evaluator commands allow you to use a polynomial mapping to produce vertices, normals, texture coordinates, and colors. These calculated values are then passed on to the pipeline as if they had been directly specified. The evaluator facility is also the basis for the NURBS (Non-Uniform Rational B-Spline) commands, which allow you to define curves and surfaces, as described later in this chapter under "OpenGL Utility Library."

The first step involved in using evaluators is to define the appropriate one- or two-dimensional polynomial mapping using glMap*(). The domain values for this map can then be specified and evaluated in one of two ways:

  • By defining a series of evenly spaced domain values to be mapped using glMapGrid*() and then evaluating a rectangular subset of that grid with glEvalMesh*(). A single point of the grid can be evaluated using glEvalPoint*().

  • By explicitly specifying a desired domain value as an argument to glEvalCoord*(), which evaluates the maps at that value.
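
A minimal sketch of the first approach described above: a one-dimensional cubic map evaluated over an even grid of 30 steps (the control points are arbitrary).

    GLfloat ctrlpoints[4][3] = {
        { -4.0f, -4.0f, 0.0f }, { -2.0f,  4.0f, 0.0f },
        {  2.0f, -4.0f, 0.0f }, {  4.0f,  4.0f, 0.0f }
    };

    glMap1f(GL_MAP1_VERTEX_3, 0.0f, 1.0f, 3, 4, &ctrlpoints[0][0]);
    glEnable(GL_MAP1_VERTEX_3);
    glMapGrid1f(30, 0.0f, 1.0f);    /* 30 evenly spaced domain values between 0 and 1         */
    glEvalMesh1(GL_LINE, 0, 30);    /* evaluate the grid and draw connected line segments     */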

Performing Selection and Feedback

Selection, feedback, and rendering are mutually exclusive modes of operation. Rendering is the normal, default mode during which fragments are produced by rasterization; in selection and feedback modes, no fragments are produced and therefore no frame buffer modification occurs. In selection mode, you can determine which primitives would be drawn into some region of a window; in feedback mode, information about primitives that would be rasterized is fed back to the application. You select among these three modes with glRenderMode().

Selection

Selection works by returning the current contents of the name stack, which is an array of integer-valued names. You assign the names and build the name stack within the modeling code that specifies the geometry of objects you want to draw. Then, in selection mode, whenever a primitive intersects the clip volume, a selection hit occurs. The hit record, which is written into the selection array you've supplied with glSelectBuffer(), contains information about the contents of the name stack at the time of the hit. (Note that glSelectBuffer() needs to be called before OpenGL is put into selection mode with glRenderMode(). Also, the entire contents of the name stack isn't guaranteed to be returned until glRenderMode() is called to take OpenGL out of selection mode.) You manipulate the name stack with glInitNames(), glLoadName(), glPushName(), and glPopName(). In addition, you might want to use an OpenGL Utility Library routine for selection, gluPickMatrix(), which is described later in this chapter under "OpenGL Utility Library."
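
A minimal sketch of the selection sequence; drawScene() is a hypothetical application routine whose drawing code calls glLoadName() before each pickable object.

    GLuint selectBuf[512];
    GLint  hits;

    glSelectBuffer(512, selectBuf);     /* must be called before entering selection mode */
    glRenderMode(GL_SELECT);

    glInitNames();
    glPushName(0);                      /* dummy name for glLoadName() to replace        */
    drawScene();                        /* hypothetical; issues glLoadName() per object  */

    hits = glRenderMode(GL_RENDER);     /* leave selection mode; returns the hit count   */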

Feedback

In feedback mode, each primitive that would be rasterized generates a block of values that is copied into the feedback array. You supply this array with glFeedbackBuffer(), which must be called before OpenGL is put into feedback mode. Each block of values begins with a code indicating the primitive type, followed by values that describe the primitive's vertices and associated data. Entries are also written for bitmaps and pixel rectangles. Values are not guaranteed to be written into the feedback array until glRenderMode() is called to take OpenGL out of feedback mode. You can use glPassThrough() to supply a marker that's returned in feedback mode as if it were a primitive.

Using Display Lists

A display list is simply a group of OpenGL commands that has been stored for subsequent execution. The glNewList() command begins the creation of a display list, and glEndList() ends it. With few exceptions, OpenGL commands called between glNewList() and glEndList() are appended to the display list, and optionally executed as well. (The reference page for glNewList() lists the commands that can't be stored and executed from within a display list.) To trigger the execution of a list or set of lists, use glCallList() or glCallLists() and supply the identifying number of a particular list or lists. You can manage the indices used to identify display lists with glGenLists(), glListBase(), and glIsList(). Finally, you can delete a set of display lists with glDeleteLists().
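
A minimal sketch of creating, executing, and deleting a display list:

    GLuint list = glGenLists(1);        /* obtain one unused display-list index                */

    glNewList(list, GL_COMPILE);        /* store the following commands without executing them */
        glBegin(GL_TRIANGLES);
            glVertex2f(0.0f, 0.0f);
            glVertex2f(1.0f, 0.0f);
            glVertex2f(0.0f, 1.0f);
        glEnd();
    glEndList();

    glCallList(list);                   /* execute the stored commands                 */
    glDeleteLists(list, 1);             /* delete the list when it's no longer needed  */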

Managing Modes and Execution

The effect of many OpenGL commands depends on whether a particular mode is in effect. You use glEnable() and glDisable() to set such modes and glIsEnabled() to determine whether a particular mode is set.

You can control the execution of previously issued OpenGL commands with glFinish(), which forces all such commands to complete, or glFlush(), which ensures that all such commands will be completed in a finite time.

A particular implementation of OpenGL may allow certain behaviors to be controlled with hints, by using the glHint() command. Possible behaviors are the quality of color and texture coordinate interpolation, the accuracy of fog calculations, and the sampling quality of antialiased points, lines, or polygons.

Obtaining State Information

OpenGL maintains numerous state variables that affect the behavior of many commands. Some of these variables have specialized query commands:

glGetLight()
glGetMaterial()
glGetClipPlane()
glGetPolygonStipple()
glGetTexEnv()
glGetTexGen()
glGetTexImage()
glGetTexLevelParameter()
glGetTexParameter()
glGetMap()
glGetPixelMap()

The value of other state variables can be obtained with glGetBooleanv(), glGetDoublev(), glGetFloatv(), or glGetIntegerv(), as appropriate. The reference page for glGet*() explains how to use these commands. Other query commands you might want to use are glGetError(), glGetString(), and glIsEnabled(). (See "Handling Errors" later in this chapter for more information about routines related to error handling.) Finally, you can save and restore sets of state variables with glPushAttrib() and glPopAttrib().

OpenGL Utility Library

The OpenGL Utility Library (GLU) contains several groups of commands that complement the core OpenGL interface by providing support for auxiliary features. Since these utility routines make use of core OpenGL commands, any OpenGL implementation is guaranteed to support the utility routines. Note that the prefix for Utility Library routines is glu rather than gl.

Manipulating Images for Use in Texturing

GLU provides image scaling and automatic mipmapping routines to simplify the specification of texture images. The routine gluScaleImage() scales a specified image to an accepted texture size; the resulting image can then be passed to OpenGL as a texture. The automatic mipmapping routines gluBuild1DMipmaps() and gluBuild2DMipmaps() create mipmapped texture images from a specified image and pass them to glTexImage1D() and glTexImage2D(), respectively.
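
A minimal sketch, assuming image is an application-supplied 200-by-100 RGB image (not a power of 2 on either side):

    /* 'image' is a hypothetical 200 x 100 RGB image in client memory. */
    gluBuild2DMipmaps(GL_TEXTURE_2D, 3, 200, 100,
                      GL_RGB, GL_UNSIGNED_BYTE, image);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);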

Transforming Coordinates

Several commonly used matrix transformation routines are provided. You can set up a two-dimensional orthographic viewing region with gluOrtho2D(), a perspective viewing volume using gluPerspective(), or a viewing volume that's centered on a specified eyepoint with gluLookAt(). Each of these routines creates the desired matrix and applies it to the current matrix using glMultMatrix().
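
A minimal sketch of a typical viewing setup: a perspective projection followed by an eyepoint-based viewing transformation (the values shown are arbitrary).

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 1.0, 100.0);   /* field of view, aspect ratio, near, far */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 0.0, 5.0,      /* eyepoint              */
              0.0, 0.0, 0.0,      /* point being looked at */
              0.0, 1.0, 0.0);     /* up direction          */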

The gluPickMatrix() routine simplifies selection by creating a matrix that restricts drawing to a small region of the viewport. If you rerender the scene in selection mode after this matrix has been applied, all objects that would be drawn near the cursor will be selected and information about them stored in the selection buffer. See "Performing Selection and Feedback" earlier in this chapter for more information about selection mode.

If you need to determine where in the window an object is being drawn, use gluProject(), which converts specified coordinates from object coordinates to window coordinates; gluUnProject() performs the inverse conversion.

Polygon Tessellation

The polygon tessellation routines triangulate a concave polygon with one or more contours. To use this GLU feature, first create a tessellation object with gluNewTess() and use gluTessCallback() to define the callback routines that will process the triangles generated by the tessellator. Then use gluBeginPolygon(), gluTessVertex(), gluNextContour(), and gluEndPolygon() to specify the concave polygon to be tessellated. Unneeded tessellation objects can be destroyed with gluDeleteTess().

Rendering Spheres, Cylinders, and Disks

You can render spheres, cylinders, and disks using the GLU quadric routines. To do this, create a quadric object with gluNewQuadric(). (To destroy this object when you're finished with it, use gluDeleteQuadric().) Then specify the desired rendering style, as listed below, with the appropriate routine (unless you're satisfied with the default values):

  • Whether surface normals should be generated, and if so, whether there should be one normal per vertex or one normal per face: gluQuadricNormals()

  • Whether texture coordinates should be generated: gluQuadricTexture()

  • Which side of the quadric should be considered the outside and which the inside: gluQuadricOrientation()

  • Whether the quadric should be drawn as a set of polygons, lines, or points: gluQuadricDrawStyle()

After you've specified the rendering style, simply invoke the rendering routine for the desired type of quadric object: gluSphere(), gluCylinder(), gluDisk(), or gluPartialDisk(). If an error occurs during rendering, the error-handling routine you've specified with gluQuadricCallback() is invoked.
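
A minimal sketch of drawing a wireframe sphere with a quadric object; the radius and tessellation values are arbitrary.

    GLUquadricObj *quad = gluNewQuadric();

    gluQuadricDrawStyle(quad, GLU_LINE);   /* draw as lines rather than filled polygons */
    gluQuadricNormals(quad, GLU_SMOOTH);   /* one normal per vertex                     */
    gluSphere(quad, 1.0, 20, 16);          /* radius, slices, stacks                    */
    gluDeleteQuadric(quad);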

NURBS Curves and Surfaces

NURBS (Non-Uniform Rational B-Spline) curves and surfaces are converted to OpenGL evaluators by the routines described in this section. You can create and delete a NURBS object with gluNewNurbsRenderer() and gluDeleteNurbsRenderer(), and establish an error-handling routine with gluNurbsCallback().

You specify the desired curves and surfaces with different sets of routines—gluBeginCurve(), gluNurbsCurve(), and gluEndCurve() for curves or gluBeginSurface(), gluNurbsSurface(), and gluEndSurface() for surfaces. You can also specify a trimming region, which defines a subset of the NURBS surface domain to be evaluated, thereby allowing you to create surfaces that have smooth boundaries or that contain holes. The trimming routines are gluBeginTrim(), gluPwlCurve(), gluNurbsCurve(), and gluEndTrim().
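
A minimal sketch of specifying a single cubic NURBS curve; the knot vector and control points shown are arbitrary.

    GLfloat knots[8] = { 0.0f, 0.0f, 0.0f, 0.0f, 1.0f, 1.0f, 1.0f, 1.0f };
    GLfloat ctlpoints[4][3] = {
        { -4.0f,  0.0f, 0.0f }, { -1.0f,  3.0f, 0.0f },
        {  1.0f, -3.0f, 0.0f }, {  4.0f,  0.0f, 0.0f }
    };
    GLUnurbsObj *nurb = gluNewNurbsRenderer();

    gluBeginCurve(nurb);
        gluNurbsCurve(nurb, 8, knots, 3, &ctlpoints[0][0],
                      4, GL_MAP1_VERTEX_3);   /* knot count, knots, stride, points, order, type */
    gluEndCurve(nurb);

    gluDeleteNurbsRenderer(nurb);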

As with quadric objects, you can control how NURBS curves and surfaces are rendered:

  • Whether a curve or surface should be discarded if its control polyhedron lies outside the current viewport

  • What the maximum length should be (in pixels) of edges of polygons used to render curves and surfaces

  • Whether the projection matrix, modelview matrix, and viewport should be taken from the OpenGL server or whether you'll supply them explicitly with gluLoadSamplingMatrices()

Use gluNurbsProperty() to set these properties, or use the default values. You can query a NURBS object about its rendering style with gluGetNurbsProperty().

Handling Errors

The routine gluErrorString() is provided for retrieving an error string that corresponds to an OpenGL or GLU error code. The currently defined OpenGL error codes are described in the glGetError() reference page. The GLU error codes are listed in the gluErrorString(), gluTessCallback(), gluQuadricCallback(), and gluNurbsCallback() reference pages. Errors generated by GLX routines are listed in the relevant reference pages for those routines.

OpenGL Extension to the X Window System

In the X Window System, OpenGL rendering is made available as an extension to X in the formal X sense: connection and authentication are accomplished with the normal X mechanisms. As with other X extensions, there is a defined network protocol for OpenGL's rendering commands encapsulated within the X byte stream. Since performance is critical in three-dimensional rendering, the OpenGL extension to X allows OpenGL to bypass the X server's involvement in data encoding, copying, and interpretation and instead render directly to the graphics pipeline.

This section briefly discusses the routines defined as part of GLX; these routines have the prefix glX. You'll need to have some knowledge of X in order to fully understand the following and to use GLX successfully.

Initialization

Use glXQueryExtension() and glXQueryVersion() to determine whether the GLX extension is defined for an X server, and if so, which version is bound in the server. The glXChooseVisual() routine returns a pointer to an XVisualInfo structure describing the visual that best meets the client's specified attributes. You can query a visual about its support of a particular OpenGL attribute with glXGetConfig().
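
A minimal sketch of this initialization, assuming dpy is an already-opened X display connection; the attribute list shown is arbitrary.

    int attribs[] = { GLX_RGBA, GLX_DEPTH_SIZE, 16, GLX_DOUBLEBUFFER, None };
    int errorBase, eventBase;
    XVisualInfo *visInfo;

    if (!glXQueryExtension(dpy, &errorBase, &eventBase)) {
        /* the X server does not support the GLX extension */
    }
    visInfo = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);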

Controlling Rendering

Several GLX routines are provided for creating and managing an OpenGL rendering context. You can use such a context to render off-screen if you want. Routines are also provided for such tasks as synchronizing execution between the X and OpenGL streams, swapping front and back buffers, and using an X font.

Managing an OpenGL Rendering Context

An OpenGL rendering context is created with glXCreateContext(). One of the arguments to this routine allows you to request a direct rendering context that bypasses the X server as described above. (Note that in order to do direct rendering, the X server connection must be local and the OpenGL implementation needs to support direct rendering.) You can determine whether a GLX context is direct with glXIsDirect().

To make a rendering context current, use glXMakeCurrent(); glXGetCurrentContext() returns the current context. (You can also obtain the current drawable with glXGetCurrentDrawable().) Remember that only one context can be current for any thread at any one time. If you have multiple contexts, you can copy selected groups of OpenGL state variables from one context to another with glXCopyContext(). When you're finished with a particular context, destroy it with glXDestroyContext().
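
A minimal sketch, assuming dpy, visInfo, and win (an X window created with a matching visual) already exist:

    GLXContext ctx = glXCreateContext(dpy, visInfo, NULL, True);  /* True requests direct rendering */

    glXMakeCurrent(dpy, win, ctx);     /* subsequent OpenGL commands draw into 'win' */
    /* ... issue OpenGL commands ... */
    glXMakeCurrent(dpy, None, NULL);   /* release the context                        */
    glXDestroyContext(dpy, ctx);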

Off-Screen Rendering

To render off-screen, first create an X Pixmap and then pass this as an argument to glXCreateGLXPixmap(). Once rendering is completed, you can destroy the association between the X and GLX Pixmaps with glXDestroyGLXPixmap(). (Off-screen rendering isn't guaranteed to be supported for direct renderers.)

Synchronizing Execution

To prevent X requests from executing until any outstanding OpenGL rendering is completed, call glXWaitGL(). Then, any previously issued OpenGL commands are guaranteed to be executed before any X rendering calls made after glXWaitGL(). Although the same result can be achieved with glFinish(), glXWaitGL() doesn't require a round trip to the server and thus is more efficient in cases where the client and server are on separate machines.

To prevent an OpenGL command sequence from executing until any outstanding X requests are completed, use glXWaitX(). This routine guarantees that previously issued X rendering calls will be executed before any OpenGL calls made after glXWaitX().

Swapping Buffers

For drawables that are double-buffered, the front and back buffers can be exchanged by calling glXSwapBuffers(). An implicit glFlush() is done as part of this routine.

Using an X Font

A shortcut for using X fonts in OpenGL is provided with the command glXUseXFont().