VC++ Programming with OpenGL

Computer graphics has come a long way since the days when it let you draw only a few fixed 2-D shapes on the screen. 3-D graphics programming has changed the whole scenario. Today's computers, even the desktop variety, can produce stunningly real images of near photographic quality. In fact, using modern 3-D graphics programming techniques, computers can produce everything from sophisticated computer games like Doom to scenes for blockbuster films such as Terminator 2 and Jurassic Park.

However, understanding 3-D graphics programming techniques requires a strong background in mathematics; just flip through any standard computer graphics book and you will see why. The formulae involved are complicated enough to puzzle even the best of us. Though a good mathematical background has never harmed anybody, many of today's 3-D graphics programming tools hide most of the mathematical details from the programmer, leaving him free to worry about creating images rather than about the trigonometry and differential calculus involved. One such tool is OpenGL, a library of graphics routines that makes sophisticated 3-D graphics programming accessible to ordinary programmers. Luckily for PC programmers, OpenGL is now available under the Windows 95/98 and NT operating systems.

What Is OpenGL?

OpenGL offers a programming interface for producing interactive 3-D applications on a wide variety of platforms, including workstations from DEC, Silicon Graphics, and IBM. Using the more than 100 supported OpenGL commands, a programmer can do anything from displaying simple shapes to composing animated 3-D scenes, which makes OpenGL useful in CAD programs, simulations, and games. Programs written using OpenGL for one platform can be ported easily to another with minimal changes.

OpenGL’s story began at Silicon Graphics Incorporated, where it was intended for use with the company's IRIS GL graphics workstations. It has since been implemented on a wide variety of platforms. An OpenGL Architecture Review Board comprising industry leaders such as Intel, Microsoft, and IBM maintains the OpenGL library's definition, ensuring that it is implemented properly on various platforms and remains as portable as possible.

The OpenGL Library

As mentioned earlier, the main OpenGL library consists of more than 100 functions, each of whose names begins with the letters gl. In addition to these core functions, Windows 95 supports four other categories of functions. These are:

  1. OpenGL utility library
  2. OpenGL auxiliary library
  3. Wiggle functions unique to Windows
  4. New Win32 API functions

The core set of 115 OpenGL functions represents the basic set of functions that should be implemented on any OpenGL platform. These functions allow the programmer to create various types of shapes, produce lighting effects, incorporate texture mapping, perform matrix transformations, and much more. In fact, these core functions appear in several forms. For example, there are functions like glVertex3f( ), glVertex3d( ), glVertex3i( ), glVertex3fv( ), glVertex3dv( ), etc., all of which define a vertex. If you count each form as a different function, there are well over 300 functions in the core set itself.

OpenGL Utility Library

The functions in this category begin with the prefix glu. These functions actually call on the core set to do their work. Their job is to simplify the use of texture images, perform high-level coordinate transformations, and render polygon-based objects such as spheres, cylinders, and disks. Like the core set, all the functions in the utility library are guaranteed to be present in any OpenGL implementation.

OpenGL Auxiliary Library

These functions have the word aux attached at the beginning of their names. They are platform-dependent functions that carry out tasks like managing windows, handling I/O, and drawing certain 3-D objects. You have to exercise caution while using these functions, since their use is likely to reduce the portability of your program.

The WGL Functions

There are six functions under this category. Their names begin with wgl, hence the name wiggle functions. These functions link OpenGL to Windows 95/98/NT, enabling the programmer to create and select rendering contexts (the OpenGL version of a device context) and create bitmapped fonts that are used to place text in an OpenGL window. These functions are unique to the Windows implementation of OpenGL.

New Win32 Functions

Unlike the others, these functions do not begin with a specific prefix. They deal with pixel formats and double buffering. Being extensions to the Win32 system, they aren't implemented on other OpenGL platforms.

In this article we have tried to explain how basic 2-D shapes can be drawn using OpenGL. The full description of the program is given below.

OpenGL Data Types

In Windows programming we use many special data types like RECT, POINT, HDC, and HWND for commonly needed values. Similarly, OpenGL defines many data types that we can use in OpenGL programs. Some of these data types, several of which our program uses, are shown below.

Data Type      Equivalent
GLbyte         signed char
GLubyte        unsigned char
GLshort        short
GLushort       unsigned short
GLint          long
GLuint         unsigned
GLfloat        float
GLdouble       double
GLboolean      unsigned
GLvoid         void
HGLRC          HGDIOBJ

Rendering Contexts

A rendering context is OpenGL’s version of a device context (DC). As you know, a device context contains information that specifies pen and brush colors, drawing modes, mapping modes, palette contents, and so on. These parameters decide how graphical information is displayed in a window. To draw anything in a window it is necessary to first create a device context. We usually do this through the following set of statements:

CClientDC d ( this ) ;
d.Rectangle ( 10, 10, 100, 100 ) ;

Unless the device context is created we cannot call the CDC::Rectangle( ) function.

Windows based OpenGL programs use DCs, just like any other Windows programs. However, they must also deal with a rendering context. The rendering context holds the information that OpenGL needs to associate itself with Windows’ windowing system. An OpenGL application must create a rendering context and then make it current before using OpenGL to draw in a window. Making the rendering context current binds it to the window’s DC. After drawing to a window, the application makes the rendering context not current, which unbinds it from the DC. Finally, sometime before the program ends, the rendering context must be deleted. A rendering context is created using the function wglCreateContext( ), made current using wglMakeCurrent( ), and deleted using wglDeleteContext( ).
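A minimal sketch of this lifecycle, assuming hdc is the handle of a DC whose pixel format has already been set:

HGLRC hrc = wglCreateContext ( hdc ) ;   // create the rendering context
wglMakeCurrent ( hdc, hrc ) ;            // bind it to the window's DC
// ... OpenGL drawing calls go here ...
wglMakeCurrent ( NULL, NULL ) ;          // make it not current (unbind)
wglDeleteContext ( hrc ) ;               // delete it before the program ends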

Pixel Formats

Before our program can create a rendering context, it must set the device's pixel format, which contains the attributes for the device's drawing surface. These attributes include whether the drawing surface uses RGBA or indexed color mode, whether the pixel buffer uses single or double buffering, the number of color bits, the number of bits used in the depth and stencil buffers, and other OpenGL graphical information. (You will learn about most of this stuff later in this article.)

The Win32 functions that manage pixel formats are shown below.

Function Name             Use
ChoosePixelFormat( )      Returns the pixel format that most closely matches the requested pixel format
DescribePixelFormat( )    Obtains information about a given pixel format
GetPixelFormat( )         Gets the pixel format of the given device context
SetPixelFormat( )         Sets the pixel format of the given device context

The number of pixel formats a display device supports depends on its capabilities. The attributes of a particular pixel format are represented by a 26-field structure called PIXELFORMATDESCRIPTOR. To set up a pixel format we have to fill this structure and then pass its address to the SetPixelFormat( ) function.

The Program Explanation

When the application starts up, the myview class's PreCreateWindow( ) function gets called. This function modifies the window’s style to include the WS_CLIPCHILDREN and WS_CLIPSIBLINGS flags through the statement,

cs.style |= WS_CLIPCHILDREN | WS_CLIPSIBLINGS ;

If we do not include these flags in the window’s style, the SetPixelFormat( ) function will return an error. This is because child windows do not necessarily have the same pixel format as the parent window, and the WS_CLIPCHILDREN and WS_CLIPSIBLINGS flags prohibit OpenGL from displaying output in child windows.
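For reference, here is a sketch of how the override might look in the myview class; only the style line comes from the program being discussed, the rest is standard MFC boilerplate:

BOOL myview::PreCreateWindow ( CREATESTRUCT& cs )
{
    // add the flags OpenGL needs before the window is created
    cs.style |= WS_CLIPCHILDREN | WS_CLIPSIBLINGS ;
    return CView::PreCreateWindow ( cs ) ;
}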

When it is time to create the application window, the myview class's OnCreate( ) function is called. The OnCreate( ) function does three jobs:

  1. Setting the window’s pixel format
  2. Creating the application’s rendering context
  3. Making the rendering context current

Let us now understand these functions in detail.

Setting Up The Window’s Pixel Format

This is a four-step process. These steps are as follows:

Step 1: Setting Up The Structure

In the first step we are required to set up values in a PIXELFORMATDESCRIPTOR structure. This structure contains 26 fields, as shown below:

typedef struct tagPIXELFORMATDESCRIPTOR
{
    WORD  nSize ;
    WORD  nVersion ;
    DWORD dwFlags ;
    BYTE  iPixelType ;
    BYTE  cColorBits ;
    BYTE  cRedBits ;
    BYTE  cRedShift ;
    BYTE  cGreenBits ;
    BYTE  cGreenShift ;
    BYTE  cBlueBits ;
    BYTE  cBlueShift ;
    BYTE  cAlphaBits ;
    BYTE  cAlphaShift ;
    BYTE  cAccumBits ;
    BYTE  cAccumRedBits ;
    BYTE  cAccumGreenBits ;
    BYTE  cAccumBlueBits ;
    BYTE  cAccumAlphaBits ;
    BYTE  cDepthBits ;
    BYTE  cStencilBits ;
    BYTE  cAuxBuffers ;
    BYTE  iLayerType ;
    BYTE  bReserved ;
    DWORD dwLayerMask ;
    DWORD dwVisibleMask ;
    DWORD dwDamageMask ;
} PIXELFORMATDESCRIPTOR, *PPIXELFORMATDESCRIPTOR, FAR *LPPIXELFORMATDESCRIPTOR ;

We have set up this structure as follows:

PIXELFORMATDESCRIPTOR pfd =
{
    sizeof ( PIXELFORMATDESCRIPTOR ),   // Structure size
    1,                                  // Structure version number
    PFD_DRAW_TO_WINDOW |                // Property flags
    PFD_SUPPORT_OPENGL,
    PFD_TYPE_RGBA,
    24,                                 // 24-bit color
    0, 0, 0, 0, 0, 0,                   // Not concerned with these
    0, 0, 0, 0, 0, 0, 0,                // No alpha or accum buffer
    32,                                 // 32-bit depth buffer
    0, 0,                               // No stencil or aux buffer
    PFD_MAIN_PLANE,                     // Main layer type
    0,                                  // Reserved
    0, 0, 0                             // Unsupported
} ;

How do we decide what values to fill in this structure to get the results that we want? This needs a good understanding of OpenGL and its implementation under Windows 95. To begin with, we have used default values for most of the structure's members; these defaults work well on most systems. In the subsequent programs of this article you'll understand more about pixel formats.

Here, the property flags in dwFlags enable the application to draw to a window using OpenGL functions, whereas the PFD_TYPE_RGBA flag in iPixelType selects the RGBA color mode. The 24 in cColorBits selects 24-bit color for 16.7 million colors. The application will not use alpha or accumulation buffers, so the cAccumBits through cAccumAlphaBits members are all set to 0.

The 32 in cDepthBits selects a 32-bit depth buffer. (The depth buffer helps OpenGL remove hidden surfaces from 3-D objects.) The next two zeroes indicate that the application will not use stencil or auxiliary buffers. (A stencil buffer enables an application to restrict drawing to a specific region of a window.) The PFD_MAIN_PLANE flag in iLayerType is the only flag value we can currently use for the layer type. Finally, the reserved and unsupported structure members are all set to 0.

Step 2: Setting The Pixel Format

Once the PIXELFORMATDESCRIPTOR structure has been set up with appropriate values, we can set the pixel format. In our program this has been done through the following code segment:

m_d = new CClientDC ( this ) ;
int index = ChoosePixelFormat ( m_d -> m_hDC, &pfd ) ;
SetPixelFormat ( m_d -> m_hDC, index, &pfd ) ;

Here we have first obtained a DC for the client area of the application’s window. Then through a call to the function ChoosePixelFormat( ) we have collected the index of the pixel format that most closely matches the format requested. This function’s two arguments are a handle to the DC for which to select the pixel format and the address of the PIXELFORMATDESCRIPTOR structure that holds the attributes of the requested pixel format. If the function call fails, ChoosePixelFormat( ) returns a 0; otherwise, it returns the pixel-format index.

The third line in the preceding code segment calls the SetPixelFormat( ) function to set the pixel format. The function’s three arguments are a handle to the DC, the pixel-format index, and the address of the PIXELFORMATDESCRIPTOR structure. The function returns TRUE if it succeeds; otherwise, it returns FALSE.
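Since both functions report failure through their return values, it is worth guarding against a bad pixel format. A sketch of the same calls with error checks, assuming they appear in OnCreate( ) (returning -1 from OnCreate( ) aborts window creation in MFC):

int index = ChoosePixelFormat ( m_d -> m_hDC, &pfd ) ;
if ( index == 0 )   // 0 means no matching pixel format was found
    return -1 ;
if ( SetPixelFormat ( m_d -> m_hDC, index, &pfd ) == FALSE )
    return -1 ;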

Step 3: Checking Whether To Create A Logical Palette

OpenGL gives excellent performance when running on systems that display 64,000 or more colors. However, if we are running an OpenGL program on a 256-color system we need to set up a logical palette. Our program must be able to determine during execution whether creation of a logical palette is necessary. In our program this has been accomplished through the following code:

DescribePixelFormat ( m_d -> m_hDC, index, sizeof ( pfd ), &pfd ) ;
if ( pfd.dwFlags & PFD_NEED_PALETTE )
setuplogicalpalette( ) ;

After setting the pixel format we have called the function DescribePixelFormat( ) to fill the PIXELFORMATDESCRIPTOR structure with the values for the current pixel format. If the dwFlags member of the PIXELFORMATDESCRIPTOR structure contains the PFD_NEED_PALETTE flag, we should create a logical palette for the application.

Step 4: Setting Up A Logical Palette

A logical palette is a set of colors that is associated with a particular application. The system palette, on the other hand, contains the colors that are currently displayed on the screen. When the user switches to a new application, the new application’s logical palette is mapped into Windows’ system palette. We can have many logical palettes, but there can be only one system palette.

In our program, the job of setting up the logical palette has been done by the user-defined function setuplogicalpalette( ). In this function we have defined the following structure to hold the information that Windows needs to create a logical palette:

struct
{

WORD ver ;
WORD num ;
PALETTEENTRY entries[256] ;

} logicalpalette = { 0x300, 256 } ;

The first element of the structure is a version number, which should be 0x300. The second element indicates the number of colors in the logical palette. The last element is an array of PALETTEENTRY structures. Windows defines the PALETTEENTRY structure as follows:

typedef struct
{

BYTE peRed ;
BYTE peGreen ;
BYTE peBlue ;
BYTE peFlags ;

} PALETTEENTRY ;

The peRed, peGreen and peBlue members of this structure hold the red, green and blue intensities of a color. The peFlags member specifies how Windows should handle the palette entry and can take values like PC_EXPLICIT, PC_NOCOLLAPSE, PC_RESERVED or 0. When set to 0 we are letting Windows handle the color entry any way it sees fit.

The setuplogicalpalette( ) function next creates a palette that contains a wide range of colors:

BYTE reds[ ] = { 0, 36, 72, 109, 145, 182, 218, 255 } ;
BYTE greens[ ] = { 0, 36, 72, 109, 145, 182, 218, 255 } ;
BYTE blues[ ] = { 0, 85, 170, 255 } ;

for ( int cn = 0 ; cn < 256 ; cn++ )
{

logicalpalette.entries[cn].peRed = reds[cn & 0x07] ;
logicalpalette.entries[cn].peGreen = greens[( cn >> 0x03 ) & 0x07] ;
logicalpalette.entries[cn].peBlue = blues[( cn >> 0x06 ) & 0x03] ;
logicalpalette.entries[cn].peFlags = 0 ;

}

Within the for loop we have done some bit manipulation to calculate indexes into the reds[ ], greens[ ] and blues[ ] arrays, which contain the various color intensities used to fill the palette.

Finally, we have created the logical palette by calling the Windows API function CreatePalette( ):

m_hpalette = CreatePalette ( ( LOGPALETTE* ) &logicalpalette ) ;

Just creating the palette, however, doesn’t serve the purpose. Before we can draw on the screen we need to select the palette into the device context and then realize the palette. The following code achieves this:

if ( m_hpalette )
{

SelectPalette ( m_d -> m_hDC, m_hpalette, FALSE ) ;
RealizePalette ( m_d -> m_hDC ) ;

}

The three arguments passed to the SelectPalette( ) function are a handle to the device context, a handle to the palette and a boolean value indicating whether the palette should be a background palette (TRUE) or a foreground palette (FALSE). Usually, a FALSE value is used for this argument.

Once the logical palette is selected into the device context we must realize the palette. This tells Windows to map the palette to the system palette.

Creating A Rendering Context

Before we can draw to a window using OpenGL functions we must create and make current a rendering context. This is a simple job involving calls to two functions:

m_hGRC = wglCreateContext ( m_d -> m_hDC ) ;
wglMakeCurrent ( m_d -> m_hDC, m_hGRC ) ;

The two arguments passed to wglMakeCurrent( ) are a handle to the DC and the handle of the rendering context. If successful, wglMakeCurrent( ) returns TRUE; otherwise, it returns FALSE.

Drawing Shapes

On selecting any menu item from the shapes menu, control reaches the myview::onshape( ) function. Here we have simply set the shape variable to a value indicating the shape that needs to be drawn and then invalidated the window. This results in a call to myview::OnDraw( ), which in turn calls myview::drawshapes( ). In this function, a big switch statement transfers control to the appropriate shape-drawing function. Let us now try to understand these functions.

Drawing Points And Lines

As we know, to create any shape we need to define the shape’s vertices within a pair of glBegin( ) and glEnd( ). If we are to draw points then the argument to glBegin( ) should be GL_POINTS, whereas, for a line it should be GL_LINES. For example, the following statements define a set of lines:

glBegin ( GL_LINES ) ;

glVertex2f ( -0.75f, 0.75f ) ;
glVertex2f ( -0.75f, -0.75f ) ;
glVertex2f ( -0.25f, 0.75f ) ;
glVertex2f ( -0.25f, -0.75f ) ;

glEnd( ) ;

The thickness of the line being drawn can be controlled by calling the glLineWidth( ) function. The argument passed to it is a value indicating the requested line width. (Similarly, glPointSize( ) controls the diameter of points.) To determine the range of supported line widths we have called the glGetFloatv( ) function like this:

GLfloat lw[2] ;
glGetFloatv ( GL_LINE_WIDTH_RANGE, lw ) ;

Here lw[ ] is a two-element array that will hold the minimum and maximum line widths. The first parameter passed to glGetFloatv( ) is a constant that indicates the value we wish to retrieve. There are almost 150 constants that we can use with the various forms of the glGet( ) function.

The following figure shows the lines drawn by our program:

In addition to two solid lines we have drawn two stippled lines. Stippled lines are created out of a series of dots and dashes. The code to do this is given below.

if ( lw[1] >= 4.0f )
    glLineWidth ( 4.0f ) ;
else
    glLineWidth ( lw[1] ) ;
glLineStipple ( 1, 0x4444 ) ;
glEnable ( GL_LINE_STIPPLE ) ;
glBegin ( GL_LINES ) ;
glVertex2f ( 0.25f, 0.75f ) ;
glVertex2f ( 0.25f, -0.75f ) ;
glVertex2f ( 0.75f, 0.75f ) ;
glVertex2f ( 0.75f, -0.75f ) ;
glEnd( ) ;
glDisable ( GL_LINE_STIPPLE ) ;

To draw a stippled line we have to first specify a stipple pattern by calling the function glLineStipple( ). The stipple pattern is defined by creating a binary value in which a 1 represents a dot and a 0 indicates a blank. For example, the following stipple pattern would result in a line of alternating blank spaces and dashes:

0011001100110011

A hexadecimal equivalent of this binary stipple pattern is supplied as an argument to the glLineStipple( ) function:

glLineStipple ( 1, 0x3333 ) ;

The first parameter passed to this function is a pattern repeat factor, which determines how many times each bit in the pattern should be repeated. For example, if the repeat factor is 2, OpenGL will draw a line that is defined as 01010101 as if it were defined as 0011001100110011. You can think of the repeat factor as a horizontal scaling of the line. After setting the stipple pattern, we must enable line stippling by calling glEnable( ):

glEnable ( GL_LINE_STIPPLE ) ;

The argument passed to glEnable( ) function indicates the capability that we want to enable. There are almost 50 constants that we can use with the glEnable( ) and glDisable( ) functions.
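To make the repeat factor concrete, either of the following calls produces the same dashes on screen; the patterns are chosen purely for illustration:

glLineStipple ( 1, 0x3333 ) ;   // 0011001100110011 drawn bit for bit
glLineStipple ( 2, 0x5555 ) ;   // 0101010101010101 with every bit doubled,
                                // which matches the pattern above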

Drawing Line Strips And Line Loops

A line strip is just a series of interconnected lines. When we define a line strip, the first pair of vertices defines the first line, while each vertex after that defines the next point to which OpenGL should draw. To define a line strip, we use the GL_LINE_STRIP constant as glBegin( )’s argument, after which we define the vertices.

A line loop is similar to a line strip except that in a line loop the last vertex is connected to the first. To define a line loop we need to use the GL_LINE_LOOP constant as glBegin( )’s argument. The following figure shows the line strips drawn in our program. A line loop would look similar except that the first and the last vertex would be connected using a line.
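A short sketch of a line strip, using illustrative coordinates; changing GL_LINE_STRIP to GL_LINE_LOOP would simply add one more segment from the last vertex back to the first:

glBegin ( GL_LINE_STRIP ) ;
glVertex2f ( -0.75f, -0.5f ) ;   // start of the first line
glVertex2f ( -0.25f, 0.5f ) ;    // ends line 1, starts line 2
glVertex2f ( 0.25f, -0.5f ) ;    // ends line 2, starts line 3
glVertex2f ( 0.75f, 0.5f ) ;     // ends line 3
glEnd( ) ;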

Drawing Normal And Stippled Polygons

A polygon, as we understand it, is a shape created by connecting a set of vertices with lines. OpenGL polygons are more sophisticated objects on three counts:

  1. OpenGL can draw polygons as points, outlines, or solid objects.
  2. OpenGL can fill a polygon with a pattern that we define.
  3. An OpenGL polygon has both a front and a back, each of which may be drawn separately. This facility is useful while drawing solids like cubes, prisms, and pyramids, which may have different colors on the outside and the inside.

The following code snippet draws a polygon:

glLineWidth ( 1.0f ) ;
glPolygonMode ( GL_FRONT_AND_BACK, GL_LINE ) ;
glBegin ( GL_POLYGON ) ;
glVertex2f ( -0.5f, 0.75f ) ;
glVertex2f ( -0.75f, 0.5f ) ;
glVertex2f ( -0.75f, -0.25f ) ;
glVertex2f ( -0.5f, -0.75f ) ;
glVertex2f ( -0.25f, -0.25f ) ;
glVertex2f ( -0.25f, 0.5f ) ;
glEnd( ) ;

For drawing a polygon, in addition to defining its shape by specifying its vertices between a pair of glBegin( ) and glEnd( ) calls, we also need to tell OpenGL how to draw the polygon. This is done by calling the glPolygonMode( ) function:

glPolygonMode ( GL_FRONT_AND_BACK, GL_LINE ) ;

The first argument of this function indicates the polygon face whose drawing mode we want to set. GL_FRONT_AND_BACK selects both faces. Other alternatives are GL_FRONT and GL_BACK. If they were chosen, either only the front face or only the back face would be drawn. The second argument indicates the drawing mode for the selected face. This mode can be GL_POINT (which tells OpenGL to draw only points at the polygon’s vertices), GL_LINE (which tells OpenGL to draw wireframe polygons), or GL_FILL (which tells OpenGL to draw solid polygons).

Which side of a polygon is the front and which is the back is decided by the order in which we define the vertices that make up the polygon. A polygon is considered to be front-facing when its vertices are defined in a counterclockwise direction. However, if we have an object created from polygons that are defined in the clockwise direction, we can explicitly tell OpenGL to reverse its notion about front-facing polygons. In such a case we just need to call the glFrontFace( ) function:

glFrontFace ( GL_CW ) ;

This call to glFrontFace( ) tells OpenGL that front-facing polygons are defined in the clockwise direction. As you can see, the glFrontFace( ) function takes a single argument, a constant that controls which type of polygon is front-facing. The GL_CW constant selects clockwise vertices, whereas GL_CCW (the OpenGL default) selects counterclockwise vertices.

Note that while defining the vertices we must define them in the right direction, depending upon the current value of the GL_FRONT_FACE state variable. In the code snippet given above we have defined the vertices in counterclockwise direction since we have assumed the default setting for front-facing polygons.

Just as we can draw lines with a user-defined stipple pattern, we can also draw polygons filled with a stipple pattern. The first step in drawing a stippled polygon is to define the pattern with which OpenGL should fill the polygon. This pattern is a 32x32 bitmap where each 1 in the pattern produces a dot on the screen. Once the pattern has been designed, it must be converted into hexadecimal values and then stored in an array. Note that the first row of values in the array represents the bottom row of the stipple pattern, whereas the last row of values represents the top row of the pattern.
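As a sketch, the following builds a hypothetical 32x32 pattern of one-pixel horizontal stripes; each row of the pattern occupies 4 bytes, and pattern[0] through pattern[3] describe the bottom row:

GLubyte pattern[128] ;   // 32 rows x 4 bytes per row
for ( int i = 0 ; i < 128 ; i += 8 )
{
    // one solid row (all 1s) followed by one blank row (all 0s)
    pattern[i] = pattern[i + 1] = pattern[i + 2] = pattern[i + 3] = 0xFF ;
    pattern[i + 4] = pattern[i + 5] = pattern[i + 6] = pattern[i + 7] = 0x00 ;
}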

Once the array is ready, to give the stipple pattern to OpenGL, we need to call the glPolygonStipple( ) function:

glPolygonStipple ( pattern ) ;

Also, we must enable polygon stippling by calling glEnable( ) with the GL_POLYGON_STIPPLE flag:

glEnable ( GL_POLYGON_STIPPLE ) ;

Now if we draw any solid polygon it will be filled with the selected stipple pattern. When we want to go back to drawing regular polygons, we should call the glDisable( ) function to turn off polygon stippling:

glDisable ( GL_POLYGON_STIPPLE ) ;

Drawing Triangles, Triangle Strips And Triangle Fans

Defining a triangle is similar to defining a polygon except that we need to use the GL_TRIANGLES constant in the call to the function glBegin( ).

We can even draw a triangle strip in OpenGL. A triangle strip is a series of connected triangles where each triangle shares an edge with the previously drawn triangle. This means that after defining the three vertices for the first triangle in the strip, we need to define only one vertex for each additional triangle in the strip. Note that the vertices of the triangles must be defined in counterclockwise order if the triangles are to be front-facing.

In this figure we need to understand why the first and the second triangles share side 2-3 and not side 3-1. Two rules govern this (a short code sketch follows them):

  1. When a new triangle is built, the new vertex being added must come last. For example, if we create a triangle 1-2-3 and then add vertex 4, the new triangle formed would be 3-2-4.
  2. The common side is always the one built using the last two vertices. For example, while building the second triangle the common side is formed out of vertex 2 and vertex 3. Similarly, while drawing the third triangle, the common side is formed out of vertex 3 and vertex 4.
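A sketch of a strip built from four vertices (the coordinates are illustrative); vertices 1-2-3 form the first triangle and vertices 3-2-4 the second, exactly as the rules above describe:

glBegin ( GL_TRIANGLE_STRIP ) ;
glVertex2f ( -0.75f, 0.5f ) ;    // vertex 1
glVertex2f ( -0.75f, -0.5f ) ;   // vertex 2
glVertex2f ( -0.25f, 0.5f ) ;    // vertex 3 completes triangle 1-2-3
glVertex2f ( -0.25f, -0.5f ) ;   // vertex 4 forms triangle 3-2-4
glEnd( ) ;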

The last way we can draw triangles with OpenGL is as a triangle fan (denoted by the GL_TRIANGLE_FAN constant). A triangle fan is a series of triangles that all share a single vertex, which defines the pivot of the fan shape. Like a triangle strip, a triangle fan is created by first defining three vertices for the first triangle. Then, because each succeeding triangle shares a side with the previous triangle, we need to define only a single vertex for each additional triangle in the fan.
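A sketch of a small fan, again with illustrative coordinates:

glBegin ( GL_TRIANGLE_FAN ) ;
glVertex2f ( 0.0f, 0.0f ) ;      // the shared pivot vertex
glVertex2f ( 0.5f, -0.25f ) ;
glVertex2f ( 0.5f, 0.25f ) ;     // completes the first triangle
glVertex2f ( 0.25f, 0.5f ) ;     // each further vertex adds one triangle
glEnd( ) ;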

Drawing Quadrilaterals And Quadrilateral Strips

To draw a quadrilateral we must use the GL_QUADS constant as glBegin( )’s argument and then define four vertices in counterclockwise order.

A quadrilateral strip is a series of quadrilaterals joined together by sharing two vertices. To tell OpenGL that we want to draw a quadrilateral strip, we should use the GL_QUAD_STRIP constant with the glBegin( ) function. To create a quadrilateral strip we must first define four vertices for the first quadrilateral and then define two additional vertices for each additional quadrilateral in the strip. Carefully note the order in which we have defined the vertices.
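A sketch of a two-quad strip with illustrative coordinates; note that the vertices are supplied in top-bottom pairs rather than around the perimeter of each quadrilateral:

glBegin ( GL_QUAD_STRIP ) ;
glVertex2f ( -0.75f, 0.5f ) ;    // vertex 1
glVertex2f ( -0.75f, -0.5f ) ;   // vertex 2
glVertex2f ( -0.25f, 0.5f ) ;    // vertex 3
glVertex2f ( -0.25f, -0.5f ) ;   // vertex 4 completes the first quad
glVertex2f ( 0.25f, 0.5f ) ;     // vertices 5 and 6 add the second quad
glVertex2f ( 0.25f, -0.5f ) ;
glEnd( ) ;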

With the basics of OpenGL out of the way, let us now look at a more sophisticated use of the OpenGL library: 3-D shapes, for example. Read on; I have devoted one full chapter to it.

Completing The Drawing Operations

So far, the program has set the clear color, cleared the background, set the drawing color, and drawn a line. Although the final image has probably appeared on the screen by this time, we should still end a drawing operation with a call to glFlush( ). This ensures that any buffered OpenGL commands are executed. The glFlush( ) command requires no arguments. A similar function is glFinish( ), which performs the same task as glFlush( ), but returns only when the drawing operations are complete.

Setting The Viewport

As the window size changes, the size of the line drawn in it should also change proportionately. To ensure this we must change the size of the viewport when the window is resized. This can be done using an OpenGL function called glViewport( ). Using this we have changed the size of the viewport to make it equal to the client area of the window. The call to glViewport( ) looks like this:

glViewport ( 0, 0, cx, cy ) ;

This function takes four parameters. The first two specify the coordinates of the viewport's lower-left corner, whereas the next two specify the viewport's width and height. Whenever the size of the window changes, the myview::OnSize( ) handler gets called. Expectedly, glViewport( ) has been called from this handler.
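A minimal sketch of that handler, assuming the view class is named myview as in the rest of the article:

void myview::OnSize ( UINT nType, int cx, int cy )
{
    CView::OnSize ( nType, cx, cy ) ;
    glViewport ( 0, 0, cx, cy ) ;   // viewport = entire client area
}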

Deleting The Rendering Context

When the application is closed, MFC calls the myview::OnDestroy( ) function. In this function we just need to call the wglDeleteContext( ) function to delete the rendering context. This function’s single parameter is the handle of the rendering context. If successful, this function returns TRUE; otherwise, it returns FALSE.
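A sketch of how OnDestroy( ) might look; unbinding the context before deleting it is an assumption on my part, but a safe one:

void myview::OnDestroy( )
{
    wglMakeCurrent ( NULL, NULL ) ;   // make the rendering context not current
    wglDeleteContext ( m_hGRC ) ;     // delete the rendering context
    CView::OnDestroy( ) ;
}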
