OpenGL 4 Shading Language Cookbook, Second Edition






You'll be creating graphics rather than learning theory, gaining a high level of capability in modern 3D programming along the way.

Overview

- Discover simple and advanced techniques for leveraging modern OpenGL and GLSL
- Learn how to use the newest features of GLSL, including compute shaders, geometry shaders, and tessellation shaders
- Get to grips with a wide range of techniques for implementing shadows using shadow maps, shadow volumes, and more
- Clear, easy-to-follow examples with detailed explanations and full, cross-platform source code available from GitHub

In Detail

The OpenGL Shading Language (GLSL) is a programming language used for customizing parts of the OpenGL graphics pipeline that were formerly fixed-function, and are executed directly on the GPU. It provides programmers with unprecedented flexibility for implementing effects and optimizations utilizing the power of modern GPUs. With Version 4, the language has been further refined to provide programmers with greater power and flexibility, with new stages such as tessellation and compute.

OpenGL 4 Shading Language Cookbook provides easy-to-follow examples that first walk you through the theory and background behind each technique, and then go on to provide and explain the GLSL and OpenGL code needed to implement it. Beginner through advanced techniques are presented, including topics such as texturing, screen-space techniques, lighting, shading, tessellation shaders, geometry shaders, compute shaders, and shadows. The recipes build upon each other and take you quickly from novice to advanced level code. The book provides examples of modern shading techniques that can be used as a starting point for programmers to expand upon to produce modern, interactive, 3D computer graphics applications.

In our OpenGL initialization function, after the compilation of the shader objects referred to by vertShader and fragShader, use the following steps:

1. Create the program object.
2. Attach the shaders to the program object.
3. Link the program.
4. Verify the link status.
5. If linking is successful, install the program into the OpenGL pipeline.

We start by calling glCreateProgram to create an empty program object.

This function returns a handle to the program object, which we store in a variable named programHandle. If an error occurs with program creation, the function will return 0. We check for that, and if it occurs, we print an error message and exit.
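The code listings for these steps were lost in extraction. The full sequence might look like the following sketch; it assumes vertShader and fragShader are handles to successfully compiled shader objects, and it requires an active OpenGL context, so it is illustrative rather than standalone-runnable.

```cpp
// Sketch: create, link, verify, and install a shader program.
GLuint programHandle = glCreateProgram();
if (0 == programHandle) {
    fprintf(stderr, "Error creating program object.\n");
    exit(1);
}

// Attach the compiled shader objects.
glAttachShader(programHandle, vertShader);
glAttachShader(programHandle, fragShader);

// Link the program.
glLinkProgram(programHandle);

// Verify the link status.
GLint status;
glGetProgramiv(programHandle, GL_LINK_STATUS, &status);
if (GL_FALSE == status) {
    fprintf(stderr, "Failed to link shader program!\n");
    GLint logLen;
    glGetProgramiv(programHandle, GL_INFO_LOG_LENGTH, &logLen);
    if (logLen > 0) {
        std::string log(logLen, ' ');
        GLsizei written;
        glGetProgramInfoLog(programHandle, logLen, &written, &log[0]);
        fprintf(stderr, "Program log:\n%s\n", log.c_str());
    }
} else {
    // Install the program into the OpenGL pipeline.
    glUseProgram(programHandle);
}
```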

Next, we attach each shader to the program object using glAttachShader. The first argument is the handle to the program object, and the second is the handle to the shader object to be attached.

Then, we link the program by calling glLinkProgram, providing the handle to the program object as the only argument. As with compilation, we check for the success or failure of the link, with the subsequent query. We check the status of the link by calling glGetProgramiv. Similar to glGetShaderiv, glGetProgramiv allows us to query various attributes of the shader program. The status is returned in the location pointed to by the third argument, in this case named status.

The program log is retrieved by the call to glGetProgramInfoLog. The first argument is the handle to the program object, the second is the size of the buffer to contain the log, the third is a pointer to a GLsizei variable where the number of bytes written to the buffer (excluding the null terminator) will be stored, and the fourth is a pointer to the buffer that will store the log.

The string that is provided in log will be properly null terminated. With the simple fragment shader from this recipe and the vertex shader from the previous recipe compiled, linked, and installed into the OpenGL pipeline, we have a complete OpenGL pipeline and are ready to begin rendering. Drawing a triangle and supplying different values for the Color attribute yields an image of a multi-colored triangle where the vertices are red, green, and blue, and inside the triangle, the three colors are interpolated, causing a blending of colors throughout.

You can use multiple shader programs within a single OpenGL program. They can be swapped in and out of the OpenGL pipeline by calling glUseProgram to select the desired program.

Deleting a shader program

If a program is no longer needed, it can be deleted from OpenGL memory by calling glDeleteProgram, providing the program handle as the only argument. This invalidates the handle and frees the memory used by the program. Note that if the program object is currently in use, it will not be immediately deleted, but will be flagged for deletion when it is no longer in use.

Also, the deletion of a shader program detaches the shader objects that were attached to the program but does not delete them unless those shader objects have already been flagged for deletion by a previous call to glDeleteShader.

Sending data to a shader using vertex attributes and vertex buffer objects

The vertex shader is invoked once per vertex. Its main job is to process the data associated with the vertex, and pass it (and possibly other information) along to the next stage of the pipeline. In order to give our vertex shader something to work with, we must have some way of providing per-vertex input to the shader.

Typically, this includes the vertex position, normal vector, and texture coordinates, among other things. In earlier versions of OpenGL (prior to 3.0), each piece of vertex information had a dedicated fixed-function channel into the pipeline. This functionality was deprecated in OpenGL 3.0 and subsequently removed from the core profile. Instead, vertex information must now be provided using generic vertex attributes, usually in conjunction with vertex buffer objects.


The programmer is now free to define an arbitrary set of per-vertex attributes to provide as input to the vertex shader. For example, in order to implement normal mapping, the programmer might decide that the position, normal vector, and tangent vector should be provided along with each vertex.

With OpenGL 4, it's easy to define this as the set of input attributes. This gives us a great deal of flexibility to define our vertex information in any way that is appropriate for our application, but may require a bit of getting used to for those of us who are used to the old way of doing things. In the vertex shader, the per-vertex input attributes are defined by using the GLSL qualifier in.

For example, to define a 3-component vector input attribute named VertexColor, we use the declaration: in vec3 VertexColor;. We also need some way of providing the data for such an attribute; to do so, we make use of vertex buffer objects. The buffer object contains the values for the input attribute. In the main OpenGL program we make the connection between the buffer and the input attribute and define how to "step through" the data.

Then, when rendering, OpenGL pulls data for the input attribute from the buffer for each invocation of the vertex shader. Our vertex attributes will be position and color. We'll use a fragment shader to blend the colors of each vertex across the triangle. The vertices of the triangle are red, green, and blue, and the interior of the triangle has those three colors blended together.

The colors may not be visible in the printed text, but the variation in the shade should indicate the blending.

Getting ready

We'll start with an empty OpenGL program, and the following vertex and fragment shaders. In the vertex shader, there are two input attributes: VertexPosition and VertexColor. They are specified using the GLSL keyword in. Don't worry about the layout prefix; we'll discuss that later. Our main OpenGL program needs to supply the data for these two attributes for each vertex.

We will do so by mapping our polygon data to these variables. The vertex shader passes the color along to its output variable; in this case, Color is just an unchanged copy of VertexColor. In the fragment shader, the input variable Color links to the corresponding output variable in the vertex shader, and will contain a value that has been interpolated across the triangle based on the values at the vertices. We simply expand and copy this color to the output variable FragColor (more about fragment shader output variables in later recipes). Write code to compile and link these shaders into a shader program (see "Compiling a shader" and "Linking a shader program").
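The shader listings themselves were lost in extraction. A minimal pair consistent with the description above (treat the exact version directive and formatting as a plausible reconstruction, not the book's verbatim listing) is:

```glsl
// Vertex shader: two input attributes, one interpolated output.
#version 400

layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexColor;

out vec3 Color;

void main()
{
    Color = VertexColor;
    gl_Position = vec4(VertexPosition, 1.0);
}
```

```glsl
// Fragment shader: the interpolated Color becomes the fragment color.
#version 400

in vec3 Color;

out vec4 FragColor;

void main()
{
    FragColor = vec4(Color, 1.0);
}
```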

In the following code, I'll assume that the handle to the shader program is programHandle. Use the following steps to set up your buffer objects and render the triangle:

1. Create a global or private instance variable to hold our handle to the vertex array object: GLuint vaoHandle;
2. Within the initialization function, create and populate the vertex buffer objects for each attribute.
3. Create and define a vertex array object, which defines the relationship between the buffers and the input attributes. (See "There's more…" for an alternate way to do this that is valid for OpenGL 4.3 and later.)
4. In the render function, bind to the vertex array object and call glDrawArrays to initiate rendering.

Vertex attributes are the input variables to our vertex shader.

In the given vertex shader, our two attributes are VertexPosition and VertexColor. The main OpenGL program refers to vertex attributes by associating each active input variable with a generic attribute index. We can specify the relationship between these indices and the attributes using the layout qualifier. For example, in our vertex shader, we use the layout qualifier to assign VertexPosition to attribute index 0 and VertexColor to attribute index 1.

It is not strictly necessary to explicitly specify the mappings between attribute variables and generic attribute indexes, because OpenGL will automatically map active vertex attributes to generic indexes when the program is linked.

We could then query for the mappings and determine the indexes that correspond to the shader's input variables. It may be somewhat clearer, however, to explicitly specify the mapping, as we do in this example. The first step involves setting up a pair of buffer objects to store our position and color data.


As with most OpenGL objects, we start by creating the objects and acquiring handles to the two buffers by calling glGenBuffers. We then assign each handle to a separate descriptive variable to make the following code clearer. Each buffer is then bound with glBindBuffer and populated with glBufferData. The first argument to glBindBuffer is the target binding point; for vertex attribute data, this is GL_ARRAY_BUFFER. The second and third arguments to glBufferData are the size of the array and a pointer to the array containing the data. Let's focus on the first and last arguments of glBufferData. The first argument indicates the target buffer object.

The data provided in the third argument is copied into the buffer that is bound to this binding point. The last argument gives OpenGL a hint about how the data will be used so that it can determine how best to manage the buffer internally. For full details about this argument, take a look at the OpenGL documentation.

Next, we create the vertex array object (VAO) by calling glGenVertexArrays. The VAO contains information about the connections between the data in our buffers and the input vertex attributes. glGenVertexArrays gives us a handle to our new object, which we store in the global variable vaoHandle.

Then we enable the generic vertex attribute indexes 0 and 1 by calling glEnableVertexAttribArray. Doing so indicates that the values for those attributes will be accessed and used for rendering. The next step makes the connection between the buffer objects and the generic vertex attribute indexes, via glVertexAttribPointer. The first argument is the generic attribute index. The second is the number of components per vertex attribute (1, 2, 3, or 4).

In this case, we are providing 3-dimensional data, so we want 3 components per vertex. The third argument is the data type of each component in the buffer. The fourth is a Boolean which specifies whether or not the data should be automatically normalized (mapped to a range of [-1, 1] for signed integral values, or [0, 1] for unsigned integral values). The fifth argument is the stride, which indicates the byte offset between consecutive attributes.

Since our data is tightly packed, we can use a value of zero. The last argument is declared as a pointer, but it is not treated as a pointer! Instead, its value is interpreted as a byte offset from the beginning of the buffer to the first attribute in the buffer. In this case, there is no additional data in either buffer before the first element, so we use a value of zero. Note that glVertexAttribPointer captures the buffer that is bound to the GL_ARRAY_BUFFER binding point at the time of the call; when another buffer is bound to that binding point later, it does not change the association. The VAO stores all of the OpenGL state related to the relationship between buffer objects and the generic vertex attributes, as well as the information about the format of the data in the buffer objects.

This allows us to quickly return all of this state when rendering. The VAO is an extremely important concept, but can be tricky to understand. It's important to remember that the VAO's state is primarily associated with the enabled attributes and their connection to buffer objects. It doesn't necessarily keep track of buffer bindings.

We only bind to the GL_ARRAY_BUFFER point in order to set up the pointers via glVertexAttribPointer. In our render function, we clear the color buffer using glClear, bind to the vertex array object, and call glDrawArrays to draw our triangle. The function glDrawArrays initiates rendering of primitives by stepping through the buffers for each enabled attribute array, and passing the data down the pipeline to our vertex shader.

The first argument is the render mode (in this case we are drawing triangles), the second is the starting index in the enabled arrays, and the third argument is the number of indices to be rendered (3 vertices for a single triangle). To summarize, we followed these steps:

1. Specify the generic vertex attribute indexes for each attribute in the vertex shader using the layout qualifier.
2. Create and populate the buffer objects for each attribute.


3. Create and define the vertex array object by calling glVertexAttribPointer while the appropriate buffer is bound.
4. When rendering, bind to the vertex array object and call glDrawArrays, or another appropriate rendering function (for example, glDrawElements).

In the following, we'll discuss some details, extensions, and alternatives to the previous technique.
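Before moving on, the summarized setup and render code might be sketched as follows. This requires an active OpenGL context; the triangle data shown is illustrative.

```cpp
// Illustrative triangle data: three positions and three colors.
float positionData[] = {
    -0.8f, -0.8f, 0.0f,
     0.8f, -0.8f, 0.0f,
     0.0f,  0.8f, 0.0f };
float colorData[] = {
    1.0f, 0.0f, 0.0f,
    0.0f, 1.0f, 0.0f,
    0.0f, 0.0f, 1.0f };

// Create and populate the buffer objects.
GLuint vboHandles[2];
glGenBuffers(2, vboHandles);
GLuint positionBufferHandle = vboHandles[0];
GLuint colorBufferHandle = vboHandles[1];

glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), positionData, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glBufferData(GL_ARRAY_BUFFER, 9 * sizeof(float), colorData, GL_STATIC_DRAW);

// Create and set up the vertex array object.
glGenVertexArrays(1, &vaoHandle);
glBindVertexArray(vaoHandle);

glEnableVertexAttribArray(0);  // VertexPosition
glEnableVertexAttribArray(1);  // VertexColor

glBindBuffer(GL_ARRAY_BUFFER, positionBufferHandle);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, NULL);

glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, NULL);

// In the render function:
glClear(GL_COLOR_BUFFER_BIT);
glBindVertexArray(vaoHandle);
glDrawArrays(GL_TRIANGLES, 0, 3);
```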

Separate attribute format

With OpenGL 4.3, we have an alternate way of specifying vertex attributes. In the previous example, the glVertexAttribPointer function does two important things. First, it indirectly specifies which buffer contains the data for the attribute (the buffer currently bound to GL_ARRAY_BUFFER). Second, it specifies the format of that data (type, offset, stride, and so on). It is arguably clearer to separate these two concerns into their own functions. This is exactly what has been implemented in OpenGL 4.3. To implement the same functionality as in step 3 of the previous How to do it… section, we create and bind to the VAO, then enable attributes 0 and 1.

Next, we bind our two buffers to two different indexes within the vertex buffer binding point using glBindVertexBuffer. Note that we are no longer using the GL_ARRAY_BUFFER binding point; instead, we now have a new binding point specifically for vertex buffers. This binding point has several indexes (usually from 0 to 15), so we can bind multiple buffers to this point. The first argument to glBindVertexBuffer specifies the index within the vertex buffer binding point. Here, we bind our position buffer to index 0 and our color buffer to index 1.

Note that the indexes within the vertex buffer binding point need not be the same as the attribute locations. The other arguments to glBindVertexBuffer are as follows. The second argument is the buffer to be bound, the third is the offset from the beginning of the buffer to where the data begins, and the fourth is the stride, which is the distance between successive elements within the buffer.

Unlike glVertexAttribPointer, we can't use a 0 value here for tightly packed data, because OpenGL can't determine the size of the data without more information, so we need to specify it explicitly here.

Next, we call glVertexAttribFormat to specify the format of the data for the attribute. Note that this time, this is decoupled from the buffer that stores the data.

Instead, we're just specifying the format to expect for this attribute. The arguments are the same as the first four arguments to glVertexAttribPointer. The function glVertexAttribBinding specifies the relationship between buffers that are bound to the vertex buffer binding point and attributes.

The first argument is the attribute location, and the second is the index within the vertex buffer binding point. In this example, they are the same, but they need not be.
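The code for this alternate setup was lost in extraction. A sketch of the sequence described above (requiring an OpenGL 4.3 context, and assuming the buffer handles from the earlier setup) might look like:

```cpp
// Sketch of the OpenGL 4.3 separate attribute format, equivalent
// to the earlier glVertexAttribPointer-based setup.
glGenVertexArrays(1, &vaoHandle);
glBindVertexArray(vaoHandle);
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

// Bind each buffer to an index within the vertex buffer binding point.
// Arguments: binding index, buffer, offset, stride.
glBindVertexBuffer(0, positionBufferHandle, 0, sizeof(GLfloat) * 3);
glBindVertexBuffer(1, colorBufferHandle, 0, sizeof(GLfloat) * 3);

// Describe the format of each attribute, decoupled from any buffer.
glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(0, 0);   // attribute 0 <- binding index 0
glVertexAttribFormat(1, 3, GL_FLOAT, GL_FALSE, 0);
glVertexAttribBinding(1, 1);   // attribute 1 <- binding index 1
```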

This version is arguably more clear and easy to understand. It removes the confusing aspects of the "invisible" pointers that are managed in the VAO, and makes the relationship between attributes and buffers much more clear with glVertexAttribBinding. Additionally, it separates concerns that really need not be combined.

This variable receives the final output color for each fragment (pixel). Like vertex input variables, this variable also needs to be associated with a proper location. Of course, we typically would like this to be linked to the back color buffer, which by default (in double-buffered systems) is "color number" zero.

The relationship of the color numbers to render buffers can be changed by using glDrawBuffers. In this program, we are relying on the fact that the linker will automatically link our only fragment output variable to color number zero. To do so explicitly, we could (and probably should) have used a layout qualifier in the fragment shader, such as: layout (location = 0) out vec4 FragColor;. This can be quite useful for specialized algorithms such as deferred rendering (see Chapter 5, Image Processing and Screen Space Techniques).

Specifying attribute indexes without using layout qualifiers

If you'd rather not clutter up your vertex shader code with the layout qualifiers (or you're using a version of OpenGL that doesn't support them), you can define the attribute indexes within the OpenGL program.
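A sketch of this approach, assuming the variable names from the earlier shaders and a program handle named programHandle, might look like this (these calls only take effect if made before linking):

```cpp
// Specify attribute locations and the fragment output color number
// from the application side, prior to linking.
glBindAttribLocation(programHandle, 0, "VertexPosition");
glBindAttribLocation(programHandle, 1, "VertexColor");
glBindFragDataLocation(programHandle, 0, "FragColor");

glLinkProgram(programHandle);
```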

We can do so by calling glBindAttribLocation just prior to linking the shader program, adding the appropriate calls to the main OpenGL program just before the link step. Similarly, we can specify the color number for fragment shader output variables without using the layout qualifier, by calling glBindFragDataLocation prior to linking the shader program.

Using element arrays

It is often the case that we need to step through our vertex arrays in a non-linear fashion.

In other words, we may want to "jump around" the data rather than just moving through it from beginning to end as we did in this example. For example, we might want to draw a cube where the vertex data consists of only eight positions (the corners of the cube). In order to draw the cube, we would need to draw 12 triangles (2 for each face), each of which consists of 3 vertices.

All of the needed position data is in the original 8 positions, but to draw all the triangles, we'll need to jump around and use each position for at least three different triangles. The element array is another buffer that defines the indices used when stepping through the vertex arrays.
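A sketch of using an element array (requiring an active OpenGL context; the index data below is illustrative, covering one face of a cube as two triangles rather than the full 36 indices):

```cpp
// Index data: two triangles sharing two of four corner vertices.
GLuint indices[] = { 0, 1, 2,  0, 2, 3 };

// Create and populate the element array buffer. The element array
// binding is recorded in the currently bound VAO.
GLuint elementBuffer;
glGenBuffers(1, &elementBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices,
             GL_STATIC_DRAW);

// When rendering, with the VAO bound:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
```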

For details on using element arrays, take a look at the function glDrawElements in the OpenGL documentation.

Interleaved arrays

In this example, we used two buffers (one for color and one for position). Instead, we could have used just a single buffer and combined all of the data. In general, it is possible to combine the data for multiple attributes into a single buffer.

The data for multiple attributes can be interleaved within an array, such that all of the data for a given vertex is grouped together within the buffer. Take a look at the OpenGL documentation for full details. The decision about when to use interleaved arrays and when to use separate arrays is highly dependent on the situation.

Interleaved arrays may bring better results due to the fact that data is accessed together and resides closer in memory (so-called locality of reference), resulting in better caching performance.

Getting a list of active vertex input attributes and locations

As covered in the previous recipe, the input variables within a vertex shader are linked to generic vertex attribute indices at the time the program is linked.

If we need to specify the relationship, we can either use layout qualifiers within the shader, or we could call glBindAttribLocation before linking. However, it may be preferable to let the linker create the mappings automatically and query for them after program linking is complete. In this recipe, we'll see a simple example that prints all the active attributes and their indices.

Getting ready

Start with an OpenGL program that compiles and links a shader pair. You could use the shaders from the previous recipe. As in previous recipes, we'll assume that the handle to the shader program is stored in a variable named programHandle.

How to do it…

After linking and enabling the shader program, use the following steps to display the list of active attributes:

1. Start by querying for the number of active attributes.
2. Loop through each attribute, query for the length of the name, the type, and the attribute location, and print the results to standard out.

In step 1, we query for the number of active attributes by calling glGetProgramInterfaceiv.

The result is stored in the location pointed to by the last argument (numAttribs). The indices of the attributes run from 0 to numAttribs - 1. We loop over those indices, and for each we call glGetProgramResourceiv to get the length of the name, the type, and the location. We specify what information we would like to receive by means of an array of GLenum values called properties.

The third is the index of the attribute, and the fourth is the number of values in the properties array, which itself is the fifth argument. The properties array contains GLenum values that specify the particular properties we would like to receive; in this example, the array contains GL_NAME_LENGTH, GL_TYPE, and GL_LOCATION. The sixth argument is the size of the buffer that will receive the results; the seventh argument is a pointer to an integer that will receive the number of results that were written.

If that argument is NULL, then no information is provided. Finally, the last argument is a pointer to a GLint array that will receive the results. Each item in the properties array corresponds to the same index in the results array. Next, we retrieve the name of the attribute by allocating a buffer to store the name and calling glGetProgramResourceName. The results array contains the length of the name in the first element, so we allocate an array of that size with an extra character just for good measure.

The OpenGL documentation says that the size returned from glGetProgramResourceiv includes the null terminator, but it doesn't hurt to make sure by making a bit of additional space. Finally, we get the name by calling glGetProgramResourceName, and then print the information to the screen.

We print the attribute's location, name and type. The location is available in the third element of the results array, and the type is in the second. Note the use of the function getTypeString. This is a simple custom function that just returns a string representation of the data type. The getTypeString function consists of just one big switch statement returning a human-readable string corresponding to the value of the parameter see the source code for glslprogram.
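The query loop described above, whose listing was lost in extraction, might be sketched as follows. It requires a linked program (handle in programHandle) on an OpenGL 4.3+ context, and uses the book's getTypeString helper.

```cpp
// Sketch: enumerate active vertex input attributes.
GLint numAttribs;
glGetProgramInterfaceiv(programHandle, GL_PROGRAM_INPUT,
                        GL_ACTIVE_RESOURCES, &numAttribs);

GLenum properties[] = { GL_NAME_LENGTH, GL_TYPE, GL_LOCATION };

printf("Active attributes:\n");
for (GLint i = 0; i < numAttribs; ++i) {
    GLint results[3];
    glGetProgramResourceiv(programHandle, GL_PROGRAM_INPUT, i,
                           3, properties, 3, NULL, results);

    // results[0] holds the name length (including the null terminator);
    // allocate an extra character just for good measure.
    GLint nameBufSize = results[0] + 1;
    char *name = new char[nameBufSize];
    glGetProgramResourceName(programHandle, GL_PROGRAM_INPUT, i,
                             nameBufSize, NULL, name);
    printf("%-5d %s (%s)\n", results[2], name, getTypeString(results[1]));
    delete [] name;
}
```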

When the previous code is run on the shaders from the previous recipes, it prints the location, name, and type of each active attribute. It should be noted that in order for a vertex shader input variable to be considered active, it must be used within the vertex shader.

In other words, a variable is considered active if it is determined by the GLSL linker that it may be accessed during program execution. If a variable is declared within a shader, but not used, the previous code will not display the variable because it is not considered active and effectively ignored by OpenGL.

The previous code is only valid for OpenGL 4.3 and later.

See also

- The Compiling a shader recipe
- The Linking a shader program recipe
- The Sending data to a shader using vertex attributes and vertex buffer objects recipe

Sending data to a shader using uniform variables

Vertex attributes provide one avenue for providing input to shaders; a second technique is uniform variables.

Uniform variables are intended to be used for data that may change relatively infrequently compared to per-vertex attributes. In fact, it is simply not possible to set per-vertex attributes with uniform variables. For example, uniform variables are well suited for the matrices used for modeling, viewing, and projective transformations.

Within a shader, uniform variables are read-only. However, they can be initialized within the shader by assigning a constant value in the declaration. Uniform variables can appear in any shader within a shader program, and are always used as input variables. They can be declared in one or more shaders within a program, but if a variable with a given name is declared in more than one shader, its type must be the same in all shaders.

In other words, the uniform variables are held in a shared uniform namespace for the entire shader program.

Getting ready

We'll use a vertex shader that includes a uniform variable, and provide the data for this variable via the OpenGL program. Within the main OpenGL code, add the include statements for GLM so that we can build transformation matrices. We'll assume that the handle to the vertex array object is vaoHandle, and the handle to the program object is programHandle. Within the render method, the steps involved with setting the value of a uniform variable include finding the location of the variable, and then assigning a value to that location using one of the glUniform functions.

In this example, we start by clearing the color buffer, then creating a rotation matrix using GLM. Next, we query for the location of the uniform variable by calling glGetUniformLocation. This function takes the handle to the shader program object and the name of the uniform variable, and returns its location. If the uniform variable is not an active uniform variable, the function returns -1. We then assign a value to the uniform variable's location using glUniformMatrix4fv.

The first argument is the uniform variable's location. The second is the number of matrices that are being assigned note that the uniform variable could be an array. The third is a Boolean value indicating whether or not the matrix should be transposed when loaded into the uniform variable.

The last argument is a pointer to the data for the uniform variable. Of course, uniform variables can be any valid GLSL type, including complex types such as arrays or structures. OpenGL provides a glUniform function with the usual suffixes, appropriate for each type.
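The render-method listing was lost in extraction. A sketch of setting a mat4 uniform using GLM might look like the following; the uniform name RotationMatrix and the angle variable are assumptions, and the code requires the GLM headers (glm/glm.hpp, glm/gtc/matrix_transform.hpp, glm/gtc/type_ptr.hpp) and an active program.

```cpp
// Build a rotation matrix about the z axis with GLM.
glm::mat4 rotationMatrix = glm::rotate(glm::mat4(1.0f),
                                       glm::radians(angle),
                                       glm::vec3(0.0f, 0.0f, 1.0f));

// Find the uniform's location, then assign the matrix to it.
GLint location = glGetUniformLocation(programHandle, "RotationMatrix");
if (location >= 0) {
    glUniformMatrix4fv(location, 1, GL_FALSE,
                       glm::value_ptr(rotationMatrix));
}
```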

For example, to assign to a variable of type vec3, one would use glUniform3f or glUniform3fv. For arrays, one can use the functions ending in "v" to initialize multiple values within the array.

Note that, if desired, one can query for the location of a particular element of a uniform array using the [] operator. For example, to query for the location of the second element of MyArray, we would query for the name "MyArray[1]". Similarly, one can query for the location of a member of a structure using a name such as "MyMatrices.Rotation", where the structure variable is MyMatrices and the member of the structure is Rotation. Since locations do not change after linking, one might choose to create a set of variables to store the location of each uniform and assign their values after the program is linked.

This would avoid the need to query for uniform locations every time the value of a uniform variable is set, creating slightly more efficient code.

Getting a list of active uniform variables

The process for listing uniform variables is very similar to the process for listing attributes (see the Getting a list of active vertex input attributes and locations recipe), so this recipe will refer the reader back to the previous recipe for detailed explanation.

Getting ready

Start with a basic OpenGL program that compiles and links a shader program. In the following, we'll assume that the handle to the program is in a variable named programHandle.

How to do it…

After linking and enabling the shader program, use the following steps to display the list of active uniforms:

1. Start by querying for the number of active uniform variables.
2. Loop through each uniform index and query for the length of the name, the type, the location, and the block index.

The process is very similar to the process shown in the recipe Getting a list of active vertex input attributes and locations.

I will focus on the main differences. One is that we also query for each uniform's block index. The reason for this is that some uniform variables are contained within a uniform block (see the recipe Using uniform blocks and uniform buffer objects).

For this example, we only want information about uniforms that are not within blocks. The block index will be -1 if the uniform variable is not within a block, so we skip any uniform variables that do not have a block index of -1. Again, we use the getTypeString function to convert the type value into a human-readable string (see the example code).

When this is run on the shader program from the previous recipe, it prints the location, name, and type of each active uniform. As with vertex attributes, a uniform variable is not considered active unless it is determined by the GLSL linker that it will be used within the shader.

See also

- The Sending data to a shader using uniform variables recipe

Using uniform blocks and uniform buffer objects

If your program involves multiple shader programs that use the same uniform variables, one has to manage the variables separately for each program.

Uniform locations are generated when a program is linked, so the locations of the uniforms may change from one program to the next. The data for those uniforms may have to be regenerated and applied to the new locations. Uniform blocks were designed to ease the sharing of uniform data between programs.

With uniform blocks, one can create a buffer object for storing the values of all the uniform variables, and bind the buffer to the uniform block. When changing programs, the same buffer object need only be re-bound to the corresponding block in the new program. A uniform block is simply a group of uniform variables defined within a syntactical structure known as a uniform block. In this recipe, we'll use a uniform block that groups the parameters of the effect. With this type of block definition, the variables within the block are still part of the global scope and do not need to be qualified with the block name.
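The block listing was lost in extraction. A GLSL declaration consistent with the variable names described later in this recipe would be the following (the block name BlobSettings is an assumption; the member names come from the recipe's own description):

```glsl
// A uniform block grouping the fuzzy-circle parameters.
uniform BlobSettings {
    vec4 InnerColor;
    vec4 OuterColor;
    float RadiusInner;
    float RadiusOuter;
};
```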

The buffer object used to store the data for the uniforms is often referred to as a uniform buffer object. We'll see that a uniform buffer object is simply a buffer object that is bound to a certain binding point. For this recipe, we'll use a simple example to demonstrate the use of uniform buffer objects and uniform blocks.

We'll draw a quad (two triangles) with texture coordinates, and use our fragment shader to fill the quad with a fuzzy circle. The circle is a solid color in the center, but at its edge, it gradually fades to the background color, as shown in the following image.

Getting ready
Provide the position at vertex attribute location 0, and the texture coordinate (ranging from 0 to 1 in each direction) at vertex attribute location 1 (see the Sending data to a shader using vertex attributes and vertex buffer objects recipe).

We'll use the following vertex shader:

The uniform block, declared in the fragment shader, contains the variables that define the parameters of our fuzzy circle. InnerColor is the color inside of the circle, and OuterColor defines the color outside of the circle. RadiusInner is the radius defining the part of the circle that is a solid color (inside the fuzzy edge); that is, it is the distance from the center of the circle to the inner edge of the fuzzy boundary.

RadiusOuter is the outer edge of the fuzzy boundary of the circle (the radius at which the color is equal to OuterColor). The code within the main function computes the distance from the texture coordinate to the center of the quad, located at (0.5, 0.5). It then uses that distance to compute the color by using the smoothstep function. This function returns a value that varies smoothly between 0.0 and 1.0 as its third argument moves between the values of the first two arguments.
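The smoothstep and mix computation is easy to mimic on the CPU; the following is an illustrative C++ sketch (not the shader itself, and the function names are our own) of the fuzzy-circle color calculation:

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

// CPU sketch of the GLSL smoothstep and mix functions.
float smoothstepf(float edge0, float edge1, float x) {
    float t = std::clamp((x - edge0) / (edge1 - edge0), 0.0f, 1.0f);
    return t * t * (3.0f - 2.0f * t); // smooth Hermite interpolation
}

float mixf(float a, float b, float t) { return a + (b - a) * t; }

// Color at texture coordinate (u, v): solid inside radiusInner,
// the outer color beyond radiusOuter, and a smooth blend in between.
std::array<float, 3> blobColor(float u, float v,
                               const std::array<float, 3>& inner,
                               const std::array<float, 3>& outer,
                               float radiusInner, float radiusOuter) {
    float dx = u - 0.5f, dy = v - 0.5f;
    float dist = std::sqrt(dx * dx + dy * dy);
    float t = smoothstepf(radiusInner, radiusOuter, dist);
    return { mixf(inner[0], outer[0], t),
             mixf(inner[1], outer[1], t),
             mixf(inner[2], outer[2], t) };
}
```

At the quad's center the distance is zero, so the result is exactly the inner color; beyond the outer radius it is exactly the outer color.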

Otherwise, it returns 0.0 if the third argument is less than the first, or 1.0 if it is greater than the second. The mix function is then used to linearly interpolate between InnerColor and OuterColor based on the value returned by the smoothstep function. In the OpenGL program, after linking the shader program, use the following steps to assign data to the uniform block in the fragment shader. First, get the index of the uniform block using glGetUniformBlockIndex.

Next, allocate space for a buffer to contain the data for the uniform block; we get the size of the block using glGetActiveUniformBlockiv. Then, query for the offset of each variable within the block; to do so, we first find the index of each variable within the block. Next, place the data into the buffer at the appropriate offsets, then create the buffer object and copy the data into it. Finally, bind the buffer object to the uniform buffer binding point at the index specified by the binding layout qualifier in the fragment shader (0). This seems like a lot of work!

However, the real advantage comes when using multiple programs where the same buffer object can be used for each program. Let's take a look at each step individually. First we get the index of the uniform block by calling glGetUniformBlockIndex, then we query for the size of the block by calling glGetActiveUniformBlockiv.

After getting the size, we allocate a temporary buffer named blockBuffer to hold the data for our block. The layout of the data within a uniform block is implementation dependent, so in order to accurately lay out our data, we need to query for the offset of each variable within the block.

This is done in two steps. First, we query for the index of each variable within the block by calling glGetUniformIndices. This accepts an array of variable names third argument and returns the indices of the variables in the array indices fourth argument.

Then we use the indices to query for the offsets by calling glGetActiveUniformsiv. This function can also be used to query for the size and type; however, in this case, we choose not to do so in order to keep the code simple (albeit less general).

Next, we place the data into the temporary buffer at the appropriate offsets; here, we use the standard library function memcpy to accomplish this. Now that the temporary buffer is populated with the data in the appropriate layout, we can create our buffer object and copy the data into the buffer object.
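The pattern of copying each value into a byte buffer at its queried offset can be illustrated without OpenGL. In the following sketch, the offsets passed in are hypothetical stand-ins for the values a real program would obtain from glGetActiveUniformsiv:

```cpp
#include <cassert>
#include <cstring>
#include <vector>

// Sketch of filling a temporary buffer at per-variable offsets with
// memcpy. The offsets are parameters here because, with the default
// block layout, they must be queried from OpenGL at runtime.
std::vector<unsigned char> packBlock(size_t blockSize,
                                     const float innerColor[4], size_t offInner,
                                     const float outerColor[4], size_t offOuter,
                                     float radiusInner, size_t offRi,
                                     float radiusOuter, size_t offRo) {
    std::vector<unsigned char> blockBuffer(blockSize, 0);
    std::memcpy(blockBuffer.data() + offInner, innerColor, 4 * sizeof(float));
    std::memcpy(blockBuffer.data() + offOuter, outerColor, 4 * sizeof(float));
    std::memcpy(blockBuffer.data() + offRi, &radiusInner, sizeof(float));
    std::memcpy(blockBuffer.data() + offRo, &radiusOuter, sizeof(float));
    return blockBuffer; // ready to hand to glBufferData(GL_UNIFORM_BUFFER, ...)
}
```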

The space is allocated within the buffer object and the data is copied when glBufferData is called. The usage hint passed to glBufferData should reflect how often the data is expected to change; of course, this is entirely dependent on the situation.

Finally, we associate the buffer object with the uniform block by calling glBindBufferBase. This function binds a buffer object to an index within a buffer binding target. Certain binding points are also so-called "indexed buffer targets".

This means that the target is actually an array of targets, and glBindBufferBase allows us to bind to one index within the array. In this case, we bind to index 0, the index that we specified in the binding layout qualifier in the fragment shader. These two indices must match.

You might wonder why the same binding point seems to be used in two different contexts. With glBindBuffer, we bind to a point that can be used for filling or modifying a buffer, but that can't be used as a source of data for the shader. When we use glBindBufferBase, we bind to an index within a location that can be directly sourced by the shader.

Granted, that's a bit confusing. If the data for a uniform block needs to be changed at some later time, one can call glBufferSubData to replace all or part of the data within the buffer.

Using an instance name with a uniform block
A uniform block can have an optional instance name. For example, with our BlobSettings block, we could have used the instance name Blob, as shown here: When a block has an instance name, the variables within the block are no longer part of the global scope, so our shader code needs to refer to them prefixed with the instance name: Blob.InnerColor, Blob.OuterColor, smoothstep( Blob.RadiusInner, Blob.RadiusOuter, dist ), and so on. Additionally, we need to qualify the variable names with the block name (BlobSettings) within the OpenGL code when querying for variable indices: "BlobSettings.InnerColor", "BlobSettings.OuterColor", "BlobSettings.RadiusInner", and "BlobSettings.RadiusOuter".

Using layouts with uniform blocks
The default layout of data within a uniform block is implementation dependent, which is why we had to query for the offset of each variable. However, one can avoid this by asking OpenGL to use the standard layout, std140. This is accomplished by using a layout qualifier when declaring the uniform block.

Other options for the layout qualifier that apply to uniform block layouts include packed and shared. The packed qualifier simply states that the implementation is free to optimize memory in whatever way it finds necessary based on variable usage or other criteria.

With the packed qualifier, we still need to query for the offsets of each variable. The shared qualifier guarantees that the layout will be consistent between multiple programs and program stages, provided that the uniform block declaration does not change. There are two other layout qualifiers that are worth mentioning: row_major and column_major. These define the ordering of data within the matrix-type variables within the uniform block.
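For a block like ours (two vec4 members followed by two float members), the std140 offsets can be computed by hand: a vec4 is aligned to 16 bytes and a float to 4 bytes. The following minimal sketch covers only these two member types (real std140 has additional rules for vec3, arrays, and matrices):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Minimal std140 offset calculator for a block containing only vec4
// and float members. This is an illustration, not a general
// implementation of the std140 rules.
struct Member { size_t size; size_t align; };
const Member kVec4  {16, 16};
const Member kFloat { 4,  4};

std::vector<size_t> std140Offsets(const std::vector<Member>& members) {
    std::vector<size_t> offsets;
    size_t cursor = 0;
    for (const Member& m : members) {
        cursor = (cursor + m.align - 1) / m.align * m.align; // align up
        offsets.push_back(cursor);
        cursor += m.size;
    }
    return offsets;
}
```

Running this on a layout like the BlobSettings block (vec4, vec4, float, float) gives offsets 0, 16, 32, and 36, so with std140 no runtime offset queries are needed.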

Getting debug messages
The traditional way to track down errors in OpenGL is to check error codes with glGetError. Unfortunately, that is an exceedingly tedious method for debugging a program. The glGetError function returns an error code if an error has occurred at some point previous to the time the function was called. This means that if we're chasing down a bug, we essentially need to call glGetError after every call to an OpenGL function, or do a binary search-like process where we call it before and after a block of code, and then move the two calls closer to each other until we determine the source of the error.

What a pain! Thankfully, as of OpenGL 4.3, we have support for a more modern approach. Now we can register a debug callback function that will be executed whenever an error occurs, or another informational message is generated. Not only that, but we can send our own custom messages to be handled by the same callback, and we can filter the messages using a variety of criteria.

Getting ready
Create an OpenGL program with a debug context.

While it is not strictly necessary to acquire a debug context, we might not get messages that are as informative as when we are using a debug context. If, however, you need to enable debug messages explicitly, call glEnable with GL_DEBUG_OUTPUT. Use the following steps. First, create a callback function to receive the debug messages. The function must conform to a specific prototype described in the OpenGL documentation.

For this example, we'll use the following one. Next, enable all messages, all sources, all levels, and all IDs.

The callback function debugCallback has several parameters, the most important of which is the debug message itself (the sixth parameter, message). For this example, we simply print the message to standard output, but we could send it to a log file or some other destination.

The first four parameters to debugCallback describe the source, type, ID number, and severity of the message. The ID number is an unsigned integer specific to the message. The possible values for the source, type, and severity parameters are described in the following tables.

The source parameter can take any of a number of predefined values, as can the type and severity parameters; a high severity indicates errors or dangerous behavior. The length parameter is the length of the message string, excluding the null terminator.

The last parameter (param) is a user-defined pointer. We can use this to point to some custom object that might be helpful to the callback function.

This parameter can be set using the second parameter to glDebugMessageCallback (more on that shortly). Within debugCallback, we convert each GLenum parameter into a string. Due to space constraints, I don't show all of that code here, but it can be found in the example code for this book.

We then print all of the information to standard output. The callback is registered by calling glDebugMessageCallback. The first parameter is a pointer to our callback function, and the second parameter (NULL in this example) can be a pointer to any object that we would like to pass into the callback. This pointer is passed as the last parameter with every call to debugCallback.

Next, we enable all debug messages by calling glDebugMessageControl. This function can be used to selectively turn on or off any combination of message source, type, ID, or severity. In this example, we turn everything on. OpenGL also provides support for stacks of named debug groups. Essentially, what this means is that we can remember all of our debug message filter settings on a stack and return to them later, after some changes have been made.

This might be useful, for example, if there are sections of code where we need to filter some kinds of messages, and other sections where we want a different set of messages. We push the current state using glPushDebugGroup, change our filters using glDebugMessageControl, and later return to the original state using glPopDebugGroup.

Building a C++ shader program class
If you are using C++, it can be convenient to encapsulate some of the OpenGL objects in classes. A prime example is the shader program object. First, we'll use a custom exception class for errors that might occur during compilation or linking. Full source code for all of the recipes in this text is also available on GitHub. The techniques involved in the implementation of these functions are covered in previous recipes in this chapter.

Due to space limitations, I won't include the code here (it's available from this book's GitHub repository), but we'll discuss some of the design decisions in the next section.

The state stored within a GLSLProgram object includes the handle to the OpenGL shader program object (handle), a Boolean variable indicating whether or not the program has been successfully linked (linked), and a map used to store uniform locations as they are discovered (uniformLocations). The compileShader function has three overloaded versions. The first version determines the type of shader based on the filename extension. In the second version, the caller provides the shader type, and the third version is used to compile a shader, taking the shader's source code from a string.

The file name can be provided as a third argument in the case that the string was taken from a file, which is helpful for providing better error messages. The GLSLProgramException's error message will contain the contents of the shader log or program log when an error occurs. The private function getUniformLocation is used by the setUniform functions to find the location of a uniform variable.

It checks the map uniformLocations first, and if the location is not found, queries OpenGL for the location, and stores the result in the map before returning. The fileExists function is used by compileShaderFromFile to check for file existence. The constructor simply initializes linked to false and handle to zero. The variable handle will be initialized by calling glCreateProgram when the first shader is compiled. The link function simply attempts to link the program by calling glLinkProgram.

It then checks the link status, and if successful, sets the variable linked to true and returns true. The use function simply calls glUseProgram if the program has already been successfully linked, otherwise it does nothing.

The functions getHandle and isLinked are simply "getter" functions that return the handle to the OpenGL program object and the value of the linked variable. The functions bindAttribLocation and bindFragDataLocation are wrappers around glBindAttribLocation and glBindFragDataLocation; note that these functions should only be called prior to linking the program. The setUniform overloaded functions are straightforward wrappers around the appropriate glUniform functions.
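The caching strategy used by getUniformLocation can be sketched independently of OpenGL; here, a simple counter stands in for the call to glGetUniformLocation, showing that each name is queried only once:

```cpp
#include <cassert>
#include <map>
#include <string>

// Sketch of the location cache used by GLSLProgram::getUniformLocation.
// queryCount stands in for calls to glGetUniformLocation so we can
// observe that each name triggers at most one query.
class UniformCache {
public:
    int getUniformLocation(const std::string& name) {
        auto it = uniformLocations.find(name);
        if (it != uniformLocations.end())
            return it->second;               // cache hit: no GL call needed
        int loc = queryLocation(name);       // would be glGetUniformLocation
        uniformLocations[name] = loc;
        return loc;
    }
    int queryCount = 0;
private:
    int queryLocation(const std::string&) { return queryCount++; }
    std::map<std::string, int> uniformLocations;
};
```

Repeated lookups of the same uniform name are then cheap map lookups, which matters because setUniform may be called every frame.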

Each of them calls getUniformLocation to query for the variable's location before calling the glUniform function. Shaders give us the power to implement alternative rendering algorithms and a greater degree of flexibility in the implementation of those techniques. With shaders, we can run custom code directly on the GPU, providing us with the opportunity to leverage the high degree of parallelism available with modern GPUs.

This chapter won't provide a thorough introduction to GLSL; instead, if you're new to GLSL, reading through these recipes should help you to learn the language by example. However, before we jump into GLSL programming, let's take a quick look at how vertex and fragment shaders fit within the OpenGL pipeline. In this chapter, we'll focus only on the vertex and fragment stages. In Chapter 6, Using Geometry and Tessellation Shaders, I'll provide some recipes for working with the geometry and tessellation shaders, and in Chapter 10, Using Compute Shaders, I'll focus specifically on compute shaders.

Shaders replace parts of the OpenGL pipeline. More specifically, they make those parts of the pipeline programmable. The following block diagram shows a simplified view of the OpenGL pipeline with only the vertex and fragment shaders installed: vertex shader, primitive assembly, clipping, viewport transform, rasterization, fragment shader, framebuffer. Vertex data is sent down the pipeline and arrives at the vertex shader via shader input variables.

The vertex shader's input variables correspond to the vertex attributes (refer to the Sending data to a shader using vertex attributes and vertex buffer objects recipe in Chapter 1, Getting Started with GLSL). In general, a shader receives its input via programmer-defined input variables, and the data for those variables comes either from the main OpenGL application or previous pipeline stages (other shaders).

For example, a fragment shader's input variables might be fed from the output variables of the vertex shader. Data can also be provided to any shader stage using uniform variables refer to the Sending data to a shader using uniform variables recipe, in Chapter 1, Getting Started with GLSL. These are used for information that changes less often than vertex attributes for example, matrices, light position, and other settings.

The following figure shows a simplified view of the relationships between input and output variables when there are two shaders active (vertex and fragment). The vertex shader can send other information down the pipeline using shader output variables.

For example, the vertex shader might also compute the color associated with the vertex. That color would be passed to later stages via an appropriate output variable. Between the vertex and fragment shader, the vertices are assembled into primitives, clipping takes place, and the viewport transformation is applied among other operations.

The rasterization process then takes place and the polygon is filled, if necessary. The fragment shader is executed once for each fragment (pixel) of the polygon being rendered (typically in parallel).

Data provided from the vertex shader is by default interpolated in a perspective correct manner, and provided to the fragment shader via shader input variables. The fragment shader determines the appropriate color for the pixel and sends it to the frame buffer using output variables.

The depth information is handled automatically.

Replicating the old fixed functionality
Programmable shaders give us tremendous power and flexibility. However, in some cases, we might just want to re-implement the basic shading techniques that were used in the default fixed-function pipeline, or perhaps use them as a basis for other shading techniques.

Studying the basic shading algorithm of the old fixed-function pipeline can also be a good way to get started when learning about shader programming. In this chapter, we'll look at the basic techniques for implementing shading similar to that of the old fixed-function pipeline.

We'll cover the standard ambient, diffuse, and specular (ADS) shading algorithm, the implementation of two-sided rendering, and flat shading. Along the way, we'll also see some examples of other GLSL features such as functions, subroutines, and the discard keyword.

These recipes are presented in a straightforward, non-optimized manner to avoid additional confusion for someone who is learning the techniques for the first time. We'll look at a few optimization techniques at the end of some recipes, and some more in the next chapter.

Implementing diffuse, per-vertex shading with a single point light source
One of the simplest shading techniques is to assume that the surface exhibits purely diffuse reflection.

That is to say that the surface is one that appears to scatter light in all directions equally, regardless of direction. Incoming light strikes the surface and penetrates slightly before being re-radiated in all directions.

Of course, the incoming light interacts with the surface before it is scattered, causing some wavelengths to be fully or partially absorbed and others to be scattered. A typical example of a diffuse surface is a surface that has been painted with a matte paint. The surface has a dull look with no shine at all. The following screenshot shows a torus rendered with diffuse shading: The mathematical model for diffuse reflection involves two vectors: The vectors are represented in the following diagram: The physics of the situation tells us that the amount of radiation that reaches a point on a surface is maximal when the light arrives along the direction of the normal vector, and zero when the light is perpendicular to the normal.

In between, it is proportional to the cosine of the angle between the direction towards the light source and the normal vector. So, since the dot product is proportional to the cosine of the angle between two vectors, we can express the amount of radiation striking the surface as the product of the light intensity and the dot product of s and n.

Here, Ld is the intensity of the light source, and the vectors s and n are assumed to be normalized. (The dot product of two unit vectors is equal to the cosine of the angle between them.) As stated previously, some of the incoming light is absorbed before it is re-emitted.

We can model this interaction by using a reflection coefficient (Kd), which represents the fraction of the incoming light that is scattered. This is sometimes referred to as the diffuse reflectivity, or the diffuse reflection coefficient. The diffuse reflectivity becomes a scaling factor for the incoming radiation, so the intensity of the outgoing light can be expressed as follows:

L = Kd Ld (s · n)

Because this model depends only on the direction towards the light source and the normal to the surface, not on the direction towards the viewer, we have a model that represents uniform, omnidirectional scattering.
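As a quick sanity check of the model, the shading equation can be written out in C++ (illustrative names, not the book's code; note the clamp of the dot product to zero, which keeps surfaces facing away from the light unlit):

```cpp
#include <algorithm>
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// Diffuse (Lambertian) intensity: L = Kd * Ld * max(dot(s, n), 0),
// applied component-wise to the RGB channels.
// s and n are assumed to be normalized.
Vec3 diffuse(const Vec3& Kd, const Vec3& Ld, const Vec3& s, const Vec3& n) {
    float sDotN = std::max(dot(s, n), 0.0f);
    return { Kd[0] * Ld[0] * sDotN,
             Kd[1] * Ld[1] * sDotN,
             Kd[2] * Ld[2] * sDotN };
}
```

With the light along the normal, the full intensity Kd * Ld is reflected; with the light behind the surface, the result is black.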

In this recipe, we'll evaluate this equation at each vertex in the vertex shader and interpolate the resulting color across the face. In this and the following recipes, light intensities and material reflectivity coefficients are represented by 3-component RGB vectors. Therefore, the equations should be treated as component-wise operations, applied to each of the three components separately.

Luckily, GLSL makes this nearly transparent, because the needed operators operate component-wise on vector variables. The OpenGL application should also provide the standard transformation matrices (projection, modelview, and normal) via uniform variables. The light position (in eye coordinates), Kd, and Ld should also be provided by the OpenGL application via uniform variables. Note that Kd and Ld are of type vec3. We can use vec3 to store an RGB color as well as a vector or point.

To create a shader pair that implements diffuse shading, use the following steps:
1. Use the following code for the vertex shader:
2. Use the following code for the fragment shader:
3. Compile and link both shaders within the OpenGL application, and install the shader program prior to rendering.
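As a rough sketch of what the vertex shader in step 1 looks like (the uniform and attribute names here are illustrative and may differ slightly from the book's code):

```glsl
layout (location = 0) in vec3 VertexPosition;
layout (location = 1) in vec3 VertexNormal;

out vec3 LightIntensity;

uniform vec4 LightPosition; // light position in eye coordinates
uniform vec3 Kd;            // diffuse reflectivity
uniform vec3 Ld;            // light source intensity

uniform mat4 ModelViewMatrix;
uniform mat3 NormalMatrix;
uniform mat4 MVP;           // projection * modelview

void main() {
    // Transform the normal and the position into eye coordinates
    vec3 tnorm = normalize(NormalMatrix * VertexNormal);
    vec4 eyeCoords = ModelViewMatrix * vec4(VertexPosition, 1.0);
    vec3 s = normalize(vec3(LightPosition - eyeCoords));

    // The diffuse shading equation, evaluated per vertex
    LightIntensity = Ld * Kd * max(dot(s, tnorm), 0.0);

    gl_Position = MVP * vec4(VertexPosition, 1.0);
}
```

The fragment shader then simply writes the interpolated LightIntensity to its output variable.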

The vertex shader does all of the work in this example. The diffuse reflection is computed in eye coordinates by first transforming the normal vector using the normal matrix, normalizing, and storing the result in tnorm.

Note that the normalization here may not be necessary if your normal vectors are already normalized and the normal matrix does not do any scaling. The normal matrix is typically the inverse transpose of the upper-left 3 x 3 portion of the model-view matrix.

We use the inverse transpose because normal vectors transform differently than the vertex position. For a more thorough discussion of the normal matrix, and the reasons why, see any introductory computer graphics textbook (a good choice would be Computer Graphics with OpenGL, by Hearn and Baker).

If your model-view matrix does not include any non-uniform scalings, then one can use the upper-left 3 x 3 of the model-view matrix in place of the normal matrix to transform your normal vectors. However, if your model-view matrix does include uniform scalings, you'll still need to (re)normalize your normal vectors after transforming them. The next step converts the vertex position to eye (camera) coordinates by transforming it via the model-view matrix.
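The need for the inverse transpose can be verified numerically. For a diagonal (scale-only) model-view matrix, the inverse transpose is simply the reciprocal of each scale factor, and only the inverse transpose keeps a transformed normal perpendicular to a transformed surface tangent (the numbers below are hypothetical):

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<float, 3>;

float dot3(const Vec3& a, const Vec3& b) {
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// For a diagonal (scale-only) matrix diag(sx, sy, sz):
// positions and tangents transform by the matrix itself ...
Vec3 scaleTransform(const Vec3& v, float sx, float sy, float sz) {
    return { v[0] * sx, v[1] * sy, v[2] * sz };
}
// ... while normals must transform by its inverse transpose,
// which for a diagonal matrix is diag(1/sx, 1/sy, 1/sz).
Vec3 normalTransform(const Vec3& n, float sx, float sy, float sz) {
    return { n[0] / sx, n[1] / sy, n[2] / sz };
}
```

For example, with tangent (1, 1, 0), normal (1, -1, 0), and a non-uniform scale of (2, 1, 1), transforming the normal with the scale matrix itself breaks perpendicularity, while the reciprocal-scale (inverse-transpose) version preserves it.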


OpenGL 4 Shading Language Cookbook, Second Edition, by David Wolff (Packt Publishing).
