Software Rendering School, Part IV: Shading and Filling
 
 


15/06/2004






Legality Info


The tutorials in this series, as well as the source that comes with them, are the intellectual property of their producers (see the Authors section above to find out who they are). You may not claim them as your own, nor copy, redistribute, sell, or otherwise exploit the tutorials without permission from the authors. All other articles, documents, materials, etc. are the property of their respective owners. Please read the legality info shipping with them before attempting to do anything with them!

Shading and Filling


Haya! Did you miss us? Oh, we know you did! Welcome to another exciting article about software rasterization. You’re about to make one more step toward mastering software rasterizing.

Let’s quickly review what we are going to discuss this time:

  • Depth sorting

  • Gouraud shading

  • Phong shading

  • Fill convention


The thing I wanted to talk about first is depth sorting. As you will notice in the source for the previous article, I order the polygons such that those farthest away are drawn first and the polygons closer to the viewpoint are drawn over them. But why is that? Well, think about how a painter draws a picture. He first draws the background and then the objects over it. It wouldn’t be very smart to draw the objects first, would it?

We draw our geometry in the same manner as the painters: polygons far away must be obscured by polygons closer to us, and thus drawn before them. That’s why this depth sorting algorithm is called the “painter's algorithm”. How do we do it then?

Pretty simple! For each polygon in the object, sum the z components of all vertices and then divide the result by the number of vertices. For example, if you have a triangle composed of the 3 points A, B and C, you’d compute the centroid depth as follows:

centroid = (Az + Bz + Cz) / 3

After you’ve found the centroids of all triangles, sort the polygons so that those with the larger centroid depths are drawn first. That’s pretty much all of it. Keep in mind that this simplified version of the painter's algorithm suffers from a lot of bugs and side effects (as you’ll see in the demo), but for the most part it will make your polygons look correctly rendered. Later in the series we’ll see more advanced depth sorting algorithms (Z-buffering, BSP trees, portals, etc.), but for now the simple painter's algorithm is enough.
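The sorting step above can be sketched in a few lines. This is a minimal illustrative sketch, not the article's actual demo code; the `Triangle` struct and `depthSort` name are my own assumptions:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical minimal triangle type; only the z components matter here.
struct Triangle {
    float az, bz, cz;   // z coordinates of the three vertices (view space)
    float centroid;     // average depth, filled in below
};

// Sort so that the deepest (largest z) triangles come first and are
// therefore drawn first, exactly like a painter laying down the background.
void depthSort(std::vector<Triangle>& tris)
{
    for (Triangle& t : tris)
        t.centroid = (t.az + t.bz + t.cz) / 3.0f;

    std::sort(tris.begin(), tris.end(),
              [](const Triangle& a, const Triangle& b) {
                  return a.centroid > b.centroid; // far-to-near order
              });
}
```

After this call you simply rasterize the triangles in vector order.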

Next stop: Gouraud shading. It’s time you learn a more advanced shading technique than simple flat shading, which just fills the whole triangle with a given color. Of course, flat shading is a decent technique for some situations, but combined with lighting it makes objects look like they’re made from many small faces (hmmm, they are in fact :) ).

However, with Gouraud shading we smooth the faces and give the objects a more decent look. So what is it and how do we do it? Both questions are easy to answer. As already stated, for flat shading we simply define a color with which we fill the whole polygon. When doing Gouraud shading, however, we define a color for each vertex of the polygon. In the rasterization process we interpolate these colors along the edges and then along the scanlines. This may sound a little weird, so let’s see some schemes, shall we?


Here we see a triangle defined by the points A, B and C. Each vertex also has a color defined: Acol, Bcol and Ccol for A, B and C, respectively. What we do is interpolate the colors along the edges just as the x coordinates are interpolated when we find the start and end points of each scanline; the only difference is that we define start and end colors. So, as we trace x along the edge A -> B to find the start coordinate of the scanline, we trace the color in pretty much the same way. We trace the color along the other edge A -> C too. So for every scanline we have starting and ending color values just as we have start and end x coordinates. Let’s see the process in a little more detail:


In the beginning of the rasterization setup, we calculate the initial values of x and the slopes for the edges. Then we start sliding x along the edges to find the start and end coordinates of the current scanline. We do exactly the same thing with the colors (see figure to the right):

At the place where we set the initial x and calculate the slopes, we also set the initial color value and calculate the color slopes, which we add to the color value as we move along the edges. The initial color value is Acol for the triangle above, and the slope is calculated just like the x slope. For example, the color slope for the A -> B edge is:
               Bcol - Acol
color_slope = -------------
                 By - Ay

As we already know, this simply gives the amount of change in color as we increment y. Not that weird, eh? There is only one more thing to consider before we’re done with Gouraud shading. If we step the color values along the edges just as we step the x coordinates, for each scanline we will have a start and an end color value. See the picture below:


So, in the funny lookin’ picture above we see a scanline in the triangle with endpoints L and M. Along with the start and end x coordinates we now have start and end color values, which we have to interpolate along the scanline just as we did along the edges. Just as before, we set the initial color value to Lcol (according to the scanline above) and calculate the slope. The slope this time is equal to:
                        Mcol - Lcol
color_slope_this_time = -------------
                          Mx - Lx

which is simply the change in color as we increment x. For the color of each pixel of the scanline, use the color value that you’re interpolating. And that’s basically it! We’ve just done Gouraud shading! You’ll see the huge difference right away! Of course, I’ve demonstrated things with only one color value, but if you want an RGB color mode you simply do the same thing for each separate color channel and write the RGB pixels as you interpolate the colors along the scanlines. The demo I’ve written runs in 32 bpp mode, so you’ll see it with multiple color channels.
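The inner loop described above can be sketched like this. It is a minimal single-channel sketch under my own naming assumptions (`gouraudScanline`, a raw row pointer); for RGB you would run the same interpolation once per channel:

```cpp
#include <cstdint>

// Draw one Gouraud-shaded scanline: interpolate a single color channel
// from cStart at xStart to cEnd at xEnd, writing into one framebuffer row.
void gouraudScanline(uint8_t* row, int xStart, int xEnd,
                     float cStart, float cEnd)
{
    if (xEnd <= xStart) return;

    // Amount of color change as x is incremented by one pixel.
    float colorSlope = (cEnd - cStart) / (float)(xEnd - xStart);

    float c = cStart;
    for (int x = xStart; x < xEnd; ++x) {
        row[x] = (uint8_t)(c + 0.5f); // round to nearest intensity
        c += colorSlope;              // step the color along with x
    }
}
```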

The problem with Gouraud shading, as with everything else, is speed. In this case, however, the problem is big. You might not feel a big speed hit when interpolating one color channel, but with 3 or more it runs pretty slowly, and even with low-level ASM optimizations it’s still not speedy enough. Also, when dynamic lighting gets involved, everything becomes ultra-slow (Gouraud shading is mainly used with per-vertex dynamic lighting). So, in the future we will not use Gouraud shading much (or maybe not at all), but texture mapping and static lighting instead.

There is just one issue that we have not yet discussed concerning Gouraud shading: perspective correction. It's very hard (nearly impossible) to see, but the colors in the demo's polygons “swim” around slightly. The reason is that we're doing a 3D interpolation in 2D space; the artifacts are just barely visible with Gouraud shading. The fix will be covered later on when we get to texture mapping.

Ok, we've covered everything there is to know about Gouraud shading so let’s move on to the other topics.

Next is Phong shading. To explain this type of shading I’ll first have to explain a little bit about lighting. Just keep in mind that I’ll explain lighting in MUCH more detail in the next article. When we light the polygons in our world, we perform some calculations mainly with the help of a surface normal and a bunch of other stuff. So when working with flat shading, we simply use the normal of the polygon and then use the color we’ve found with the lighting calculations to draw the whole polygon.

When using Gouraud shading we perform almost the same trick. But instead of using the polygon’s normal, we calculate the normal for each vertex of the surface, then we perform lighting calculations for each vertex, and finally we interpolate the resulting colors along the polygon using Gouraud shading. This smooths out edges and corners in objects.

The Phong shading takes this one step further. The main algorithm is this:

  • we calculate vertex normals

  • we interpolate the vertex normals along the polygon edges (just as we would interpolate colors for Gouraud shading)

  • we interpolate the start and end vertex normal along each scanline, so in fact we find the normal for each pixel!

  • finally, we perform lighting calculations on each pixel with the help of its normal and set the pixel’s color to the resulting color.


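The per-pixel step of the algorithm above can be sketched as follows. This is a minimal illustrative sketch with a simple Lambert diffuse term against a directional light; `Vec3`, `lerp`, `normalize`, `dot` and `phongPixel` are my own assumed helpers, not code from the article's demo:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Linear interpolation between two vectors.
Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}
Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x/len, v.y/len, v.z/len };
}
float dot(const Vec3& a, const Vec3& b) {
    return a.x*b.x + a.y*b.y + a.z*b.z;
}

// One pixel of a Phong-shaded scanline: nStart/nEnd are the normals
// interpolated down the two edges, t is the position along the scanline.
// Returns a diffuse intensity in [0,1].
float phongPixel(const Vec3& nStart, const Vec3& nEnd, float t,
                 const Vec3& lightDir)
{
    Vec3 n = normalize(lerp(nStart, nEnd, t)); // per-pixel normal!
    float diffuse = dot(n, lightDir);
    return diffuse > 0.0f ? diffuse : 0.0f;    // clamp negative light
}
```

Note the per-pixel `normalize` and lighting evaluation: that square root in the innermost loop is precisely why Phong shading is so slow in a software rasterizer.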
It might sound somewhat simple to implement, but there are many, many things that are tricky with Phong shading. The first, of course, is speed. There ain’t a processor or 3D accelerator on the planet fast enough to do true Phong shading in real time. It’s perfectly fine for off-screen rendering, although a raytracer is more suitable for that purpose (but that’s another story!).

“So did ya fill our heads with all that crap that we will never need?” Well, yes, I agree completely, but Phong shading should definitely be known. Believe me, it’s worth it ;) Ok, let’s move on to the next and most important topic: fill convention.

Why is the fill convention so important? Well, it solves all kinds of color/texture bugs that will occur in your rasterizer if you simply snap the coordinates to integer positions. The fill convention also improves the visual quality of the polygons: it removes all kinds of jittering, texture/color swim, and other nasty things you would have to face without one.

So let’s see what this fill convention is all about. It can be defined as follows: by applying a fill convention, you make sure that a pixel is lit only if some reference point inside that pixel lies inside the polygon. Let’s see this here:


Here we see a triangle rasterized on a pixel grid. Now let’s analyze what’s wrong with the picture above. First, note that when we convert the y coordinate of the top-most point to integer coordinates, the new position will be located above the original point itself. This is very, very bad. Let’s imagine for a second that we want to use Gouraud shading for the triangle in the scheme above. We set up slopes, initial values and everything as normal. But! When we convert the y position of the top-most point to integer coordinates, the new point lies above the original one, and therefore the edges of the triangle become longer. Now, when we interpolate the colors along the edges, at some point they will pass the end values and go out of range. For colors the problem might not be so big, but for a texture mapper this will almost always crash the program.

So we need to avoid this situation. The solution is pretty simple: we will only light pixels whose top-left corner is inside the polygon. Let’s see this in practice:


There! You don’t need to worry that some of the pixels are not lit, because they will be rendered by the neighbouring triangles. However, there is one last little thing to consider. Say we have an edge that lands exactly on the top-left corner of some pixels. The problem here is that the neighbouring triangles as well as the current one are going to light those pixels! This should never happen in a good rasterizer. Check the picture below:


These are two neighbouring triangles. The pixels lit by the first one are in blue, by the second in yellow, and by both in light blue. To avoid that problem, we further define our fill convention to be the “top-left” fill convention, which states the following: if the edge in question is a top or left edge of the polygon and it falls exactly on the reference point, then the pixel is included in the polygon; if it is a bottom or right edge, the pixel is not included. Don’t worry about the skipped pixels too much though since, as already stated, they will be lit by the neighbouring polys.

The thing is that the big 3D hardware APIs OpenGL and Direct3D also use the top-left fill convention (as does every other renderer that applies a fill convention, as far as I know), so your polys will have the same quality as ones rendered with OpenGL and D3D :D.

The big issue that’s left is the implementation. Luckily, this is the easiest part of all! For the top-left fill convention, where the reference point is the top-left corner of the pixel, we simply do this: instead of just truncating the starting and ending y positions of the triangle edges to integers, we calculate them as follows:
    startY = ceil( top_most_point_of_triangle_y );
    endY   = ceil( bottom_most_point_of_triangle_y ) - 1;

I don’t think the ceil and floor math functions need explaining, but here is a short refresher:

ceil(x) - returns the smallest integer greater than or equal to x
floor(x) - returns the largest integer smaller than or equal to x

Now, to avoid graphical artifacts, we have to update the interpolants as well, since the new start position is a little bit below the initial one (or unchanged if it was already an integer). So we do this:
    x1 += slope1  * (startY - top_most_point_of_triangle_y);
    x2 += slope2  * (startY - top_most_point_of_triangle_y);
    c1 += cSlope1 * (startY - top_most_point_of_triangle_y);
    ...

where x1, x2, c1, ... are the values we’re interpolating along the triangle’s edges and slope1, ... are their respective slopes.

The exact same thing must be done when rasterizing a scanline:
    startX = ceil( x1 );
    endX   = ceil( x2 ) - 1;
    col += cSlope1 * (startX - x1);

where x1, x2 are the start/end coords of the scanline and col is the color that we’re interpolating along it.
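Putting the snapping and pre-stepping together, the per-triangle setup might look like this. It is a sketch under my own naming assumptions (`EdgeSetup`, `setupEdges`); the variable roles match the fragments above:

```cpp
#include <cmath>

// Result of the top-left fill convention setup for a triangle's edges.
struct EdgeSetup {
    int   startY, endY;  // integer scanline range, inclusive
    float x1, x2;        // pre-stepped x interpolants (left/right edge)
    float c1;            // pre-stepped color interpolant (left edge)
};

EdgeSetup setupEdges(float topY, float bottomY,
                     float x1, float slope1,
                     float x2, float slope2,
                     float c1, float cSlope1)
{
    EdgeSetup e;
    // Snap to the first scanline whose top-left corner is inside.
    e.startY = (int)std::ceil(topY);
    e.endY   = (int)std::ceil(bottomY) - 1;

    // Pre-step the interpolants down to the snapped start scanline so
    // they never run past their end values along the (shorter) edges.
    float prestep = e.startY - topY;
    e.x1 = x1 + slope1  * prestep;
    e.x2 = x2 + slope2  * prestep;
    e.c1 = c1 + cSlope1 * prestep;
    return e;
}
```

The same ceil-and-prestep pattern is then repeated horizontally for each scanline, exactly as in the startX/endX fragment above.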

I think we’re done! It’s important that you learn fill convention and start doing it right. If you do not understand something described in this article, please look at some other documents about it or ask in the forums.

Also look at the demo and the source coming with this article. You’ll see a huge difference in the rendering quality compared to the demo for the previous tutorial.

Well, I guess that’s all this time, folks! The tutorial is pretty big this time, but why the hell should I care?! The DevMaster guys never set me no space limit! Time to say goodbye, folks! ’Till the next article, I advise you to try out what you’ve learnt, either with some pesky DOS compiler under mode 13h or under SDL with the cool display system that I’ve written, which you can find in the sources.

Next time we will discuss lighting in detail and maybe a small thing or two besides. After that we’re getting right into the interesting stuff like clipping (sorry, no homogeneous stuff) and texture mapping! That was all! See you later, bye!

Download source code for this article

Original site: http://www.devmaster.net/articles/software-rendering/part4.php