Normals are often talked about in terms of vertex normals, face normals and normal maps. Vertex normals and face normals determine the perceived curvature and facing angle of a modeled surface, which we will call surface shading, before any normal maps are applied. Normal maps are image files that come in two variants: object space normal maps, which replace the surface shading, and tangent space normal maps, which adjust it rather than replace it.
A polygonal model is basically a hollow shape made up of faces with three or more corner points, called vertices. The singular of vertices is vertex: one vertex, many vertices (or verts). If a face has more than three vertices, it is not a triangle, and the computer will treat it as a combination of triangles before showing it to us. The number of triangles in a model, called the tricount, is often used by 3d artists as a measure of memory and rendering cost. That measure is technically not very precise, for reasons better suited to a post about polygonal modeling, but we will touch upon the subject briefly in this post.
Let’s have a look at three pictures and think about the difference between them.
Hard edges and flat surfaces.
Some hard edges and flat surfaces, some smoother.
Uniformly smooth surfaces without any hard edges.
The three images show models of the exact same shape, with the exact same number of triangles, but they still look different. The edges look either soft or hard, and the faces look either flat or rounded. The difference comes from the vertices and their normal vectors, which make up the surface shading:
When all the edges look hard like this, every vertex we see here, except the one centered at the top of the cylinder to the right, holds three normal vectors, one for each adjacent face. These vectors, shown as green lines here, are called vertex normals, and they define the facing angles for the corners of a face. A vertex with two or three vertex normals uses more memory than a vertex with just one. To the computer, a vertex with more than one vertex normal is actually more than one vertex, one per normal; I tend to think of them as a single vertex with several normal vectors because of how 3d modeling software handles them. This is one of the reasons that triangle count is not a very precise measure of memory and rendering cost: we have the exact same number of triangles, but the more hard edges the model has, the more vertices there are to store and display. When all edges look sharp like this, the surface shading follows the object's polygonal shape.
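A quick sketch of the bookkeeping above: counting what the computer actually stores when corners are split per normal. The function and data names here are illustrative, not any engine's API, and the cube numbers are just the classic example.

```python
# Sketch: why triangle count alone underestimates memory cost.
# A cube has 8 corner positions, but with all-hard edges each corner
# needs one normal per adjacent face, so the GPU stores more vertices.

def gpu_vertex_count(vertex_face_normals):
    """vertex_face_normals maps a corner position to the set of
    distinct normals it must carry (one per smoothing split)."""
    return sum(len(normals) for normals in vertex_face_normals.values())

# A hard-edged cube: each of the 8 corners touches 3 faces with
# different facing angles, so each corner splits into 3 stored vertices.
hard_cube = {corner: {"+x", "+y", "+z"} for corner in range(8)}

# A fully smooth cube: one averaged normal per corner.
smooth_cube = {corner: {"averaged"} for corner in range(8)}

print(gpu_vertex_count(hard_cube))    # 24 stored vertices
print(gpu_vertex_count(smooth_cube))  # 8 stored vertices
```

Same 12 triangles in both cases, three times the vertex data in the hard-edged one.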
Here some of the vertex positions hold two vertex normals: not one per adjacent face like in the last image, but one per adjacent group of faces, where each group is a number of faces linked together by soft edges. In some 3d applications, such as Maya, you control this by setting individual edges to soft or hard. In others, like 3ds Max, you control it by selecting faces and assigning them to a smoothing group, basically giving them a number; edges shared by two faces in the same smoothing group will look soft. The end result is the same, giving you one, two or three normal vectors per vertex position.
Here we have only one normal vector per vertex position, so the edges look smooth and the faces rounded. Each vertex is shared between three faces. Any given point on the surface looks bent according to an interpolation between the surrounding vertex normals. The surface shading looks rounded, much more so than the actual shape of the object, and that is exactly why we do this: to make a surface built from just a few polygons look smoother than it is, instead of using massive amounts of polygons. Of course the silhouette does not change, just the surface shading.
An illustration of a polygon surface with one normal vector per vertex like the above example. The result is a smooth surface shading, the blue curve here.
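The interpolation can be sketched in a few lines: blend the three vertex normals by how close the surface point is to each corner, then renormalize. The function names are illustrative, not from any specific renderer.

```python
import math

# Sketch of how shading normals are interpolated across a triangle.
# With one normal per vertex, any point on the face gets a blended,
# renormalized normal, which is what makes flat faces look rounded.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def interpolated_normal(n0, n1, n2, u, v):
    """Barycentric blend of three vertex normals (w = 1 - u - v)."""
    w = 1.0 - u - v
    blended = tuple(w * a + u * b + v * c for a, b, c in zip(n0, n1, n2))
    return normalize(blended)

# Two vertex normals leaning apart; halfway between them the shading
# normal points straight up, even though the face itself is flat.
n_left = normalize((-0.5, 1.0, 0.0))
n_right = normalize((0.5, 1.0, 0.0))
n_top = (0.0, 1.0, 0.0)
print(interpolated_normal(n_left, n_right, n_top, 0.5, 0.0))
```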
Face normals point straight out from a face and mainly keep track of its front and back sides.
We see the faces that point somewhat toward the camera.
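A face normal falls out of the cross product of two triangle edges, and the front/back question is a dot product against the view direction. A minimal sketch, assuming counter-clockwise winding seen from the front:

```python
# Sketch: a face normal from the cross product of two triangle edges,
# and a simple front/back test against the view direction.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def face_normal(v0, v1, v2):
    """Unnormalized face normal from two edge vectors."""
    return cross(sub(v1, v0), sub(v2, v0))

def faces_camera(normal, view_dir):
    """Front-facing if the normal points against the view direction."""
    return dot(normal, view_dir) < 0.0

# Triangle in the XY plane, wound CCW, so its normal points toward +Z.
n = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))
print(n)                            # (0, 0, 1)
print(faces_camera(n, (0, 0, -1)))  # True: camera looking along -Z sees it
```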
If we flip the normals of an object, a game engine will show the inside/backside of the model, unless you use a double sided shader. Many high end renderers use double sided shaders by default, and you will need to turn that feature off if you want to render the inside of a model like this. Double sided geometry is more expensive since it contains more data. If you do not have a double sided shader in a game engine but want to see, for example, a plane from both sides, you can duplicate the plane, flip the normals of the copy and combine the two planes into one mesh.
Normal maps are used to change the angles at which light reflects off a surface, mainly to add detail with per pixel precision (many small pixels offer more detail than a few big polygons), but also to slightly shift or replace larger curvature in the object's surface shading without adding extra geometry. Every pixel in a normal map texture stores an angle as a color value. In a tangent space normal map this angle adds to the surface shading of the model; in an object space normal map it is relative to the local orientation of the object and entirely replaces the surface shading.
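The color-to-angle encoding is simple: each channel holds one component of a direction vector, remapped from the [-1, 1] range into the displayable [0, 255] range. A sketch of the round trip, with illustrative function names:

```python
# Sketch: how a normal map pixel's color encodes a direction.
# Each 8-bit channel stores one vector component remapped from
# [-1, 1] into [0, 255].

def decode_normal(r, g, b):
    """Map 8-bit color channels back to a [-1, 1] range vector."""
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

def encode_normal(x, y, z):
    """Map a direction's components into 8-bit color channels."""
    return tuple(round((c + 1.0) * 0.5 * 255.0) for c in (x, y, z))

# The typical light blue of a flat tangent space pixel, (128, 128, 255),
# decodes to a vector pointing almost straight out of the surface.
print(decode_normal(128, 128, 255))
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

This is also why an untouched tangent space normal map is that uniform light blue: every pixel encodes "straight out".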
Let’s say that I want to add some detail to this cube.
Here some additional geometry is modeled. The cost in vertex data is higher and there are more triangles for the computer to draw.
Here a normal map is applied instead, so we still have only six sides on the model, made up of two triangles each, forming a simple cube. We get the same lighting of the surface as if geometry with the same angles as in the previous image were present. The textured nose of this cubic head gets lit with the same color as the modeled nose, but there is no real extra geometry, so it cannot protrude and add to the silhouette, cover any other part of the model or cast shadows. In this example it is actually not a very good idea to use a normal map, since the higher resolution model also has quite few faces and gives us a more correct look. The normal map contains very little information, mainly just flat angles, but uses a lot of memory since it has many pixels. Normal maps usually work best when you have a model with the wanted overall shape and want to add a lot of surface detail that does not deviate much from that silhouette. Here is an example from Wikipedia.
This is a tangent space normal map. It is the most commonly used type because it has several advantages over object space normal maps:
You can reuse parts of the texture to get smaller files with the same amount of unique pixel information, or use more of the pixels in the image to fit the same file to fewer unique faces.
Only three of the cube’s six faces in this example need unique normal map information, so we could get rid of the other three, put the remaining ones in a row and cut the texture in half. The original size of this normal map was 1024 * 1024 pixels; it could then become a 1024 * 512 pixel texture, saving lots of memory. Alternatively, the remaining three faces could be made bigger, using more pixels per side of the cube, for a sharper and less pixelated look when the cube is close to the camera.
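As a rough sanity check on those numbers, here is the arithmetic, assuming an uncompressed texture at 4 bytes per pixel (an assumption; game textures are usually block compressed, but the halving ratio is the same either way):

```python
# Rough memory arithmetic for halving a normal map, assuming an
# uncompressed RGBA texture at 4 bytes per pixel.

def texture_bytes(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel

full = texture_bytes(1024, 1024)  # 4,194,304 bytes, 4 MiB
half = texture_bytes(1024, 512)   # 2,097,152 bytes, 2 MiB
print(full - half)                # 2,097,152 bytes saved
```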
Tangent space normals behave as you would expect when applied to deforming geometry. As an object bends, vertices move and their vertex normals shift the surface shading to match the new shape. The tangent space normal map is then added on top of this, so the light appears to come from the right direction.
A tangent space normal map makes sure that the incoming light respects the shifting surface shading of this bending tube. The scene is lit with a warm color from below and a cool color from above.
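The "added on top" part can be sketched as a change of basis: the sampled pixel is expressed in the surface's local frame (tangent, bitangent, shading normal), so when deformation tilts the vertex normals, the mapped detail tilts with them. The basis vectors below are illustrative assumptions.

```python
import math

# Sketch: rotating a tangent space sample into the surface's
# current frame, so map detail follows a deforming surface.

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def tangent_to_world(sample, tangent, bitangent, normal):
    """Express a tangent space sample in the surface's current frame."""
    return normalize(tuple(
        sample[0] * t + sample[1] * b + sample[2] * n
        for t, b, n in zip(tangent, bitangent, normal)
    ))

flat_pixel = (0.0, 0.0, 1.0)  # a "straight out" pixel in the map

# Unbent surface: the shading normal points up, and so does the result.
print(tangent_to_world(flat_pixel, (1, 0, 0), (0, 0, 1), (0, 1, 0)))

# Bent surface: the vertex normal now leans forward, and the very
# same pixel leans with it.
leaning = normalize((0.0, 1.0, 1.0))
print(tangent_to_world(flat_pixel, (1, 0, 0), (0, 0, 1), leaning))
```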
With an object space normal map, the object’s local orientation is used for the light calculation instead of the deforming surface, so a surface that originally pointed upwards will still be lit as if it pointed upwards after being bent to the side, as long as the model itself is not rotated. We can rotate objects with object space normal maps, since the stored angles in the texture are relative to the object’s local orientation, but we cannot bend them and get correct looking surface angles. A pixel of a certain color always corresponds to the same direction in the object’s local space.
Use and repeat over any model of any shape
You can apply a finished tangent space normal map to any model of any shape and get good angles for surface lighting, for the same reason as in the deforming geometry example: it adds to the angles dictated by the vertex normals. For example, you could create a tiling rock tangent space normal map and repeat it many times over a large piece of ground that bends in many directions, and also use that same normal map on some smaller stones.
Object space normal maps are mainly created specifically for a given object.
An advantage of object space normal maps is that you can save some vertex data by setting all edges to soft, since vertices do not need any extra normal vectors to bend the surface shading in a visually appealing way. The surface shading of the model itself can be really bad and still be completely replaced by the object space normal map, so when modeling you can focus mainly on the silhouette and the actual shape, without worrying about surface shading.
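Because the sampled vector simply is the shading normal in the object's local space, a diffuse lighting sketch needs nothing from the mesh's own vertex normals, only the map sample and a light direction transformed into object space. A minimal Lambert example, with illustrative names:

```python
# Sketch: with an object space map, the sampled vector replaces the
# surface shading entirely; simple Lambert lighting uses it directly.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(map_normal, light_dir_object_space):
    """Diffuse intensity straight from the object space map sample."""
    return max(0.0, dot(map_normal, light_dir_object_space))

# A pixel encoding "straight up" in object space, lit from above:
print(lambert((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))  # 1.0
# The same pixel lit from the side gets no diffuse light:
print(lambert((0.0, 1.0, 0.0), (1.0, 0.0, 0.0)))  # 0.0
```

Note that whatever normals the mesh carries never enter the calculation, which is exactly why the modeled surface shading can be ignored.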
Normal map color channels
To further understand normal maps, you can compare their color channels:
Tangent space channels
Red contains the left/right offset from the surface shading, along the U direction of the UVs.
Green contains the up/down offset, along the V direction of the UVs.
Blue contains the steepness of these offsets: medium gray means a steep angle, while a brighter value means a shallower one.
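Since a shading normal has unit length, the blue channel is largely redundant: it can be rebuilt from red and green, which is what some compressed two-channel formats rely on. A sketch of the reconstruction:

```python
import math

# Sketch: rebuilding the blue "steepness" channel of a tangent space
# normal map from the red and green offset channels, using the fact
# that the encoded vector has unit length (z = sqrt(1 - x^2 - y^2)).

def reconstruct_z(r, g):
    """Rebuild the blue component from the two 8-bit offset channels."""
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    return math.sqrt(max(0.0, 1.0 - x * x - y * y))

# A flat pixel (128, 128) gives z close to 1.0: bright blue, shallow.
print(round(reconstruct_z(128, 128), 3))
# A fully sideways offset gives z = 0.0, which would encode back to
# the medium gray of a steep angle.
print(round(reconstruct_z(255, 128), 3))
```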
Object space channels
Red contains the absolute front/back direction in object space.
Green contains the absolute up/down direction in object space.
Blue contains the absolute left/right direction in object space.