Solid Wireframes

Various solid wireframe modes in Maya, MeshLab, and our WebGL Model Viewer (from left to right).

About solid wireframes

To spot problems in a mesh, it is sometimes useful to look at the shape of its triangles or polygons. That’s what wireframe modes are for. But if there are too many lines, the result can be confusing to look at. That’s why it’s also useful in 3D CAD software to have a solid wireframe mode, that is, a mode where you only see the front lines, while the rest are occluded by the triangles of the solid mesh. See the figure at the top.

Because I use my WebGL Model Viewer to preview models quite often, I thought it would be a good idea to add a solid wireframe mode to it. Instead of attempting to draw actual line primitives, it seemed more fun to draw the lines directly from the triangles in a pixel shader, and that’s what I will describe in this blog post.

Barycentric coordinates

There’s a very easy way to know whether a point inside a triangle lies on an edge of that triangle, i.e. on a line we want to draw. If we convert the point to barycentric coordinates, it will be expressed as a 3D tuple (α, β, γ), where the 3 vertices of the triangle correspond to the coordinates (1, 0, 0), (0, 1, 0), and (0, 0, 1). Any point where one of the coordinates is zero lies on one of the edges of the triangle. See the illustration below.

Barycentric coordinates

In barycentric coordinates, one of the coordinates is zero for points on an edge
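
More precisely, any point P inside a triangle with vertices A, B, and C can be written as P = α·A + β·B + γ·C, with α + β + γ = 1 and α, β, γ ≥ 0; the coefficients (α, β, γ) are the barycentric coordinates of P.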

If we label each vertex with its equivalent barycentric coordinate, then the rasteriser can do the dirty work of computing the exact (α, β, γ) coordinates for every single pixel inside the triangle. Then, to draw white lines we can simply paint white pixels where one of the coordinates is (close to) zero, and paint black otherwise.
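
To make that test concrete, here is a minimal sketch of the per-pixel decision in plain Javascript; the eps value is an arbitrary threshold that controls line thickness, and the shader shown later in the post uses a smoother inverse-distance function instead.

// Returns true if a pixel with barycentric coordinates (alpha, beta, gamma)
// lies close enough to an edge to be painted as part of the wireframe.
function isOnEdge(alpha, beta, gamma, eps) {
  return Math.min(alpha, Math.min(beta, gamma)) < eps;
}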

That works for a single triangle, but what happens when triangles share one or more vertices?

Graph colouring

In the triangle strip of the illustration below we need to assign barycentric coordinates in such a way that no two adjacent vertices share the same coordinate label. This problem is known as graph colouring in Graph Theory. The 3 barycentric coordinates at the vertices can be seen as 3 distinct colours, and then the question we want to answer is:

can we colour the graph (the mesh) with just 3 colours?

Barycentric coordinates on a triangle strip

Barycentric coordinates on a triangle strip

Unfortunately, the general answer to this question is NO. So, spoilers ahead: the shader I implemented doesn’t work for all meshes. But it turns out that many meshes are indeed 3-colourable. Let’s not get too deep into graph theory; let’s check some examples instead.

Planar graphs are 4-colourable

The Four Colour Theorem states that a planar graph can be painted with no more than 4 colours. The proof is quite complicated, so let us just believe the result. However, deciding whether an arbitrary planar map can be coloured with just three colours is NP-complete.

Moreover, 3D meshes aren’t planar graphs in general. A graph is planar if you can draw it on a plane without any 2 edges crossing. I think this is related to the UV-unwrapping problem: if the graph is planar, you can unwrap the texture coordinates (UVs) of a mesh continuously onto a plane, without having to break the UVs into pieces. For instance, the cube in the examples below may not look planar, because drawn the usual way some of its edges cross; but if you draw one face inside the other, no edges cross, so its graph is in fact planar. Its UVs can be unwrapped onto a plane, and it can be coloured with only 2 colours. Similarly, the tetrahedron may not look planar either, but if you draw it from above, its edges do not cross. So it’s planar too, yet it needs the maximum number of colours of a planar graph, 4.

Planarity and UVs

Both the cube and the tetrahedron are planar, but the cube is 2-colourable while the tetrahedron needs 4 colours.

Open quad strips are 2-colourable

Quads are often used for modelling. An open quad strip, as illustrated below, is 2-colourable. But quads are rarely supported by GPUs, so they usually need to be triangulated before rendering. By adding those extra diagonal edges, the resulting triangle strip becomes 3-colourable.

Colouring of quad strip

A 2-colourable quad strip, and a 3-colourable triangle strip

Quad strip loops are 3-colourable

If the quad strip “loops”, that is, if we close the strip so the mesh has a hole, we may need an extra colour to paint it. See below. If we then triangulate it, we may need yet another colour, making the triangle strip loop 4-colourable.

Quad loop colouring

A 3-colourable quad loop, and a 4-colourable triangle loop

Vertex colouring algorithms for meshes

It is NP-hard to compute the chromatic number of a graph, and the 3-colouring problem is still NP-complete on 4-regular planar graphs. But there are heuristic algorithms that can colour a graph in polynomial time, although they may use more colours than strictly necessary.

One such algorithm is Gregory Chaitin‘s, which I implemented in the Model Viewer. If the mesh is 3-colourable, it seems to work well. However, a more naive approach based on the inherent order of the vertices of the mesh seems to work better for meshes that are not 3-colourable. See the examples below.

Graph colouring comparison through wireframes

Graph colouring comparison through wireframe. The left column uses Gregory Chaitin’s algorithm, whereas the right images follow the vertex ordering of the data.

In the example above, some areas appear white because 2 or more vertices of a triangle share the same colour, so our wireframe shader, which is based on barycentric coordinates, fails to paint just the edge. The mesh is not 3-colourable, so it can’t be helped. But, as can be seen in the images on the right, we can do better if we paint the vertices in a different order. In this case, we simply follow the ordering given by the index number of the vertices in the mesh. The painting algorithm is simply:

  1. For each triangle, sort its vertices by vertex index. E.g. [3, 0, 2] becomes [0, 2, 3].
  2. Sort all the triangles in the mesh by “most significant” vertex index, i.e. we look at the first index of each triangle and, if they are the same, we look at the next index. E.g. given a = [2, 4, 20], b = [3, 4, 12], and c = [1, 3, 5], they will be sorted [c, a, b].
  3. Then, for each triangle,
    1. Start with 3 available labels/colours.
    2. Remove the vertices already painted, and the labels already used by those vertices.
    3. For the remaining vertices, assign the remaining labels in order. If we run out of available labels, the mesh may not be 3-colourable: skip the vertex.

That’s all.
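
For reference, here is a minimal Javascript sketch of that procedure. It assumes the mesh comes as an array of [i0, i1, i2] index triples and returns a colour index (0, 1 or 2) per vertex, leaving -1 wherever the 3 labels ran out; the function and variable names are mine, not the Model Viewer’s.

function paintVertices(triangles, vertexCount) {
  const colour = new Array(vertexCount).fill(-1);
  // 1. Sort the indices inside each triangle.
  const tris = triangles.map(t => [...t].sort((a, b) => a - b));
  // 2. Sort the triangles by "most significant" vertex index.
  tris.sort((a, b) => a[0] - b[0] || a[1] - b[1] || a[2] - b[2]);
  // 3. For each triangle, assign whatever labels are still available.
  for (const tri of tris) {
    const used = new Set(tri.map(v => colour[v]).filter(c => c >= 0));
    for (const v of tri) {
      if (colour[v] >= 0) continue;        // already painted
      const label = [0, 1, 2].find(c => !used.has(c));
      if (label === undefined) continue;   // out of labels: the mesh may not be 3-colourable
      colour[v] = label;
      used.add(label);
    }
  }
  return colour;
}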

The wireframe shader

It turns out that many meshes have a very regular structure and are indeed 3-colourable. So I thought it was still valuable to implement this shader, even if it doesn’t work for arbitrary meshes. The shader is extremely simple:

// vertexColor is the interpolated vertex colour, i.e. the barycentric coordinate of this pixel
vec4 lines = 1.0 / clamp(64.0 * vertexColor, 0.0001, 64.0);
float line = max(lines.r, max(lines.g, lines.b));
gl_FragColor = vec4(line, line, line, 1.0);

And the colours we paint the vertices with need to be exactly (1, 0, 0), (0, 1, 0), and (0, 0, 1), because they represent the barycentric coordinates. Mixing in any other colours will make us either lose an edge, or paint the whole surface, because some coordinate will be non-zero. See some rasterisation examples below.

Rasterisation examples

Rasterisation examples to illustrate the shader output given different vertex colours as input. Only pure Red, Green, Blue inputs result in a wireframed triangle.

The function that we use is an inverse distance function, scaled by some arbitrary number so we can make the lines thicker or thinner. Lines also get thicker as they get closer to the camera and the triangles get bigger. This is different from normal wireframe/line rendering, where the thickness of the line is always the same, regardless of the size of the triangles. You can see this effect in the image below, which also shows one of our models, which turns out to be 3-colourable.

Close up of our wireframe

Solid wireframe on a close-up of one of our models

In terms of implementation complexity, it was harder for me to enable vertex colouring than to write the actual shader. If you are curious, check this pull request: Model Viewer / Memory Layout. I refactored my old code so I could define the memory layout of the vertex data that I send to the GPU in a tidier way. I stored the colours as 32-bit RGBA values, so it’s just an extra 4 bytes per vertex.
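
As a rough illustration of that layout, a packed RGBA colour attribute can be declared in WebGL as below, with the GPU normalising the bytes back to the [0, 1] range; the attribute location and the interleaved position-plus-colour layout are assumptions for the example, not necessarily the Model Viewer’s exact setup.

// Interleaved layout assumed here: position (3 floats) followed by colour (4 bytes).
const stride = 3 * 4 + 4;
gl.vertexAttribPointer(colourLocation, 4, gl.UNSIGNED_BYTE, true, stride, 3 * 4);
gl.enableVertexAttribArray(colourLocation);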

Conclusion

This was a fun exercise to learn a bit about graph colouring, barycentric coordinates, and wireframe shaders, and how those 3 things relate to one another. Although the resulting rendering algorithm may create undesirable white patches in arbitrary geometries, it works for regular topologies like the ones in our models. Because lines are rendered using a distance function in a pixel shader, lines can look thick when you zoom in, an effect that may be desirable when you want to emphasise certain areas of the wireframe.

MeModel GBuffer

About skin colour authoring

Part of our MeModel development process involves skin colour matching. We have to match our 3D avatars to a photographic reference. We have attempted to do this automatically in the past, but as the lighting process became more complex, the results were no longer good and it required a lot of manual tweaking. In effect, we needed to manually author the skin colour, but writing parameters by hand and trying them out one at a time is a tedious process. That’s why we decided to create an interactive tool so we could see the result immediately and iterate quickly.

The first choice we made was the platform: the browser. If we wrote this tool for the web, then we could share it immediately with remote teams. It’s a zero-install process, and therefore painless for the user.

We wrote a prototype that would use a high-resolution 2D canvas, and transform all the pixels in simple for-each loops. However, this was far from interactive. For our images, it could take a couple of seconds per transform, not very pleasant when adjusting parameters with sliders. You could try to parallelise those pixel loops using Javascript workers, for a 2 or 3-fold speed increase. But the real beast for local parallel processing is your GPU, giving us in this case more than a 100-fold speed increase.
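
For context, the slow path looked roughly like this; adjust is a stand-in for whatever per-pixel colour transform is being applied, not a function from our actual code.

const img = ctx.getImageData(0, 0, canvas.width, canvas.height);
const d = img.data;                       // RGBA bytes, 4 entries per pixel
// adjust returns the new [r, g, b] triple for a pixel
for (let i = 0; i < d.length; i += 4) {
  [d[i], d[i + 1], d[i + 2]] = adjust(d[i], d[i + 1], d[i + 2]);
}
ctx.putImageData(img, 0, 0);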

So we decided to make the canvas a WebGL canvas. WebGL gives you access to the GPU in your machine, and you can write small programs for it to manipulate all pixels of the image in parallel.

Quick introduction to rendering

Forward rendering

The traditional programmable rendering pipeline is what computer graphics jargon refers to as forward rendering. Here’s a visual summary,

Forward rendering pipeline

Forward rendering pipeline

Before you can render anything, you need to prepare some data buffers with your vertex positions and any parameters you may need, which are referred to as uniforms. These buffers need to be in an area of memory that your GPU can access. Depending on your hardware, that area could be the same as the main memory, or a separate graphics memory. WebGL, which is based on the OpenGL ES 2.0 API, has a series of functions to prepare this data.

Once you have the data ready, you have to provide two programs to the GPU, a vertex shader and a fragment shader. In OpenGL/WebGL, these programs are written in GLSL and compiled at run time. Your vertex shader will compute the final position and colour of your vertices. The GPU will rasterise the geometry for you (this part is not programmable), which is the process of computing which pixels the given geometry will cover. Then, your fragment shader program will be used to decide the final pixel colour on screen. Notice that all the processing in both the vertex and pixel/fragment shaders is done in parallel, so we write programs that know how to handle one data point. There’s no need to write loops in your program to apply the same function to all the input data.
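
As a minimal sketch of that setup (with error checking omitted), this is roughly what compiling and linking the two GLSL programs looks like through the WebGL API; vertexSource and fragmentSource are assumed to hold the shader text.

function buildProgram(gl, vertexSource, fragmentSource) {
  const compile = (type, source) => {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);   // compiled at run time, as mentioned above
    return shader;
  };
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  return program;
}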

A traditional vertex shader

There are basically two things that we compute in the vertex shader:

  • Space transforms. This is how we find the position of each pixel on screen. It’s just a series of matrix multiplications to change the coordinate system. We pass these matrices as uniforms.
  • Lighting computations. This is to figure out the colour of each vertex. If we work in a linear colour space, the interpolation of colours between 2 vertices that happens during rasterisation is correct, because irradiance is additive.

A traditional vertex shader

A traditional vertex shader

Both the space transforms and the lighting computations can be expensive, so we prefer doing them per vertex, not per pixel, because there are usually fewer vertices than pixels. The problem is that the more lights you try to render, the more expensive it gets. Also, there’s a limit on the number of uniforms you can send to the GPU. One solution to these issues is deferred rendering.

Deferred rendering

The idea of deferred rendering is simple: let’s defer the lighting & shading computation until a later stage. It can be summarised with this diagram,

Deferred rendering pipeline

Deferred rendering pipeline

Our vertex shader will still compute the final position of each vertex, but it won’t do any lighting computation. Instead, we will output any data that will be needed for lighting later on. That’s usually just the depth (distance from the camera) of each pixel, and the normal vectors. If necessary, we can reconstruct the full 3D position of each pixel in the image, given its depth and its screen coordinates.

As I mentioned earlier, irradiance is additive. So now we can have a texture or a buffer in which to store the final irradiance value, and just loop through all the lights in the scene, summing the pixel values into that final texture.
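
In WebGL that accumulation can be expressed with additive blending. Here is a hedged sketch; lightPositionLocation, lightColourLocation and drawFullScreenQuad are assumed helpers for the example, not part of any particular codebase.

gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);        // destination + source: irradiance just adds up
for (const light of lights) {
  gl.uniform3fv(lightPositionLocation, light.position);
  gl.uniform3fv(lightColourLocation, light.colour);
  drawFullScreenQuad(gl);            // the fragment shader shades every pixel for this light
}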

Skin colour authoring tool

If you have followed so far, you may see where this is going. I introduced deferred rendering as the process of deferring the lighting computation to a later stage. In fact, that later stage can be done on a different machine if you want to. And that’s precisely what we have done. Our rendering server does all the vertex processing, and produces renders of the albedo, normals, and some other things that we’ll need for lighting. Those images are retrieved by our WebGL application, which then does all the lighting in a pixel shader. The renders we generate look like this,

MeModel GBuffer

MeModel GBuffer

With these images generated by our server, the client only needs to worry about the lighting equations, and a series of sliders connected directly to the uniforms we send to the shader (sketched after the slides below) is enough to produce a very responsive and interactive tool to help us author the skin tones. Here’s a short video of the tool in action,

 

The tool is only about 1000 lines of pure Javascript, plus some 50 lines of shader code. There are some code details in the slides here:

(These slides were presented at the Cambridge Javascript meetup)
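
To give a flavour of how the sliders map to uniforms, here is a small sketch; the element id, the uniform name, and the drawFrame function are made up for illustration.

const slider = document.getElementById('skin-tone');
slider.addEventListener('input', () => {
  gl.useProgram(lightingProgram);
  gl.uniform1f(gl.getUniformLocation(lightingProgram, 'uSkinTone'), parseFloat(slider.value));
  requestAnimationFrame(drawFrame);  // redraw with the new value
});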

Summary

Javascript & WebGL are great for any graphics tool (not only 3D!): being on the web means zero install, and using WebGL should give you interactive speeds. Also, to simplify your client code, remember that you don’t need to do all the rendering in the client. Just defer the things that need interaction (lighting, in our case).