In the first part of this series of posts, we introduced the idea of trying to detect flesh in images by looking at the colour values of individual pixels. This produces reasonable results, but far too many “false positives”, because other items in the scene, such as hair and clothes, may be flesh-coloured too.
Boolean Pixel Function
In the example below, pixels in the left-hand image that are fleshy (R > G > B) are rendered red in the right-hand image, whereas non-fleshy pixels are rendered green:
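As a minimal sketch, the per-pixel test might be written as follows (the function name is illustrative; r, g and b are the colour components of a single pixel):

function isFleshy(r, g, b) {
    // A pixel is "fleshy" if red exceeds green and green exceeds blue
    return (r > g) && (g > b);
}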
Fuzzy Pixel Function
We can improve things slightly by using fuzzy logic. Our original fleshy function (R > G > B) is actually made up of two conditions: a pixel is “fleshy” if the red component is greater than the green component and the green component is greater than the blue component. These two conditions are binary, but they could be made fuzzy. Consider the following JavaScript function:
function fuzzy(x, false_limit, true_limit) {
    // Linearly map x from the range [false_limit, true_limit] to [0, 1]...
    var y = (x - false_limit) / (true_limit - false_limit);
    // ...then clamp the result so it never leaves that range
    return Math.min(Math.max(y, 0), 1);
}
This produces a fuzzy truth value between zero, meaning definitely false, and one, meaning definitely true:
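For example, using the first pair of limits that appear in the snippet below:

fuzzy(-0.05, 0, 0.10); // 0   (definitely false)
fuzzy( 0.05, 0, 0.10); // 0.5 (half-way between false and true)
fuzzy( 0.15, 0, 0.10); // 1   (definitely true)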
We can then compose a fuzzy logic expression for fleshiness (notice that the fuzzy AND operator is simply multiplication):
// r, g and b are assumed to be normalised to the range 0..1
var rg = fuzzy(r - g, 0, 0.10); // how strongly red exceeds green
var gb = fuzzy(g - b, 0, 0.02); // how strongly green exceeds blue
var fleshiness = rg * gb;       // fuzzy AND: multiply the truth values
The values 0.10 and 0.02 were derived empirically. Effectively, we’re saying that we expect the red channel to be quite a bit greater than the green channel; the difference between the green and blue channels is less important.
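To make this concrete, here is a hypothetical sketch of how the per-pixel test could be applied to a whole image in a browser using the canvas ImageData layout; the channel values are divided by 255 so they match the thresholds above, and the output colours follow the red/green convention used in the illustrations:

function classifyImage(imageData) {
    var src = imageData.data;
    var out = new ImageData(imageData.width, imageData.height);
    var dst = out.data;
    for (var i = 0; i < src.length; i += 4) {
        // Normalise the 0..255 channel values to 0..1
        var r = src[i] / 255;
        var g = src[i + 1] / 255;
        var b = src[i + 2] / 255;
        // Fuzzy fleshiness, exactly as above
        var f = fuzzy(r - g, 0, 0.10) * fuzzy(g - b, 0, 0.02);
        // Render fleshy pixels red and non-fleshy pixels green
        dst[i]     = Math.round(255 * f);
        dst[i + 1] = Math.round(255 * (1 - f));
        dst[i + 2] = 0;
        dst[i + 3] = 255;
    }
    return out;
}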
The fuzzy approach gives us marginally better results. Parts of the hair are deemed to be less likely to be fleshy, as are some portions of the dress pattern.
But, as mentioned at the end of Part One, we need a radically different approach to consistently find accidentally-rendered “naughty bits” in an image.
Chameleon Detector
Fortunately, we have control over the rendering pipeline of these images, so there’s nothing stopping us from rendering the scene twice with slightly different parameters. Let us pretend that belly buttons are considered “naughty” and that we want to detect renders that show some or all of this body part. When we render body parts, we use texture mapping on to a 3D mesh. If we “paint” over the naughty bits in the source texture maps with a known colour (say, green) and render the scene, we may get the following for two different outfits:
For clarity, we’ve painted a large star over the belly button. In reality, the painted region would be smaller and more accurately shaped. If we render the scene again with the naughty bits over-painted with the same shape but a different colour, say, red, we get:
Obviously, the image on the right is unchanged by this modification to the skin texture, but the image on the left changes. All we need to do is run the two sets of images through a very simple (fuzzy) comparator to find visible naughty bits:
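A fuzzy comparator along those lines might be sketched as follows, assuming the two renders are available as same-sized ImageData objects; the 0.05 tolerance and the names here are illustrative, not part of the original code:

function compareRenders(greenRender, redRender) {
    var a = greenRender.data;
    var b = redRender.data;
    var out = new ImageData(greenRender.width, greenRender.height);
    var dst = out.data;
    var differingPixels = 0;
    for (var i = 0; i < a.length; i += 4) {
        // Per-channel absolute differences, normalised to 0..1
        var dr = Math.abs(a[i] - b[i]) / 255;
        var dg = Math.abs(a[i + 1] - b[i + 1]) / 255;
        var db = Math.abs(a[i + 2] - b[i + 2]) / 255;
        // Fuzzy "these pixels differ" value: zero for identical pixels,
        // rising to one as the largest channel difference reaches 0.05
        var differs = fuzzy(Math.max(dr, dg, db), 0, 0.05);
        if (differs > 0) {
            differingPixels++;
        }
        // Build a mask image highlighting where the two renders differ
        dst[i] = Math.round(255 * differs);
        dst[i + 3] = 255;
    }
    return { mask: out, differingPixels: differingPixels };
}

Any non-trivial count of differing pixels indicates that part of the over-painted region is visible in the render, and the mask shows where.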
As can be seen, this “chameleon” technique produces a strong signal. Although it requires two renders per image, it has a number of other advantages too:
- The regions considered “naughty” are hand-painted into the source skin textures. This is both intuitive and flexible.
- Different “naughtiness maps” can easily be used for different regions and cultures.
- One of the outputs of the technique is an image illustrating which naughty bit is visible and where.
- It is body-shape agnostic.
- It is viewpoint agnostic.
- It handles translucent garments gracefully, particularly if a fuzzy comparator is used.
- It does not matter how complex the scene is.
- The code used to run the test is identical to the final rendering code: only input texture data is modified.