I’ve always loved colour, and I’ve always been a bit geeky about it. Other kids would argue about football, but in my circle we’d go: “my Amiga 500 has 4096 colours, and yours has only 16”. And we counted the days until those 4096 became 262144. But colour is not just numbers. When someone now sends me an RGB triplet saying, “this colour: 146, 27, 98”, my brain just short-circuits. That’s not a colour, and I’ll explain why later. Colour and colour spaces are hard topics, and the more you dig into them, the more complex they get, and the uglier the truth becomes.
Some of the topics around colour, like colour perception in humans, are still hot areas of research. They sometimes even become mainstream discussions in bars, like the blue/black versus white/gold dress meme a couple of years ago. But I won’t talk about colour perception today.
Instead, I will focus on a less mainstream but more technical topic, which is sadly neglected more often than I’d like: colour spaces. I’ll try to summarize the different concepts around colour spaces as briefly as I can, and then talk about a particular colour space that you may want to start using in your mobile apps: the Display P3 colour space.
Why do we see colour?
I’m not going to define what colour is. You probably know what it is even without a formal definition. Instead, I think it’s more useful to explain how colour is created.
We see colour because our eyes have photoreceptor cells that are sensitive to different wavelengths of light. These photoreceptors are rods and cones in mammalian eyes, and ommatidia in arthropods. The photoreceptors themselves do not signal colour, only the presence of light in the visual field; it is the visual system that combines the signals from the different cones to work out colour. Here’s a figure of human photoreceptor absorbances for different wavelengths:
In terms of cone types, you can say you see more colours than your cat does (cats are thought to discern only a few colours), but a mantis shrimp sees many more colours than you (they have 16 types of photoreceptors). Also, the colours that you see may not be exactly the same ones I see. And it is worth mentioning that luminance is orthogonal to colour, which is why a cat can see much better in the dark than we do.
Colour Spaces and Colour Models
These two terms are confused with each other more often than not. A Colour Space organizes colours so they are reproducible in physical devices (e.g. sRGB, Adobe RGB, CIE 1931 XYZ), whereas a Colour Model is an abstract mathematical model describing the way colours can be represented as tuples of numbers (e.g. RGB, CMYK).
That distinction is really important. I often hear people telling each other random RGB tuples to communicate colours, and I have to assume those tuples are in sRGB colour space, with the gamma already applied. But even the gamma may change from system to system, so those numeric values really don’t tell me anything unless you specify a colour profile as well.
Another very important thing to remember is that there’s no single RGB colour space! Although in most desktop applications we use sRGB, cameras may use Adobe RGB because they need a wider gamut.
XYZ Colour Space and Colour Conversions
XYZ Colour Space is a device-invariant representation of colour. It uses primary colours that aren’t real, i.e. that can’t be generated by any spectrum of light. That means we can even represent “imaginary colours”: colours with no physical reality.
In XYZ, Y is the luminance, Z is the blue stimulation, and X is a linear mix. It is also very common to normalize this space by the luminance, which yields the xyY colour space. You may have seen the typical horseshoe shape from the xy chromaticity space before:
The horseshoe shape is the gamut of human vision, that is, all the colours we can see. The curved edge is the spectral locus and represents monochromatic light, with wavelengths in nanometres. Notice that the straight line at the bottom, the line of purples, contains colours that can’t be produced by monochromatic light. Read about the CIE 1931 Colour Space for more details.
The important thing to remember is that these diagrams are useful for visualizing the gamuts of different devices and the colour conversions that happen between them. For instance, in the example below, the green dot from an Adobe RGB camera needs to be flattened down to a more yellowish green in order to be displayed on a laptop screen. Note that in an RGB colour model both values may look identical, e.g. (0, 255, 0), but they aren’t the same colour. This conversion can be irreversible if we aren’t careful. When printing this green, we want to go back to the closest match between the original green and the greens that the printer can reproduce, not the yellowish green from the display.
(The image above is taken from Best Practices for Color Management in OS X and iOS, another recommended read.)
DCI-P3 Colour Space
The most common colour space for displays is sRGB. However, recent OLED displays have a wider gamut, so we need a better colour space to make the most of them. DCI-P3 is a wide-gamut RGB colour space. Apple’s variant, Display P3, combines the DCI-P3 primaries with a D65 white point and the sRGB transfer function. Because the gamut is wider, 8 bits per channel are no longer enough to avoid banding; you will want 16 bits per channel to store colours in P3. So if you are storing values as integers, instead of a maximum value of 255 per colour channel, it will now be 65535.
In order to visualize the differences between P3 and sRGB, I recommend Apple’s ColorSync utility, which comes with macOS. This tool also includes great colour calculators that will help you understand all the different concepts in this blog post. It’s very simple to create a visualisation like the one below with it. This figure compares the P3 and sRGB gamuts, plotted in L*a*b* colour space (close to human perception).
Apple recommends the use of Display P3 for newer devices in its Human Interface Guidelines, so if you are developing a website or an app for iOS and/or macOS, it’s worth updating your authoring pipeline to use wide colour in every stage.
Most of the macOS and iOS SDKs support Display P3 already, with the exception of some frameworks like SpriteKit. The UIColor class has an initializer for displayP3. If you need to do the conversion yourself, I’ve written a couple of posts on how to compute (Exploring Display P3) and test (Stack Overflow) it. It boils down to this matrix, which you can apply to your linear RGB colours (before applying the gamma) to convert from P3 to sRGB:
 1.2249  -0.2247   0
-0.0420   1.0419   0
-0.0197  -0.0786   1.0979
I’ve written a battery of colour conversion unit tests here: Colour Tests.
How much wider is DCI-P3?
According to Wikipedia, the DCI-P3 colour gamut is 25% larger than sRGB. According to my own calculations, it’s approximately 39% bigger. I counted all the 24-bit samples in linear Display P3 RGB (about 16M values) that fall outside sRGB, which comes to approximately 4.5M samples (~28% of P3). Since sRGB corresponds to the remaining ~11.5M samples, P3 is about 16/11.5 ≈ 1.39 times the size of sRGB.
I’ve also tried visualising these differences in different ways. I ended up creating an iOS app, Palettist, to help me create colour palettes with P3 colours that fall outside the sRGB gamut. The result is a set of discrete palettes where each square is a P3 colour, with a circle inside it in the same colour clamped to sRGB. Here’s one such palette:
Depending on where you are reading this, you may or may not see the circles. More details in this blog post: Display P3 vs sRGB. If you have a modern iOS device, try downloading this palette and uploading it to, say, Instagram. You will see the circles at first, but the moment you tap “Next” all the colours will look duller and the circles will disappear (you don’t need to actually post it; Instagram converts the image to sRGB before uploading). Feel free to use these palettes to test whether an app supports P3 or not.
If you see circles in the colour palette I posted above, but you are sure your display is sRGB, it could be that the colour management in your OS is doing its best by applying a rendering intent before displaying the image. The two common modes are:
- Relative Colorimetric intent: clamp out-of-gamut values to the nearest representable colour. This causes posterization (you won’t see the circles).
- Perceptual intent: blindly squash the gamut of the image to fit the target colour space. This reduces saturation and colour vibrancy (but you’ll see the circles). I say “blindly” because even a single out-of-gamut pixel will cause the whole image to shift colour. The amount of compression is described in the ICC profile.
There are other modes, like Absolute Colorimetric intent and Saturation intent. Check this article for details: Understanding Rendering Intents.
A note about gamma
Gamma correction alone deserves a separate blog post. But it’s important to emphasize here that when people give you an RGB triplet like (181, 126, 220) (Lavender), not only do they mean it’s in sRGB (and there are different sRGB profiles), but they also mean the gamma correction, a power function, has already been applied. If you do your own colour conversions with the CIE Colour Calculator, you also need to remember that the sRGB illuminant is D65, but it’s encoded with D50.
Why do we apply gamma at all? Because equal steps in encoded luminance correspond roughly to subjectively equal steps in brightness. Read Gamma Correction for details.
If you only have 8 bits to store a luminance value, you’d better store it with the gamma applied, so you lose less perceptually valuable information. However, if you are into 3D graphics, remember that light computations should happen in linear space!
The final decision: choosing a colour space
This is my small personal guide to choosing a colour space, depending on the occasion:
- L*a*b* for Machine Learning, because the Euclidean distance between two colours in L*a*b* is close to their perceptual distance;
- Linear RGB if you are working with light (3D graphics), because light adds linearly;
- DCI-P3 if you target newer screens, because you can represent more colours;
- sRGB if you can only afford 8 bits per channel, making sure the gamma is applied to avoid banding artefacts in dark colours (the eye is more sensitive to differences in dark areas).
For the colour model:
- RGB if you are doing light or alpha-blending computations;
- HSV for design, because the representation is intuitive; also, if you are colour-blind, you can adjust saturation and luminance without worrying about accidentally changing the hue.
Thanks for reading this long blog post! I’ve tried to summarize it all in a few bullet points:
- Cats may dream in colour
- Every human is unique
- Colour Space ≠ Colour Model
- Display P3 > sRGB
- ColorSync Utility is your friend
- Use the provided P3 palettes as reference
- Choose the appropriate Colour Space & gamma for the occasion (storage, ML, 3D)
That is a very nice write-up – I like it. What the colour space/colour model discussion in general leaves implicit is gradation resolution. You touch on it in some formal terms. Pixel peepers understand detail resolution, but we need to also distinguish, separately, gradation resolution and dynamic range. Gradation resolution, to me, is the number of steps (grades) we can distinguish numerically (i.e. in the colour model) within the dynamic range of the technology we are using.

You can imagine an older medium format camera with 10 F-stops of dynamic range but 16 bits of gradation resolution. Correctly exposed for human skin, that camera will have much better skin tone gradation than a professional 35mm camera with 14 F-stops of dynamic range and 12 bits of gradation resolution (actually, 12R + 12G + 12B). Something similar happens with displays and prints, where gradation resolution and dynamic range are never related, but should be. So whether we are able to actually display the better skin tone gradation really depends on the display medium. My 10-bit display can only approximate that quality. However, my professional photographic multi-pigment printer can reveal more on baryta paper, under a bright light with a continuous spectrum.

Your discussion of the “gamma” is an interesting further nuance of this point, as it expresses how gradation evolves with changing numeric values. As the human eye applies a roughly base-10 logarithmic response between light intensity and perception, we could imagine a base-10 log being applied to RGB values too. (How this is really implemented in raw files is the secret sauce of digital cameras that third-party software developers like DxO and Adobe try to reverse engineer; Nikon now works with Adobe to apply distortion correction functions for their new Z-series lenses to raw files in Lightroom, and may have given up on this kind of secrecy.)
We can now also appreciate that display manufacturers claiming “HDR” are merely advertising their ability to compress the contrast of the content into the limited dynamic range of their devices, and to map 8-bit sRGB to their devices in a way that displays all the nuances in the 8-bit content.
“…not only do they mean it’s in sRGB D65 (there’s more than one sRGB, depending on the white point)…”
This is not correct. sRGB is independent of the illuminant. This is quite confusing and not easy to understand, so I won’t try to explain it.
Thanks for catching that. I’ve updated the text:
not only do they mean it’s in sRGB (and there are different sRGB profiles), but they also mean the gamma correction – a power function – has already been applied. If you do your own colour conversions with the CIE Colour Calculator, you also need to remember that the sRGB illuminant is D65, but it’s encoded with D50.
Hopefully it’s not too confusing. My intention is to highlight that there’s more depth to colour than just sharing RGB triplets around.