Completely Painless Programmer's Guide to XYZ, RGB, ICC, xyY, and TRCs
This tutorial was written in the hope that it might be of use to technically savvy people who know a whole lot about the code and the mathematics that goes into making an image editing program, but perhaps not so much about color spaces and ICC profiles.
All RGB, XYZ, and xyY numbers in this tutorial are floating point numbers. RGB numbers have the nominal range 0 to 1, as does Y from XYZ and xyY. Nominal ranges are often extended in practice.
On the one hand, light comes from the sun or other radiant sources, and is refracted by mediums (water, the atmosphere, glass) and diffusely or specularly reflected by surfaces. On the other hand, color isn't out there in the world in the same tangible way that light is. Rather color is part of how we sense the world around us. Light enters the eyes, is processed by light receptors (cones and rods), and sent via the optic nerves to the brain for further processing and interpretation. Light varies in wavelengths, which our eyes and brain interpret as varying colors, and also in intensity. So our perception of color is composed of both intensity information and chromaticity information. The naming of colors carries one out of the narrow realm of color perception, and into the larger realm of cultural and linguistic interpretation and classification of color, and thence into even larger philosophical, aesthetic, theological, and metaphysical considerations.
Color mapping experiments: what the average human sees
In the late 1920s William David Wright and John Guild independently conducted a series of color matching experiments that mapped out all the colors the average human (meaning the average of the humans in the experiments) can see. In 1931 color scientists used the results of the Wright and Guild experiments to create the 1931 CIEXYZ color space ("XYZ" for short).
To visualize XYZ, think of a three-dimensional cartesian coordinate system (high school algebra) with axes labelled X, Y, and Z. In the XYZ color space, Y corresponds to relative luminance; Y also carries color information related to the eye's "M" (yellow-green) cone response. X and Z carry additional information about how the cones in the human eye respond to light waves of varying frequencies.
Real colors and imaginary colors
Theoretically, the XYZ axes go off to infinity in both the positive and negative direction. However, not every set of coordinates in XYZ space corresponds to a color that the average human can see. XYZ coordinates that are outside the locus of colors mapped by the color matching experiments that led to the creation of the XYZ color space are called imaginary colors. XYZ coordinates that are inside the locus of colors mapped by the color matching experiments are called real colors.
Colors that weren't measured
Not every being sees color exactly like the hypothetical average human. For example, birds, bees, dogs, and humans with nonstandard color perception don't see the same colors in the same way as the average human. However, for purposes of the digital darkroom, the colors that are seen by any being with non-standard color perception are neither real nor imaginary. Here's why:
As mentioned in the first section of this article, light waves of different frequencies are out there in the world, but color happens in the eye and brain. One could do (and I'm sure color scientists have done) color matching experiments with human tetrachromats, with color-blind humans, and perhaps even with birds, bees, and dogs. But the resulting "tetrachromat-XYZ" color space (or "color-blind-XYZ" color space, or "bird-XYZ" color space) wouldn't be the same as the "average humans only" 1931 CIEXYZ color space. These alternative color spaces would have their own sets of real and imaginary colors.
To summarize, if a flower reflects it ("it" being that complex phenomenon we call light) and a bee sees it, of course it's real for the bee. And if a painting reflects it and a human tetrachromat sees it, it's real for the tetrachromat. But as far as the 1931 CIEXYZ color space that we use in the digital darkroom is concerned, these "nonstandard color perception" colors aren't real and aren't imaginary, rather they simply weren't measured during the color matching experiments that led to the creation of the XYZ color space.
I don't know the composition of the humans that participated in the color matching experiments that led to the creation of the "average human" 1931 CIEXYZ color space, but if I had to guess, I would guess that they were healthy young adult British males. It amazes me that despite the somewhat limited experimental foundations of the XYZ color space (since supplemented with additional experiments), XYZ is nonetheless extremely useful. In fact, though seldom used directly, the XYZ color space is the basis of everything that relates to color in a color-managed image editing application.
RGB from XYZ
The various RGB color spaces that we use in the digital darkroom are simply useful subsets of all the colors contained in the XYZ color space.
How to specify an RGB matrix color space
The simplest type of RGB color space, which is also the type of RGB color space that we use for normal image editing, is an RGB matrix color space. An RGB matrix color space is defined by specifying the XYZ coordinates for five colors, those being the color space's:
- Darkest dark color ("black")
- Lightest light color ("white")
- Reddest red color
- Greenest green color
- Bluest blue color
The five XYZ coordinates that define an RGB matrix color space in XYZ space have names that are a little easier to say than phrases like "darkest possible dark", "lightest possible light", and so on:
- The XYZ coordinates for the darkest dark are called the color space's black point.
- The XYZ coordinates for the lightest light are called the color space's white point.
- The XYZ coordinates for the reddest possible red, greenest possible green and bluest possible blue are called the color space's Red, Green, and Blue primaries.
What is black? What is white?
We have an intuitive sense that "black" and "white" only have one meaning each. But as far as specifying a particular RGB matrix color space goes, the only requirement is that the Y coordinate for the RGB color "white" is greater than the Y coordinate for the RGB color "black". For example, for a printer profile, the XYZ coordinates for "black" might represent a real color that in ordinary terms we might call dark blue-gray or dark yellow-gray, depending on the printer inks. And the XYZ coordinates for white might represent a real color that in ordinary terms we might call bluish off-white or eggshell off-white, depending on the base color of the printer paper. (As an aside, most printer profiles are LUT rather than matrix profiles, but LUT profiles still have black and white points.)
Eight vertices from five coordinates
It only takes five XYZ coordinates to define an RGB matrix color space, those being the XYZ coordinates for the five RGB colors black, reddest red, bluest blue, greenest green, and white. However, the resulting shape in XYZ space isn't a pyramid (5 vertices, 5 faces), but rather a hexahedron (8 vertices, 6 faces), with the following eight XYZ coordinates as vertices:
- The XYZ coordinate for the RGB color (0,0,0) or the darkest possible RGB dark ("black").
- The XYZ coordinate for the RGB color (1,1,1) or the lightest possible RGB light ("white").
- The XYZ coordinate for the RGB color (1,0,0) or the reddest RGB red.
- The XYZ coordinate for the RGB color (0,1,0) or the greenest RGB green.
- The XYZ coordinate for the RGB color (0,0,1) or the bluest RGB blue.
- The XYZ coordinate for the RGB color (1,0,1) or the most magenta possible RGB magenta.
- The XYZ coordinate for the RGB color (1,1,0) or the yellowest possible RGB yellow.
- The XYZ coordinate for the RGB color (0,1,1) or the most cyan possible RGB cyan.
How do you get 8 XYZ vertices by specifying only five XYZ coordinates? Light is additive. So once you know the XYZ locations of the red, blue, and green primaries, add red and blue to get magenta, add red and green to get yellow, and add blue and green to get cyan.
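The additivity described above can be sketched in a few lines of code. The XYZ numbers below are D50-adapted sRGB primaries (rounded, as found in a typical V2 sRGB ICC profile) and are used here purely for illustration:

```python
# Light is additive: the XYZ coordinates of the secondary colors are
# component-wise sums of the XYZ coordinates of the primaries.
# Illustrative D50-adapted sRGB primaries (rounded):
red   = (0.4361, 0.2225, 0.0139)   # XYZ of RGB (1,0,0)
green = (0.3851, 0.7169, 0.0971)   # XYZ of RGB (0,1,0)
blue  = (0.1431, 0.0606, 0.7141)   # XYZ of RGB (0,0,1)

def add_xyz(*colors):
    """Component-wise sum of XYZ coordinates."""
    return tuple(round(sum(c[i] for c in colors), 4) for i in range(3))

magenta = add_xyz(red, blue)          # XYZ of RGB (1,0,1)
yellow  = add_xyz(red, green)         # XYZ of RGB (1,1,0)
cyan    = add_xyz(green, blue)        # XYZ of RGB (0,1,1)
white   = add_xyz(red, green, blue)   # XYZ of RGB (1,1,1)

print(white)  # close to the D50 white point (0.9642, 1.0000, 0.8249)
```

Note that summing all three primaries lands (to within rounding) on the color space's white point, which is why only five coordinates need to be specified.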
An infinite number of RGB color spaces, or, "What is red?"
People who've been around since the beginning of digital imaging sometimes tend to unconsciously assume that "RGB" means "sRGB", even if consciously they know otherwise. But actually there are an infinite number of possible RGB matrix color spaces, and the physical (real world) meaning of "reddest red", "greenest green", and "bluest blue" depends on which XYZ coordinates you pick for an RGB color space's primaries.
Let's say you want to make two different RGB matrix color spaces, LargeRGB and SmallRGB. The reddest possible red in LargeRGB has the RGB coordinates (1,0,0). The reddest possible red in SmallRGB also has the RGB coordinates (1,0,0). But the meaning of (1,0,0) in LargeRGB is not the same as the meaning of (1,0,0) in SmallRGB, because the two color spaces have different red primaries, which means different XYZ coordinates that correspond to their respective RGB color (1,0,0). This is why people say RGB is relative to XYZ.
I haven't yet introduced xyY space (see Section D below). But unlike XYZ, xyY space cleanly separates the XYZ Y (luminance) from color, or rather from chromaticity, which is what the "xy" in xyY stands for. So in xyY space you can plot "color" (really, chromaticity) on a 2D diagram. Figure 1 shows the xy chromaticity coordinates for the red, blue, and green primaries for the relatively small sRGB and the larger WideGamutRGB color spaces, providing a visual counterpart to the often-repeated claim that "RGB is relative to XYZ" (and hence also relative to xyY):
A two-dimensional xy diagram can't really convey an intuitive sense of different three-dimensional RGB color spaces inside the XYZ reference color space. Experimenting with Bruce Lindbloom's 3D Gamut Viewer (towards the bottom of his RGB Working Space Information) is the quickest way to acquire an intuitive feel for the relationship between the XYZ reference color space and differently sized and shaped RGB color spaces. I strongly encourage you to visit Bruce Lindbloom's website and spend a few minutes examining various RGB color spaces inside the XYZ reference color space. For example, using the 3D Gamut Viewer, pick the 'WideGamut' color space as the Primary Working Space, 'None' as the Secondary Working Space, and 'XYZ' as the (reference) Color Space. The Gamut Viewer is interactive so you can swing the XYZ color space around and view the Primary Working Space from all angles. Also try switching to xyY space. See if you can locate all 8 XYZ vertices (black, white, red, green, blue, magenta, yellow, cyan). Then add 'sRGB' as the Secondary Working Space and see how the WideGamut color space compares to the sRGB color space.
ICC means D50 adapted RGB
XYZ was invented in 1931. The ICC was founded 62 years later, in 1993. RGB color spaces were used in color science and practical applications long before the ICC suggested that people use ICC profiles in their digital darkrooms. An ICC profile RGB matrix color space is just an RGB matrix color space that's been adapted to the D50 reference white.
The ICC profile D50 reference white
The concept of a D50 reference white is important, but not exactly intuitively obvious, so requires a little background information. The ICC chose D50 as the reference white for ICC profiles because the ICC is heavily oriented toward facilitating the making of prints on paper, and D50 is the preferred reference white for evaluating prints on paper.
So the next question is "What is a reference white"? What we perceive as "white" depends on the type of light that's illuminating a scene. In terms of everyday experiences, D50/5003K is the color of direct sunlight in the early morning on a clear day. D50 is warmer and yellower than the D65/6504K color of the indirect, diffusely scattered daylight at noon on a very slightly overcast day, which in turn is warmer and yellower than the color of light in the deep shade of a building (11,000K and up) on a bright sunny day. Going the other way around, we say the color of light in the deep shade of a building is cooler and bluer than daylight, which in turn is cooler and bluer than early morning sunlight.
Saying a color is "white" means you've already picked a particular "color of white" to act as your reference white. If D50 is your reference white, then D65 is actually blue. If D65 is your reference white, then D50 is actually yellow. If you change your reference white, you have to change all the colors or they look funny. The technical term for changing colors to match a new reference white is adaptation.
Out there in the real world, your reference white is determined by the type of light that illuminates what you are looking at. To give you a visual picture of why adaptation is necessary whenever your reference white changes (your eyes do adapt as your reference white changes!), consider the following: Relative to direct morning sunlight, the light on a cloudy, rainy day is cold and blue. Relative to direct morning sunlight, candlelight is warm and yellow. So when you and your partner are having a romantic dinner by candlelight, your partner's face has a warm "candlelight glow". It would look very odd if your partner's face had that same warm "candlelight glow" not just at the dinner table but also while standing outside on a cloudy, rainy day where the color of light is much colder and bluer. It would be as if your partner were always illuminated by candlelight even when everyone else is drenched in rain (which is a nice poetic metaphor but not exactly color science).
Well behaved RGB matrix working spaces
This tutorial focuses on ICC profiles for a particular type of matrix color space, namely, the well behaved, D50-adapted RGB matrix color spaces that we use in the digital darkroom, commonly called working spaces. Quoting from What Makes a Color Space Well Behaved?:
[For well behaved color spaces,] "[w]hite" and "black" have precise physical definitions. Assuming a single full spectrum light source illuminating a scene, "solid black" is the color you get when absolutely no light whatsoever is reflected from a surface, and "solid white" is the color you get when 100% of the light is reflected by a perfectly diffuse reflecting surface. In reality there are no surfaces that reflect no light at all (a light trap comes close) and no surfaces that perfectly diffusely reflect all the light (finely ground white powder comes close).
All ICC RGB well behaved working space profiles have three characteristics:
- The RGB color black (0,0,0) has the XYZ coordinates (0.0000, 0.0000, 0.0000).
- The RGB color white (1,1,1) has the XYZ coordinates (0.9642, 1.0000, 0.8249).
- If R=G=B, the resulting color is neutral gray.
The color white in any ICC RGB matrix working space profile always has the XYZ coordinates (0.9642, 1.0000, 0.8249) because those are the XYZ coordinates of the D50 standard illuminant, and the ICC decided that all ICC profiles should use D50 as the ICC profile reference white. If the ICC had been oriented towards displaying images on CRT monitors instead of evaluating paper prints, they likely would have chosen D65 as the reference white, and the ICC profile reference white would have had the D65 XYZ coordinates (0.9505, 1.0000, 1.0891). Had they chosen E (equal energy), which is the reference white for the 1931 CIEXYZ color space and is very close to D55 (the reference white for color film stock), the ICC profile reference white would have had the E XYZ coordinates (1.0000, 1.0000, 1.0000).
Matrix and LUT ICC profile RGB working spaces
Almost all ICC RGB working space profiles (sRGB, AdobeRGB, BetaRGB, ProPhotoRGB, etc) are simple matrix profiles. Matrix profiles use a 3x3 matrix to convert an RGB color to XYZ and vice versa.
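The matrix conversion itself is a single matrix-vector multiplication. Here's a minimal sketch; the matrix values are the D50-adapted sRGB-to-XYZ matrix as published by Bruce Lindbloom (treat the exact digits as illustrative), and the conversion applies to linear, TRC-decoded RGB values:

```python
# A matrix profile converts linear RGB to XYZ with one 3x3 matrix
# multiplication (and back again with the inverse matrix).
# D50-adapted sRGB-to-XYZ matrix (digits per Bruce Lindbloom; illustrative):
SRGB_TO_XYZ_D50 = [
    [0.4360747, 0.3850649, 0.1430804],
    [0.2225045, 0.7168786, 0.0606169],
    [0.0139322, 0.0971045, 0.7141733],
]

def rgb_to_xyz(rgb, matrix=SRGB_TO_XYZ_D50):
    """Multiply a linear RGB triple by the profile's 3x3 matrix."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

# RGB (1,1,1) lands on (very nearly) the D50 white point:
print(rgb_to_xyz((1.0, 1.0, 1.0)))
```

Note that each row of the matrix sums to the corresponding D50 white point coordinate, which is exactly the "well behaved" white point behavior described below.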
The other type of ICC profile is a lookup table ("LUT") profile. LUT profiles contain (sometimes very large) lookup tables that locate RGB colors in XYZ space. LUT profiles are commonly used to describe the complex behavior of devices such as printers, digital cameras (for which matrix profiles are also often used), and sometimes even monitors. As an aside, there are also LUT and matrix profiles for non-RGB color spaces such as CMYK.
In this tutorial, to avoid repeating the word "matrix" over and over, all references to ICC working space profiles mean RGB matrix working space profiles. I only know of one RGB lookup table working space, called PhotoGamutRGB. The ICC has created a few print-oriented LUT RGB profiles with names that invoke the standard matrix sRGB and ProPhotoRGB profiles, but these non-floss LUT profiles are not working space profiles in the normal sense of the word, and won't be discussed further in this tutorial.
The sRGB D65 white point and the sRGB ICC profile
I'm pretty sure you are objecting that the white point of sRGB isn't D50, it's D65. And you are right. But that's the sRGB color space reference white, not to be confused with the ICC profile reference white (see Section A of The Luminance of an sRGB Color depends on the Reference White for more information). In an ICC profile the illuminant tag specifies the ICC profile reference white, which is always D50.
The ICC sets the specifications for making an ICC profile, and over time they've changed their minds about how to make an ICC profile, resulting in V2 profiles, V4 profiles, and V2 profiles made according to V4 specifications (if you think that's confusing, go talk to the ICC). In a V2 profile made according to V2 profile specifications, the actual color space reference white (eg D65 for sRGB) is given in the profile white point tag. So a V2 sRGB profile made using the V2 profile specifications (such as the Argyllcms "sRGB.icm" profile) has a D65 profile white point.
In any profile, V2 or V4, made according to the V4 profile specifications, the profile white point tag seems somewhat redundant as it always has the D50 XYZ coordinates, which is the same information that's carried in the profile illuminant tag.
So where's the original D65 sRGB color space reference white information in profiles made according to V4 specifications? It's carried in the chad ("chromatic adaptation matrix") tag. The chad tag doesn't give you the color space reference white. Rather it gives you the adaptation matrix for adapting from the color space's original reference white (eg D65, E, C, etc) to the ICC profile D50 reference white. For example, the chad tag in a V2 or V4 sRGB profile made using V4 specifications contains a D65 to D50 chromatic adaptation matrix.
Red, Blue, and Green XYZ primaries (adapted to D50)
We've already seen that all the standard well behaved "working space" RGB ICC profiles have the same black and white point locations in XYZ space. So what makes one ICC working color space different from another is the respective XYZ coordinates for the color space's reddest red, greenest green, and bluest blue. Table 1 gives the XYZ coordinates for the reddest red, greenest green, and bluest blue, for five different RGB working spaces, arranged in order from the smallest to the largest color gamut (color gamut refers to the three-dimensional shape and size of the portion of the XYZ reference color space occupied by any given RGB color space):
[Table 1: the XYZ coordinates (X, Y, Z) of the Red = RGB (1,0,0), Green = RGB (0,1,0), and Blue = RGB (0,0,1) primaries for each working space.]
I've mentioned "luminance" several times and here it is again. If you add up the Y values for the Red, Green, and Blue primaries across any row of Table 1, the sum is always 1.0000. For example, for sRGB, 0.2224 plus 0.7170 plus 0.0606 equals 1.0000.
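This is why the primaries' Y values act as luminance weights: the relative luminance of any color is just the weighted sum of its linear RGB channel values. A quick sketch using the sRGB Y values quoted above:

```python
# The Y values of a color space's primaries are its luminance weights.
# Relative luminance of a linear RGB color is R*Y_red + G*Y_green + B*Y_blue.
# Weights below are the sRGB Y values from Table 1.
Y_RED, Y_GREEN, Y_BLUE = 0.2224, 0.7170, 0.0606

def relative_luminance(r, g, b):
    """Relative luminance of a linear sRGB color, from 0.0 to 1.0."""
    return Y_RED * r + Y_GREEN * g + Y_BLUE * b

print(relative_luminance(1.0, 1.0, 1.0))  # white sums to 1.0
print(relative_luminance(0.0, 1.0, 0.0))  # pure green carries most of the luminance
```

The same weighted sum is what the middle row of the 3x3 matrix in a matrix profile computes.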
All the RGB-XYZ primaries in Table 1 (and also the equivalent xyY primaries in Table 2 below) have already been adapted to the ICC profile D50 reference white. When you make an ICC profile using LittleCMS version 2 (LCMS2, which uses the ICC V4 Specifications), you actually give LCMS2 the unadapted XYZ color space primaries, usually as xyY rather than XYZ, and you also give LCMS2 the color space's reference white (eg D65 for sRGB). Then LCMS2 obligingly does a Bradford Adaptation from the color space's reference white to the D50 ICC profile reference white, and then spits out an ICC-compliant, D50-adapted color space profile.
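For the curious, here is a minimal sketch of what a Bradford adaptation does, using the published Bradford matrix and its inverse (digits rounded; this is an illustration of the technique, not a substitute for LCMS2 itself):

```python
# Bradford chromatic adaptation: transform XYZ into "cone response"
# space, scale by the ratio of destination to source white cone
# responses, then transform back to XYZ.
BRADFORD = [
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
]
BRADFORD_INV = [
    [ 0.9869929, -0.1470543,  0.1599627],
    [ 0.4323053,  0.5183603,  0.0492912],
    [-0.0085287,  0.0400428,  0.9684867],
]

D65 = (0.9505, 1.0000, 1.0891)  # eg the sRGB color space reference white
D50 = (0.9642, 1.0000, 0.8249)  # the ICC profile reference white

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def bradford_adapt(xyz, src_white=D65, dst_white=D50):
    """Adapt an XYZ color from src_white to dst_white."""
    src_cone = mat_vec(BRADFORD, src_white)
    dst_cone = mat_vec(BRADFORD, dst_white)
    cone = mat_vec(BRADFORD, xyz)
    scaled = tuple(cone[i] * dst_cone[i] / src_cone[i] for i in range(3))
    return mat_vec(BRADFORD_INV, scaled)

# By construction, the source white maps onto the destination white:
print(bradford_adapt(D65))
```

Applying this adaptation to a color space's unadapted primaries is (in outline) how the D50-adapted primaries in Table 1 were obtained.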
Why so many RGB working spaces?
Anyone can make up a new RGB working space by picking a suitable set of Red, Blue, and Green primaries and a color space reference white. Two RGB color spaces in Table 1 — sRGB and ProPhotoRGB — should be familiar to everyone.
The CIE-RGB color space is the first color space ever invented. Its Red, Green, and Blue primaries represent the actual wavelengths of light used in the original Wright and Guild experiments that led to the creation of the XYZ color space. CIE-RGB uses the E reference white, which is also the reference white of the original 1931 CIEXYZ color space.
You might not have ever heard of the Identity color space, but it pops up now and again because it's mathematically obvious. It's what you get when you ask LCMS2 to make a profile with (1,0,0), (0,1,0), and (0,0,1) as the reddest red, greenest green, and bluest blue in XYZ space and D50 as the Identity color space's reference white.
I'm sure you've never heard of the AllColors-RGB color space before, because I invented it myself. AllColors-RGB is just a tiny bit larger than the ACES color space, being large enough to cover some just barely visible red and violet-blue wavelengths of light that ACES excludes, and it has the D50 reference white instead of the ACES D60 reference white.
There are many familiar and not-so-familiar RGB color spaces besides the five that are listed in Table 1. They were all invented by someone for some particular purpose. Many color spaces were invented to match various display devices. The old NTSC color space described the display characteristics of televisions from the 1950s. sRGB was invented to match the colors that could be displayed on CRTs from the 1990s, as were AppleRGB (Apple monitors) and ColorMatchRGB (PressView monitors). I'm not sure if any consumer products are available yet that use the new Rec. 2020 UHDTV color space.
Many other familiar color spaces were designed to be "just large enough" to hold colors that could be scanned from film and/or colors that could be printed on paper. "Film to print" color spaces include AdobeRGB, BetaRGB, BruceRGB, DonRGB, and ECI-RGB (this list is far from complete). Kodak created ProPhotoRGB in 1999 (under the name "ROMM" or "Reference Output Medium Metric RGB Color Space") to be big enough to hold all conceivably relevant "film to print" colors and then some, to act as an archival, future-proof color space.
However, none of these "film to print" color spaces hold all the colors that can be captured by today's digital cameras. There's a wedge of violet blue and magenta colors that ProPhotoRGB misses completely (see the ProPhoto magenta dots and lines in Figure 2, Section D), and digital cameras capture colors that go deeper into the visible blues and violet-blues than even the bluest ProPhotoRGB blue. Which is no doubt one reason why the Academy of Motion Picture Arts and Sciences came up with the ACES color space, which holds essentially all colors (ACES is the archival color space that Kodak ROMM should have been), and why my "AllColors-RGB" color space is just a smidge larger than the ACES color space.
There's a price to be paid for any "large enough to hold all real colors" color space, which is that it also will hold varying percentages of imaginary colors in addition to all the real colors. The easiest way to demonstrate where these imaginary colors are located is by showing you some two-dimensional xy chromaticity diagrams, coming up in Section D:
xyY and the chromaticity diagram
The horseshoe-shaped chromaticity diagram that you are no doubt familiar with is a kind of "footprint" of all real colors on the xy (chromaticity) plane of the xyY color space.
xyY from XYZ
xyY is calculated from XYZ by surprisingly simple mathematical equations:
x = X / (X+Y+Z)
y = Y / (X+Y+Z)
Y = Y
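The equations above transcribe directly into code. Here is a sketch of both directions (the inverse follows from simple algebra: X = xY/y and Z = (1 - x - y)Y/y; note the inverse breaks down when y is zero, and the forward direction breaks down for black, where X+Y+Z is zero):

```python
# Direct transcription of the xyY equations, plus the inverse.
def xyz_to_xyY(X, Y, Z):
    """Convert XYZ to xyY. Undefined for black (X+Y+Z == 0)."""
    s = X + Y + Z
    return (X / s, Y / s, Y)

def xyY_to_xyz(x, y, Y):
    """Convert xyY back to XYZ. Undefined when y == 0."""
    return (x * Y / y, Y, (1.0 - x - y) * Y / y)

# The D50 white point (0.9642, 1.0000, 0.8249) has chromaticity
# roughly (0.3457, 0.3585):
print(xyz_to_xyY(0.9642, 1.0000, 0.8249))
```

Round-tripping through both functions recovers the original XYZ values, which is why xyY also qualifies as a reference color space.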
Unlike XYZ, xyY space cleanly separates the XYZ Y (luminance) from color, or rather from chromaticity, which is what the "xy" in xyY stands for. For more details see The CIE XYZ and xyY Color Spaces, and Bruce Lindbloom's xyY to XYZ, and XYZ to xyY.
The outer edge of the horseshoe-shape corresponds to various spectrally pure wavelengths of actual light that humans can see, which is why I said XYZ and xyY are connected by "surprisingly" simple equations: it surprises me that XYZ values are so easily connected with wavelengths of actual light. But then again, the wavelengths of light used in the Guild and Wright experiments were there before the xyY and XYZ color spaces had been invented to describe them.
All XYZ colors have unique locations in xyY space and vice versa, so like XYZ, xyY is also a reference color space. The "xy" part of xyY represents color (actually "chromaticity", which is what's left of color when luminance is abstracted away) and the "Y" represents relative luminance. If you examine the equations for calculating xyY from XYZ and vice versa, you will see that the "Y" of xyY and the "Y" of XYZ are equal to one another, so both of them represent the relative luminance of a color.
RGB color space coordinates in xyY space
Table 2 is exactly the same as Table 1, except that instead of giving the locations of the ICC profile color space coordinates in XYZ space, now the locations are given in xyY space:
Some of the primaries in Tables 1 and 2 are highlighted in yellow. The highlighted primaries represent XYZ (and hence xyY) coordinates that aren't real colors. The bluest blue and greenest green in the ProPhotoRGB, AllColors-RGB, and Identity color spaces are imaginary colors, as is the reddest red in the Identity color space. Figure 2 below shows why these colors aren't real colors — they fall outside the horseshoe-shaped chromaticity diagram.
Black, white, red, blue, and green on the chromaticity diagram
Figure 2 shows you the xy Red, Green, and Blue chromaticity coordinates corresponding to the RGB color space xy coordinates in Table 2. The brightly colored horseshoe shape in Figure 2 is the chromaticity diagram, which shows you all the xy coordinates that represent real colors. All xy points outside the chromaticity diagram represent imaginary colors. Showing you the actual xyY (or XYZ) coordinates would require a (preferably interactive) 3D representation.
The outer edge of the chromaticity diagram represents spectrally pure colors identified by their wavelengths. The blue numbers around the edges of the brightly colored "horseshoe" are different wavelengths of spectrally pure light specified in nanometers. For example, the reddest red has a wavelength of 700 nm, the greenest greens are up around 520 nm, and the bluest blues are down around 450nm. Below 450nm are the violet blues, and oddly enough, all the magenta colors between violet-blue at 380 nm and red at 700 nm are purely a construct of the human brain, with no corresponding real wavelengths of light. Light waves are out there in the world, but color happens in the interaction between light waves and the eye, brain, and mind.
As already noted, the color black in an ICC matrix RGB working color space has the XYZ coordinates (0.0000, 0.0000, 0.0000) and the color white has the XYZ coordinates (0.9642, 1.0000, 0.8249). The corresponding white xyY coordinates are (0.3457, 0.3585, 1.0000); black's chromaticity is technically undefined (calculating x and y means dividing by X+Y+Z, which is zero), but by convention it's assigned the white point's chromaticity, giving (0.3457, 0.3585, 0.0000). If you mentally locate these points on the chromaticity diagram in Figure 2 above, it should be clear that black and white are located just about dead center in the middle of the chromaticity diagram.
"All the real colors" requires imaginary colors, too
The largest RGB working space that uses only real colors as its primaries is the WideGamutRGB color space (shown in Figure 1 in Section B), which uses three spectrally pure colors located at 700, 525, and 450 nanometers as its primaries. If you mentally draw lines on the chromaticity diagram connecting the 700nm, 525nm, and 450nm wavelengths of light on the chromaticity diagram (or refer back to Figure 1 ), it should be obvious that WideGamutRGB doesn't include all the real colors. There's a swath of greens and cyans, plus a swath of violet blues and magentas that are left out. Moving the blue primary down to 380nm would include the violet blues and magentas, but a whole lot more of the greens and cyans would be left out.
Unfortunately, it's impossible to pick three dots anywhere inside or on the edge of the horseshoe-shaped chromaticity diagram that can be connected to encompass the entire horseshoe. The easiest and most mathematically obvious way to get "all the colors" is the Identity profile, with chromaticity coordinates (1,0), (0,1), and (0,0), all of which fall outside the chromaticity diagram and hence represent imaginary colors.
The ACES/AllColors color space chromaticity coordinates are also mathematically obvious: Draw a straight line from the chromaticity coordinates for the reddest real red at 700nm, straight up to the (imaginary) coordinates (0,1) for the green primary. Draw a second line from the chromaticity coordinates for the reddest real red, through the coordinates for the bluest real (violet) blue at 380nm, and extend it until it crosses the y axis. Use high school algebra to calculate the slope and y-intercept, which is at the (imaginary) coordinates of roughly (x=0.0, y=-0.077).
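The "high school algebra" can be written out explicitly. The chromaticity coordinates below are approximate spectral-locus values for 700nm and 380nm light, and are assumptions made for this sketch:

```python
# Slope/intercept of the line through the reddest real red (700 nm)
# and the bluest real violet-blue (380 nm), extended to the y axis.
# Both chromaticities are approximate spectral-locus values.
red_700 = (0.7347, 0.2653)
blue_380 = (0.1741, 0.0050)

slope = (red_700[1] - blue_380[1]) / (red_700[0] - blue_380[0])
y_intercept = blue_380[1] - slope * blue_380[0]

print(round(y_intercept, 3))  # a little below zero, near -0.076
```

The resulting y-intercept sits just below the x axis, close to the ACES blue primary's published y chromaticity of roughly -0.077.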
The Identity color space and the ACES/AllColors color space both include all the real colors. Looking at the chromaticity diagram, it should be obvious that the ACES/AllColors color space is more efficient than the Identity color space, meaning it includes a lower percentage of imaginary colors.
Notice that the CIE-RGB, ProPhotoRGB, and ACES/AllColors-RGB color spaces all have (almost) the same Reddest Red chromaticity coordinates. Several other standard color spaces also use these same Reddest (real! not imaginary) Red chromaticity coordinates, including BestRGB and WideGamutRGB (the Y values vary, so the XYZ coordinates are not identical, just the xy chromaticity coordinates).
Moving on to the last topic covered in this tutorial, there's one more bit of information you need before you can make and use an ICC matrix RGB color space profile in a color managed work flow, and that's the color space profile's tone response curves, covered in Section F:
TRCs and perceptual uniformity
Perceptual uniformity in (not so) everyday experiences
Here's a thought experiment that might help demonstrate what perceptual uniformity means from an everyday point of view: Pretend that you are in a room. There are no windows and the light-tight door is shut and locked. The room has 25 ten-watt light bulbs, all grouped closely together and attached to the ceiling. The ceiling is low enough and the room is small enough that 250 watts of light makes the room reasonably bright, but large enough that 10 watts of light hardly lights the room at all.
Now imagine that none of the light bulbs are turned on, so the room is completely dark. You have a document in your hand that you need to read because it tells you how to unlock the door and get out of the room. So once you locate the first light switch (there are 25 light switches, too, and unfortunately they aren't all in one place), you start turning on light bulbs. That first light bulb makes a big difference (some light vs no light). But depending on how far away the ceiling is, you probably can't read the document yet, because light intensity falls off as the square of the distance from the light source.
Turning on the second light bulb makes things look perhaps twice as bright, because humans are very sensitive to minor changes in light when the light level is very low. Turning on the third light bulb makes things look still brighter, but not three times brighter than just turning on one light bulb. The reason is because the more the photometric luminance increases, the less of a perceptual difference a small "unit of change" makes. We can easily tell the difference between two lit 10-watt light bulbs vs three lit 10-watt light bulbs, in our hypothetical "small enough" locked room, but not between 24 and 25 lit 10-watt light bulbs.
To summarize the results of our thought experiment, our perception of changes in luminance isn't linear: equal increments of additional light don't produce equal increments in our perception of brightness. When the light level is low, "one more bulb" makes a big perceptual difference. When the light level is high enough, "one more bulb" makes essentially no perceptual difference at all. (As an exercise, try connecting the results of this "thought experiment" with the TRC and xicclu graphs in Figure 3 below.)
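The thought experiment above can be put into numbers using CIE L* (lightness), a standard model of perceived brightness. This is a minimal sketch, not anything from an actual color management library; it just evaluates the CIE L* formula for "bulbs turned on out of 25":

```python
# Sketch: CIE 1976 L* as a model of perceived brightness.
# Equal steps in linear luminance Y produce very unequal steps in L*.

def lstar(Y):
    """CIE 1976 L* for relative luminance Y in [0, 1]."""
    # Standard constants from the CIE L* definition.
    epsilon = 216 / 24389
    kappa = 24389 / 27
    if Y > epsilon:
        return 116 * Y ** (1 / 3) - 16
    return kappa * Y

# "One more 10-watt bulb" out of 25 is always a step of 0.04 in
# linear luminance, but the perceptual step shrinks dramatically:
for bulbs in (1, 2, 3, 24, 25):
    Y = bulbs / 25
    print(f"{bulbs:2d} bulbs -> L* = {lstar(Y):5.1f}")
```

Running this shows the first bulb contributing over 20 L* units of lightness, while going from 24 to 25 bulbs adds less than 2.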
What is a tone response curve?
Switching gears rather abruptly here, in an ICC profile, a tone response curve ("TRC") determines how fast a color goes from dark to light as the color's RGB values go from 0 to 1. Some TRCs are linear. Some TRCs are more or less perceptually uniform.
There are actually three TRC tags in an ICC profile, one each for the Red, Green, and Blue channels. Theoretically each channel of an RGB ICC profile can have its own TRC that doesn't match the TRCs in the other two channels. This is commonly done with look profiles that are intended to make an image "look prettier" simply by virtue of applying an ICC profile. But for the well behaved RGB matrix ICC profiles that we use for image editing in the digital darkroom, all three channels have exactly the same tone response curve.
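The idea of a per-channel curve can be sketched in a few lines. This is an illustration, not real ICC profile code: it applies the same hypothetical "true gamma" curve to each channel, as a well behaved working space profile would, and shows that the inverse curve recovers the linear values:

```python
# Sketch: how a profile's per-channel TRCs act on an RGB triplet.
# For a well behaved working space, all three curves are identical.

def apply_trc(rgb, trc):
    """Apply a tone response curve to each channel of an RGB triplet."""
    return tuple(trc(c) for c in rgb)

# A simple "true gamma" pair of curves, gamma = 2.2 (illustrative).
encode = lambda v: v ** (1 / 2.2)   # linear light -> encoded values
decode = lambda v: v ** 2.2         # encoded values -> linear light

linear = (0.25, 0.5, 1.0)
encoded = apply_trc(linear, encode)
roundtrip = apply_trc(encoded, decode)
print(encoded)
print(roundtrip)
```

Note that encoding lifts dark values (0.25 encodes to roughly 0.53), which is exactly the "more tone steps for the shadows" effect discussed later in this section.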
Commonly used TRCs
Although there are an infinite number of possible ICC profile tone response curves, only a few have found widespread use in working space ICC profiles:
- The linear gamma TRC is mathematically simple (value_out = value_in). There is no one single "linear light RGB" color space. Any RGB color space can be made into a "linear light" color space simply by giving it a linear gamma TRC in place of its regular TRC, hence linear light sRGB, linear light ProPhotoRGB, linear light Identity, etc. So "linear light RGB" or "linear gamma RGB" doesn't tell you which linear gamma RGB color space, it only tells you that the color space in question has a linear gamma tone response curve. The ACES color space is the only widely used color space that has a linear gamma TRC by design.
- Other "true gamma" curves. Besides the linear gamma TRC, two other commonly used "true gamma" TRCs are gamma=1.80 (for example, AppleRGB, ColorMatchRGB, and ProPhotoRGB) and gamma=2.20 (for example, AdobeRGB, BetaRGB, and WideGamutRGB). Of these, the gamma=2.2 TRC comes closest to being perceptually uniform; the gamma=2.0 TRC is the mathematically simplest nonlinear TRC.
- The mathematically inconvenient sRGB TRC is composed of a small linear segment (in the shadows) grafted onto a gamma=2.4 curve (everywhere else). The sRGB TRC is also close to being perceptually uniform and is approximately equal to the mathematically much simpler true gamma=2.20 TRC.
- The "L-star" curve is a perceptually uniform tone response curve based on the CIELAB L* channel. The L-star curve is used in the ECI-RGB color space. The L-star TRC is also mathematically inconvenient, relying as it does on the L* companding equations.
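The "mathematically inconvenient" sRGB TRC mentioned above can be written out explicitly. This sketch uses the constants from the sRGB specification (linear segment below a small threshold, gamma=2.4 power curve elsewhere) and compares the result against a plain gamma=2.2 curve:

```python
# Sketch of the sRGB piecewise TRC: a short linear segment in the
# deep shadows spliced onto a gamma = 2.4 power curve everywhere else.
# Constants are the standard ones from the sRGB specification.

def srgb_encode(v):
    """Linear light in [0, 1] -> sRGB-encoded value."""
    if v <= 0.0031308:
        return 12.92 * v
    return 1.055 * v ** (1 / 2.4) - 0.055

def srgb_decode(v):
    """sRGB-encoded value -> linear light."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

# The piecewise curve stays close to a plain gamma = 2.2 power curve:
for x in (0.1, 0.2, 0.5, 0.8):
    print(f"{x}: sRGB {srgb_encode(x):.4f} vs gamma 2.2 {x ** (1 / 2.2):.4f}")
```

The two thresholds (0.0031308 on the linear side, 0.04045 on the encoded side) are chosen so that the linear segment and the power curve meet with matching values, which is precisely what makes the curve awkward to work with analytically.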
Tone response curves and xicclu graphs
Figure 3 compares the linear gamma TRC with the exactly perceptually uniform L-star TRC and the approximately perceptually uniform sRGB TRC:
The main "take away" points from Figure 3 are:
- A linear gamma TRC represents the way real light in the real world actually combines and changes, which is to say, linearly — twice the light, twice the luminance (Y of xyY and XYZ). But the rate of change is perceptually very uneven.
- The L-star TRC is exactly perceptually uniform and the sRGB TRC is approximately perceptually uniform. A perceptually uniform rate of change requires a very nonlinear TRC and does not represent the way real light behaves.
If light is linear, why do so many familiar RGB color spaces use (approximately) perceptually uniform TRCs?
The reason so many familiar RGB color spaces use approximately or exactly perceptually uniform TRCs has to do with a little problem called posterization that plagues 8-bit image editing. Some background is required to explain what posterization is. Figure 4 compares three gradients, created respectively with the linear gamma TRC, the almost perceptually uniform sRGB TRC, and the perceptually uniform L-star TRC:
As you can see by looking at the linear gamma gradient in the top row of Figure 4, a linear gamma gradient distributes tone steps very unevenly. By "tone steps" I mean how many gradations are available to get from black to white. In an 8-bit integer image, there are 255 tone steps per channel. In a 16-bit integer image, there are 65535 tone steps per channel. In a floating point image, it depends on the processor and the type of floating point, but the answer is "lots".
Most of the tone steps in a linear gamma image are concentrated towards the highlights. There are correspondingly fewer tone steps available for the shadows and midtones. When digital imaging first got started back in the late 1990s, computers weren't powerful enough to handle more than 8 bits per channel. So to make the most of those 255 available tone steps, everyone used working color spaces with more or less perceptually uniform tone response curves. Otherwise shadow areas would have been posterized.
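The claim that linear encoding starves the shadows of tone steps can be checked directly. The sketch below (an illustration, using CIE L* as the measure of perceived lightness) counts how many of the 256 8-bit code values of a linearly encoded channel land in the darker half of the perceptual lightness range:

```python
# Sketch: how few 8-bit code values a linear encoding spends on the
# shadows and midtones, measured against CIE L* perceived lightness.

def lstar(Y):
    """CIE 1976 L* for relative luminance Y in [0, 1]."""
    epsilon, kappa = 216 / 24389, 24389 / 27
    return 116 * Y ** (1 / 3) - 16 if Y > epsilon else kappa * Y

# Count linear 8-bit codes whose lightness falls below mid-gray (L* = 50).
below_mid = sum(1 for n in range(256) if lstar(n / 255) < 50)
print(below_mid, "of 256 linear codes cover the darker half of the tones")
```

Only 47 of the 256 codes cover the entire darker half of the lightness range, leaving the remaining 209 codes for the lighter half; a perceptually uniform encoding would split them roughly 128/128.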
Posterization is the visually noticeable banding in an image that is caused by too few tone steps spread too far across an area in an image. Posterization from working with 8 bit images is the reason there are so many "film and print" working spaces (AdobeRGB, BetaRGB, BruceRGB, etc). If a color space is too large, shadows aren't the only areas in an image that can be affected by posterization (in Figure 1 above, compare the distance between the reddest WideGamut red and the greenest WideGamut green to the much smaller distance between the reddest sRGB red and the greenest sRGB green). So people kept trying to make a color space that was big enough to hold film and print colors without stretching a measly 255 tone steps across too large a color space.
Figure 5 shows posterization in an 8-bit linear gamma gradient:
Color images are slightly less prone to the appearance of posterization because each channel has its own 255 tone steps, which hopefully are "out of sync" with the tone steps in the other two channels. However, the shadow areas of each channel are still posterized, and depending on the image, the posterization can be obvious, as shown in Figure 6:
In a nutshell, when digital imaging first got started, computers were too slow to allow for anything other than 8-bit image editing. At 8 bits, there aren't enough tone steps in the shadows of a linear gamma image to allow for visually smooth tonal transitions. So to avoid posterization, RGB working color spaces had more or less perceptually uniform tone response curves.
There seems to be a rumor going around that 16-bit images suffer from posterization in linear gamma color spaces, but so far I have not found that to be true, even when editing in the super-sized Identity color space. I suspect the rumor is based on source code that uses LCMS optimizations regardless of the image color space gamma. Certainly, in 32-bit floating point imaging "not enough tone steps to avoid posterization" is no longer a reason to choose a small working space over a large working space.
Summary of XYZ, RGB, ICC, xyY, and TRCs
- The 1931 CIEXYZ reference color space ("XYZ") is based on color matching experiments done in the 1920s. Y measures relative luminance; X and Z are based on how the cones in the human eye respond to light waves to make color. xyY is a mathematical transform of XYZ that separates chromaticity from luminance. Some XYZ/xyY coordinates represent real colors; the rest represent imaginary colors.
- The familiar horseshoe-shaped chromaticity diagram shows the chromaticities of all real colors. The xy coordinates of an RGB matrix color space's red, green, and blue primaries can be plotted on the xy chromaticity diagram. The chromaticity diagram shows that an RGB color space that's large enough to hold all real world colors also must hold some imaginary colors. There are two mathematically obvious RGB color spaces that hold all real colors: the Identity color space and the ACES/All-Colors color space. The Identity color space is mathematically simpler. The ACES/All-Colors color space is mathematically more efficient, meaning it has a lower percentage of imaginary colors than the Identity color space.
- A matrix RGB color space is a convenient subset of all XYZ colors. A matrix RGB color space is defined by its black and white points and red, green and blue primaries as located in XYZ space. Different RGB color spaces were invented for different applications. For example, the sRGB D65 color space describes a D65 display device that's been calibrated to match sRGB. The sRGB ICC profile color space describes a D50-adapted ICC working space profile with a color gamut that matches the color gamut of the sRGB D65 color space. The various "film and print" RGB color spaces try to hold the colors that can be captured on film and printed on paper. None of the "film and print" color spaces hold all the colors that can be captured by today's digital cameras.
- An ICC profile matrix RGB color space is a matrix RGB color space that has been adapted to use the D50 reference white. The ICC chose D50 because it's the preferred reference white for evaluating prints on paper. For a well behaved, "working" RGB ICC color space profile, black (R=G=B=0) has the XYZ coordinates (0.0000, 0.0000, 0.0000) and white (R=G=B=1) has the XYZ coordinates (0.9642, 1.0000, 0.8249). Well behaved ICC RGB color space profiles have the additional feature that if R=G=B, the resulting color is neutral gray.
- The technological limitations of editing images on computers in the 1990s meant digital imaging was 8-bit imaging. The constraints of 8-bit imaging meant RGB working color spaces had more or less perceptually uniform tone response curves, despite the disadvantages of editing in nonlinear color spaces. It also fueled a quest for a perfectly sized color space that was small enough to not cause posterization while editing 8-bit images, and large enough to hold various sets of "film and print" colors. For 32-bit floating point processing in the 21st century, editing images in a linear gamma color space that's large enough to hold "all the colors" won't cause banding.
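The "well behaved" conditions from the summary above are easy to verify numerically. This sketch assumes the commonly published Bradford-adapted sRGB-to-XYZ D50 matrix (values rounded to seven digits; treat them as illustrative): multiplying it by RGB = (1, 1, 1) should land on the ICC D50 white point, and RGB = (0, 0, 0) on XYZ black.

```python
# Sketch: checking that a D50-adapted matrix working space profile is
# "well behaved". Matrix values are the commonly published
# Bradford-adapted sRGB-to-XYZ D50 matrix (rounded, illustrative).

SRGB_TO_XYZ_D50 = [
    [0.4360747, 0.3850649, 0.1430804],
    [0.2225045, 0.7168786, 0.0606169],
    [0.0139322, 0.0971045, 0.7141733],
]

def rgb_to_xyz(rgb, matrix):
    """Linear RGB triplet -> XYZ via a 3x3 profile matrix."""
    return tuple(sum(row[i] * rgb[i] for i in range(3)) for row in matrix)

white = rgb_to_xyz((1.0, 1.0, 1.0), SRGB_TO_XYZ_D50)  # ~ (0.9642, 1.0000, 0.8249)
black = rgb_to_xyz((0.0, 0.0, 0.0), SRGB_TO_XYZ_D50)  # exactly (0, 0, 0)
print(white)
print(black)
```

Each row of the matrix sums to the corresponding D50 white point coordinate (to within rounding), which is what guarantees that R=G=B maps onto the neutral gray axis.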
Historical perspective on linear gamma editing:
As soon as computers had enough RAM and fast enough processors, 3D rendering software began switching over to linear gamma image editing, because getting realistic color and tonal gradations in a nonlinear color space is somewhere between difficult and impossible. For a while, some rendering software programs used horrendously complicated workarounds, but those workarounds were eventually abandoned as not worth the effort.
Over in the world of 2D image editing, 16-bit image editing has been around for quite a few years now. But 2D image editors have been slow to accommodate linear gamma image editing. Adobe is perhaps partly to blame for not leading the way ten years ago. The uproar over claims made by an early champion of linear gamma image editing, Timo Autokari, also might be partly to blame, because the furor over what he got wrong obscured the value of what he got right. On the one hand, Timo made some arguments based on what seems to have been some fundamental misunderstandings regarding displaying images on monitors and the limitations of 8-bit image editing. On the other hand, some of his concrete examples of the advantages of linear gamma image editing were exactly right. And on the third hand (any discussion of Timo needs at least three hands), for some reason Timo seemed to upset people. A lot.
Timo's website is gone, though still worth perusing through the Wayback Machine archives of The Accurate Image Manipulation website. Probably the most balanced remaining online discussion of Timo's actual linear gamma image editing techniques is the Dan Margulis Applied Color Theory thread "Linear Gamma (Gamma 1.0)", wherein Andrew Rodney quips "Timo is somewhat of a joke to the 10th floor at Adobe (where all the Photoshop engineers hang)". Now that Adobe LightRoom boasts of its linear gamma image processing, hopefully the quip from Andrew Rodney has given Timo a few chuckles in return.
This Completely Painless Programmer's Guide to XYZ, RGB, ICC, xyY, and TRCs by Elle Stone is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.