Choosing color spaces and formats¶
When working in a digital content creation application, it’s important to know and master how colors are managed by the various features of the application being used, as well as throughout the production pipeline of which it is a part.
This second part is intended to concretely explain how to organize this production pipeline with regard to colors, and to help configure it.
Theory¶
Before setting up the various applications, it’s important to understand that colors are regularly converted during the production of images. Indeed, at each stage from the generation of a color to its display on any device, each piece of software and hardware involved works in its own space.
Controlling the color production pipeline therefore doesn’t mean carefully choosing a single color space, but rather being aware of the various conversions and color spaces coming into play at each stage, within each application as much as between applications.
This control will not guarantee that the colors will be correctly reproduced on the viewer’s device (TV, computer screen, telephone, cinema screen…), but it at least allows control throughout production and ensures that the delivered images meet current standards. It’s then up to the broadcaster to take over at the end with a correctly configured pipeline of its own.
Journey of a color¶
Let’s follow the path that a color must travel before being correctly displayed.
A color generated by a first application and to be exported to another application, via an intermediate file, is likely to undergo two conversions: from the first application to the file, and from the file to the second application. This operation is repeated at each stage of the production pipeline, up to broadcast, where the broadcast application (and hardware) converts what it receives into the broadcast color space.
Each application must therefore be correctly informed about the color space of the files it imports, in order to be able to carry out the correct conversion to its own color space. Likewise, at the time of export, the correct conversion from the color space of the application to that of the exported file must be carried out.
Some file types, such as openEXR1, make it possible to minimize the number of conversions required by storing only raw data without color space information (provided that the different applications use colors in the same standard format as openEXR, that is to say RGB with 32 bpc in floating point* format). But even in this case, the problem of interpreting this data remains, i.e. converting it for display.
But within an application itself, many conversions can take place:
- From the color space of the imported file to the color space of the application.
- From the color space of the application to that of the computer screen, for preview.
- From the color space of the application to that of the output file.
Indeed, all these color spaces aren’t necessarily the same…
Each “brick” of the application using colors has a color space associated with it. Let’s see these different bricks and some recommendations.
Hint
Not all applications necessarily allow access to all the settings for all spaces of these different elements. The “imposed” settings can be more or less practical and intelligent depending on the application…
In applications, the different stages of color management will have to be adjusted:
- Workspace
- Input (imports)
- Display
- Color pickers
- Output (intermediate and final)
Note
All the explanations that follow apply both to design applications (3D, drawing, compositing, retouching, etc.) and to players (image display, video players, etc.)
Workspace (scene referred)¶
For any application that manipulates images, it’s essential to know how colors are generated and calculated.
There are two important characteristics to check:
- Is the space used linear*? And if it isn’t, what transfer curve or gamma is used? Most applications work either in a linear space (notably all 3D rendering engines, whose linearity guarantees physical accuracy) or with the sRGB transfer curve, the standard space for digital displays; some others leave the choice to the user.
- Are the values integer* or floating point*? In other words, can the intensities go beyond the limitations of the display? 3D renderers all calculate with floating point numbers, to simulate physical lights. As for other applications, it all depends… Some leave the choice, for others the choice is implicit (for example, Adobe After Effects works with integer values when the project is set to 8 or 16 bits per channel, but with floating point values at 32 bits per channel), and finally some leave no choice. It’s easy to guess what the application does by checking whether the color values given by an eyedropper tool are integers or not.
These parameters form the first step of what will be called the workspace (scene referred), to which a second, equally important step must be added: once generated in three channels, R, G and B (in all applications), how are the colors converted for display on the screen?
If colors are given in a linear space, they will therefore necessarily be delinearized, to use the transfer curve of the defined display space2 (see below). This process is relatively simple and hassle-free.
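As an illustration of this delinearization, the standard sRGB transfer curve (the piecewise function from IEC 61966-2-1) can be sketched in a few lines. This is a minimal scalar version for a single channel value, not the implementation any particular application ships:

```python
def linear_to_srgb(c):
    """Apply the sRGB transfer curve to one linear value in [0, 1]."""
    if c <= 0.0031308:
        return 12.92 * c  # linear toe for very dark values
    return 1.055 * c ** (1 / 2.4) - 0.055


# A linear mid-intensity of 0.5 becomes roughly 0.735 once encoded,
# which is why linear data looks too dark if displayed without this curve.
encoded = linear_to_srgb(0.5)
```

Applying this per channel is all the “delinearization” step amounts to, which is why it is simple and hassle-free compared to the dynamic range problem discussed next.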
Concerning the transition from floating point values to integer values, also necessary for display, the process is more delicate and involves important choices by the user3. It is indeed necessary to define what is done with intensities greater than 1.0, i.e. values more intense than the display limit. This process is quite similar to what happens in a still or video camera when capturing light: faced with unlimited intensities, the device must reduce these values into a range limited by the physical capabilities of the sensor and the technical parameters of its internal software and recording format. The choices to make are similar: the exposure can be corrected* and a curve can be applied to “compress” the values so that they fit within our limited range. Without these choices, values greater than the limit (1.0, in the case of digital images) are simply clamped, i.e. overexposed, to the integer limit value (255 with 8 bits per channel).
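The difference between naive clamping and a compression curve can be sketched as follows. The Reinhard-style curve used here is only one illustrative example of such a compression, not the transform any particular system actually applies:

```python
def clamp_to_8bit(v):
    """Naive conversion: anything above 1.0 is clipped to 255 (overexposed)."""
    return round(min(max(v, 0.0), 1.0) * 255)


def compress(v):
    """Illustrative global tone curve mapping [0, inf) into [0, 1)."""
    return v / (1.0 + v)
```

With clamping, intensities of 2.0 and 4.0 both end up at 255 and their difference is lost; the compression curve keeps them distinct (about 0.67 and 0.8), preserving detail in the highlights.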
Technically, this choice is generally made by selecting a working color space, called scene referred, and, depending on the system in use and the available parameters, an additional transformation to the display space. This is where the importance of having a wide gamut space and a fine transformation comes into play to allow the dynamic range* (the intensity range) to be compressed within the limits, without giving the impression of overexposure and obtaining a natural image.
The choice of this space therefore greatly influences the appearance of the generated colors and the way in which they are converted to the display space (and therefore the final result). This choice is technical, obviously, to obtain the best-quality image (without over- or underexposed areas), but it’s also an artistic and aesthetic choice.
Warning
Although it doesn’t change the “raw” data and is only a conversion of already calculated data, the workspace is chosen before starting to work; indeed, it’s once the space has been chosen that the work on colors can be done in this specific space. Changing workspaces once work has progressed makes no sense; it would require readjusting all lights, all color settings… It’s like changing the camera model after the lighting has been set up for it on a shooting stage.
Input¶
Each time a file or other external element is imported, the application must correctly interpret (know) the color space of the element, in order to convert it to its workspace.
There are then two possibilities:
- Either the files respect the most common standards (when they exist…), and the application interprets the files correctly by default.
- Or the files, or the application itself, do not respect these standards, or no standard exists, and the application must then allow the interpretation of the data to be modified manually, to specify the imported color space4.
In any case, in order to control the production, it’s imperative to control how applications interpret colors during import; some will systematically “get it wrong” on certain files, and you will then have to remember to correct the interpretation on each import (or automate it)5.
See Intermediate Output and Final Output for more information on file-specific color spaces, and A few standards for files for a list of the most common standards.
Display¶
It is essential to keep in mind that the working space of the application is most often different from the color space of the display. After applying the working, scene referred, color space to the computed data, the application must associate a conversion to the color space used for display.
There are several elements to take into account for this display:
- The color space of the screen itself,
- The adjustments of the screen which can deteriorate the colors,
- The color profile applied to the screen by the operating system,
- The conversion carried out by the application from its workspace to that of the display.
See section Screen Calibration for more details on the subject.
Screen space¶
Each screen displays colors in a predefined color space chosen by the manufacturer for each specific model of screen.
There are three main categories of screens:
- Computer screens (and projectors)
- Televisions
- Phones, tablets, etc.
Following these categories, most displays use these color spaces:
- Computer: sRGB, although some displays (often called HDR) are also capable of displaying P3 colors; P3 displays also display sRGB, which is entirely contained “within” the P3. Computer screens display the full/pc range of colors (cf. Full range / Limited / TV / PC ?).
- Televisions: Rec.709, or sometimes sRGB (adjustable), or sometimes other spaces when they are HDR. TVs display the limited/tv range of colors (cf. Full range / Limited / TV / PC ?).
- Phones, tablets, etc. : sRGB, although some (rare) phones and tablets are also able to display P3 colors. These devices display the full range (full/pc) of colors (cf. Full range / Limited / TV / PC ?).
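As a numeric sketch of this full-vs-limited distinction: in 8 bits, the limited (TV) range maps black to code 16 and white to code 235, while the full (PC) range uses the whole 0 to 255 scale. The helper names below are hypothetical, for illustration only:

```python
def encode_limited(v):
    """Limited ("TV") 8-bit range: 0.0 -> code 16, 1.0 -> code 235."""
    return round(16 + v * 219)


def encode_full(v):
    """Full ("PC") 8-bit range: 0.0 -> code 0, 1.0 -> code 255."""
    return round(v * 255)
```

This is why a full-range signal interpreted as limited (or vice versa) looks washed out or overly contrasted: the same codes map to different intensities in each convention.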
It should be noted that screens displaying exactly the advertised color space are rare, and most generate (more or less) small variations; these variations are generally largely corrected by a controlled calibration. Cf. screen calibration.
Settings and color profiles¶
The vast majority of screens offer several color settings on the screen itself, notably via brightness and contrast settings, supplemented, depending on the screen, by red, green and blue gammas*, and sometimes other settings.
These adjustments sometimes make it possible to correct the biggest defects of the screens as they are delivered from the factory (provided you have an efficient calibration process), and can be supplemented by finer adjustments, both via the color profile applied by the operating system, and possibly adjustments at the graphics card driver level.
Warning
Many screens offer “eco”, “auto”, “gaming” modes, etc., which automatically adapt their settings depending on the activity, the type of signal received, etc. In a production pipeline where colors are managed, it’s imperative to deactivate these different modes which change the display in an unpredictable way.
Knowing these settings is important for controlling the correct display of the colors on the workstation.
It should also be noted that these settings should be checked (and adjusted) regularly; the color display may vary with the aging of the screen, the ambient temperature, etc.
Cf. screen calibration for detailed explanations of the screen settings and how to adjust them.
Within the application¶
Once the screen is installed and properly set up (or as best as possible), all that is left to do is to select the correct display profile in the application.
Most of the time, a simple display option allows you to specify whether the screen is sRGB, Rec.709, P3 or something else; sometimes no settings are available and the application relies on the operating system.
It should be kept in mind that the application continues to work in its own space, which doesn’t depend on the one in which the colors are displayed, and that the file output doesn’t depend on this display space either; on the other hand, a bad choice of display leads to bad choices of colors and therefore unexpected and non-standard variations during output!
The worst mistake is, for example, to choose the wrong display space and then believe that it is the output space which is different from what we expected. This error then leads to changing the interpretation of the colors when importing into the next application in an attempt to compensate, and introducing bad corrections while completely losing control of the production pipeline.
Soft-Proofing¶
Some applications offer, in addition to controlling conversions to the display space, a simulation or soft proofing*, which consists of carrying out an intermediate conversion to the intended output space but during the work in progress, before finally converting to display space. When working for specific outputs, it can be useful to activate this kind of tool and thus check the result after the multiple conversions that the colors will undergo until the final format.
This method is particularly useful for simulating the result of printing in a CMYK space for example, but also the display of a video in its output format.
However, soft-proofing is just a verification method and one can often do without it (especially in video).
Cf. Soft-Proofing for more details on the subject.
Color pickers¶
In some applications, color pickers may have their own color space.
Most often, they are either in the application working color space, which makes them difficult to use when the space is linear, or in the display space, which is more practical.
Non-linear spaces are preferred to facilitate the choice of colors; having color pickers in sRGB also makes it easy to retrieve colors from other applications, from images, etc. A conversion to the application working space is then carried out after the color is selected.
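That conversion after picking can be sketched as the inverse sRGB transfer curve. This minimal scalar version assumes the working space shares sRGB primaries, so only the transfer curve differs:

```python
def srgb_to_linear(c):
    """Invert the sRGB transfer curve for one value in [0, 1]."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4


# A mid-grey picked as 128/255 in sRGB is only ~0.216 in linear light,
# which is why pickers working directly in linear feel unintuitive.
picked = 128 / 255
linear = srgb_to_linear(picked)
```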
Intermediate output¶
When exporting intermediate files, which will be used for further production, the goal is to lose as little information as possible and to keep as much data as possible for further work.
In this case, the simplest thing is, if possible, to export files in the application working space (scene referred). To do this, the file format best able to store any color information is openEXR (which is supposed to use linear spaces). It’s quite possible to use other formats, but in that case either the choice of space will not be standard and may be misinterpreted later, or unnecessary conversions are introduced, or data is lost by having to use a smaller or non-linear space.
If it’s impossible to export in the working color space and in openEXR (or other format allowing to keep the right space), RGB formats should be favored (and YUV should be avoided, or in any case 4:4:4 subsampling should be used).
When the working space is linear but the output space is not (or vice versa), a loss of precision and quality occurs; in this case it’s absolutely necessary that the bit depth of the working space be greater than that of the output space (working in linear 32 bpc to output in non-linear 16 bpc, for example).
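This precision loss can be sketched with a simple round trip through an integer encoding. The helper below is hypothetical, just to show that a faint linear shadow value survives 16 bpc but vanishes entirely at 8 bpc:

```python
def roundtrip(v, bits):
    """Quantize v in [0, 1] to an integer code of the given bit depth and back."""
    levels = (1 << bits) - 1
    return round(v * levels) / levels


shadow = 0.001  # a faint but visible linear intensity

# At 8 bpc the value rounds to code 0 and is lost; at 16 bpc it is preserved.
lost = roundtrip(shadow, 8)        # 0.0
kept = roundtrip(shadow, 16)       # ~0.00101
```

A non-linear transfer curve mitigates this by spending more codes on dark values, which is why linear data in particular needs the extra bit depth.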
Final output¶
During the final output, the standard corresponding to the delivery should obviously be respected as closely as possible, or a specific request from the broadcaster must be fulfilled.
See the Practical / A few standards section for a list of the most common standards.
Most final outputs will be in color spaces dedicated to display, and therefore with a non-linear transfer curve; a loss of precision and quality occurs when moving from a linear working space to a non-linear display space, so it’s important in this case that the working space have a higher bit depth than the final output (working in 16 bpc for an 8 bpc output, for example).
1. It is often said, wrongly, that openEXR stores data in a linear RGB space. In reality, the openEXR format does contain the data of three channels which it names R, G and B (or sometimes Y, U and V) in a linear space (if the standard is respected), but doesn’t contain any information regarding the primaries corresponding to its channels. It’s therefore at the time of importing files that the application must be “told” which primaries, i.e. which color space, to use to interpret this data. ↩
2. In reality, without any indication to the contrary, most applications convert these colors not directly to the screen space (whatever that is), but to the sRGB standard (and let the operating system handle the rest). ↩
3. In this case, the choice of the working color space. ↩
4. If an application doesn’t allow you to change the color space during import, expect unexpected color variations during import. You will then have to guess where the application is “wrong” in order to manually perform a color correction to restore the original colors (most often simply via a gamma correction* or the application of a LUT*). Note that such an application doesn’t really have its place in a production pipeline where one seeks to control the color… ↩
5. Not all applications allow the automation of color management (for example, Adobe After Effects, in 2024, doesn’t have an API for this specific point). ↩