Fundamentals of Human Vision and Vision Modeling
Published in H.R. Wu and K.R. Rao (Eds.), Digital Video Image Quality and Perceptual Coding, 2017
Ethan D. Montag, Mark D. Fairchild
Color appearance models [Fai98] attempt to assign values to the color attributes of a sample by taking into account the viewing conditions under which the sample is observed, so that colors with corresponding appearance (but different tristimulus values) can be predicted. These models generally consist of a chromatic-adaptation transform that adjusts for the viewing conditions (e.g., illumination, white point, background, and surround) and calculations of at least the relative color attributes. More complex models include predictors of brightness and colorfulness and may predict color appearance phenomena such as changes in colorfulness and contrast with luminance [Fai98]. Color spaces can then be constructed based on the coordinates of the attributes derived in the model. The CIECAM02 color appearance model [MFH+02] is an example of a color appearance model that predicts the relative and absolute color appearance attributes based on specifying the surround conditions (average, dim, or dark), the luminance of the adapting field, the tristimulus values of the reference white point, and the tristimulus values of the sample.
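As an illustration of the chromatic-adaptation step described above, the following sketch implements the CAT02 transform used at the start of CIECAM02 with NumPy: the sample and white-point tristimulus values are mapped to sharpened cone-like responses, a degree of adaptation D is computed from the surround factor F and the adapting luminance L_A, and the responses are scaled toward the reference white. The function and variable names are ours, not from the cited sources, and the later stages of the model are omitted.

```python
import numpy as np

# CAT02 matrix mapping CIE XYZ to sharpened cone-like RGB responses (CIECAM02).
M_CAT02 = np.array([
    [ 0.7328,  0.4296, -0.1624],
    [-0.7036,  1.6975,  0.0061],
    [ 0.0030,  0.0136,  0.9834],
])

# Surround factor F as specified by CIECAM02 for the three surround conditions.
SURROUND_F = {"average": 1.0, "dim": 0.9, "dark": 0.8}

def cat02_adapt(XYZ, XYZ_w, L_A, surround="average"):
    """Sketch of the CIECAM02 chromatic-adaptation step (CAT02).

    XYZ   : tristimulus values of the sample (white nominally Y = 100).
    XYZ_w : tristimulus values of the adopted white point.
    L_A   : luminance of the adapting field in cd/m^2.
    Returns the chromatically adapted cone-like responses RGB_c.
    """
    F = SURROUND_F[surround]

    # Degree of adaptation: D -> 1 for complete adaptation, 0 for none.
    D = F * (1.0 - (1.0 / 3.6) * np.exp(-(L_A + 42.0) / 92.0))
    D = np.clip(D, 0.0, 1.0)

    # Transform sample and white point into CAT02 space.
    RGB = M_CAT02 @ np.asarray(XYZ, dtype=float)
    RGB_w = M_CAT02 @ np.asarray(XYZ_w, dtype=float)

    # Scale each channel toward the reference white (von Kries-style adaptation).
    Y_w = XYZ_w[1]
    RGB_c = (Y_w * D / RGB_w + 1.0 - D) * RGB
    return RGB_c

# Example: a mid-grey sample viewed under a D65 white with an average surround.
print(cat02_adapt([19.01, 20.00, 21.78], [95.05, 100.0, 108.88], L_A=318.31))
```

In the full model, RGB_c would subsequently be converted to Hunt-Pointer-Estevez cone responses and compressed nonlinearly before the appearance correlates (lightness J, chroma C, hue h, brightness Q, colorfulness M) are computed.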
Brightness Model for Neutral Self-Luminous Stimuli and Backgrounds
Published in LEUKOS, 2018
Stijn Hermans, Kevin A. G. Smet, Peter Hanselaer
A number of CAMs have been derived to describe the perception of surface colors. Applying these CAMs requires knowledge of the characteristics of the light source illuminating the target and background [Fairchild 2005; Hunt and others 2011]. The most widely used, CIECAM02, includes chromatic adaptation, luminance adaptation, cone saturation and noise, and the influence of background and surround [Moroney and others 2002]. Li and others [2016] revised the CIECAM02 model by merging the chromatic adaptation transform into the cone response transform and by adopting the two-step chromatic adaptation transform proposed by Smet and others [2017a, 2017b].
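To make the "luminance adaptation, cone saturation and noise" ingredients concrete, the sketch below (our notation, not taken from the cited papers) computes the CIECAM02 luminance-level adaptation factor F_L and applies the model's post-adaptation nonlinear compression, whose saturating form models cone saturation and whose additive 0.1 term acts as a noise floor. In the full model this compression is applied to the chromatically adapted responses after conversion to Hunt-Pointer-Estevez cone space, and negative responses are handled with a sign convention; both are omitted here for brevity.

```python
import numpy as np

def luminance_adaptation_factor(L_A):
    """CIECAM02 luminance-level adaptation factor F_L for adapting luminance L_A (cd/m^2)."""
    k = 1.0 / (5.0 * L_A + 1.0)
    return 0.2 * k**4 * (5.0 * L_A) + 0.1 * (1.0 - k**4) ** 2 * (5.0 * L_A) ** (1.0 / 3.0)

def post_adaptation_response(rgb_prime, F_L):
    """CIECAM02 post-adaptation compression of the (positive) cone responses.

    The hyperbolic (Michaelis-Menten-like) form saturates at high signal levels,
    modeling cone saturation; the additive 0.1 is the model's noise term.
    """
    rgb_prime = np.asarray(rgb_prime, dtype=float)
    x = (F_L * rgb_prime / 100.0) ** 0.42
    return 400.0 * x / (27.13 + x) + 0.1

# Example: the compressed response flattens out as the input grows,
# mimicking cone saturation at high luminance levels.
F_L = luminance_adaptation_factor(318.31)
print(post_adaptation_response([20.0, 200.0, 2000.0], F_L))
```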