The vibrancy of a color is a perceived quality, somewhat similar to Saturation: one could think of it as how pure, bright, and high-chroma the color looks. It is not an absolute measure (unlike Saturation) and there is no agreed-upon definition of what it means to change the vibrancy of a color.

Having said that, we still want to draw a distinction between “changing a color's Saturation” and “changing a color's Vibrancy” when talking about color corrections of a given image. Both increase the color's Saturation, but the former behaves like a linear Gain increase, applied uniformly to every color, while the latter uses a sliding scale that adjusts the magnitude of change based on the current Saturation level: the highest multiplication is applied to low-Saturation colors, and highly vibrant (highly saturated, vivid) colors are adjusted on a progressively smaller scale.

RGB color space

To adjust the saturation of a pixel in RGB space we will follow this general approach:

  1. create a desaturated version of the pixel by converting it to grayscale
  2. interpolate/extrapolate between the original and desaturated pixel


YUV color space

The goal is to increase the Saturation of each pixel based on its current Saturation level, with an increase factor inversely proportional to the current value. Given the range of values for U and V \(\in[-128..+127]\), the increase factor is highest around values of 0, and as the value nears either end of the range the multiplication coefficient approaches \(1.0\).

There are a few ways to implement this, listed below.

Linear estimation

Let's remember that the YUV color space uses the pair of U and V values to represent all colors on a 2D Cartesian plane, where the center \((0, 0)\) represents pure grey (or white, or black, depending on the luminosity Y). Both U and V are bound to the \([-128..+127]\) range, and we can simply use the one “closest” to the outer edge as our baseline for calculating the multiplication coefficient:

double scale(int _u, int _v, double _vib){ // both _u and _v are in range [0..255]
  // center-normalize the U and V
  const unsigned int cu = abs(128 - _u);
  const unsigned int cv = abs(128 - _v);
  // use the one that is "further from the center"
  return 1. + (128 - max(cu, cv)) / 128. * _vib;
}

Then we just apply the scale to both U and V components, effectively scaling the color vector (preserving the color's Hue but changing its Saturation).

// pseudo-code
void vibrancy(/*array of pixels*/image, double _vib){
  for(auto & pixel: image){
    const auto k = scale(pixel.u, pixel.v, _vib);
    // of course both new values have to be properly capped (omitted here for readability)
    pixel.u = (pixel.u - 128) * k + 128;
    pixel.v = (pixel.v - 128) * k + 128;
  }
}

Vector's magnitude

Another way to gauge “how saturated is the color” would be to calculate the magnitude of the color vector, represented by its U and V components with the origin at \((0, 0)\), using the well-known Pythagorean formula.

double scale(int _u, int _v, double _vib){
  // center-normalize the U and V
  const unsigned int cu = abs(128 - _u);
  const unsigned int cv = abs(128 - _v);
  // use the vector's magnitude, capped at 128 to ignore the corners of the U-V square
  const auto len = min(128., sqrt(double(cu*cu + cv*cv)));
  return 1. + (128 - len) / 128. * _vib;
}

This approach has one advantage and two glaring drawbacks compared to the previous one:

  • Advantage: the vector's magnitude is a more precise representation of the color's Saturation than either U or V component in isolation
  • Disadvantage: the number of calculations needed to compute the multiplication coefficient is noticeably higher (a square root per pixel instead of a simple max) than in the linear approach
  • Disadvantage: colors “in the corners” get a flat “no increase” result, since they lie outside the circle of radius 128

When tested in the lab, both implementations produced very similar results, which were not visually distinguishable even at very high Vibrancy values (up to 7.0 and beyond(!), both positive and negative).


There are, of course, many other approaches one could take in figuring out the multiplication coefficient based on the current Saturation of the pixel, which we are not going to discuss here.

Possible optimization

(This is valid for CPU-type architectures and is not very applicable to FPGAs, where general-purpose RAM is a big limiting factor)

There are over 8 million pixels in a 4K image but only 65,536 (256×256) possible transformations of either the U or V component of the YUV triplet, so it makes sense to pre-calculate those transformations and use the result as a LUT (provided it is faster to index the memory than to re-calculate the value, of course).

A sample implementation of this approach might look like the following (in pseudo-code):

void vibrancy(image, double _vib){
  uint8_t lut[256][256] = {}; // assuming 8-bit U and V components
  for(int i = 0; i < 256; ++i){       // for each scaler
    const auto scale = (128 - abs(i - 128)) / 128. * _vib;
    for(int j = 0; j < 256; ++j)      // for each scalee
      lut[i][j] = cap_val(round(j + (j - 128) * scale));
  }
  for(auto & pixel: image){
    // figure out whether to use U or V as the scaler value; encoding the
    // distance from the center as 128 - offset keeps the index within [0..128]
    const auto scaler = 128 - max(abs(pixel.u - 128), abs(pixel.v - 128));
    // use the same scaler for both U and V
    pixel.u = lut[scaler][pixel.u];
    pixel.v = lut[scaler][pixel.v];
  }
}

HSL color space

Moving to HSL for image processing has a wonderful advantage of “separation of concerns”, where the 3 visual components are treated independently of each other. Correcting the Saturation is just one of the areas where moving into the HSL color space provides a notable simplification. As the \(S\) component of HSL is “just” a linear scalar value, we don't need to concern ourselves with the U-V square or the RGB cube and can directly approach calculating the new \(S\).

As described before, the main idea of Vibrancy is that the least saturated colors get the biggest relative boost in Saturation, while highly saturated colors get a progressively smaller boost, down to “no boost” for 100% saturated colors.

The scale (boost) multiplier therefore depends on the vibrancy factor and the pixel's saturation: \[ scale = 1 + \frac{100 - saturation}{100} * (vibrancy - 1) \\ vibrancy \ge 1 \\ saturation \in [0..100]\% \] (vibrancy values below \(1\) would desaturate rather than boost).

double scale(int _s, double _vib){ // _s is in range [0..100]%
  return 1. + (100 - _s) / 100. * (_vib - 1);
}

Once the scale (boost) value is calculated, just apply it to the pixels:

// pseudo-code
void vibrancy(/*array of pixels*/image, double _vib){
  for(auto & pixel: image){
    // cap at 100% omitted here for readability
    pixel.sat *= scale(pixel.sat, _vib);
  }
}