By convention, Green serves as the reference channel: its gain is left alone and we never adjust it for color normalization.
Quite often end users don't have the patience or the expertise (or both) to properly set up the white balance of the captured scene and prefer to rely on the camera "just knowing what to do".
Many algorithms are available today for color normalization: some are very sophisticated (and computationally expensive), while others are far cheaper but so simple that they are easily thrown off by things like a huge green-screen background or a bright, super-yellow T-shirt.
Below is one of those "simplistic" approaches with a little twist that makes it an excellent option for real-time image processing on even the most basic of devices.
The general idea is based on the assumption that if one takes a look around and measures the color of every pixel, the average of all those values should come out close enough to a pure grey. This idea has a name: the "grey world" assumption.
Unfortunately, the world around us is only grey in a more-or-less neutral environment. Such is not the case with large patches of high-contrast color (like "green screens") or even relatively small patches of "highly toxic" colors, like bright yellow. These conditions skew the scene's average color enough to throw the algorithm off and produce horrible color artifacts instead of normalizing the colors to their perceived values.
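As a baseline, the plain grey-world correction is only a few lines. The RGB formulation below is a minimal sketch: the precomputed channel averages and all names are illustrative, not taken from the original code.

```cpp
// Naive grey-world: scale each channel so that all three channel
// averages meet at the overall mean. The averages are assumed to be
// precomputed over the whole frame.
struct Gains { double r, g, b; };

Gains greyWorldGains(double avgR, double avgG, double avgB) {
    const double grey = (avgR + avgG + avgB) / 3.0;
    return { grey / avgR, grey / avgG, grey / avgB };
}
```

A scene that averages too blue (large `avgB`) yields a blue gain below 1, pulling that channel back toward grey.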
To combat the problem of "toxic colors", a simple twist is introduced: ignore all the pixels that are too saturated. What exactly counts as "too saturated" is up for debate and quite subjective. In our experiments, the threshold worked best when chosen so that around 25-35% of all the pixels end up in the grey world calculation. For regular room environments that means a threshold between 20 and 30, adjusted down if the scene is poorly lit, or up if it has lots of highly saturated bright colors.
Once we throw away the pixels that would otherwise drag the normalization in the wrong direction, we proceed with the "usual" grey world color normalization.
Below are a few blocks of steps that describe the process in a top-down approach (which is generally easier to comprehend than the bottom-up):
For each pixel:
- Calculate that pixel's "saturation" as the distance of its \(U\) and \(V\) components from the neutral value (128 for 8-bit samples, where zero chroma is encoded as 128), so first check if the pixel's saturation stays under the current threshold
- Check that the \(Y\) value is not too low: very dark pixels carry little reliable color information
- If both checks come back true - add that pixel's \(U\) and \(V\) components to our running totals and increment the total number of "good" pixels by 1
- If the "good" pixels end up below roughly 20% of the total pixels in the image - that means we need to increase the "tolerance level" (the saturation threshold) for the next frame
- If they end up above roughly 40% - decrease the "tolerance level"
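The per-pixel pass can be sketched in C++ roughly as follows. The 8-bit YUV layout, the darkness cut-off of 16, the per-component saturation metric, and all names here are assumptions for illustration, not the original code.

```cpp
#include <cstdint>
#include <cstdlib>
#include <vector>

// Running totals for one frame of "grey world with a saturation
// cut-off" statistics gathering. U and V are centered at 128.
struct AwbStats {
    uint64_t sumU = 0, sumV = 0;
    uint32_t goodPixels = 0, totalPixels = 0;
};

AwbStats gatherStats(const std::vector<uint8_t>& y,
                     const std::vector<uint8_t>& u,
                     const std::vector<uint8_t>& v,
                     int threshold) {
    AwbStats s;
    s.totalPixels = static_cast<uint32_t>(y.size());
    for (size_t i = 0; i < y.size(); ++i) {
        // Skip pixels too dark to carry reliable color information.
        if (y[i] < 16) continue;
        const int du = std::abs(u[i] - 128);
        const int dv = std::abs(v[i] - 128);
        if (du > threshold || dv > threshold) continue;  // too saturated
        s.sumU += u[i];
        s.sumV += v[i];
        ++s.goodPixels;
    }
    return s;
}

// Adapt the threshold for the next frame so that 20-40% of the
// pixels end up in the "good" set.
int adaptThreshold(const AwbStats& s, int threshold) {
    const double share = double(s.goodPixels) / double(s.totalPixels);
    if (share < 0.20) return threshold + 1;  // too strict, loosen up
    if (share > 0.40) return threshold - 1;  // too lax, tighten
    return threshold;
}
```

Adapting by a single step per frame keeps the control loop cheap and stable; the threshold converges over a handful of frames instead of oscillating.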
Now that we have our data set we can figure out what our picture looks like from the color balance point of view. For that we just calculate the average values of \(U\) and \(V\). The simplest approach is to calculate the arithmetic mean value, but other formulae may provide more accurate results.
At the end of this step we have the following 3 sets of numbers:
- The average \(U\) value, which tells us how far the Blue colors are from neutral grey
- The average \(V\) value, which tells the same about the Red colors
- The ratios describing how far the Red and Blue channels are from the Green one (ideally both would be 1). Those ratios are \(^R/_G\) and \(^B/_G\)
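One way to turn the averaged \(Y\), \(U\), and \(V\) values into those channel ratios is to reconstruct approximate RGB averages first. The sketch below uses the standard BT.601 full-range conversion coefficients; feeding it frame-wide averages (rather than per-pixel values) is itself an approximation, and the names are illustrative.

```cpp
// Derive R/G and B/G from the averaged Y, U, V of the "good" pixels,
// via an approximate BT.601 full-range YUV -> RGB reconstruction.
struct Ratios { double rOverG, bOverG; };

Ratios channelRatios(double avgY, double avgU, double avgV) {
    const double du = avgU - 128.0;  // chroma distance from neutral
    const double dv = avgV - 128.0;
    const double r = avgY + 1.402 * dv;
    const double g = avgY - 0.344 * du - 0.714 * dv;
    const double b = avgY + 1.772 * du;
    return { r / g, b / g };
}
```

A perfectly grey scene (U = V = 128) yields both ratios equal to 1, i.e. no correction needed.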
Now that we have all the information we need to make a decision, let's make one!
- Adjust the Blue channel. In our case we just change the sensor's Blue channel gain by multiplying it by the \(^B/_G\) ratio
- Do the same for the Red channel, only using the \(V\) and \(^R/_G\) values
The Blue channel gets a bit over-saturated on the sensor we use, so a tiny adjustment is introduced: the \(^B/_G\) ratio is offset by \(2\%\) right before it is applied to the new gain's value.
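Putting the gain update together, a hedged sketch (all names hypothetical): whether the ratio is multiplied in or divided out depends on how the sensor defines its gain registers, so the version below divides, making an excess of blue (\(^B/_G > 1\)) pull the Blue gain down; the \(2\%\) offset is modeled as a 2% pull-down on the Blue correction.

```cpp
// Apply the computed ratios to the sensor gains, keeping Green fixed
// as the reference channel. The 0.98 factor models the sensor-specific
// 2% offset on the Blue correction described in the text.
struct SensorGains { double red, blue; };

SensorGains correctGains(SensorGains current, double rOverG, double bOverG) {
    SensorGains next = current;
    next.red  = current.red  / rOverG;         // too red -> lower red gain
    next.blue = current.blue / bOverG * 0.98;  // 2% offset, see above
    return next;
}
```

In a real driver these values would be written back to the sensor's gain registers once per frame, closing the white-balance loop.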
The sample AWB implementation in C++ shows one possible implementation of the approach described above. It does use (without much explanation) some external structures, but those should not be hard to deduce from their usage.