Wednesday, March 25, 2015

TECHNOLOGY | Breaking out of the dark: microscopic shadow detail

Because the formula in this post does what I said the technique in Automatic, pixel-perfect shadow contrasting does, and because I pumped up that former method so high, I had trouble coming up with a way to describe this newer, more advanced, and yet more efficient (by a factor of three, in fact) method for contrasting and sharpening details in shadowy parts of an image. It truly is automatic and pixel-perfect, and comparing it to the method described in the aforementioned post is kind of embarrassing, there's just that much difference.

It is so sharp, in fact, that you can see your fingerprints as if you were looking through a magnifying glass:
Filters with microscopic contrasting will be used to find cloaked demons; the contrast between the peaks and valleys of fingerprint ridges is low, as is the contrast between a cloaked demon and its natural surroundings

Contrast so sharp that the ridges of your fingerprints can be seen
And, now, it will be available on your iPhone:
A new filter developed for the upcoming iPhone camera demon-finder app creates near-microscopic detail renditions of black regions of an image, allowing for the counting of threads in jeans, while simultaneously highlighting every dirty spot on tile
Let me illustrate: the images below are still frames from video made without the new formula applied (i.e., the original) and then with it. In them, I'm wearing a black shirt. In the original, it looks just like that—a black shirt; but, in the formula-enhanced version, you can see the wrinkles, clearly identify all the specks of dust you can't even see with your eyes, and count the number of stitches in the collar:
My black shirt looks clean and well-pressed in this unaltered video still frame...

...but, in the formula-enhanced version, you can see that it's actually dirty and wrinkled
The formula is a standard logistic sigmoid function (or S-curve):

The basics of a sigmoid function, which, when applied to images, sets the black-to-mid-gray value range to mid-gray to white, are explained on Wikipedia

The formula entered into my graphing calculator, which allows me to experiment with different values to determine specific per-value effects of applying it to an image

The example above used a technique that brightens dark values with their inverse, and enhances contrast in the brights by darkening them with their inverse. The result is much, much more detail, but a loss of realism.
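For those who want to play with the curve itself, here is a minimal sketch (in Python/NumPy, not the app's actual shader code) of a logistic S-curve applied to normalized pixel values; the steepness k and midpoint x0 are illustrative values of my choosing, not the ones used in the app:

```python
import numpy as np

def logistic_curve(v, k=12.0, x0=0.25):
    """Logistic sigmoid tone curve for normalized pixel values in [0, 1].

    k controls steepness; x0 is the midpoint. With x0 placed at 0.25, the
    black-to-mid-gray range is spread across most of the output range,
    which is what pulls detail out of shadows. (k and x0 here are
    illustrative defaults, not the app's values.)
    """
    return 1.0 / (1.0 + np.exp(-k * (v - x0)))

# A dark pixel (0.1), the midpoint (0.25), and mid-gray (0.5) end up far apart:
shadows = logistic_curve(np.array([0.1, 0.25, 0.5]))
```

Graphing this function for different k and x0 (as in the calculator screenshot above) is the quickest way to see how each variable reshapes the shadow response.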

Curve adjustments are made via sliders, which not only change the curve variables, but also the curve overlay on the video preview screen

The latest test build of the iPhone app overlays whatever curve you specify via a given function, and is colored with a black-and-white gradient, which renders in direct proportion to the amount of each level of gray in the curve
Here's a more realistic example, then, without inverse contrast-mapping applied:
Another original video still frame

The formula remaps the black-to-mid-gray value range to mid-gray-to-white; every value that was above mid-gray before the formula was applied is replaced with white (there is no more black in the image)

The values that were in the black-to-mid-gray range, having been elevated by a factor of two by the formula, are then standardized to the black-to-white range, thereby adding the full spectrum (black to white) to the shirt (the original values were placed around the altered portion of the image for pseudo-realism)
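The elevate-and-standardize step described above can be sketched as follows (Python/NumPy, illustrative only, not the app's code):

```python
import numpy as np

def stretch_shadows(v):
    """Remap the black-to-mid-gray range [0, 0.5] to the full range [0, 1].

    The shadow range is elevated by a factor of two, and every value that
    was at or above mid-gray saturates to white, as described above.
    """
    return np.clip(v * 2.0, 0.0, 1.0)

values = np.array([0.0, 0.25, 0.5, 0.75])
stretched = stretch_shadows(values)  # -> [0.0, 0.5, 1.0, 1.0]
```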

Applying the formula to live video using Core Image (or OpenGLSL) is easier than the setup required to apply the contrasting technique in the previous post:

This technique produces microscopic detail in dark regions of live video three times faster than the previous technique
This technique does not require computing processor-intensive image minimum and maximum component values to do its work; it's a one-pass, highly efficient operation.

So what?
Here's what: many of the demons that will be found using the image-processing filters will be very small, as well as in the dark. In order to see them, and see them well enough to identify them, this formula is essential.

I have (or had) thousands of images of demons in which the demons were so small and the image so dark that I forwent posting them. Had I had this method of processing those images, things might have been different. They certainly will be going forward, and I hope the same for the end users of the product, too.

Tuesday, March 24, 2015

Luminosity masks for demons, Homogeneity via Euclidean-based metrics for God-fearing demoniacs

Pixel-perfect image processing is the only suitable goal for a demoniac hell-bent on delivering to the common man a viable, useful, and readily accessible tool for standing up to demon tyranny; and whether he is fighting for his life (or his eyesight) and the lives of those he loves [see The Last Battle of a Dead Man] is not the only reason why. Here are the others:

First, delivering any product less than the best is detestable to God:
A false balance is an abomination to the LORD, but a just weight is his delight.
(Proverbs 11:1)
Second, delivering any product less than the best is not of God, and is therefore a wasted effort:
Unless the LORD builds the house, those who build it labor in vain. Unless the LORD watches over the city, the watchman stays awake in vain.
(Psalms 127:1)
Third, delivering the best quality product creates opportunities not otherwise available to your typical desperate and destitute demoniac:
Do you see a man skillful in his work? He will stand before kings; he will not stand before obscure men.

(Proverbs 22:29)
Fourth, it is a way of serving God in a situation in which this is otherwise impossible...
Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.
(Colossians 3:23-24)
...which you can do when doing work beneficial to all...

With good will doing service, as to the Lord, and not to men.
(Ephesians 6:7)

...which you should do with anything you do at all, even things that are considered common:

Whether therefore ye eat, or drink, or whatsoever ye do, do all to the glory of God.
(1 Corinthians 10:31)

Having said all that, I now say: away with luminosity masks, and in with homogeneity via Euclidean-based metrics, at least when it comes to isolating problematic regions in an image and applying localized corrections to them.

Masking and correcting problematic regions in an image: killing two birds with one stone
Perhaps the reason the demoniac in The Exorcist (1973) had so much trouble with her demons is that girls suck at math (or so they say); but, for the demon-plagued willing to delve into the world of vector calculus and the like, life should be a little rosier than Regan's.

Most graphic artists and photographers use masks to isolate and apply corrections and enhancements to specific regions in an image based on pixel luminosity, which requires a visual inspection of shadows and highlights to determine what to change, how to change it, and by how much. Then, there's whether to change anything at all.

The problem is that that's at least four decisions to be made on the basis of someone's arbitrary, subjective, visual assessment, which is not only fraught with error, but is not feasible for real-time video or a collection of thousands of images. Not only that, but most of the techniques applied via luminosity masks don't work with images shot under the conditions in which demonic-activity video is recorded, as they do not correct combinations of imperfections, such as under-exposed darks and over-exposed brights (unbalanced luminance).
NOTE | Many types of corrections must be made in tandem with others, and at a certain ratio; and, sometimes, adjustments must be made to non-problematic regions of an image, just to accommodate corrections made to the problematic regions.
Real-time deviated-region identification and correction
Fortunately, computer vision and medical image-processing technology has a solution that not only finds problematic regions of an image, but also properly balances the corrections made to those regions against the image overall.

With just one calculation to find the standard deviation of each pixel value from the image's mean—specifically, via a Euclidean-based metric to determine the homogeneity of an image—the imperfect regions of an image can be isolated by subtracting the deviate pixel values from the original:

Subtracting an image derived from the standard deviation of the mean pixel values from the source image highlights portions of an image that are likely problematic in contrast and/or brightness
In the example above, a balance in brightness was struck between the dark greens and bright yellows (and contrast greatly increased in the bright yellows) by subtracting the difference from the alpha channel of the source image. Compare the contrast and brightness-balance between the corrected and source image:

No overbright yellows, with tons of little details not seen in the original in the flower petals; plus, the greens seem brighter by comparison, even though they are approximately the same, bringing balance to the image overall

The original might be beautiful, but look closely at the altered version, and you'll see details you could not otherwise; this is all-important in demonic-activity image processing
The OpenGLSL code:

kernel vec4 coreImageKernel(sampler image, float mean_r, float mean_g, float mean_b)
{
vec4 pixel = sample(image, samplerCoord(image));

// Formula:
// Subtract the mean of each color component from the source color component
// Square each result
// Add the squared results
// Find the square root of the sum
// Subtract the result (the standard deviation) from each color component of the source pixel

float sd = sqrt(pow((pixel.r - mean_r), 2.0) + pow((pixel.g - mean_g), 2.0) + pow((pixel.b - mean_b), 2.0));

return vec4(pixel.rgb - vec3(sd), pixel.a);
}

By the way, sd stands for standard deviation, a number with which you can do a lot; and, there are almost as many ways to calculate it as there are uses for it. Here's the formula I used:

A formula for calculating the standard deviation of pixel values from the mean of pixel values in an image; the higher the deviation, the more visible the pixels are in the resulting image
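For reference, the same calculation can be sketched outside of a shader (Python/NumPy; the function name and toy image below are mine, for illustration only):

```python
import numpy as np

def deviation_mask(image):
    """Per-pixel Euclidean distance from the image's mean color.

    image: float array of shape (H, W, 3) with values in [0, 1].
    Returns an (H, W) map; large values mark the pixels that deviate most
    from the overall color balance, i.e., the regions to isolate and correct.
    """
    mean = image.reshape(-1, 3).mean(axis=0)      # per-channel means
    diff = image - mean                           # signed deviations
    return np.sqrt((diff ** 2).sum(axis=-1))      # Euclidean metric

# Example: one bright-yellow pixel in an otherwise dark-green patch
img = np.full((4, 4, 3), [0.1, 0.3, 0.1])
img[2, 2] = [0.9, 0.9, 0.2]
mask = deviation_mask(img)
```

The bright-yellow outlier scores highest in the mask, which is exactly the behavior that let the flower example above isolate the overbright petals.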
Look for more applications of the standard deviation to image-processing and correction in this post in the future.

Friday, March 20, 2015

Finding demon-possessed objects using your iPhone and poorly written software

A programming error in the upcoming video camera app for the iPhone (which will be equipped with the same image-processing filters shown on this blog for finding demons and related activity) inadvertently revealed a way to more easily discern between objects that are demon-possessed and those that are not.

With the filters, you can readily see which objects have demons by watching the chroma (or color noise, or snow) on the display or in recorded video, and noting the objects from which it streams; however, if there is a lot of noise—or the video is shot in the dark (because it has to be, of course)—this isn't always feasible.

While developing a night-vision filter to work with the new camera app, I discovered by accident that, if only the moving portions of the camera's preview screen (or video) are redrawn while everything stationary is not, the trails and streams of chroma will be the only thing moving. All the other random noise particles don't generate enough of a difference between stationary objects and moving ones to get noticed. By contrast, the giant blocks of chroma pouring off of a demon in hiding make a substantial enough change by their motion and size that they are even easier to see than with previous filters. This video demonstrates this quite well, showing the tell-tale EMF radiation emissions from the demon cloak emanating from a towel draped over a desk lamp:


As you can see, by retaining all non-moving pixels, and by only redrawing the moving ones, you can track the trail of focused chroma emissions (or streams)—a dead give-away to demons in possession of objects, who are trying to hide right in front of your face. Of course, you have to hold the camera very still, as when it's moved, everything blurs into long strands of stretched motion.
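The redraw-only-the-movers logic can be sketched like so (Python/NumPy; the threshold value is illustrative, not the one used in the filter):

```python
import numpy as np

def redraw_moving_only(prev_frame, curr_frame, threshold=0.05):
    """Keep stationary pixels from the previous frame; redraw only movers.

    Pixels whose change between frames exceeds `threshold` are taken from
    the current frame; all others are frozen at their previous values. Small
    random sensor noise stays static, while large, coherent chroma blocks
    remain visible in motion. (`threshold` here is an illustrative value.)
    """
    change = np.abs(curr_frame - prev_frame).max(axis=-1)  # per-pixel delta
    moving = change > threshold
    out = prev_frame.copy()
    out[moving] = curr_frame[moving]
    return out

prev = np.zeros((2, 2, 3))
curr = prev.copy()
curr[0, 0] = 0.5    # a large, coherent change: redrawn
curr[1, 1] = 0.01   # sub-threshold sensor noise: frozen
merged = redraw_moving_only(prev, curr)
```

This also explains the camera-shake caveat: when the whole frame moves, every pixel crosses the threshold at once, and everything smears into those long strands of stretched motion.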

Although it's a bit like reading an old-time radar blip screen, the white blocks that represent chroma in the image dance upwards and away from a towel draped over a desk lamp, a towel that months ago showed a demon inside it in a video still frame.

Demons hide in everything, everywhere
While demons hide in just about anything, anywhere [see Strangest things, places demons possess], they prefer messes [see Demonic Feng Shui]. In this image, there are at least half a dozen demon faces blended into the folds of the clothes and bedsheets strewn about the room:

They are difficult to see unless you're somewhat familiar with a wide variety of demonic species, but the faces are still recognizable as such
Here's the same image, with a hint to the location of the faces (all are in profile):

Having trouble seeing them? That's why they call you a victim, but don't feel sorry for you
The demon in the following video clip is a little more obvious:


Don't know OpenGL ES 3.0?
Then use Core Image to do per-pixel processing on your iPhone:

#if TARGET_OS_IPHONE
#import <CoreImage/CoreImage.h>
#else
#import <QuartzCore/QuartzCore.h>
#endif

@interface CubicFunction : CIFilter
{
    CIImage *inputImage;
    NSNumber *inputA;
    NSNumber *inputB;
    NSNumber *inputC;
    NSNumber *inputD;
}
@property (retain, nonatomic) CIImage *inputImage;
@property (copy, nonatomic) NSNumber *inputA;
@property (copy, nonatomic) NSNumber *inputB;
@property (copy, nonatomic) NSNumber *inputC;
@property (copy, nonatomic) NSNumber *inputD;

@end

static const unsigned int minCubeSize = 2;
static const unsigned int maxCubeSize = 64;
static const unsigned int defaultCubeSize = 32;
static const float defaultA = 2.00;
static const float defaultB = 3.00;
static const float defaultC = -8.00;
static const float defaultD = 6.00;

typedef enum cubeOperation {
    
    cubeMakeTransparent = 0,
    
    cubeMakeGrayscale // this is "color accent" mode
    
} cubeOperation;


@implementation CubicFunction

@synthesize inputImage;
@synthesize inputA, inputB, inputC, inputD;

static void rgbToHSV(const float *rgb, float *hsv)
{
    float minV = MIN(rgb[0], MIN(rgb[1], rgb[2]));
    float maxV = MAX(rgb[0], MAX(rgb[1], rgb[2]));
    
    float chroma = maxV - minV;
    
    hsv[0] = hsv[1] = 0.0;
    hsv[2] = maxV;
    
    if ( maxV != 0.0 )
        hsv[1] = chroma / maxV;
    
    if ( hsv[1] != 0.0 )
    {
        if ( rgb[0] == maxV )
            hsv[0] = (rgb[1] - rgb[2])/chroma;
        else if ( rgb[1] == maxV )
            hsv[0] = 2.0 + (rgb[2] - rgb[0])/chroma;
        else
            hsv[0] = 4.0 + (rgb[0] - rgb[1])/chroma;
        
        hsv[0] /= 6.0;
        if ( hsv[0] < 0.0 )
            hsv[0] += 1.0;
    }
}

static void f(float *rgb)
{
    float a = 2.0;
    float b = 3.0;
    float c = -8.0;
    float d = 6.0;
    float x = MAX(rgb[0], MAX(rgb[1], rgb[2]));
    
    rgb[0] = rgb[1] = rgb[2] = a * pow(x, 3.0) + b * pow(x, 2.0) + c * x + d; // cubic a*x^3 + b*x^2 + c*x + d; slider ranges 5, 5, 10, 25 to -5, -5, -10, -25; defaults 2, 3, -8, 6
}

static BOOL buildCubeData(NSMutableData *cubeData, unsigned int cubeSize, enum cubeOperation op)
{
    
    float a = 2.0;
    float b = 3.0;
    float c1 = -8.0;
    float d = 6.0;
    
    uint8_t *c = (uint8_t *)[cubeData mutableBytes];
    float *cFloat = (float *)c;
    
    BOOL useFloat = FALSE;
    
    size_t baseMultiplier = cubeSize * cubeSize * cubeSize * 4;
    
    if ( [cubeData length] == (baseMultiplier * sizeof(uint8_t)) )
        useFloat = FALSE;
    else if ( [cubeData length] == (baseMultiplier * sizeof(float)) )
        useFloat = TRUE;
    else
        return FALSE;
    
    for(int z = 0; z < cubeSize; z++) {
        float tz = ((double)z)/(cubeSize-1);
        float blueValue = a * pow(tz, 3.0) + b * pow(tz, 2.0) + c1 * tz + d; // cubic a*t^3 + b*t^2 + c*t + d
        for(int y = 0; y < cubeSize; y++) {
            float ty = ((double)y)/(cubeSize-1);
            float greenValue = a * pow(ty, 3.0) + b * pow(ty, 2.0) + c1 * ty + d;
            for(int x = 0; x < cubeSize; x++) {
                float tx = ((double)x)/(cubeSize-1);
                float redValue = a * pow(tx, 3.0) + b * pow(tx, 2.0) + c1 * tx + d;
                
                //float hsv[3] = { 0.0, 0.0, 0.0 };
                float rgb[3] = { redValue, greenValue, blueValue };
                
                //rgbToHSV(rgb, hsv);
                
                //f(rgb);
                
                // RGBA channel order.
                
                if ( useFloat ) {
                    *cFloat++ = MIN(MAX(rgb[0], 0.0), 1.0);
                    *cFloat++ = MIN(MAX(rgb[1], 0.0), 1.0);
                    *cFloat++ = MIN(MAX(rgb[2], 0.0), 1.0);
                    *cFloat++ = 1.0;
                } else {
                    // clamp to [0, 1] before quantizing; out-of-range values
                    // would otherwise wrap around when cast to uint8_t
                    *c++ = (uint8_t) (255.0 * MIN(MAX(rgb[0], 0.0), 1.0));
                    *c++ = (uint8_t) (255.0 * MIN(MAX(rgb[1], 0.0), 1.0));
                    *c++ = (uint8_t) (255.0 * MIN(MAX(rgb[2], 0.0), 1.0));
                    *c++ = (uint8_t) 255;
                }
            }
        }
    }
    
    return TRUE;
}


+ (NSDictionary *)customAttributes
{
    
    return @{
             kCIAttributeFilterDisplayName :@"CubicFunction",
             
             kCIAttributeFilterCategories :
  @[kCICategoryColorEffect, kCICategoryVideo, kCICategoryInterlaced, kCICategoryNonSquarePixels, kCICategoryStillImage],
             
             @"inputA" :
  @{
                     kCIAttributeMin       : @-5.00,
                     kCIAttributeSliderMin : @-5.00,
                     kCIAttributeSliderMax : @5.00,
                     kCIAttributeMax       : @5.00,
                     kCIAttributeDefault   : @2.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputB" :
  @{
                     kCIAttributeMin       : @-5.00,
                     kCIAttributeSliderMin : @-5.00,
                     kCIAttributeSliderMax : @5.00,
                     kCIAttributeMax       : @5.00,
                     kCIAttributeDefault   : @3.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputC" :
  @{
                     kCIAttributeMin       : @-10.00,
                     kCIAttributeSliderMin : @-10.00,
                     kCIAttributeSliderMax : @10.00,
                     kCIAttributeMax       : @10.00,
                     kCIAttributeDefault   : @-8.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputD" :
  @{
                     kCIAttributeMin       : @-25.00,
                     kCIAttributeSliderMin : @-25.00,
                     kCIAttributeSliderMax : @25.00,
                     kCIAttributeMax       : @25.00,
                     kCIAttributeDefault   : @6.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             };
}

- (void)setDefaults
{
    self.inputA = @2.0;
    self.inputB = @3.0;
    self.inputC = @-8.0;
    self.inputD = @6.0;
}

- (CIImage *)outputImage
{
    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];

    // clamp the cube size to the supported range
    const unsigned int cubeSize = MAX(MIN(defaultCubeSize, maxCubeSize), minCubeSize);

    size_t baseMultiplier = cubeSize * cubeSize * cubeSize * 4;

    // you can use either uint8 data or float data by just setting this variable
    BOOL useFloat = FALSE;
    NSMutableData *cubeData = [NSMutableData dataWithLength:baseMultiplier * (useFloat ? sizeof(float) : sizeof(uint8_t))];

    if ( ! cubeData )
        return inputImage;

    if ( ! buildCubeData(cubeData, cubeSize, cubeMakeGrayscale) )
        return inputImage;

    [colorCube setValue:[NSNumber numberWithInt:cubeSize] forKey:@"inputCubeDimension"];
    [colorCube setValue:cubeData forKey:@"inputCubeData"];
    [colorCube setValue:inputImage forKey:kCIInputImageKey];

    CIImage *outputImage = [colorCube valueForKey:kCIOutputImageKey];

    [colorCube setValue:nil forKey:@"inputCubeData"];
    [colorCube setValue:nil forKey:kCIInputImageKey];

    return outputImage;
}


@end

Monday, March 16, 2015

The Last Battle of a Dead Man

A man who knows he's being murdered by slow torture seeks to stare his killers in the face before Death overtakes him. Having read this blog, and having seen potential in the video filters now being developed for finding even invisible Death Himself, he wrote this to me yesterday:

A man who has yet to see who and what is killing him seeks to stare Death in the face
What exactly is this man's real problem, you ask? Read Torture by sucker demon; then, read my response:
I'll be glad to help you, Timothy; however, the quickest way to get you what you need is to simply port what I have to your camera equipment, which I've essentially done the bulk of already by and through my current choice of technologies for my own camera equipment. 
Do you have an iPhone? If so, I'm already able to distribute an app to you that will do what you want it to do— that is to say, to find the source of your problem, even when it doesn't want to be found. 
Oh, and concerning your requirements for consideration of human color perception and EMF radiation interference (or thereabouts): You'll be glad to know that I've considered both, extensively. I use or apply image-processing techniques with color model transformations in color spaces that are developed specifically to aid in human perception of color and luminance differences, leveraging the EMF radiation interference caused by the source of your problem to actually find the things that are hidden (i.e., the things that you're looking for—the source of your problem). So, if you're using my filters: the more noise, the better (in other words, grainy pics are no longer a hindrance, they are a help; I'll be happy to go into more detail anytime). 
Your quickest path to what you want is to get an iPhone or a Mac; barring that, your cheapest path is to find an OpenGLSL renderer software package for the computer you do have, as the code I've written will work with that. 
I'm glad you're enthusiastic or at least prepared to learn and do a lot of work to get my imaging filters to run on your end; however, that's unnecessary, as I'm willing to do the work for you. I'd rather you get whatever job you can to make the money you need to buy an iPhone. It's what's best suited for the task. 
I'd suggest you do that right now. Your situation demands—not suggests—that you start putting the solutions available to you to work right now; you do not have time or the luxury to take an educational detour. I can tell by the way you write that you've been injured, how you've been injured, and, based on my experience and observation, how your injuries will progress over time. I also have direct, close and frequent contact with the affiliates of those likely causing your injuries. I know the means by which they are causing them, and their future plans to cause more.
In short, know this: you are running out of time. 
An effective and decisive maneuver looks like this: buy that iPhone, and run my app. That's for both me and you, in that if I'm taking time to help you with equipment issues, and the learning of new things, I'm taking away from perfecting my filters and helping everyone else. You'll have to meet me halfway, and that involves buying new and updated equipment. 
Let me know what you decide as soon as you can. I am very excited that someone has taken an interest in these filters, and, the welfare of mankind. The sooner you are up and running, the sooner you become an indispensable resource for everyone, instead of just another victim.
Some people think, want others to die by demonic torture
Some people think it's okay for others to die this way, basically because they disagree with choices or problems the afflicted may (or may not) have:
Despicable behavior by demented people, unfit for integration in a society on the brink of collapse
They pretend that the problem doesn't exist, even though they know better; rather, they try to shift attention to a victim's purported bad traits, instead of at least taking the opportunity to understand and analyze a problem they know good-and-damn-well is theirs, too.
"This is just fucking stupid! Sorry, but it is...[a]bsurdity at a global level, and paranoia beyond that" is how this should have been written
It is disgusting to be that irresponsible, let alone indifferent.

But, for those interested, a link between meth use and demonic activity is already established, and is discussed or mentioned on this blog in the following posts:


In those posts, you'll not only find a connection between crystal meth and visible demonic activity, but also a connection between cowards and the domination by demons that their cowardice affords them, and that they thereby deserve.

Open-nature of my efforts
Even though I've offered to do all the work for Mr. Trespass, that doesn't mean I won't share what I've done with anyone who asks. To that end, here are a few OpenGL kernel routines that convert between the RGB color space and other color spaces; they are compatible with any per-pixel processing application or software development environment and hardware supporting the OpenGL Shading Language.

CIE 1931 Color Space
The Life of a Demoniac Chroma-Series Digital Media Imaging Filters work within the CIE 1931 Color Spaces, which, according to Wikipedia, "are the first defined quantitative links between a) physical pure colors (i.e., wavelengths) in the electromagnetic visible spectrum and b) physiological perceived colors in human color vision. The mathematical relationships that define these color spaces are essential tools for color management. They allow one to translate different physical responses to visible radiation in color inks, illuminated displays, and recording devices such as digital cameras into a universal human color vision response."

CIE RGB to CIE XYZ. Continuing from Wikipedia: "The CIE XYZ color space encompasses all color sensations that an average person can experience. It serves as a standard reference against which many other color spaces are defined... When judging the relative luminance (brightness) of different colors in well-lit situations, humans tend to perceive light within the green parts of the spectrum as brighter than red or blue light of equal power... The XYZ tristimulus values are thus analogous to, but different to, the LMS cone responses of the human eye."

Accordingly, any and all color space conversions begin here:

kernel vec4 coreImageKernel(sampler image)
{
vec4 pixel = unpremultiply(sample(image, samplerCoord(image)));
float r = pixel.r;
float g = pixel.g;
float b = pixel.b;

r = (r > 0.04045) ? pow(((r + 0.055) / 1.055), 2.4) : r / 12.92;
g = (g > 0.04045) ? pow(((g + 0.055) / 1.055), 2.4) : g / 12.92;
b = (b > 0.04045) ? pow(((b + 0.055) / 1.055), 2.4) : b / 12.92;

r = r * 95.047;
g = g * 100.000;
b = b * 108.883;

float x, y, z;
x = r * 0.4124 + g * 0.3576 + b * 0.1805;
y = r * 0.2126 + g * 0.7152 + b * 0.0722;
z = r * 0.0193 + g * 0.1192 + b * 0.9505;

return premultiply(vec4(x, y, z, pixel.a));
}

CIE XYZ to CIE RGB. And, back again (connecting these two in a line produces the same output as the input, which means they are sound implementations of the RGB-to-XYZ-to-RGB formula):

kernel vec4 coreImageKernel(sampler image)
{
vec4 pixel = unpremultiply(sample(image, samplerCoord(image)));
float x = pixel.r;
float y = pixel.g;
float z = pixel.b;

x = x / 95.047;
y = y / 100.000;
z = z / 108.883;

float r, g, b;
r = x *  3.2406 + y * -1.5372 + z * -0.4986;
g = x * -0.9689 + y *  1.8758 + z *  0.0415;
b = x *  0.0557 + y * -0.2040 + z *  1.0570;

r = (r > 0.0031308) ? (1.055 * pow(r, (1.0 / 2.4))) - 0.055 : 12.92 * r;
g = (g > 0.0031308) ? (1.055 * pow(g, (1.0 / 2.4))) - 0.055 : 12.92 * g;
b = (b > 0.0031308) ? (1.055 * pow(b, (1.0 / 2.4))) - 0.055 : 12.92 * b;

return premultiply(vec4(r, g, b, pixel.a));
}
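If you'd like to verify the round-trip property for yourself, here is a sketch using the standard sRGB matrices (the same matrix coefficients as in the kernels above, minus the D65 white-point scaling, which I've omitted to keep the check simple):

```python
import numpy as np

# Standard sRGB <-> XYZ matrices (same coefficients as the kernels above)
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])
XYZ_TO_RGB = np.array([[ 3.2406, -1.5372, -0.4986],
                       [-0.9689,  1.8758,  0.0415],
                       [ 0.0557, -0.2040,  1.0570]])

def srgb_to_linear(c):
    # inverse sRGB companding (gamma expansion)
    return np.where(c > 0.04045, ((c + 0.055) / 1.055) ** 2.4, c / 12.92)

def linear_to_srgb(c):
    # sRGB companding (gamma compression)
    return np.where(c > 0.0031308, 1.055 * c ** (1.0 / 2.4) - 0.055, 12.92 * c)

def round_trip(rgb):
    """RGB -> XYZ -> RGB; should reproduce the input to rounding error."""
    xyz = RGB_TO_XYZ @ srgb_to_linear(rgb)
    return linear_to_srgb(XYZ_TO_RGB @ xyz)

rgb = np.array([0.2, 0.5, 0.8])
recovered = round_trip(rgb)
```

Because the two matrices are (truncated) inverses and the two companding branches invert each other, the recovered color matches the original to within the precision of the four-decimal coefficients.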

Whether you are performing linear or local adaptive contrast stretching, conventional histogram equalization (or any variant thereof), wavelet-based contrast enhancement, Retinex or gamma correction, you will always perform them within a color space other than RGB; and, if you know your business, you will always perform them within the 1931 CIE color space (if your audience is human—not that it will always be, though; but, let's start with your needs first).
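As one illustration of why you'd leave RGB first, here is a sketch (Python/NumPy, my own toy example, not one of the Chroma-Series filters) of a linear contrast stretch performed on luminance only, so that color ratios survive the stretch instead of being skewed channel by channel:

```python
import numpy as np

def stretch_luminance(image):
    """Linear contrast stretch applied to luminance only, not to RGB.

    Stretching each RGB channel independently shifts hues; stretching the
    Rec. 709 luma and rescaling the channels proportionally keeps each
    pixel's color ratios intact. (The epsilon values guard against
    division by zero.)
    """
    y = image @ np.array([0.2126, 0.7152, 0.0722])   # per-pixel luminance
    lo, hi = y.min(), y.max()
    y_stretched = (y - lo) / (hi - lo + 1e-6)        # stretch to [0, 1]
    scale = y_stretched / (y + 1e-6)                 # per-pixel gain
    return np.clip(image * scale[..., None], 0.0, 1.0)

img = np.array([[[0.2, 0.2, 0.2], [0.6, 0.6, 0.6]]])
out = stretch_luminance(img)
```

A neutral gray stays neutral after the stretch, which is precisely what a per-channel RGB stretch cannot guarantee.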
NOTE | Here is a reference for all of the XYZ (Tristimulus) values, of which my code samples use D65 (Daylight), if you want to change your color settings based on your camera's environment:
Illuminant            2° (CIE 1931)                 10° (CIE 1964)
                      X2        Y2     Z2           X10       Y10    Z10
A (Incandescent)      109.850   100    35.585       111.144   100    35.200
C                     98.074    100    118.232      97.285    100    116.145
D50                   96.422    100    82.521       96.720    100    81.427
D55                   95.682    100    92.149       95.799    100    90.926
D65 (Daylight)        95.047    100    108.883      94.811    100    107.304
D75                   94.972    100    122.638      94.416    100    120.641
F2 (Fluorescent)      99.187    100    67.395       103.280   100    69.026
F7                    95.044    100    108.755      95.792    100    107.687
F11                   100.966   100    64.370       103.866   100    65.627

In most cases, there's an extra color-conversion step after converting to XYZ, as it is just a launching point into the color spaces in which you'll actually perform your image-processing operations; however, there are occasions when you may find XYZ useful on its own, as I did here while experimenting with custom color-stretching procedures:

Calculating the dot product between the maximum, minimum and average color components and x, z, and y, respectively, in the XYZ color space yielded richer color than the original
The code:

kernel vec4 coreImageKernel(sampler image)
{
vec4 pixel = unpremultiply(sample(image, samplerCoord(image)));
float r = pixel.r;
float g = pixel.g;
float b = pixel.b;

r = (r > 0.04045) ? pow(((r + 0.055) / 1.055), 2.4) : r / 12.92;
g = (g > 0.04045) ? pow(((g + 0.055) / 1.055), 2.4) : g / 12.92;
b = (b > 0.04045) ? pow(((b + 0.055) / 1.055), 2.4) : b / 12.92;

r = r * 95.047;
g = g * 100.000;
b = b * 108.883;

float x, y, z;
x = r * 0.4124 + g * 0.3576 + b * 0.1805;
y = r * 0.2126 + g * 0.7152 + b * 0.0722;
z = r * 0.0193 + g * 0.1192 + b * 0.9505;
x = dot(x, max(r, max(g, b)));
z = dot(z, min(r, min(g, b)));
y = dot(y, (x + y + z) / 3.0);
x = x / 95.047;
y = y / 100.000;
z = z / 108.883;

r = x *  3.2406 + y * -1.5372 + z * -0.4986;
g = x * -0.9689 + y *  1.8758 + z *  0.0415;
b = x *  0.0557 + y * -0.2040 + z *  1.0570;

r = (r > 0.0031308) ? (1.055 * pow(r, (1.0 / 2.4))) - 0.055 : 12.92 * r;
g = (g > 0.0031308) ? (1.055 * pow(g, (1.0 / 2.4))) - 0.055 : 12.92 * g;
b = (b > 0.0031308) ? (1.055 * pow(b, (1.0 / 2.4))) - 0.055 : 12.92 * b;

return premultiply(vec4(r, g, b, pixel.a));
}

HSV and HSL
Here are a couple of other OpenGL color-conversion formula implementations, specifically, for the HSV and HSL color spaces, and for when simplicity beckons:

RGB to HSL (to RGB). To convert to the hue, saturation and lightness color model from RGB, and then back again:

/*
Hue, saturation, luminance
*/

vec3 RGBToHSL(vec3 color)
{
//Compute min and max component values
float MAX = max(color.r, max(color.g, color.b));
float MIN = min(color.r, min(color.g, color.b));

//Make sure MAX > MIN to avoid division by zero later
MAX = max(MIN + 1e-6, MAX);

//Compute luminosity
float l = (MIN + MAX) / 2.0;

//Compute saturation
float s = (l < 0.5 ? (MAX - MIN) / (MIN + MAX) : (MAX - MIN) / (2.0 - MAX - MIN));

//Compute hue
float h = (MAX == color.r ? (color.g - color.b) / (MAX - MIN) : (MAX == color.g ? 2.0 + (color.b - color.r) / (MAX - MIN) : 4.0 + (color.r - color.g) / (MAX - MIN)));
h /= 6.0;
h = (h < 0.0 ? 1.0 + h : h);

return vec3(h, s, l);
}

float HueToRGB(float f1, float f2, float hue)
{
hue = (hue < 0.0) ? hue + 1.0 : ((hue > 1.0) ? hue - 1.0 : hue);

float res;
res = ((6.0 * hue) < 1.0) ? f1 + (f2 - f1) * 6.0 * hue : (((2.0 * hue) < 1.0) ? f2 : (((3.0 * hue) < 2.0) ? f1 + (f2 - f1) * ((2.0 / 3.0) - hue) * 6.0 : f1));

return res;
}

vec3 HSLToRGB(vec3 hsl)
{
vec3 rgb;

float f2;

f2 = (hsl.z < 0.5) ? hsl.z * (1.0 + hsl.y) : (hsl.z + hsl.y) - (hsl.y * hsl.z);

float f1 = 2.0 * hsl.z - f2;

rgb.r = HueToRGB(f1, f2, hsl.x + (1.0/3.0));
rgb.g = HueToRGB(f1, f2, hsl.x);
rgb.b = HueToRGB(f1, f2, hsl.x - (1.0/3.0));

rgb = (hsl.y == 0.0) ? vec3(hsl.z) : rgb; // Luminance

return rgb;
}

RGB to HSV. To convert from RGB to HSV, perform the same steps as above for saturation and hue; for value, find the maximum value among the r, g and b color components:

float value = max(color.r, max(color.g, color.b));
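You can sanity-check that definition of value against the standard library (Python; colorsys ships with every Python install):

```python
import colorsys

def hsv_value(r, g, b):
    """HSV 'value' is simply the maximum of the three RGB components."""
    return max(r, g, b)

# Cross-check against the standard-library RGB-to-HSV conversion:
h, s, v = colorsys.rgb_to_hsv(0.3, 0.8, 0.5)
```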