TECHNOLOGY | Breaking out of the dark: microscopic shadow detail

Because the formula in this post does what I said the technique in Automatic, pixel-perfect shadow contrasting does, and because I pumped that former method up so high, I had trouble coming up with a way to describe this newer, more advanced, and yet more efficient (by a factor of three, in fact) method for contrasting and sharpening details in the shadowy parts of an image. It truly is automatic and pixel-perfect; comparing it to the method described in the aforementioned post is kind of embarrassing, there's just that much difference.

It is so sharp, in fact, that you can see your fingerprints as if you were looking through a magnifying glass:
Filters with microscopic contrasting will be used to find cloaked demons; the contrast between the peaks and valleys of fingerprint ridges is low, as is the contrast between a cloaked demon and its natural surroundings

Contrast so sharp that the ridges of your fingerprints can be seen
And, now, it will be available on your iPhone:
A new filter developed for the upcoming iPhone camera demon-finder app creates near-microscopic detail renditions of black regions of an image, allowing for the counting of threads in jeans, while simultaneously highlighting every dirty spot on tile
Let me illustrate: the images below are still frames from video made without the new formula applied (i.e., the original) and then with it. In them, I'm wearing a black shirt. In the original, it looks just like that—a black shirt; but, in the formula-enhanced version, you can see the wrinkles, clearly identify all the specks of dust you can't even see with your eyes, and count the number of stitches in the collar:
My black shirt looks clean and well-pressed in this unaltered video still frame...

...but, in the formula-enhanced version, you can see that it's actually dirty and wrinkled
The formula is a standard logistic sigmoid function (or S-curve):

The basics of a sigmoid function, which, when applied to images, sets the black-to-mid-gray value range to mid-gray to white, are explained on Wikipedia

The formula entered into my graphing calculator, which allows me to experiment with different values to determine specific per-value effects of applying it to an image

The example above used a technique that brightens dark values with their inverse, and enhances contrast in the brights by darkening them with their inverse. The result is much, much more detail, but a loss of realism.
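That inverse technique isn't reproduced as code in this post, so the kernel below is only my own rough sketch of the idea, not the app's actual filter; the kernel name and the strength parameter are hypothetical, and "with their inverse" is my reading of the description above:

kernel vec4 inverseContrastKernel(sampler image, float strength)
{
    vec4 pixel = sample(image, samplerCoord(image));
    vec3 inverse = vec3(1.0) - pixel.rgb;

    // strength is a hypothetical parameter (not from the original post).
    // Darks (below mid-gray) are brightened by a fraction of their inverse, which is
    // large for dark values; brights are darkened by a fraction of their (small) inverse.
    vec3 brightened = pixel.rgb + inverse * strength;
    vec3 darkened   = pixel.rgb - inverse * strength;
    vec3 result     = mix(darkened, brightened, step(pixel.rgb, vec3(0.5)));

    return vec4(clamp(result, 0.0, 1.0), pixel.a);
}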

Curve adjustments are made via sliders, which not only change the curve variables, but also the curve overlay on the video preview screen

The latest test build of the iPhone app overlays whatever curve you specify via a given function; the overlay is colored with a black-and-white gradient, which renders in direct proportion to the amount of each level of gray in the curve
Here's a more realistic example, then, without inverse contrast-mapping applied:
Another original video still frame

The formula maps the black-to-mid-gray value range to mid-gray-to-white; every value that was above mid-gray before the formula was applied is replaced with white (there is no more black in the image)

The values that were in the black-to-mid-gray range, having been elevated by a factor of two by the formula, are then normalized back to the full black-to-white range, thereby restoring the full spectrum of tones (black to white) to the shirt (the original values were placed around the altered portion of the image for pseudo-realism)

Applying the formula to live video using Core Image (or a GLSL kernel) is easier than the setup required to apply the contrasting technique in the previous post:

This technique produces microscopic detail in dark regions of live video three times faster than the previous technique
As you can see from the code below, this technique does not require computing the processor-intensive image minimum and maximum component values to do its work; it's a one-pass, highly efficient operation.
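The code image from the original post doesn't survive here, so the following is only a minimal sketch of what such a one-pass kernel could look like, assuming the logistic curve is positioned so that black maps to mid-gray and mid-gray saturates to white, as described below; the kernel name and the steepness parameter k are my own, not the app's:

kernel vec4 shadowSigmoidKernel(sampler image, float k)
{
    vec4 pixel = sample(image, samplerCoord(image));

    // k is a hypothetical steepness parameter (not from the original post); a value
    // around 10 to 12 reproduces the described mapping: 0.0 lands at mid-gray (0.5),
    // and values at or above mid-gray saturate to white, pulling shadow detail up
    vec3 curved = vec3(1.0) / (vec3(1.0) + exp(-k * pixel.rgb));

    return vec4(curved, pixel.a);
}

Because the curve needs nothing but the pixel it is looking at (no image-wide minimum or maximum), it runs in a single pass over the frame, which is where the threefold efficiency gain over the previous technique comes from.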

So what?
Here's what: many of the demons that will be found using these image-processing filters will be very small, as well as in the dark. In order to see them, and to see them well enough to identify them, this formula is essential.

I have (or had) thousands of images of demons in which the demons were so small and the image so dark that I forwent posting them. Had I had this method of processing those images, things might have been different. They certainly will be going forward, and I hope the same will be true for end users of the product, too.

Luminosity masks for demons, Homogeneity via Euclidean-based metrics for God-fearing demoniacs

Pixel-perfect image processing is the only suitable goal for a demoniac hell-bent on delivering to the common man a viable, useful and readily accessible tool for standing up to demon tyranny; and the fact that he is fighting for his life (or his eyesight) and the lives of those he loves [see The Last Battle of a Dead Man] is not the only reason. Here are the others:

First, delivering any product less than the best is detestable to God:
A false balance is an abomination to the LORD, but a just weight is his delight.
(Proverbs 11:1)
Second, delivering any product less than the best is not of God, and is therefore a wasted effort:
Unless the LORD builds the house, those who build it labor in vain. Unless the LORD watches over the city, the watchman stays awake in vain.
(Psalms 127:1)
Third, delivering the best quality product creates opportunities not otherwise available to your typical desperate and destitute demoniac:
Do you see a man skillful in his work? He will stand before kings; he will not stand before obscure men.
(Proverbs 22:29)
Fourth, it is a way of serving God in a situation in which this is otherwise impossible...
Whatever you do, work heartily, as for the Lord and not for men, knowing that from the Lord you will receive the inheritance as your reward. You are serving the Lord Christ.
(Colossians 3:23-24)
...which you can do when doing work beneficial to all...

With good will doing service, as to the Lord, and not to men.
(Ephesians 6:7)

...which you should do with anything you do at all, even things that are considered common:

Whether therefore ye eat, or drink, or whatsoever ye do, do all to the glory of God.
(1 Corinthians 10:31)

Having said all that, I now say: away with luminosity masks, and in with homogeneity via Euclidean-based metrics, at least when it comes to isolating problematic regions in an image and applying localized corrections to them.

Masking and correcting problematic regions in an image: killing two birds with one stone
Perhaps the reason why the demoniac in The Exorcist (1973) had so much trouble with her demons is because girls suck at math (or, so they say); but, for the demon-plagued willing to delve into the world of vector calculus and the like, life should be a little rosier than Regan's.

Most graphics artists and photographers use masks to isolate and apply corrections and enhancements to specific regions in an image based on pixel luminosity, which requires a visual inspection of shadows and highlights to determine what to change, how to change it, and by how much. Then, there's whether to change anything at all.

The problem is that this amounts to at least four decisions made on the basis of someone's arbitrary, subjective visual assessment, which is not only fraught with error, but also infeasible for real-time video or a collection of thousands of images. Not only that, but most of the techniques applied via luminosity masks don't work with images shot under the conditions in which demonic-activity video is recorded, as they do not correct combinations of imperfections, such as under-exposed darks and over-exposed brights (unbalanced luminance).
NOTE | Many types of corrections must be made in tandem with others, and at a certain ratio; and, sometimes, adjustments must be made to non-problematic regions of an image just to accommodate corrections made to the problematic regions.
Real-time deviated-region identification and correction
Fortunately, computational vision and medical image-processing technology has a solution that not only finds problematic regions of an image, but also properly balances the corrections made to those regions against the image overall.

With just one calculation to find the standard deviation of each pixel's value from the image's mean—specifically, via a Euclidean-based metric that determines the homogeneity of an image—the imperfect regions of an image can be isolated by subtracting the deviant pixel values from the original:

Subtracting an image derived from the standard deviation of the pixel values from the mean from the source image highlights portions of an image that are likely problematic in contrast and/or brightness
In the example above, a balance in brightness was struck between the dark greens and bright yellows (and contrast greatly increased in the bright yellows) by subtracting the difference from the alpha channel of the source image. Compare the contrast and brightness-balance between the corrected and source image:

No overbright yellows, with tons of little details in the flower petals not seen in the original; plus, the greens seem brighter by comparison, even though they are approximately the same, bringing balance to the image overall

The original might be beautiful, but look closely at the altered version, and you'll see details you could not otherwise; this is all-important in demonic-activity image processing
The GLSL code:

kernel vec4 coreImageKernel(sampler image, float mean_r, float mean_g, float mean_b)
{
    vec4 pixel = sample(image, samplerCoord(image));

    // Formula:
    // Subtract the mean of each color component from the source color component
    // Square each result
    // Add the squared results
    // Find the square root of the sum (the pixel's Euclidean deviation from the mean)
    // Subtract the deviation from each color component of the source pixel

    float sd = sqrt(pow(pixel.r - mean_r, 2.0) + pow(pixel.g - mean_g, 2.0) + pow(pixel.b - mean_b, 2.0));

    return vec4(pixel.rgb - vec3(sd), pixel.a);
}
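One practical note: mean_r, mean_g and mean_b have to be computed outside the kernel and passed in. This post doesn't show how the app does that; one reasonable way (an assumption on my part, not necessarily the app's approach) is Core Image's built-in CIAreaAverage filter, which reduces a region of the image to a single average pixel whose components can then be supplied to the kernel as the three mean values.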

By the way, sd stands for standard deviation, a number with which you can do a lot; and there are almost as many ways to calculate it as there are things to do with it. Here's the formula I used:

A formula for calculating the standard deviation of pixel values from the mean of pixel values in an image; the higher the deviation, the more visible the pixels are in the resulting image
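The calculator screen itself doesn't survive in this version of the post; reconstructed from the kernel above (so, my transcription rather than the original image), the per-pixel deviation works out to:

d(p) = sqrt( (R_p - mean_r)^2 + (G_p - mean_g)^2 + (B_p - mean_b)^2 )

where mean_r, mean_g and mean_b are the image-wide averages of each channel; the larger d(p) is, the more that pixel deviates from the image as a whole, and the more visible it is in the resulting difference image.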
Look for more applications of the standard deviation to image-processing and correction in this post in the future.

Finding demon-possessed objects using your iPhone and poorly written software

A programming error in the upcoming video camera app for the iPhone, which will be equipped with the same image-processing filters shown on this blog for finding demons and related activity, inadvertently revealed a way to more easily discern between objects that are demon-possessed and those that are not.

With the filters, you can readily see which objects have demons by watching the chroma (or color noise, or snow) on the display or in recorded video, and noting the objects from which it streams; however, if there is a lot of noise—or the video is shot in the dark (because it has to be, of course)—this isn't always feasible.

While developing a night-vision filter to work with the new camera app, I discovered by mistake that, if only the moving portions of the camera's preview screen (or video) are redrawn while everything stationary is left alone, the trails and streams of chroma become the only things moving. All the other random noise particles don't generate enough of a difference from one frame to the next to get noticed; by contrast, the giant blocks of chroma pouring off of a demon in hiding make a change substantial enough, in both motion and size, to be even easier to see than with previous filters. This video demonstrates this quite well, showing the tell-tale EMF radiation emissions from the demon cloak emanating from a towel draped over a desk lamp:


As you can see, by retaining all non-moving pixels and redrawing only the moving ones, you can track the trail of focused chroma emissions (or streams)—a dead giveaway to demons in possession of objects who are trying to hide right in front of your face. Of course, you have to hold the camera very still; when it's moved, everything blurs into long strands of stretched motion.
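The app's actual filter isn't shown in this post, but the idea reduces to a per-pixel gate between the current frame and the last-drawn frame. A minimal sketch follows; the kernel name, the way the previous frame arrives as a second sampler, and the threshold value are all assumptions on my part:

kernel vec4 motionOnlyKernel(sampler currentFrame, sampler previousFrame, float threshold)
{
    vec4 current  = sample(currentFrame, samplerCoord(currentFrame));
    vec4 previous = sample(previousFrame, samplerCoord(previousFrame));

    // How much this pixel changed between frames, as a straight color distance;
    // threshold is a hypothetical tuning value, not taken from the original post
    float change = distance(current.rgb, previous.rgb);

    // Redraw only pixels that moved; stationary pixels keep their last-drawn value,
    // so random noise stays put while streaming chroma stands out
    return (change > threshold) ? current : previous;
}

In practice, the gated output would be fed back in as the next frame's previousFrame input, so retained pixels persist from frame to frame while only genuine motion gets redrawn.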

Although it's a bit like reading an old-time radar blip screen, the white blocks that represent chroma in the image can be seen dancing upwards and away from a towel draped over a desk lamp, a towel that, months ago, showed a demon inside it in a video still frame.

Demons hide in everything, everywhere
While demons hide in just about anything, anywhere [see Strangest things, places demons possess], they prefer messes [see Demonic Feng Shui]. In this image, there are at least half a dozen demon faces blended into the folds of the clothes and bedsheets strewn about the room:

They are difficult to see unless you're somewhat familiar with a wide variety of demonic species, but the faces are recognizable as such even still
Here's the same image, with a hint to the location of the faces (all are in profile):

Having trouble seeing them? That's why they call you a victim, but don't feel sorry for you
The demon in the following video clip is a little more obvious:


Don't know OpenGL ES 3.0?
Then use Core Image to do per-pixel processing on your iPhone:

#if TARGET_OS_IPHONE
#import <CoreImage/CoreImage.h>
#else
#import <QuartzCore/QuartzCore.h>
#endif

@interface CubicFunction : CIFilter
{
    CIImage *inputImage;
    NSNumber *inputA;
    NSNumber *inputB;
    NSNumber *inputC;
    NSNumber *inputD;
}
@property (retain, nonatomic) CIImage *inputImage;
@property (copy, nonatomic) NSNumber *inputA;
@property (copy, nonatomic) NSNumber *inputB;
@property (copy, nonatomic) NSNumber *inputC;
@property (copy, nonatomic) NSNumber *inputD;

@end

static const unsigned int minCubeSize = 2;
static const unsigned int maxCubeSize = 64;
static const unsigned int defaultCubeSize = 32;
static const float defaultA = 2.00;
static const float defaultB = 3.00;
static const float defaultC = -8.00;
static const float defaultD = 6.00;

typedef enum cubeOperation {
    
    cubeMakeTransparent = 0,
    
    cubeMakeGrayscale // this is "color accent" mode
    
} cubeOperation;


@implementation CubicFunction

@synthesize inputImage;
@synthesize inputA, inputB, inputC, inputD;

static void rgbToHSV(const float *rgb, float *hsv)
{
    float minV = MIN(rgb[0], MIN(rgb[1], rgb[2]));
    float maxV = MAX(rgb[0], MAX(rgb[1], rgb[2]));
    
    float chroma = maxV - minV;
    
    hsv[0] = hsv[1] = 0.0;
    hsv[2] = maxV;
    
    if ( maxV != 0.0 )
        hsv[1] = chroma / maxV;
    
    if ( hsv[1] != 0.0 )
    {
        if ( rgb[0] == maxV )
            hsv[0] = (rgb[1] - rgb[2])/chroma;
        else if ( rgb[1] == maxV )
            hsv[0] = 2.0 + (rgb[2] - rgb[0])/chroma;
        else
            hsv[0] = 4.0 + (rgb[0] - rgb[1])/chroma;
        
        hsv[0] /= 6.0;
        if ( hsv[0] < 0.0 )
            hsv[0] += 1.0;
    }
}

static void f(float *rgb)
{
    float a = 2.0;
    float b = 3.0;
    float c = -8.0;
    float d = 6.0;
    float x = MAX(rgb[0], MAX(rgb[1], rgb[2]));
    
    rgb[0] = rgb[1] = rgb[2] = a * pow(x, 3.0) + b * pow(x, 2.0) + c * x + d; // cubic a*x^3 + b*x^2 + c*x + d; coefficient ranges 5, 5, 10, 25 to -5, -5, -10, -25; defaults 2, 3, -8, 6
}

static BOOL buildCubeData(NSMutableData *cubeData, unsigned int cubeSize, enum cubeOperation op)
{
    
    float a = 2.0;
    float b = 3.0;
    float c1 = -8.0;
    float d = 6.0;
    
    uint8_t *c = (uint8_t *)[cubeData mutableBytes];
    float *cFloat = (float *)c;
    
    BOOL useFloat = FALSE;
    
    size_t baseMultiplier = cubeSize * cubeSize * cubeSize * 4;
    
    if ( [cubeData length] == (baseMultiplier * sizeof(uint8_t)) )
        useFloat = FALSE;
    else if ( [cubeData length] == (baseMultiplier * sizeof(float)) )
        useFloat = TRUE;
    else
        return FALSE;
    
    for (int z = 0; z < cubeSize; z++) {
        // Normalized blue coordinate run through the cubic a*x^3 + b*x^2 + c*x + d, clamped to [0, 1]
        double zNorm = ((double)z) / (cubeSize - 1);
        float blueValue = MIN(MAX(a * pow(zNorm, 3.0) + b * pow(zNorm, 2.0) + c1 * zNorm + d, 0.0), 1.0);
        for (int y = 0; y < cubeSize; y++) {
            double yNorm = ((double)y) / (cubeSize - 1);
            float greenValue = MIN(MAX(a * pow(yNorm, 3.0) + b * pow(yNorm, 2.0) + c1 * yNorm + d, 0.0), 1.0);
            for (int x = 0; x < cubeSize; x++) {
                double xNorm = ((double)x) / (cubeSize - 1);
                float redValue = MIN(MAX(a * pow(xNorm, 3.0) + b * pow(xNorm, 2.0) + c1 * xNorm + d, 0.0), 1.0);
                
                //float hsv[3] = { 0.0, 0.0, 0.0 };
                float rgb[3] = { redValue, greenValue, blueValue };
                
                //rgbToHSV(rgb, hsv);
                
                //f(rgb);
                
                // RGBA channel order.
                
                if ( useFloat ) {
                    *cFloat++ = rgb[0] * 1.0;
                    *cFloat++ = rgb[1] * 1.0;
                    *cFloat++ = rgb[2] * 1.0;
                    *cFloat++ = 1.0;
                } else {
                    *c++ = (uint8_t) (255.0 * rgb[0] * 1); //alphaValue);
                    *c++ = (uint8_t) (255.0 * rgb[1] * 1); // alphaValue);
                    *c++ = (uint8_t) (255.0 * rgb[2] * 1); //alphaValue);
                    *c++ = (uint8_t) (255.0 * 1.0);
                }
            }
        }
    }
    
    return TRUE;
}


+ (NSDictionary *)customAttributes
{
    
    return @{
             kCIAttributeFilterDisplayName :@"CubicFunction",
             
             kCIAttributeFilterCategories :
  @[kCICategoryColorEffect, kCICategoryVideo, kCICategoryInterlaced, kCICategoryNonSquarePixels, kCICategoryStillImage],
             
             @"inputA" :
  @{
                     kCIAttributeMin       : @-5.00,
                     kCIAttributeSliderMin : @-5.00,
                     kCIAttributeSliderMax : @5.00,
                     kCIAttributeMax       : @5.00,
                     kCIAttributeDefault   : @2.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputB" :
  @{
                     kCIAttributeMin       : @-5.00,
                     kCIAttributeSliderMin : @-5.00,
                     kCIAttributeSliderMax : @5.00,
                     kCIAttributeMax       : @5.00,
                     kCIAttributeDefault   : @3.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputC" :
  @{
                     kCIAttributeMin       : @-10.00,
                     kCIAttributeSliderMin : @-10.00,
                     kCIAttributeSliderMax : @10.00,
                     kCIAttributeMax       : @10.00,
                     kCIAttributeDefault   : @-8.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             
             @"inputD" :
  @{
                     kCIAttributeMin       : @-25.00,
                     kCIAttributeSliderMin : @-25.00,
                     kCIAttributeSliderMax : @25.00,
                     kCIAttributeMax       : @25.00,
                     kCIAttributeDefault   : @6.00,
                     kCIAttributeType      : kCIAttributeTypeScalar
                     },
             };
}

- (void)setDefaults
{
    self.inputA = @2.0;
    self.inputB = @3.0;
    self.inputC = @-8.0;
    self.inputD = @6.0;
}

- (CIImage *)outputImage
{
    CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
    
    // Cube dimension, clamped to the range this filter supports
    const unsigned int cubeSize = MAX(MIN(64, maxCubeSize), minCubeSize);
    
    size_t baseMultiplier = cubeSize * cubeSize * cubeSize * 4;
    
    // You can use either uint8 data or float data by just setting this variable
    BOOL useFloat = FALSE;
    NSMutableData *cubeData = [NSMutableData dataWithLength:baseMultiplier * (useFloat ? sizeof(float) : sizeof(uint8_t))];
    
    if ( ! cubeData )
        return inputImage;
    
    if ( ! buildCubeData(cubeData, cubeSize, cubeMakeGrayscale) )
        return inputImage;
    
    [colorCube setValue:[NSNumber numberWithInt:cubeSize] forKey:@"inputCubeDimension"];
    [colorCube setValue:cubeData forKey:@"inputCubeData"];
    [colorCube setValue:inputImage forKey:kCIInputImageKey];
    
    CIImage *outputImage = [colorCube valueForKey:kCIOutputImageKey];
    
    [colorCube setValue:nil forKey:@"inputCubeData"];
    [colorCube setValue:nil forKey:kCIInputImageKey];
    
    return outputImage;
}


@end