Tuesday, May 19, 2015

TECHNOLOGY | Edge Detection | The Prewitt Operator

A quick note about this post before you read it...

Here's what one blog author wrote about the task I am currently engaged in: specifically, what it is I am trying to do, what I am doing right now, what it takes to get to the goal, and the hurdles I face.

On the multiplicity of possible tools:
There are dozens of different kernels which produce many different feature maps, e.g. which sharpen the image (more details), or which blur the image (less details), and each feature map may help our algorithm to do better on its task (details, like 3 instead of 2 buttons on your jacket might be important).
Just like the quote says, even the slightest adjustment to a kernel can make a huge difference between two otherwise identical images, like this one:
Alternating the kernel divisor between just -1 and 1 revealed a sucker demon (the white, string-like entity above the right nostril) on the end of my nose in one rendering of a still frame, but not in another rendering of the same frame
The type of work I am engaged in:
Using this kind of procedure — taking inputs, transforming inputs and feeding the transformed inputs to an algorithm — is called feature engineering. Feature engineering is very difficult, and there are little resources which help you to learn this skill. In consequence, there are very few people which can apply feature engineering skillfully to a wide range of tasks.
The complexity of the task:
Feature engineering is so difficult because for each type of data and each type of problem, different features do well: Knowledge of feature engineering for image tasks will be quite useless for time series data; and even if we have two similar image tasks, it will not be easy to engineer good features because the objects in the images also determine what will work and what will not. It takes a lot of experience to get all of this right. So feature engineering is very difficult and you have to start from scratch for each new task in order to do well. 
That's from Understanding Convolution in Deep Learning, written by Tim Dettmers, who I pray writes his blog and does his work without hearing Voices Demons' plans to mutilate his anus, as I did just today.
NOTE | Apparently, they have the tools, skills and experience to make that happen; they introduced this fact and their intentions over the course of 15 minutes somewhere during their diatribe on their power, anger and hatred for me during my shower today.

Processing images made during periods of high demonic activity with an edge-detection kernel (or convolution filter) will invariably uncover things you wouldn't see at any other time or with any other filter. There is activity at every level during such periods, from sources as thin as a hair to as big as the sky. Edge-detection filters are concerned with the former. A good edge-detection filter will count the hairs on a head of hair; an even better one will count those hairs at arm's length.
NOTE | Read how they work at the end of this post, which also includes sample code for OpenGL ES and Objective-C programmers.
This post describes just such a filter, one based on the Prewitt Operator.
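Before getting to the iPhone implementations later in the post, it may help to see the operator itself in isolation. The following pure-Python sketch is mine, not from the post: it shows the two 3x3 Prewitt weight matrices and a naive convolution applied to a small grayscale image with a vertical step edge.

```python
# Illustrative sketch (not the post's code): the two 3x3 Prewitt kernels.
# GX responds to vertical edges (horizontal intensity change); GY to horizontal edges.
GX = [[1, 0, -1],
      [1, 0, -1],
      [1, 0, -1]]
GY = [[1, 1, 1],
      [0, 0, 0],
      [-1, -1, -1]]

def convolve3x3(img, k):
    """Apply a 3x3 kernel to the interior pixels of a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# A vertical step edge: dark (0) on the left, bright (9) on the right.
img = [[0, 0, 9, 9]] * 4
gx = convolve3x3(img, GX)  # fires only on the boundary columns
gy = convolve3x3(img, GY)  # all rows identical, so this pass is zero everywhere
```

Each interior pixel of `gx` sums the left column of the neighborhood and subtracts the right, so the response is large (and signed) exactly where brightness changes; a flat region gives zero in both passes.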

The smallest of details, literally
The Prewitt Operator brings out detail at the most minute level possible on imaging devices without specialized hardware (e.g., the iPhone), allowing you to search for the hardest-to-find hidden demons and to analyze otherwise blurry cloaked objects up close and in detail, with accuracy limited only by your device's display; hence my platform choice for the initial distribution of my imaging filters [see Apple Retina HD Display].

It produces razor-sharp edges, too sharp for most lit environments, requiring most videos and images processed by this kernel to be viewed in the dark; however, even given its penchant for pricey hardware and its inconvenient (especially under the circumstances) viewing requirements, it is an indispensable tool for uncovering things that other filters don't, rendering super-fine surface detail from distances much farther away than most surface-analysis filters can manage.

The following still frames from a recently made video prove the point; they show a man's claw-hand demonic weapon jabbing at me in super-fine detail:

Micro-fine detail in still frames showing motion... is the hallmark of the Prewitt Kernel
Compare this image of the "claw-hand" demonic weapon to one made just a few months ago of another demon person deploying the same weapon [see Clandestine surgical mutilation, hidden demonic "bomb" uncovered via demonic-activity video filter]:

Results of the formerly used image-processing technique to reveal deployed demonic weapons in a moving still frame
Results of the new Prewitt Convolution/Kernel for iPhone
No blurring, no noise
The Prewitt Kernel sharpens even the most discrete of edges where it counts: at twilight, the border between light and darkness (or between less light and more, or between two different kinds or sources of light). As you can see, using it during yesterday's period of high demonic activity allowed for capturing, more clearly than ever, matter made malleable by the absorption of the same qualities as cloaked molecules from demons and the like. In numerous posts, the objects, people and demons in media made during such periods exhibit a drippy smear when the camera or the subject is in motion [see How demons alter matter (and what it looks like when they do)], whether by coincidence or on purpose, as one must move the camera in order to obtain images of certain kinds of cloaked activity or objects; such images are further marred by excessive chroma noise.

The Prewitt Kernel dispenses with both the blur and the noise in these types of images while enhancing detail at the same time, revealing the undulating nature some metals exhibit when demons run amok:

It is not motion blur that makes the metal on the car look like it is dripping upwards; rather, it is because of motion that the undulation of the (temporarily) highly malleable metal can be seen
Occasionally, matter in a transitory state between cloaked and normal can be captured clearly without special image processing, as shown by the above image of a t-shirt being transformed and otherwise manipulated by its possessing demon [from How demons alter matter]; however, in most cases, filters are needed
Highlight demonic-activity "hot" spots
A welcome-but-unexpected perk of the Prewitt Kernel is what most photographers would call a disadvantage: chromatic aberration (color misalignment). An image processed by this kernel should look black-and-white for the most part; however, in some parts, red, green and blue are misaligned, creating a rainbow in an otherwise cloudy sky. This is due to the effect of EMF radiation emitted by cloaked molecules on the CCD sensor in digital cameras, and/or to the effect that radiation has on light itself (probably the latter, mostly; the former usually results in color noise, not aberration).

That's a good thing, as higher EMF radiation levels go hand in hand with danger, particularly from angry demons who want to interact forcibly with normal matter or fire weapons. In the first series of images above, and in the video clip itself (below), note the red color of the man firing the weapon; he would not be that color, or nearly that red, had he not been in battle mode, as the Voices Demons and their people say:


And, the original for comparison:



NOTE | The YouTube version of the clip and the clip as displayed on an iPhone 5 or 6 look very, very different.

Ideal photographic conditions
The ideal conditions under which the Prewitt Kernel imaging filter works best, and the kinds of activity best captured by it, are:
  • Lights (when the subject or activity is in daylight, even though, ironically, the media must be viewed in the dark, similar to a doctor examining X-rays)
  • Camera (when the torch is enabled and the camera held in close proximity to the subject or surface)
  • Action (when the camera or subject is in-motion)
Implementing the kernel and/or convolution
The Prewitt Operator was implemented as a kernel in OpenGL ES 3.0 and as a convolution in Objective-C; the code for both is provided below.

OpenGL ES 3.0 (in Quartz Composer)
The OpenGL kernel version of the Prewitt Operator, as authored in Quartz Composer:

Prewitt Kernel (in Quartz Composer)
The code:

/*
A Core Image kernel routine that applies edge detection per the Prewitt Kernel.
The code finds the difference between the pixel values on either side (x-axis)
of the source pixel and then divides it by two.
*/

kernel vec4 prewittKernel(sampler image)
{
    vec2 xy = destCoord();
    vec4 pixel = (sample(image, samplerTransform(image, xy + vec2(1.0, 0.0))) -
                  sample(image, samplerTransform(image, xy + vec2(-1.0, 0.0)))) / 2.0;
    return premultiply(vec4(pixel.rgb, 1.0));
}
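The math inside that kernel is a per-channel central difference along x: (right neighbor - left neighbor) / 2. As a hedge, the following one-line Python analogue is mine, not part of the shader; it mimics the same calculation on a single scanline of brightness values, clamping at the borders the way a sampler clamps at the image edge.

```python
# Illustrative sketch (not the post's shader): the Core Image kernel above
# computes a per-pixel central difference along x: (right - left) / 2.
def central_difference_x(row):
    """1-D analogue of the shader's math on a single scanline of brightness values."""
    return [(row[min(x + 1, len(row) - 1)] - row[max(x - 1, 0)]) / 2.0
            for x in range(len(row))]

edge = central_difference_x([0.0, 0.0, 1.0, 1.0])
# Non-zero only where brightness changes: [0.0, 0.5, 0.5, 0.0]
```

Flat regions yield zero, so the output is black everywhere except at edges, which is exactly why the processed media must be viewed in the dark.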

The Objective-C version of the Prewitt Operator, this time as a convolution, as authored in Xcode:

Prewitt Convolution (in Xcode)

Following is the code, followed by important notes:

- (CIImage *)outputImage
{
    const double g = 1.;

    // First pass: horizontal gradient (detects vertical edges)
    const CGFloat weights_v[] = { 1*g, 0, -1*g,
                                  1*g, 0, -1*g,
                                  1*g, 0, -1*g };

    CIImage *result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
                       @"inputImage", self.inputImage,
                       @"inputWeights", [CIVector vectorWithValues:weights_v count:9],
                       @"inputBias", [NSNumber numberWithFloat:0.5],
                       nil].outputImage;

    // Core Image convolutions expand the extent; crop the output back to the
    // input's rectangle so it renders
    CGRect rect = [self.inputImage extent];
    rect.origin = CGPointZero;

    CGRect cropRectLeft = CGRectMake(0, 0, rect.size.width, rect.size.height);
    CIVector *cropRect = [CIVector vectorWithX:rect.origin.x Y:rect.origin.y Z:rect.size.width W:rect.size.height];
    result = [result imageByCroppingToRect:cropRectLeft];

    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    // Second pass: vertical gradient (detects horizontal edges)
    const CGFloat weights_h[] = {  1*g,  1*g,  1*g,
                                   0*g,  0*g,  0*g,
                                  -1*g, -1*g, -1*g };

    result = [CIFilter filterWithName:@"CIConvolution3X3" keysAndValues:
              @"inputImage", result,
              @"inputWeights", [CIVector vectorWithValues:weights_h count:9],
              @"inputBias", [NSNumber numberWithFloat:0.5],
              nil].outputImage;

    result = [result imageByCroppingToRect:cropRectLeft];

    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    result = [CIFilter filterWithName:@"CIAreaMaximum" keysAndValues:@"inputImage", result, @"inputExtent", [CIVector vectorWithX:0.0 Y:0.0 Z:result.extent.size.width W:result.extent.size.height], nil].outputImage;

    result = [result imageByCroppingToRect:cropRectLeft];

    result = [CIFilter filterWithName:@"CICrop" keysAndValues:@"inputImage", result, @"inputRectangle", cropRect, nil].outputImage;

    return result;
}

About the Prewitt Operator, Core Image
Note that, as a convolution, the Prewitt Operator requires two passes: one for the horizontal gradient and one for the vertical gradient (the two weight matrices above). Also, Core Image requires the output of convolutions to be cropped to a new extent (a Core Graphics rectangle) in order to be visible. Finally, the micro-fine output from the Prewitt Operator was created by reducing the bias on both convolution filters by half, and by inverting the results.
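To make the bias-and-invert note concrete: a convolution with these weights produces signed values, and the 0.5 bias shifts them so negative responses survive the clamp to a displayable channel range. The sketch below is my own illustration of that arithmetic, not code from the post; the function name and the clamp-to-[0, 1] model of an image channel are assumptions.

```python
# Illustrative sketch (my assumption, not the post's code): a signed gradient
# value is shifted by a bias so negative responses stay displayable, clamped
# like a normalized image channel, then inverted.
def bias_and_invert(gradient, bias=0.5):
    """Map a signed gradient into [0, 1] with a bias, then invert the result."""
    shifted = max(0.0, min(1.0, gradient + bias))  # clamp to a displayable channel
    return 1.0 - shifted

# A flat region (gradient 0) maps to mid-gray with the default 0.5 bias;
# halving the bias instead pushes flat regions toward one extreme, which is
# one way to read the post's note about the "micro-fine" look.
```

With bias 0.5, both strongly positive and strongly negative edges land at the ends of the range while flat areas sit at mid-gray; inversion then flips which end reads as "edge."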