Monday, August 28, 2017

TECH | Histogram equalization using Apple Metal Performance Shaders and more...

There are a lot of developers (or would-be developers) interested in finding a foothold in the world of Apple Metal, but who are stymied from the start by a lack of adequate documentation and simple examples. While Apple claims that MetalKit was built just for those developers, it has failed to provide sample code comprehensible to the newbie.
A dark room, as it appears in the Camera app for iPhone
The same room, after applying the histogram equalization Metal Performance Shader
This post should remedy that by showing beginners how to use MetalKit to display video frames from the camera on iPhone using Metal, both with Metal Performance Shaders (pre-packaged code) and Metal Shaders (hand-coded). The first app uses the histogram equalization Metal Performance Shader; the second app uses a Metal pass-through shader only.


A steep learning curve for all image-processing frameworks—vImage, OpenGL, etc.—is what has extended the development time for Chroma (an app I'm developing that enables users to see the unseeable, which includes seeing in the dark). Metal has proven no different; however, unlike vImage and OpenGL, Metal will be worth the effort, as it can perform histogram equalization in real time, allowing users to do other things with their phone that were impossible when using vImage or OpenGL for such an intensive operation.

Both apps do the same thing from the user's perspective, i.e., start the video camera when the app launches, and display the Metal-processed output on the screen. Each contains the de minimis code required to process video frames.

The basic structure of both apps is illustrated using a Quartz Composition document (below); it identifies their three major components and the image-processing workflow. The three components are the video camera and frame provider, the Metal shader for processing each frame, and the MTKView for display. To map the composition to the Metal apps, substitute AVFoundation for Video Input, the Metal shader (shader.metal) for Core Image Filter, and MTKView for Image:

A Quartz Composition, showing the three major components of both Metal apps

All of the above-described functionality is provided on a single view controller. The code contained therein is the absolute minimum required to achieve the intended functionality, and can be divided into these primary parts:
  1. Setting up the camera using AVFoundation, and implementing the delegate methods for handling the video output
  2. Setting up the MetalKit view
  3. Converting the sample buffers (video-frame data) output to Metal textures
  4. Processing the output using Metal Performance Shaders or Metal shader
  5. Rendering the texture to the MetalKit view
Metal Sample App #1: Processing video using a Metal Performance Shader
The following is the code for the initial (and only) view controller:

ViewController.h:

@import UIKit;
@import AVFoundation;
@import CoreMedia;
@import CoreVideo;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>
#import <MetalPerformanceShaders/MetalPerformanceShaders.h>

@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>

@property (retain, nonatomic) AVCaptureSession *avSession;

@end

ViewController.m:

#import "ViewController.h"

@interface ViewController () {
    MTKView *_metalView;
    id<MTLDevice> _device;
    id<MTLCommandQueue> _commandQueue;
    MTKTextureLoader *_textureLoader;
    id<MTLTexture> _texture;
    CVMetalTextureCacheRef _textureCache;
}

@property (strong, nonatomic) AVCaptureDevice *videoDevice;
@property (nonatomic) dispatch_queue_t sessionQueue;

@end

@implementation ViewController

//
// Setting up the Metal view
//
- (void)viewDidLoad {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    [super viewDidLoad];
    
    _device = MTLCreateSystemDefaultDevice();
    _metalView = [[MTKView alloc] initWithFrame:self.view.bounds];
    [_metalView setContentMode:UIViewContentModeScaleAspectFit];
    _metalView.device = _device;
    _metalView.delegate = self;
    _metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
    _metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
    _metalView.framebufferOnly = NO;
    _metalView.autoResizeDrawable = NO;
    CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);
    [self.view addSubview:_metalView];
    
    
    if ([self setupCamera]) {
        [_avSession startRunning];
    }
}

//
// Setting up the video camera using AVFoundation
//
- (BOOL)setupCamera {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    @try {
        NSError *error;
        
        _avSession = [[AVCaptureSession alloc] init];
        [_avSession beginConfiguration];
        [_avSession setSessionPreset:AVCaptureSessionPreset640x480];
        
        // get the default video capture device
        self.videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (self.videoDevice == nil) return FALSE;
        
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:self.videoDevice error:&error];
        [_avSession addInput:input];
        
        dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);
        
        AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
        [dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
        [dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];
        
        [_avSession addOutput:dataOutput];
        [_avSession commitConfiguration];
    } @catch (NSException *exception) {
        NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
        return FALSE;
    }
    
    return TRUE;
}


//
// The video camera output delegate method for capturing each frame, where they are converted to Metal textures
//
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    {
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
        if(status == kCVReturnSuccess)
        {
            _metalView.drawableSize = CGSizeMake(width, height);
            _texture = CVMetalTextureGetTexture(texture);
            // NOTE: the command queue would be better created once (e.g., in viewDidLoad) than re-created per frame
            _commandQueue = [_device newCommandQueue];
            CFRelease(texture);
        }
    }
}

// NOTE | A better way to display the Metal texture would
// be to remove the view controller as a delegate of the MTKView, so
// that the view draws only when you call its draw method
// explicitly, and to call that method each time a new texture is
// created from a sample buffer.

//
// Processing the texture, i.e., equalizing its histogram, and displaying it with the Metal view (MTKView)
//
- (void)drawInMTKView:(MTKView *)view {
    if (_texture) {
        // creating command encoder
        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
        id<MTLTexture> drawingTexture = _metalView.currentDrawable.texture;
        
        // Histogram equalization
        MPSImageHistogram *calculation;
        MPSImageHistogramEqualization *equalization;
        
        // Information to compute the histogram for the channels of an image.
        MPSImageHistogramInfo histogramInfo;
        histogramInfo.numberOfHistogramEntries = 256;
        histogramInfo.histogramForAlpha = FALSE;
        histogramInfo.minPixelValue = (vector_float4){0,0,0,0};
        histogramInfo.maxPixelValue = (vector_float4){1,1,1,1};
        
        /* Performing histogram equalization requires two filters:
         - An MPSImageHistogram filter which calculates the image's current histogram
         - An MPSImageHistogramEqualization filter which calculates and applies the equalization.
         */
        calculation = [[MPSImageHistogram alloc] initWithDevice:_device histogramInfo:&histogramInfo];
        
        equalization = [[MPSImageHistogramEqualization alloc] initWithDevice:_device histogramInfo:&histogramInfo];
        
        /* The length of the histogram buffer is calculated as follows:
         Number of Histogram Entries * Size of 32-bit unsigned integer * Number of Image Channels
         However, it is recommended that you use the histogramSize(forSourceFormat:) method to calculate the buffer length.
         The buffer storage mode is private because its contents are only written by the GPU and never accessed by the CPU.
         */
        NSUInteger bufferLength = [calculation histogramSizeForSourceFormat:_texture.pixelFormat];
        MTLResourceOptions options;
        options = MTLResourceStorageModePrivate;
        id<MTLBuffer> histogramInfoBuffer = [_device newBufferWithLength:bufferLength options:options];
        
        // Performing equalization with MPS is a three stage operation:
        
        // 1: The image's histogram is calculated and passed to an MPSImageHistogramInfo object.
        [calculation encodeToCommandBuffer:commandBuffer sourceTexture:_texture histogram:histogramInfoBuffer histogramOffset:0];
        
        // 2: The equalization filter's encodeTransform method creates an image transform which is used to equalize the distribution of the histogram of the source image.
        [equalization encodeTransformToCommandBuffer:commandBuffer sourceTexture:_texture histogram:histogramInfoBuffer histogramOffset:0];
        
        // 3: The equalization filter's encode method applies the equalization transform to the source texture and writes the output to the destination texture.
        [equalization encodeToCommandBuffer:commandBuffer sourceTexture:_texture destinationTexture:drawingTexture];
        
        // committing the drawing
        [commandBuffer presentDrawable:_metalView.currentDrawable];
        [commandBuffer commit];
        
        _texture = nil;
    }
    
}

- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
    // required by MTKViewDelegate; intentionally left empty
}

@end
NOTE | You can substitute any other Metal Performance Shader for the histogram equalization shader; however, the surrounding code must remain intact.
Compare the amount of code required to convert a video frame to a Metal texture in the captureOutput:didOutputSampleBuffer:fromConnection: method above to the code traditionally used to convert frames to an image format that can be displayed in a view:

    
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef provider = CGDataProviderCreateWithData((void *)sampleBuffer, baseAddress, bytesPerRow * height, nil);
    CGImageRef cgImage = CGImageCreate(width, height, 8, 4*8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, provider, NULL, NO, kCGRenderingIntentDefault);
    CGColorSpaceRelease(colorSpace);
    CGDataProviderRelease(provider);

    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    UIImage *image = [[UIImage alloc] initWithCGImage:cgImage scale:1 orientation:UIImageOrientationUp];
    CGImageRelease(cgImage);

    if (image) {
        [self.view.layer setContents:(__bridge id)image.cgImage];
    }

NOTE | A better way to render textures in the above setup would be to remove the view controller as a delegate of the MTKView, replace the drawInMTKView: override with another method (say, render), and call that method each time a new texture is created from a sample buffer.

Metal Sample App #2: Processing video using a Metal shader
This is the code for the view controller in the second sample app; it is followed by the source for the Metal shader:

ViewController.h
@import UIKit;
@import AVFoundation;
@import CoreMedia;
@import CoreVideo;
#import <MetalKit/MetalKit.h>
#import <Metal/Metal.h>

@interface ViewController : UIViewController <MTKViewDelegate, AVCaptureVideoDataOutputSampleBufferDelegate>

@end

ViewController.m
#import "ViewController.h"

#define degreesToRadians( degrees ) ( ( degrees ) / 180.0 * M_PI )

@interface ViewController () {
    AVCaptureSession *_avSession;
    AVCaptureDevice *_videoDevice;
}

@end

@implementation ViewController {
    MTKView *_metalView;
    // Non-transient objects
    id<MTLDevice> _device;
    id<MTLCommandQueue> _commandQueue;
    id<MTLLibrary> _library;
    id<MTLBuffer> _vertexBuffer;
    id<MTLComputePipelineState> _computePipelineState;
    CVMetalTextureCacheRef _textureCache;
    MTKTextureLoader *_textureLoader;
    id<MTLTexture> _texture;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    
    // creating MTLDevice and MTKView
    _device = MTLCreateSystemDefaultDevice();
    _metalView = [[MTKView alloc] initWithFrame:self.view.frame device:_device];
    _metalView.device = _device;
    _metalView.delegate = self;
    _metalView.clearColor = MTLClearColorMake(1, 1, 1, 1);
    _metalView.framebufferOnly = NO;
    _metalView.autoResizeDrawable = NO;
    _metalView.colorPixelFormat = MTLPixelFormatBGRA8Unorm;
    _metalView.drawableSize = CGSizeMake(self.view.frame.size.width, self.view.frame.size.height);
    [_metalView setContentMode:UIViewContentModeScaleAspectFit];
    [_metalView setTransform:CGAffineTransformRotate(_metalView.transform, degreesToRadians(90.0))];
    [_metalView setContentScaleFactor:[[UIScreen mainScreen] scale]];
    [_metalView.layer setFrame:[[UIScreen mainScreen] bounds]];
    [_metalView.layer setContentsScale:[[UIScreen mainScreen] scale]];
    
    // creating command queue and shader functions
    _commandQueue = [_device newCommandQueue];
    _library = [_device newDefaultLibrary];
    id<MTLFunction> computeShader = [_library newFunctionWithName:@"compute_shader"];
    
    // creating compute pipeline state
    NSError *err = nil;
    _computePipelineState = [_device newComputePipelineStateWithFunction:computeShader error:&err];
    NSAssert(!err, @"%@", [err description]);
    
    CVMetalTextureCacheCreate(NULL, NULL, _device, NULL, &_textureCache);
    
    [self.view addSubview:_metalView];
    
    if ([self setupCamera]) {
        [_avSession startRunning];
    }
}

- (BOOL)setupCamera {
    NSLog(@"%s", __PRETTY_FUNCTION__);
    @try {
        NSError * error;
        
        _avSession = [[AVCaptureSession alloc] init];
        [_avSession beginConfiguration];
        [_avSession setSessionPreset:AVCaptureSessionPreset3840x2160];
        
        // get list of devices
        _videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (_videoDevice == nil) return FALSE;
        
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:_videoDevice error:&error];
        [_avSession addInput:input];
        
        dispatch_queue_t sampleBufferQueue = dispatch_queue_create("CameraMulticaster", DISPATCH_QUEUE_SERIAL);
        
        AVCaptureVideoDataOutput * dataOutput = [[AVCaptureVideoDataOutput alloc] init];
        [dataOutput setAlwaysDiscardsLateVideoFrames:YES];
        [dataOutput setVideoSettings:@{(id)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA)}];
        [dataOutput setSampleBufferDelegate:self queue:sampleBufferQueue];
        
        [_avSession addOutput:dataOutput];
        [_avSession commitConfiguration];
        
    } @catch (NSException *exception) {
        NSLog(@"%s - %@", __PRETTY_FUNCTION__, exception.description);
        return FALSE;
    }
    
    return TRUE;
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"%s %lu", __PRETTY_FUNCTION__, CMSampleBufferGetTotalSampleSize(sampleBuffer));
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    {
        size_t width = CVPixelBufferGetWidth(pixelBuffer);
        size_t height = CVPixelBufferGetHeight(pixelBuffer);
        
        CVMetalTextureRef texture = NULL;
        CVReturn status = CVMetalTextureCacheCreateTextureFromImage(kCFAllocatorDefault, _textureCache, pixelBuffer, NULL, MTLPixelFormatBGRA8Unorm, width, height, 0, &texture);
        if(status == kCVReturnSuccess)
        {
            _metalView.drawableSize = CGSizeMake(width, height);
            _texture = CVMetalTextureGetTexture(texture);
            if (_texture)
                [_metalView draw];
            CFRelease(texture);
        }
    }
}

- (void)drawInMTKView:(MTKView *)view {
    if (!_texture) return;
    
    // creating command encoder
    id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
    id<MTLTexture> drawingTexture = view.currentDrawable.texture;
    
    // set texture to command encoder
    id<MTLComputeCommandEncoder> encoder = [commandBuffer computeCommandEncoder];
    [encoder setComputePipelineState:_computePipelineState];
    [encoder setTexture:_texture atIndex:0];
    [encoder setTexture:drawingTexture atIndex:1];
    
    // dispatch thread groups
    // NOTE: this integer division truncates; if the drawable's dimensions are
    // not multiples of 16, the rightmost and bottom edge pixels go unprocessed
    MTLSize threadGroupCount = MTLSizeMake(16, 16, 1);
    MTLSize threadGroups = MTLSizeMake(drawingTexture.width / threadGroupCount.width, drawingTexture.height / threadGroupCount.height, 1);
    [encoder dispatchThreadgroups:threadGroups threadsPerThreadgroup:threadGroupCount];
    [encoder endEncoding];
    
    // committing the drawing
    [commandBuffer presentDrawable:view.currentDrawable];
    [commandBuffer commit];
}

- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
    // required by MTKViewDelegate; intentionally left empty
}

@end

shader.metal


#include <metal_stdlib>
using namespace metal;

kernel void compute_shader(texture2d<float, access::read> input [[texture(0)]],
                    texture2d<float, access::write> output [[texture(1)]],
                    uint2 gid [[thread_position_in_grid]])
{
    float4 color = input.read(gid);
    output.write(float4(color.r, color.g, color.b, color.a), gid);
}

Sample Apps Source Code
You can download the source for both sample apps here (built with Xcode 9; runs on iOS 11):




Thursday, August 24, 2017

PIC | Shoe demon rankles jealous demon people (IN-PROGRESS)

Just posting the pics for now; but, also, a quick note about the image quality and the technique used to acquire an image of an invisible demon on the attack:

I know the image is blurred by motion — that's intentional. It's how you capture the light reflected from a demon's cloak (by moving the camera). The light contains so few photons, you have to "scoop" extra light to see it. Running the camera sensor through an emission of photons collects more of them than if you hold it still. That's how you get the blur from the uncloaked objects (by collecting extra photons). It's digital over-exposure, if you will. If I hadn't moved the camera, I would never have seen the face— just as the demon intended. It's a technique to make visible what is otherwise invisible.

Why would I not use this technique? Everybody should use it to determine whether a malevolent entity has taken an aggressive stance (i.e., cloaked) against them in their home. Moving the camera with the exposure duration set to maximum would be the correct method in an attack scenario.

This is not the only way to see a cloaked (invisible) demon. Another way is positioning the lens adjacent to a tight space (crack underneath the door, for example) while pointing it towards the subject; and, another way is pointing the lens on a shiny, curved surface that is reflecting the subject area (such as the reflective coating on a pair of sunglasses or a door knob).

The first requires the demon to be stationary, and a straight shot between the tight space and the demon; the second requires post-processing to bend the image flat, and sufficient reflective area to accommodate both the reflection of the camera and the subject area.


The technique I used catches demons in action, working quickly — exactly what you want in an emergency situation. It has no caveats. So what if the uncloaked items are blurry; I know what they are, and I don't need a picture of them. Besides, the blurry, uncloaked, known, visible items differentiate themselves from the sharp cloaked subjects in a way that makes the invisible demons easier to spot at a glance.

The face of a demon, appearing on (or over) my shoe
Close-up of image shown left

There were plenty of "likes" from Facebook users in other groups; but only one comment, which was left on Google+, made a point of praising the pics' clarity.



Tuesday, August 15, 2017

Cut your beheadings down to none: How relocating disrupts demons' victims' "program," "drama"

I've never met a victim first beginning to recognize their demon problem as a serious one who thought the solution would be easy. Typically, newly awakened victims believe that it would be so difficult to implement, it would be outside of their means to effect it by themselves; however, moving—the ultimate solution to their problem at that early stage of its cognizance—is definitely within the means of most victims at that time, when resources are being depleted but remain somewhat available, and while health and youth afford a greater endurance and stamina for undertaking a new life in a new town. Unfortunately, that is the hardest time for victims to take that step.

Moving can alleviate problems like decapitation by demons as shown (in part) by this still frame, which was taken from a video showing my head in the clutches of a demon's claw. He is reattaching it after having severed it, apparently (also shown on video, below) [search for decapitation-related posts]. Victims in the earliest stages of demonic attack are usually unaware that such atrocities can and do occur frequently, and their failure to perceive the true gravity of their circumstances results in a lesser motivation to relocate
First, it's a big life change right in the middle of every conceivable big life change already suffered, all of which are compressed into a very short period of time. In a matter of months, demons and their human counterparts can (and do) deplete victims of every resource and opportunity they have, which was the case with me. The following video sufficiently conveys the magnitude of a demon-led effort to take everything you have [see also The web site that launched the demonic war]:


Second, the decision to move is never spur-of-the-moment except under the most extreme circumstances, the likes of those that occur to citizens of foreign nations, that you only learn of from CNN; but, even if it was, it's not one that can be implemented quickly. Yet, that's exactly what a victim has to do while it is within their power to make that decision [see Back Home Again in Indiana], mainly because that choice won't be there for long.


Working in favor of a victim is the plenitude of motivation the problem brings victims, which is felt as constant shock.

My head, being lifted from my severed neck in the arms of a small demon gripping the base of my skull, and looking upwards towards my head's eventual destination. The original still frame is shown right; a close-up is shown bottom-left. A highly sharpened and contrasted version of the original close-up is shown top-left, and was provided to enhance the distinction between the sever wounds around my neck (which extend down my spine), the demon's arm and face, and the empty space between my head and body.
That feeling, at least, enables victims at the onset of the cognizable portion of their demon problem (demons work against victims, unbeknownst to them, well in advance of the time a victim recognizes they have a problem even outside the norm) to consider relocation as an option, but it's never enough to remove the all-too-common stubborn hesitancy to see that option as the only effective solution available to them, having never faced or even witnessed a problem of such a magnitude as to require such a big, drastic step. They can't seem to accept that they have encountered a problem that big, and that life has demanded that they change it in every conceivable way, all at once and all of a sudden.

A reader asks for best advice on handling demon-allied flash mobbing
I advise moving, and explain why it is so effective
I'm hoping demonvictim12345@gmail.com will be different, and accept moving as the ultimate and only solution to the problem, which he describes in his own words here:
I haven't read your entire blog but is there any tips you can tell me or refer me to to help the noise harassment shit and related fear that a few others talked to you about? I am trying to stay off drugs and alcohol and am as always being fucked with extra hard for it. I have never really found any real way to deal with it effectively without being on something at least semi regularly. I also just lost the only real friend I had to the police who I now am pretty sure were deployed by them as part of a plan. I am not going to have internet access much longer from what I can see, so anything would be much appreciated
Those unfamiliar with the scenario he's describing should read all posts related to Anger Management Rituals. My advice to him for eliminating those altogether, which I have taken for myself to great success [see Reader: Did your demons follow you to Indiana?], follows:
Do you have the means to move? While it’s true the “problem” is everywhere, it’s greatly diminished in most respects by simply relocating—mostly, because the persons involved cannot/will not move with you. Only those who have to will follow you, but without the whole team, their “program” for you will not be successful.
The reason why this is the most powerful maneuver for people whose addiction has absolutely nothing to do with the demon problem (like yours, and unlike mine—I am the only door) is because your program starts at birth and ends at death; the path is laid out before you are born. Those who are working for your program, who have worked for your program, and who will work for your program are “hired” at that time (before you were born). Your program requires, then, that you be where those persons are; it’s easier and more reasonable to keep you in one place than to move all of them whenever you move.
Most of your program that you recognize as the beginning of major problems probably involved losing your job, losing your home, losing your car, losing your money all at once, and was followed by an inability to get back on your feet. The reason was to make sure you went nowhere they couldn’t or wouldn’t. 
In addition to that, take comfort that most wouldn’t move. They have the things they took, which were hard to get and keep, and aren’t portable, such as a home, or are too many to move and too important to leave, such as friends and family. Plus, persons working the program circuit are in insanely high demand; they can easily find the same job, but with another victim. 
Another significant reason relates to a change in your environment. You undoubtedly have deep emotions associated with certain places that provide ready tools for your detractors to use against you whenever you return to those places. Your world probably is a mix of “good” place, “bad” place, “good” street, “bad” street, “good” person, “bad” person, etc. 
None of that applies in a new town. You can go anywhere, meet anyone without any encumbering emotions or anxiety. The world feels open, and you can decide how you want to feel where you are, and no one else. 
If I had the means to do it, I’d relocate every victim of the problem you describe before doing anything else, primarily, because regardless of how effective a plan for help promises to be, it can’t be carried out successfully by a person who has too much to deal with than they can handle, and too many persons who wield a powerful influence in their thinking and behavior, sufficient to distract them from nearly any goal. 
I hope this convinces you to move, and to do it now. The best distance is 2,000 miles in any direction. I’d say as far as Indiana, at the least, and all the way to New York.

Friday, August 11, 2017

Reader acknowledges sobriety as ultimate solution to demon problem

This post contains a conversation initiated by a complete stranger via my The Life of a Demoniac Facebook Page today. It hasn't gone any further than what is shown here, yet was posted by merit of the fact that, unlike nearly everyone else for the past 14 years with the knowledge the reader demonstrates, he didn't act oblivious or ignorant to things to the point of mocking when communicating with me. Even though he did tease a bit by abandoning the conversation at the point in which he did, it was still refreshing that he was not afraid to show his thorough understanding of the consequences of the problem and the means by which they are derived; and, best of all, he didn't skirt the truth on the most consequential aspect there is to this problem, specifically, that my dope = your inevitable demise, and actually (albeit very indirectly) suggests sobriety as his advice to me for helping myself.

Here are a couple of things I was surprised to hear him mention, things that most readers (or victims) don't know about or don't/won't discuss:
  • Pods. This is commonly known among demons and their people as a horde of Voices Demons that are assigned to a particular victim, and I've never heard it used except once, and then only by accident, by the one who spoke it. He used the term and described how the pods (or Voices Demons) govern each other exactly the way Voices Demons portray it to me: i.e., that they govern each other, but serve a higher authority or common agenda. As far as being unable to turn their backs on that authority and agenda: they do often say they have no choice (but, also say they like what they do at the same time).
NOTE | I do believe that guilt plays a significant role in Voices Demons' lives—but, not for having injured victims as much as destroying their once proud, affluent, well-educated and highly civilized former selves, having now turned themselves into full-time torturers and murderers.
A couple of facts about pods while I'm on the subject: a pod can exchange victims with other pods under certain circumstances; they are but one section of an orchestra of demons at work on a victim; they generally take instruction from another set of unheard demons during periods of high demonic activity, and when demonic weapons are used on victims.
  • Voices Demons. I've never heard anyone call demons by any of the pedantic names I've ascribed to certain species over the years on this blog; in fact, most people—even victims—tend to shy away from them for whatever reason. Voices Demons is only used in one place: on the blog (or is it?).
  • Opening the door. He seemed to get and accept how demons get their occasional, additional power in this world [see My dope = your inevitable demise], but then ascribed the same happenstance to himself, and, by inference and implication, others, which is not how it works. It's possible that every victim has been/is kept on the same craving/usage schedule, and it's possible that Voices Demons tell victims that their shame brings punishment by and through their acts in that way. So, it's possible, then, he and many others believe the same thing so long as they ignore the events during periods of high demonic activity that allude to (or flat-out call out) just one of the victims on the list. Either way, I can't imagine anyone even halfway in-the-know failing to see that turning my back (or dying on it) would nail the tangible, most egregious part of the problem. I also can't imagine anyone believing that demons would lose interest in great power and reach because I stopped being interested in them, any more than I can imagine that anyone would believe that it's my interest or issues that draw them to me.
Still, it's all pretty simple: no dope = no power, no presence; and, I have yet to meet one demon or human who wants that. I seem to be literally the only one who dislikes things as they are and are going.
The rest is suspect, particularly his remedy for his sucker demon problem. There's no belief in the world that's going to dissuade someone on the demons' side of things that is going to sway things for their targets. It's not a thinking and believing and wishing problem: it requires action to make things go this way or that.

As far as turning his back on evil as part of his solution: that's not what will happen, believe this. You do that to bring yourself closer to God, and to achieve perfection as He commands [Be ye therefore perfect, even as your Father which is in heaven is perfect. Matthew 5:48]. That will be a solution, but one tends to mention that when that is the case (which he didn't).

Finally, I don't know what he thinks I'm doing, nor do I see how he thinks I'm on a mission to "save people." I developed a solution for seeing the invisible; that way, people can see where their health problems come from. People are wrong on that right now; Chroma would correct that. And, even if they were right, people still have to fix the problems themselves, which they haven't. People also have to be willing to see the cloaked demons and demonic entities (the source of most problems), which they have been reluctant to do, preferring instead to invest money into, say, seeking causes and a cure for osteoporosis that couldn't possibly be the cause or the cure than just use their eyeballs to see that those worm-like tunnels in people's bones are sucker demons implanted in them [see link to see just how dense and plentiful sucker demons are placed in and on people], thereby seeing the true cause and dealing directly with it when developing a cure.