Out with the old iPhone 6s Plus 64 GB, and in with the new iPhone 7 Plus 128 GB:
I'm hoping this one will manage to avoid the destruction that befell my iPhone 5 and incur fewer death threats than my iPhone 6; however, given the early success of my latest app in development—the alpha version already provides high-quality, peer-to-peer video streaming in real time using Apple's highly touted and extremely efficient Multipeer Connectivity Framework—my hope will likely be dashed to pieces. Demons don't like forward progress, after all.
Still, without my high (yes, I quit—for good...literally), demons are blockaded from the power source that brings them into physical proximity to our realm, makes them tangible, and allows them to interact physically, most notably with their weapons:
Accordingly, attacks have been nil, sightings have been de minimis, and the Voices Demons have been well-behaved (i.e., quiet and scarce). I'm sure that would change if I did get high, and I'm sure that the anger against me would be great, seeing as the app runs both on iPhone and Mac:
[Image: My new iPhone 7 Plus 128 GB promises speedier development and testing of my upcoming high-quality, peer-to-peer video streaming app now in development]
[Image: A portion of the source code from the iPhone version of the app, which acts like a remote camera for clients connected via Bluetooth, wireless or any other local network connection]
[Image: Apparently, news doesn't travel as fast as it used to...or does it?]
[Image: A portion of the source code from the Mac (desktop) version of the app, which displays, in real time, video acquired by any iPhone in the vicinity with no setup or configuration]
What's more, it runs on iOS 11, which hasn't even been released to the public yet. That's, like, a triple no-no. Needless to say, although the incentive is slightly different this time, the previous incentives for staying sober remain.
NOTE | It's been nearly two months since I last handed my ass to demons. I'm still going strong. No cravings, no desire...nothing. Demons and their people are ass out to me now; they may own this town, but I own their strength and presence.

About that new app...
Like I said, it's alpha:
Since the time of this post, I've published a beta version of my video streaming app to the App Store; I've also come up with about half a dozen different solutions for squeezing high-quality video over demon-burdened Bluetooth and AirPort wireless connections, all of them really good except for today's. Yet it is, by far, the one I'm happiest about.
For years, neither I nor any of the thousand or so iOS developers I've ever talked to has had much success with pixel buffers. A pixel buffer is a pointer to image data in memory, and it takes a lot of knowledge beyond simple programming to transfer, store, display and otherwise manipulate one (which, as a developer, you are bound to do when dealing with images).
It seems that no one ever gets any of these tasks quite right; often, after having spent more time figuring out pixel buffers than anything else in their lives (and having the hardest time of their lives at the same time), developers either give up or compromise in favor of a shoddy substitute of some kind.
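To give a sense of what's involved: on iOS, a pixel buffer is typically a CVPixelBufferRef, and even just reading one safely means locking it, asking it for its geometry, and respecting any row padding. Here's a minimal sketch of that bookkeeping (an illustration of the API, not code from my app), assuming a single-plane 32BGRA buffer delivered by AVFoundation:

// Illustrative only: safely reading a CVPixelBufferRef (assumes a single-plane 32BGRA buffer).
- (void)inspectPixelBuffer:(CVPixelBufferRef)pixelBuffer
{
    if (CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly) != kCVReturnSuccess) {
        return;
    }
    size_t width = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer);   // may be wider than width * 4 due to padding
    uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
    NSLog(@"%zu x %zu, %zu bytes per row, first pixel (BGRA): %d %d %d %d",
          width, height, bytesPerRow, base[0], base[1], base[2], base[3]);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}

And that's just reading one in place; shipping those bytes to another device, as below, is where things usually fall apart.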
My pixel-buffer nightmare always revolved around the same general issue: moving them over a network, intact and on time. Today, I finally nailed intact for the first time in over two years of periodic attempts, whereas before, I could do nothing but corrupt them (like literally everyone else).
Having achieved this, I can feed the image output from a video camera on one iPhone directly to the GPU on another iPhone for rendering without requiring an Internet connection—albeit far too slowly for use in a shipping product for now.
Since a large segment of my readership actually comes from stackoverflow.com, I'll post the code that transforms the camera video data into objects that are broadcast over a local network to another device, as well as the code that receives those objects and puts the data back into a pixel buffer for rendering with OpenGL on the GPU.
Here's the first segment, which consists of two methods that prepare and send the video data as they receive it from the camera:
// AVCaptureVideoDataOutputSampleBufferDelegate: called once per frame captured by the camera.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    NSData *frameData = [self dataFromImageBuffer:imageBuffer
                                  withBytesPerRow:CVPixelBufferGetBytesPerRow(imageBuffer)
                                       withHeight:CVPixelBufferGetHeight(imageBuffer)];
    // Send the raw frame to every connected peer over the Multipeer Connectivity session.
    NSError *err = nil;
    [((ViewController *)self.parentViewController).session sendData:frameData
                                                            toPeers:((ViewController *)self.parentViewController).session.connectedPeers
                                                           withMode:MCSessionSendDataReliable
                                                              error:&err];
    if (err) NSLog(@"sendData error: %@", err);
}

// Copies the pixel buffer's raw bytes into an NSData object suitable for sending over the network.
- (NSData *)dataFromImageBuffer:(CVImageBufferRef)imageBuffer withBytesPerRow:(size_t)bytesPerRow withHeight:(size_t)height
{
    NSMutableData *data = [NSMutableData new];
    if (CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly) == kCVReturnSuccess)
    {
        uint8_t *rawBuffer = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
        // Copy exactly bytesPerRow * height bytes (1,228,800 for 640 x 480 BGRA), not a hard-coded constant.
        [data appendBytes:rawBuffer length:bytesPerRow * height];
        CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    }
    return data;
}
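Both methods above lean on setup that isn't shown here: an AVCaptureSession configured for 640 x 480 BGRA output (which is where the receiving end's 2,560 bytes per row comes from) and an already-connected MCSession. The sketch below shows roughly what that wiring looks like; the identifiers (startCameraAndAdvertising, the "camera-stream" service type, and so on) are placeholders of my choosing, not the app's actual names:

#import <AVFoundation/AVFoundation.h>
#import <MultipeerConnectivity/MultipeerConnectivity.h>

// Illustrative setup only; in a real app the capture session, session and advertiser
// would be stored in properties so they outlive this method.
- (void)startCameraAndAdvertising
{
    // Camera: VGA preset + 32BGRA, so each frame is 640 x 480 x 4 = 1,228,800 bytes.
    AVCaptureSession *captureSession = [AVCaptureSession new];
    captureSession.sessionPreset = AVCaptureSessionPreset640x480;

    AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
    NSError *inputError = nil;
    AVCaptureDeviceInput *cameraInput = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&inputError];
    if (cameraInput) [captureSession addInput:cameraInput];

    AVCaptureVideoDataOutput *videoOutput = [AVCaptureVideoDataOutput new];
    videoOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
    [videoOutput setSampleBufferDelegate:self queue:dispatch_queue_create("video.capture", DISPATCH_QUEUE_SERIAL)];
    [captureSession addOutput:videoOutput];
    [captureSession startRunning];

    // Multipeer Connectivity: advertise this device so nearby peers can connect with no configuration.
    MCPeerID *peerID = [[MCPeerID alloc] initWithDisplayName:[[UIDevice currentDevice] name]];
    MCSession *session = [[MCSession alloc] initWithPeer:peerID securityIdentity:nil encryptionPreference:MCEncryptionNone];
    session.delegate = self;
    MCNearbyServiceAdvertiser *advertiser = [[MCNearbyServiceAdvertiser alloc] initWithPeer:peerID discoveryInfo:nil serviceType:@"camera-stream"];
    advertiser.delegate = self;
    [advertiser startAdvertisingPeer];
}

The viewing device would join the same session with an MCNearbyServiceBrowser (or MCBrowserViewController); once connectedPeers is non-empty, the delegate method above starts pushing frames.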
Here's the code on the receiving end (a second device on the same network as the device capturing video), which restores the received data to a usable form (for demonstration purposes, to show that the image sent is the image displayed, with no distortion or mangling) and then displays it on-screen:
// MCSessionDelegate: called whenever a frame's worth of bytes arrives from a peer.
- (void)session:(nonnull MCSession *)session didReceiveData:(nonnull NSData *)data fromPeer:(nonnull MCPeerID *)peerID {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Guard against short frames; the bitmap below expects 640 x 480 BGRA (2,560 bytes per row).
        if (data.length < 2560 * 480) return;
        NSMutableData *mdata = [data mutableCopy];
        uint8_t *buffer = (uint8_t *)mdata.mutableBytes;
        NSLog(@"received %lu bytes", (unsigned long)mdata.length);
        // Rebuild the image from the raw bytes to prove they arrived intact.
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef newContext = CGBitmapContextCreate(buffer, 640, 480, 8, 2560, colorSpace,
                                                        kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGImageRef newImage = CGBitmapContextCreateImage(newContext);
        CGContextRelease(newContext);
        CGColorSpaceRelease(colorSpace);
        UIImage *image = [[UIImage alloc] initWithCGImage:newImage scale:1 orientation:UIImageOrientationUp];
        CGImageRelease(newImage);
        if (image) {
            NSLog(@"image size %f x %f", [image size].width, [image size].height);
            // Already on the main queue, so the layer can be updated directly.
            [((ViewerViewController *)self.childViewControllers.lastObject).view.layer setContents:(__bridge id)image.CGImage];
        }
    });
}
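The receiving snippet goes through Core Graphics, which is the simplest way to prove the bytes arrived intact. To get the frame back into an actual pixel buffer (say, to upload it to an OpenGL texture, as mentioned earlier), one option is CVPixelBufferCreateWithBytes; the following is my own sketch of that route, assuming the same 640 x 480 BGRA layout:

// Sketch (not from the original post): wrap received bytes back into a CVPixelBufferRef.
// The buffer references `data`'s bytes without copying, so `data` must outlive the pixel buffer.
- (CVPixelBufferRef)pixelBufferFromReceivedData:(NSData *)data
{
    if (data.length < 2560 * 480) return NULL;
    CVPixelBufferRef pixelBuffer = NULL;
    CVReturn result = CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                                                   640, 480,
                                                   kCVPixelFormatType_32BGRA,
                                                   (void *)data.bytes,
                                                   2560,
                                                   NULL, NULL,   // no release callback or refcon
                                                   NULL,         // no pixel buffer attributes
                                                   &pixelBuffer);
    return (result == kCVReturnSuccess) ? pixelBuffer : NULL;
}

The caller owns the returned buffer and releases it with CVPixelBufferRelease once it has been handed off to the renderer.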
Again, there's nothing impressive about this code, other than the fact that it is likely the first such code posted online that actually does what it's supposed to. Undoubtedly, some developer will find it quite useful someday.