Saturday, May 13, 2017

iOS Dev | Watching the same demon in different ways

If you ever wonder what I fight with demons about the most these days, believe it or not, it's whether I can do and succeed at stuff like this...

Today, I wrote code that splits a single iPhone camera feed among multiple displays, each of which can be processed individually and simultaneously, in real time, without degrading app performance or significantly increasing memory and energy consumption, using multicast delegation. It displays video in multiple windows as the camera acquires it, but with a filter applied at a different setting in each one.

The point of that is to be able to monitor the same scene for different types of demonic activity simultaneously; that requires separate windows, each displaying video processed to reveal and/or enhance visibility of a specific type of activity.

In order to process or filter a video feed in your app, you need the image data. Although you can set up multiple preview layers to display the camera feed, the video will look the same in each one. That's because the camera supplies the image data required for processing (e.g., applying filters) to only one recipient; it doesn't make copies and reroute them to multiple objects in your app.
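For context, here's a minimal sketch of that standard, single-recipient setup (the function name and queue label are my own, not from the app). AVCaptureVideoDataOutput's setSampleBufferDelegate(_:queue:) takes exactly one delegate, which is why extra preview layers can only show identical video:

import AVFoundation

// Minimal sketch of the stock capture pipeline; the function name and
// queue label are placeholders. Note that setSampleBufferDelegate(_:queue:)
// accepts exactly ONE delegate: whoever you pass here is the sole
// recipient of the camera's sample buffers.
func makeCaptureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate) -> AVCaptureSession {
    let session = AVCaptureSession()

    // Wire the default camera in as the session's input.
    if let camera = AVCaptureDevice.default(for: .video),
       let input = try? AVCaptureDeviceInput(device: camera),
       session.canAddInput(input) {
        session.addInput(input)
    }

    // The data output vends CMSampleBuffers to its single delegate.
    let output = AVCaptureVideoDataOutput()
    output.setSampleBufferDelegate(delegate, queue: DispatchQueue(label: "camera.frames"))
    if session.canAddOutput(output) {
        session.addOutput(output)
    }

    return session
}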

To get different output in each view, you'd have to copy the data and then redistribute it yourself; but that means consuming far too much memory and a lot of processing power (and energy) to forward it. Add the two together, and you end up with performance bottlenecks. On top of that, you'd have to create and maintain separate, custom views. Even without piping video to them, multiple views always mean more overhead.

Not just any iOS developer can split video from an iPhone camera into multiple windows, and then process the video in each individually and differently in real time, with de minimis impact on resources.
With the solution I implemented today, you can create multiple recipients of the same set of image data without expressly copying it, and you can create multiple views of a single class type.
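The post doesn't include the source, but a minimal sketch of the multicast-delegation idea, under my own naming, might look like this: one object occupies the output's single delegate slot and forwards each sample buffer, by reference, to every registered recipient.

import AVFoundation

// Hypothetical multicast delegate (my names, not the app's): it sits
// in the output's single delegate slot and fans each frame out to
// every registered recipient, with no copying.
final class MulticastVideoDelegate: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // Weak references, so recipient views/controllers can deallocate freely.
    private let recipients = NSHashTable<AnyObject>.weakObjects()

    func add(_ recipient: AVCaptureVideoDataOutputSampleBufferDelegate) {
        recipients.add(recipient)
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        // Hand the same CMSampleBuffer to every recipient; each one can
        // filter it differently before rendering into its own view.
        for case let recipient as AVCaptureVideoDataOutputSampleBufferDelegate in recipients.allObjects {
            recipient.captureOutput?(output, didOutput: sampleBuffer, from: connection)
        }
    }
}

With something like that in place, you set the data output's delegate once, to the multicaster, and each view registers itself via add(_:).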

Four views, but of the same class; no references (outlets) to the views were created. As far as the app is concerned, there is only one view.
In spite of quadrupling the video feed, and quadrupling image processing tasks, the energy impact was negligible:

Energy use and memory consumption were both low, in spite of four video feeds displaying in real time, each processed separately and independently of the others (the exposure was set to a different value for each)
Performance from the user's perspective is flawless—even when eight windows are displayed—as shown by this video of the app running on my iPhone:


Even while processing eight video feeds individually and in different ways (an exposure-adjustment filter was applied separately to each, using a different EV setting), the app displayed them at 120 frames per second.

Update on May 23rd, 2017
Even better than that was a Grand Central Dispatch (GCD) variation on the same theme that I tried later:

The highlighted area of the source code editor pane (Xcode) is all the code required to do the same thing as the protocol checker/multicast delegation solution shown first in this post
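That highlighted code isn't reproduced here, so the following is only a sketch of how a GCD fan-out could achieve the same effect; the class name, handler list, and filter suggestion are my assumptions, not the original implementation.

import AVFoundation
import CoreImage

// A guess at the shape of the GCD variation: one delegate wraps each
// frame in a CIImage and dispatches it to a closure per window,
// instead of forwarding to other delegates.
final class FrameFanout: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {

    // One handler per window; each applies its own filter setting.
    var handlers: [(CIImage) -> Void] = []
    private let renderQueue = DispatchQueue(label: "render", qos: .userInteractive)

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        // CIImage retains the pixel buffer, so all handlers share one
        // backing store; nothing is duplicated per window.
        let frame = CIImage(cvPixelBuffer: pixelBuffer)
        for handler in handlers {
            renderQueue.async { handler(frame) }
        }
    }
}

A handler could, for example, run the frame through the CIExposureAdjust filter with a different inputEV per window, matching the exposure experiment described above.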
