A crazy idea?
I have been thinking about fun things one can do with live video, and one that would have immediate application for the friend who prompted my project would be to overlay animated sprites, with alpha blending, onto the video stream. There are ways to do that with the existing filters, such as the chroma key blend or perhaps even the alpha blend filter (if the video loop can carry transparency information), especially if I implement the looped-video source I'm thinking about.
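For the simplest version of what I mean, something like this sketch (untested, written in Swift against the Objective-C GPUImage classes; the sprite asset name is made up) would blend a static image with an alpha channel over the camera feed:

```swift
// A minimal sketch (untested): blend a single sprite image that carries an
// alpha channel over the live camera feed with GPUImageAlphaBlendFilter.
// "spriteWithAlpha" is a hypothetical asset; the looped-video source
// mentioned above would eventually take the place of the GPUImagePicture.
import GPUImage
import AVFoundation
import UIKit

let camera = GPUImageVideoCamera(sessionPreset: AVCaptureSession.Preset.vga640x480.rawValue,
                                 cameraPosition: .back)
camera.outputImageOrientation = .portrait

// Any source whose frames include transparency can act as the overlay.
let sprite = GPUImagePicture(image: UIImage(named: "spriteWithAlpha")!)

let blend = GPUImageAlphaBlendFilter()
blend.mix = 1.0  // 0.0 shows only the camera, 1.0 shows the overlay at full opacity

// The first target added to the blend is the base layer, the second is the overlay.
camera.addTarget(blend)
sprite.addTarget(blend)

let preview = GPUImageView(frame: UIScreen.main.bounds)
blend.addTarget(preview)

sprite.processImage()
camera.startCameraCapture()
```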
But I would also like to let creative users contribute algorithmic sprites to overlay on the video, without running into the problem of downloadable code being prohibited. So I was thinking: could I use an off-screen web view containing an HTML5 canvas, with JavaScript code generating frames with transparency information to feed into the alpha blend filter? Apple doesn't have a problem with people downloading JavaScript to run in their own interpreter. But is there a way to get the image data out of the web view and into the GPUImage filter pipeline? Must investigate. Would anyone else find this useful?
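To make that concrete, here is roughly the shape of what I'm imagining, sketched against a WKWebView. It's untested; the canvas element id "spriteCanvas", the polling approach, and the base64 round trip are all just placeholders to show the idea, and pulling a PNG through a data URL every frame is almost certainly too slow for real-time use:

```swift
// A rough sketch (untested) of one way to pull frames out of an off-screen
// web view: ask the page's JavaScript for the canvas contents as a base64
// PNG, decode it to a UIImage, and hand it to the alpha blend filter.
// Loading the sprite page into the web view is omitted here.
import WebKit
import GPUImage
import UIKit

final class CanvasSpriteSource {
    let webView = WKWebView(frame: CGRect(x: 0, y: 0, width: 512, height: 512),
                            configuration: WKWebViewConfiguration())
    let blendFilter: GPUImageAlphaBlendFilter
    private var currentOverlay: GPUImagePicture?

    init(blendFilter: GPUImageAlphaBlendFilter) {
        self.blendFilter = blendFilter
    }

    // Call this on a timer or display link to refresh the overlay.
    func pullFrame() {
        // toDataURL() preserves the canvas's alpha channel in the PNG.
        let script = "document.getElementById('spriteCanvas').toDataURL('image/png')"
        webView.evaluateJavaScript(script) { result, error in
            guard error == nil,
                  let dataURL = result as? String,
                  let comma = dataURL.range(of: ","),
                  let pngData = Data(base64Encoded: String(dataURL[comma.upperBound...])),
                  let frame = UIImage(data: pngData) else { return }

            // Swap the previous overlay out of the two-input blend filter
            // before attaching the newly decoded frame.
            self.currentOverlay?.removeTarget(self.blendFilter)
            let overlay = GPUImagePicture(image: frame)
            overlay.addTarget(self.blendFilter)
            overlay.processImage()
            self.currentOverlay = overlay
        }
    }
}
```

If the data-URL round trip turns out to be the bottleneck, GPUImageRawDataInput looks like it might be a better fit, since it can be updated with raw RGBA bytes each frame instead of recreating a picture source, but I haven't tried that.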
I'm not quite sure I follow what you're trying to do here, but I do have an example of point sprites in the GPUImageCrosshairGenerator, which places crosshairs at specified locations in an image. The crosshairs could be replaced with true sprites, and you could animate them by updating the input positions to a filter like that. The result could then be blended into the final image (as I do with the Harris corner detector in the FilterShowcase example).
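As a rough sketch of that idea (this is not code from FilterShowcase, just the general shape in Swift; the positions and the animation are arbitrary):

```swift
// A loose sketch (not compiled): animate a handful of point positions on a
// timer, re-render them through GPUImageCrosshairGenerator, and alpha-blend
// the result over the camera feed, much as the Harris corner detector demo
// does. Positions are normalized 0.0-1.0 coordinates, packed as x,y pairs.
import GPUImage
import CoreMedia
import UIKit

final class AnimatedCrosshairOverlay {
    let generator = GPUImageCrosshairGenerator()
    let blend = GPUImageAlphaBlendFilter()
    private var positions: [Float] = [0.2, 0.5, 0.8, 0.5]  // GLfloat is Float on iOS
    private var phase: Double = 0

    init(camera: GPUImageVideoCamera, output: GPUImageView) {
        // The generator renders into its own framebuffer, so give it a size.
        // (Obj-C selector: forceProcessingAtSize:)
        generator.forceProcessing(at: CGSize(width: 480, height: 640))
        generator.crosshairWidth = 15.0

        // Camera is the base input, the generated sprites are the overlay.
        camera.addTarget(blend)
        generator.addTarget(blend)
        blend.addTarget(output)
    }

    // Call once per frame (e.g. from a CADisplayLink) to move the sprites.
    func tick() {
        phase += 0.05
        positions[1] = Float(0.5 + 0.3 * sin(phase))
        positions[3] = Float(0.5 + 0.3 * cos(phase))

        // Obj-C selector: renderCrosshairsFromArray:count:frameTime:
        generator.renderCrosshairs(fromArray: &positions,
                                   count: UInt(positions.count / 2),
                                   frameTime: CMTime.indefinite)
    }
}
```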
Interaction with the JavaScript interpreter in WebKit is not something I've done in the past, but I have heard of others doing this. In general, Apple is cool with this kind of interpreted code. They just frown on external means of loading modules of functionality into applications in ways that work around them getting their cut from the App Store.