CPU processing in between GPUImage Filters
I would like to do some CPU processing on the image data in between GPUImage filters. For example: apply an edge detection filter to the image, transfer the data back to the CPU to do some calculations and change the RGB values of specific pixels based on those calculations, then transfer the data back to the GPU to apply a different filter before displaying the image on the screen. Any suggestions on how I could go about doing this?
That's wonderful, thank you! Do you have an idea of approximately when the new input class will be ready? I'd like to use GPUImage to improve performance in a demo of a project that'll take place about a week from now, if possible. If (as I'm guessing) the new input class won't be ready by then, do you have a suggestion on where to start if I try to hack together a way to feed data back into the framework between filters myself?
Sorry, forgot to update this, but the GPUImageRawDataInput class has been in the framework since Tuesday. Look at the RawDataTest example for how to use this.
Awesome, thank you so much!! I'll go take a look at that.
Hi Brad,
The input/output classes look great, thank you. However, I'm having trouble working out how to use them in the video processing pipeline. I understand how the RawDataTest example works for single-image processing, but I'm not sure where to put the CPU processing code in the video stream pipeline.
For example, I am trying to implement the following pipeline that goes from camera output -> luminance filter -> CPU processing that modifies the raw data and creates a second image -> alpha blending of these two images -> display on the screen:
// Set up luminance threshold and alpha blend filters
luminanceFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
[(GPUImageLuminanceThresholdFilter *)luminanceFilter setThreshold:0.6];
blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
[(GPUImageAlphaBlendFilter *)blendFilter setMix:0.5];

// Set up videoCamera -> luminance filter
[videoCamera addTarget:luminanceFilter];

// Set up luminance filter -> raw data
GPUImageRawDataOutput *luminanceOutput = [[GPUImageRawDataOutput alloc] initWithImageSize:CGSizeMake(width, height) resultsInBGRAFormat:YES];
[luminanceFilter addTarget:luminanceOutput];
GLubyte *luminanceOutputBytes = [luminanceOutput rawBytesForImage];

// Method call to do CPU processing on luminanceOutputBytes
void *blendFilterMaskBytes = calloc(width * height * 4, sizeof(GLubyte));
ProcessRawCameraOutput(luminanceOutputBytes, blendFilterMaskBytes);

GPUImageRawDataInput *originalImage = [[GPUImageRawDataInput alloc] initWithBytes:luminanceOutputBytes size:CGSizeMake(width, height)];
GPUImageRawDataInput *imageToBlend = [[GPUImageRawDataInput alloc] initWithBytes:(GLubyte *)blendFilterMaskBytes size:CGSizeMake(width, height)];

// Send output of CPU processing to alpha blend filter
[originalImage addTarget:blendFilter];
[imageToBlend addTarget:blendFilter];

// Display blended image on screen
[blendFilter addTarget:imageView];
I've put a method call for the CPU processing directly into the pipeline setup code, but that seems like the wrong way to do it. If you could explain to me the correct way to do this for video processing I'd greatly appreciate it!
You need to set a callback block to use with the raw data output, like in the following:
[rawDataOutput setNewFrameAvailableBlock:^{
    // Handle raw data processing here
}];
Have your processing write to the bytes used for the raw data input, and then trigger the -processData within that callback block. I'd also set the originalImage to ignore the blendFilter for updates using the targetToIgnoreForUpdates property, so you don't get double updates.
What this should do is process through the initial filters, notify your code in the block to process, have it do its work, and then have you manually trigger the raw data input to process from that point on. The raw data input should cause the blend to work on both images and you should get a proper result.
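Putting those pieces together, a rough sketch of that callback might look like the following. This is only an illustration, not tested code: it reuses the variable names from the setup code earlier in this thread (luminanceOutput, originalImage, imageToBlend, blendFilter, blendFilterMaskBytes, width, height) and assumes they remain in scope when the block fires.

// Avoid a retain cycle from the output capturing its own block
__unsafe_unretained GPUImageRawDataOutput *weakOutput = luminanceOutput;

[luminanceOutput setNewFrameAvailableBlock:^{
    // Pull the latest frame's bytes back from the GPU
    GLubyte *outputBytes = [weakOutput rawBytesForImage];

    // CPU-side work: build the mask image from the luminance output
    ProcessRawCameraOutput(outputBytes, blendFilterMaskBytes);

    // Upload both buffers and manually trigger processing from this
    // point in the pipeline onward
    [originalImage updateDataFromBytes:outputBytes size:CGSizeMake(width, height)];
    [originalImage processData];
    [imageToBlend updateDataFromBytes:(GLubyte *)blendFilterMaskBytes size:CGSizeMake(width, height)];
    [imageToBlend processData];
}];

// Prevent double updates of the two-input blend filter
[originalImage setTargetToIgnoreForUpdates:blendFilter];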
That worked great, thank you very much!
If you wait for a little bit, I'm finalizing a new raw data input type, which should simplify the job of getting CPU-bound data to and from the framework. Right now, the GPUImageRawData class (soon to be renamed GPUImageRawDataOutput) gives you raw data output from a series of filters, but I need an input class to match that.
These input and output classes should allow for some CPU-based processing to occur in between filter stages.