Slow Processing on Large Images
If I create a shader that takes too long to process (I have not nailed down exactly how long triggers this), the OpenGL renderer stops the shader before all of the tiles have rendered. The outcome is that some of the tiles in your image are rendered and the rest are just blank.
This seems to only happen on larger images, but I have also seen it with a cheaply made shader while I was initially writing it (we do not always write optimized code up front).
I saw Brad's bug post on GitHub about this, and I have searched non-stop for a way to work around the OpenGL renderer breaking when a frame seems to be taking too long.
I was worried about this....
When developing, it is so much easier to not worry about speed and optimize later, especially when coming up with a whole new algorithm or porting a CPU algorithm to the GPU.
I already work with raw data and pass that along, so making stripes is not too difficult. The only issue is that the more stripes you use, the more overlap you have to worry about for certain algorithms (blur, unsharp mask, etc.), so there is an inherent slowdown (though it should be mostly minimal).
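To make the overlap idea concrete, here is a minimal sketch of how stripe bounds might be computed. This is purely illustrative (not from GPUImage or any Apple API): each stripe is padded on both sides by the filter's neighborhood radius, so a blur or unsharp mask near a stripe edge still sees the pixels it needs, and only the unpadded region is kept in the output.

```python
def stripes_with_overlap(height, num_stripes, overlap):
    """Split an image of `height` rows into horizontal stripes.

    Each stripe is padded by `overlap` rows on both sides (clamped to
    the image bounds) so neighborhood filters have valid input near
    stripe edges. Returns (keep_start, keep_end, pad_start, pad_end)
    tuples: keep_* is the region whose output is written back,
    pad_* is the region actually processed.
    """
    base = height // num_stripes
    stripes = []
    for i in range(num_stripes):
        start = i * base
        # Last stripe absorbs any remainder rows.
        end = height if i == num_stripes - 1 else start + base
        stripes.append((start, end,
                        max(0, start - overlap),
                        min(height, end + overlap)))
    return stripes
```

The overlap cost is why more stripes mean more redundant work: every interior stripe reprocesses `2 * overlap` extra rows, which is the "inherent slowdown" mentioned above.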
Thanks Brad!
Unfortunately, I asked the engineers at Apple the same thing, and they said that there is no way to avoid this. If a single frame takes longer than a set amount of time, the OpenGL ES watchdog timer they use will kill it so that it doesn't monopolize the GPU and prevent the UI from rendering.
The only way to avoid this is to write a less computationally expensive shader or to split the image into a series of smaller tiles and process those separately. The latter is an approach I've been exploring for processing larger frames on older devices with smaller maximum texture sizes, but it might take me a while to get something like that operational.