Image Blending Improvements

This summer I continued work on my chronophotography project, Deanimation.

My focus has been making video works, and longer ones at that. A typical video of mine consists of a few thousand frames. Furthermore, I've been finding success with a procedure that uses a "running blend," where each frame is blended with a fixed number of preceding frames, producing an effect as if the moving subject has a tail trailing behind it. This involves significantly more computation, as tens or hundreds of blends are now computed for each output frame, depending on the desired length of the tail.
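To give a rough sense of the idea, here is a minimal sketch of a running blend in C. It assumes a hypothetical layout of 8-bit grayscale frames stored back to back in one array, and it uses a simple lighten (per-pixel maximum) rule as a stand-in for whatever blend is actually wanted; the function and parameter names are mine, for illustration only.

#include <stddef.h>
#include <stdint.h>

/* Stand-in per-pixel blend: keep the lighter of the two pixels. */
static uint8_t blend_lighten(uint8_t a, uint8_t b) {
    return a > b ? a : b;
}

/* Running blend: output frame i is the blend of input frame i with the
 * `tail` frames that precede it. */
void running_blend(const uint8_t *frames, uint8_t *out,
                   size_t num_frames, size_t pixels_per_frame, size_t tail)
{
    for (size_t i = 0; i < num_frames; ++i) {
        size_t start = (i >= tail) ? i - tail : 0;
        for (size_t p = 0; p < pixels_per_frame; ++p) {
            uint8_t acc = frames[start * pixels_per_frame + p];
            for (size_t j = start + 1; j <= i; ++j)
                acc = blend_lighten(acc, frames[j * pixels_per_frame + p]);
            out[i * pixels_per_frame + p] = acc;
        }
    }
}

The nested loops make the cost plain: every output frame pays for up to `tail` blends over every pixel, which is why longer tails and longer videos pushed me toward faster tools.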

To support the work of making these videos, I set out to improve my image processing code, and reached out to a couple of friends who provided valuable support and contributions.

C and OpenGL

I'd like to thank an anonymous friend for getting together with me and developing a program written in C for sequential image blending, and for teaching me about rudimentary image processing in C along the way. This friend and their program showed me that significant speed gains could be attained over my original Python code with only modest effort, gave me confidence working in C, and inspired me to attempt an even faster tool using C and OpenGL. Thank you :-)

The videos in this collection were produced with this new C and OpenGL program. It enables faster processing times thanks to the parallel nature of blending images on the graphics processing unit (GPU) as opposed to the central processing unit (CPU), which a conventional computer program would use. My source code stems from a number of OpenGL tutorials for image manipulation and can be found here: https://gitlab.com/jeremysarchet/shader-deanimation

Background comparison blend

One of the videos from my Mokelumne series, 2020.01.20.0064, was produced using a new technique that allows blending of subjects that are neither strictly lighter nor strictly darker than the background, a constraint my go-to blend depended upon.

In this case the subjects are Snow geese, whose bodies are white and whose wing-tips are black. The new blend works by comparing each candidate pixel against the corresponding pixel of an empty background and keeping whichever is less like it (that is to say, more distant in color space). I had attempted to implement this idea a few years back, but my results at that time were unsatisfactory.
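Here is a minimal sketch of the comparison rule in C, assuming 8-bit RGB pixels in a hypothetical struct and using squared Euclidean distance in RGB as a stand-in for whatever color-space distance the real program uses. Note that the arithmetic is done in a wide integer type, for reasons that come up below.

#include <stdint.h>

typedef struct { uint8_t r, g, b; } Pixel;  /* hypothetical 8-bit RGB pixel */

/* Squared distance in RGB space, computed in a wide integer type so the
 * differences and squares don't overflow 8 bits. */
static int32_t color_dist_sq(Pixel a, Pixel b) {
    int32_t dr = (int32_t)a.r - (int32_t)b.r;
    int32_t dg = (int32_t)a.g - (int32_t)b.g;
    int32_t db = (int32_t)a.b - (int32_t)b.b;
    return dr * dr + dg * dg + db * db;
}

/* Background comparison blend: of the two candidate pixels, keep whichever
 * is less like (farther from) the corresponding background pixel. */
static Pixel background_comparison_blend(Pixel a, Pixel b, Pixel background) {
    return color_dist_sq(a, background) >= color_dist_sq(b, background) ? a : b;
}

Because the rule only asks which pixel differs more from the background, it doesn't care whether the subject is lighter or darker than its surroundings, which is exactly what the black-and-white geese needed.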

From my Berkeley series, there are two videos which use this background comparison blend. The first, 2020.10.16.0408, depicts dry leaves blowing across the street.

The second, 2017.07.09.9488, captures a swallowtail butterfly in flight. This was one of my earliest recordings for the project, and until I got this new blend in working order I was unable to process it: the yellow and black stripes of the butterfly's wings are both lighter and darker than the background sky.

I'd like to thank my friend Dan Walsh for expressing an interest in this blend and working with me to get to the bottom of why it wasn't working as expected. Thanks to his doggedness, we found a simple but overlooked bug in my original implementation: it turns out I was unwittingly computing color distance with mere 8-bit integers, yikes.
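I don't have the original buggy code in front of me, but the flavor of the mistake was roughly this hypothetical reconstruction: doing the subtraction and squaring in the 8-bit pixel type itself, so intermediate values wrap around and the "distance" is meaningless.

#include <stdint.h>

/* Roughly the flavor of the bug: arithmetic confined to uint8_t wraps
 * around modulo 256, so the result bears no relation to a real distance. */
uint8_t buggy_channel_dist(uint8_t a, uint8_t b) {
    uint8_t d = a - b;   /* wraps: 10 - 200 becomes 66, not -190 */
    return d * d;        /* truncated on return: only the low 8 bits survive */
}

/* The fix is simply to widen before doing any arithmetic. */
int32_t fixed_channel_dist(uint8_t a, uint8_t b) {
    int32_t d = (int32_t)a - (int32_t)b;
    return d * d;
}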

In addition to helping me get to the bottom of the buggy behavior, Dan also came up with a brilliant technique for automatically generating an empty background: given a sequence of images, for every pixel location, take the median value of the corresponding pixels across the whole sequence. What you get is a "typical" value for that pixel, which naturally produces the background under most circumstances, since a moving subject passes through a given pixel for only a few frames out of many in the sequence. It works superbly. Thanks, Dan :-)
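The median-background idea is simple enough to sketch in a few lines of C. As before this assumes 8-bit grayscale frames stored back to back (a color image would just do the same thing per channel), and for clarity it sorts a small scratch buffer per pixel rather than doing anything clever; the names are mine, not Dan's.

#include <stdint.h>
#include <stdlib.h>

static int compare_u8(const void *a, const void *b) {
    return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
}

/* Median background: for every pixel location, take the median of that
 * pixel across all frames. A moving subject occupies any given pixel for
 * only a few frames, so the median lands on the background value. */
void median_background(const uint8_t *frames, uint8_t *background,
                       size_t num_frames, size_t pixels_per_frame)
{
    uint8_t *samples = malloc(num_frames);
    if (!samples) return;
    for (size_t p = 0; p < pixels_per_frame; ++p) {
        for (size_t i = 0; i < num_frames; ++i)
            samples[i] = frames[i * pixels_per_frame + p];
        qsort(samples, num_frames, 1, compare_u8);
        background[p] = samples[num_frames / 2];
    }
    free(samples);
}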