About the series

These works depict trajectories of flight in and around the Sacramento–San Joaquin River Delta in California, captured in January of 2020.

How it works

The process begins with capturing a moving subject on video, using a camera on a sturdy tripod. I then extract the individual frames from the video to produce an image sequence. Finally, with pixel-processing techniques, I superimpose the frames sequentially, building a progression that reveals the otherwise invisible tracks the moving subjects leave behind.
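
As a rough sketch of that blending step (not the actual code from my tools; the function name and raw 8-bit buffer layout here are just for illustration), a "lighten" blend keeps, at each pixel, whichever value is brighter, so a light subject leaves a persistent track against a darker background:

    #include <stdint.h>
    #include <stddef.h>

    /* Minimal sketch of sequential superimposition: fold one new frame into a
       running accumulation with a "lighten" blend, so a bright subject leaves a
       persistent track against a darker background. For subjects darker than
       the background, the comparison simply flips to a "darken" blend. */
    void accumulate_lighten(uint8_t *accum, const uint8_t *frame, size_t n_samples)
    {
        for (size_t i = 0; i < n_samples; i++) {
            if (frame[i] > accum[i])
                accum[i] = frame[i];   /* keep the lighter of the two values */
        }
    }

Seeding the accumulator with the first frame, calling this once per subsequent frame, and writing the accumulator out after each call gives the frame-by-frame progression.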

Special thanks

C and OpenGL

I'd like to thank an anonymous friend for getting together with me to develop a program, written in C, for sequential image blending, and for teaching me rudimentary image processing in C along the way. That collaboration showed me that significant speed gains over my original Python code could be had with only modest effort, gave me confidence working in C, and inspired me to attempt an even faster tool built with C and OpenGL. Thank you :-)

The videos in this collection were produced with this new C and OpenGL program. It achieves faster processing times by blending images in parallel on the graphics processing unit (GPU), rather than on the central processing unit (CPU) as a conventional program would. My source code grew out of a number of OpenGL tutorials on image manipulation and can be found here: https://gitlab.com/jeremysarchet/shader-deanimation
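
To give a flavor of how the GPU does the work (this sketch is illustrative and not taken from the repository linked above), the heart of the approach is a tiny fragment shader that the C program compiles and runs for every pixel in parallel; it could be embedded as a C string like so:

    /* Illustrative only: a fragment shader for the "lighten" blend, embedded as
       a C string constant. The GPU evaluates it for every pixel at once, which
       is where the speed-up over a CPU loop comes from. */
    static const char *lighten_blend_frag_src =
        "#version 330 core\n"
        "uniform sampler2D accum;   // accumulation so far\n"
        "uniform sampler2D frame;   // current video frame\n"
        "in vec2 uv;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    vec4 a = texture(accum, uv);\n"
        "    vec4 f = texture(frame, uv);\n"
        "    color = max(a, f);      // per-channel lighten blend\n"
        "}\n";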

Background comparison blend

One of these videos (2020.01.20.0064) was produced using a new technique that allows blending of subjects which are neither strictly lighter nor strictly darker than the background, a property my go-to blend depended on. In this case the subjects are Snow geese, whose bodies are white and whose wing-tips are black. The new blend works by checking, for each pixel location, which candidate pixel is less like the corresponding pixel of an empty background (that is to say, more distant in color space), and going with that pixel. I had attempted to implement this idea a couple of years back, but my results at the time were unsatisfactory.
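
In code, the idea looks roughly like this (an illustrative C sketch; the real blend runs in a shader, and the function names here are made up for the example), using a squared Euclidean distance in RGB space:

    #include <stdint.h>

    /* Background-comparison blend for one RGB pixel: keep whichever of the
       accumulated pixel or the current-frame pixel lies farther from the
       empty-background pixel in color space, i.e. whichever looks less like
       the background. */
    static int32_t color_dist_sq(const uint8_t *a, const uint8_t *b)
    {
        int32_t dr = (int32_t)a[0] - b[0];
        int32_t dg = (int32_t)a[1] - b[1];
        int32_t db = (int32_t)a[2] - b[2];
        return dr * dr + dg * dg + db * db;   /* squared Euclidean distance */
    }

    void blend_vs_background(uint8_t *accum, const uint8_t *frame,
                             const uint8_t *background)
    {
        if (color_dist_sq(frame, background) > color_dist_sq(accum, background)) {
            accum[0] = frame[0];
            accum[1] = frame[1];
            accum[2] = frame[2];
        }
    }

Whichever candidate is farther from the background wins, so both the white bodies and the black wing-tips of the geese survive the blend.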

I'd like to thank my friend Dan Walsh for expressing an interest in this blend and working with me to get to the bottom of why it wasn't working as expected. Thanks to his doggedness, we found a simple but overlooked bug in my original implementation: it turns out I was unwittingly computing color distance using mere 8-bit integers, yikes.
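
To illustrate the kind of trouble 8-bit arithmetic causes here (the buggy code itself isn't reproduced; this is just a demonstration of the pitfall), the difference of two channel values wraps around instead of going negative when stored back into 8 bits, and squares of differences overflow 8 bits even more easily:

    #include <stdint.h>
    #include <stdio.h>

    /* Demonstration of an 8-bit wraparound corrupting a color difference. */
    int main(void)
    {
        uint8_t bg = 200, px = 10;
        uint8_t wrapped = px - bg;            /* wraps around to 66 */
        int32_t widened = (int32_t)px - bg;   /* -190, as intended */
        printf("8-bit difference: %d, widened difference: %d\n",
               (int)wrapped, (int)widened);
        return 0;
    }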

In addition to helping me get to the bottom of the buggy behavior, Dan also came up with a brilliant technique for automatically generating an empty background: given a sequence of images, for every pixel location, take the median of the corresponding pixels across the whole sequence. What you get is a "typical" value for that pixel, which under most circumstances is the background, since a moving subject passes through any given pixel for only a few frames out of many. It works superbly. Thanks, Dan :-)
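
A rough C sketch of that median idea (single channel, one pointer per frame; the layout and names are assumptions for this example, not Dan's actual code):

    #include <stdint.h>
    #include <stdlib.h>

    /* For each pixel location, gather that pixel's value from every frame and
       take the median. A moving subject covers any given pixel for only a few
       frames, so the median is almost always the empty background. */
    static int cmp_u8(const void *a, const void *b)
    {
        return (int)*(const uint8_t *)a - (int)*(const uint8_t *)b;
    }

    void median_background(const uint8_t **frames, size_t n_frames,
                           size_t n_pixels, uint8_t *background)
    {
        uint8_t *samples = malloc(n_frames);
        if (!samples)
            return;
        for (size_t p = 0; p < n_pixels; p++) {
            for (size_t f = 0; f < n_frames; f++)
                samples[f] = frames[f][p];       /* this pixel's stack over time */
            qsort(samples, n_frames, sizeof(uint8_t), cmp_u8);
            background[p] = samples[n_frames / 2];   /* the median value */
        }
        free(samples);
    }

Sorting each pixel's stack of samples and taking the middle element is the simplest way to get the median; the few frames where a subject covers the pixel end up as outliers at the ends of the sorted stack and are ignored.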