After about six weeks of pictures of the local lake, it became clear that even carefully pointing my phone camera at the same place each day was creating quite a jumpy animation. Something better must be possible. This is a fairly basic explanation; I’ll do a more technical one in the future.
So I started to research image alignment techniques. FrameSpan’s image alignment works like this:
1. Feature detection
Identify “features” in the first frame. This uses an established “feature detector” algorithm to find interesting points in the image - these are normally points of high contrast change or with some conjunction of lines or colours. I use the SIFT algorithm here.
2. Feature matching
Now match up the detected features between two frames - by looking for the same features in the next frame, we can work out how far the image has moved. Here we see the feature matches between the two frames showing a movement up and to the left.
3. Transform estimation
Given the list of points matched up between the two frames, we can now work out the difference between them, and therefore how much to move the image to make it nicely match the previous one.
4. The end result
Here’s a side-by-side comparison of before and after image alignment in FrameSpan. It’s still a little jumpy in places, but it copes pretty well considering that quite a lot is changing in the scene over the six or so weeks seen here.
OpenCV has been a great resource for this, providing both the libraries to implement it and some excellent tutorials.