Point autotracker

Automatically tracks points across frames in experiment videos.

I wrote this autotracker in 2016, years before DeepLabCut existed, to track the antennae of my moths. The method I used is quite nice. Starting from a minimal set of templates, the code learns how the antenna looks at different stages of the wing beat and under different light levels. Based on this template repository, it then finds the antenna in each frame using a distance minimization method.
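To give a rough idea of the template-repository matching, here is a minimal sketch, not the original code: it assumes grayscale frames as numpy arrays, a small list of templates, and a search box around the previous position, and uses OpenCV's normalized squared-difference template matching as the distance.

```python
# Minimal sketch of the template-repository idea (illustrative, not the original code).
import cv2
import numpy as np

def find_point(frame, templates, search_box):
    """Return the best (x, y) match and its distance within a search box.

    frame      : 2D uint8 array (one camera view)
    templates  : list of small 2D uint8 arrays (the template repository)
    search_box : (x0, y0, x1, y1) region around the previous position
    """
    x0, y0, x1, y1 = search_box
    roi = frame[y0:y1, x0:x1]
    best = (None, np.inf)
    for tmpl in templates:
        # Normalized squared difference: 0 = perfect fit, 1 = worst fit
        dist_map = cv2.matchTemplate(roi, tmpl, cv2.TM_SQDIFF_NORMED)
        min_val, _, min_loc, _ = cv2.minMaxLoc(dist_map)
        if min_val < best[1]:
            # Convert the template corner to the template centre in frame coordinates
            cx = x0 + min_loc[0] + tmpl.shape[1] // 2
            cy = y0 + min_loc[1] + tmpl.shape[0] // 2
            best = ((cx, cy), min_val)
    return best
```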

I put in some interesting optimization here. There is of course the distance minimization itself, i.e. the template that fits the image accurately (low distance) is preferred. But because there are multiple cameras, one view can have a good fit while the others have bad fits (this happens quite often, unfortunately). To circumvent this, I optimize not just the template fit in each video, but also how accurately the 3D point is estimated, so that only points with low 3D error residuals are chosen. In addition, the points tracked in one frame should be close to the points tracked in the previous frame (i.e. the antennal movement has a reasonable velocity limit). All of these terms were optimized together to track the antennae.
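As a rough sketch of how such a joint cost could look (the function names, weights, and the linear DLT triangulation below are illustrative assumptions, not the original implementation): for every pair of candidate matches from two cameras, combine the 2D template distances, the 3D reprojection residual, and the distance to the previously tracked 3D point, then keep the pair with the lowest total cost.

```python
# Sketch of a joint cost: per-view template distance + 3D residual + temporal
# smoothness. Weights and helper functions are illustrative, not the original code.
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT) triangulation from two 3x4 camera projection matrices."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def reprojection_error(P, X, pt):
    """Pixel distance between the reprojected 3D point and the 2D candidate."""
    x = P @ np.append(X, 1.0)
    return np.linalg.norm(x[:2] / x[2] - pt)

def joint_cost(cands1, cands2, P1, P2, prev_X,
               w_2d=1.0, w_3d=1.0, w_vel=0.5):
    """Pick the candidate pair that minimizes the combined cost.

    cands1, cands2 : lists of (point_xy, template_distance) per camera
    P1, P2         : 3x4 projection matrices of the two cameras
    prev_X         : 3D position tracked in the previous frame
    """
    best = (None, np.inf)
    for pt1, d1 in cands1:
        for pt2, d2 in cands2:
            X = triangulate(P1, P2,
                            np.asarray(pt1, float), np.asarray(pt2, float))
            resid = (reprojection_error(P1, X, pt1) +
                     reprojection_error(P2, X, pt2))
            vel = np.linalg.norm(X - prev_X)   # frame-to-frame 3D motion
            cost = w_2d * (d1 + d2) + w_3d * resid + w_vel * vel
            if cost < best[1]:
                best = ((pt1, pt2, X), cost)
    return best
```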

And the code worked quite well. I was able to digitize all my videos within a week or so. That is not much, considering I had 28,000 frames per video, 2 videos per trial, 4 trials per moth, and approximately 15 moths :sweat_smile:

Of course, DeepLabCut and other recent machine-learning-based tracking software will do way better, because they store the appearance information in a far more compressed fashion and identify the tracked points much more reliably than the method I used (though they need a lot more computational power). So this code might be a relic from the past, but it is nice to see that I was on the right track :nerd_face:

If you want to check out the code, you can find it here. If you do end up using it for something, do shoot me a message. I would be very curious to know why :stuck_out_tongue_winking_eye: