
Wednesday, April 21, 2021

Improving Ant Tracking Software: Now with a Partner!

A couple of years ago, the lab designed an experiment to learn how turtle ants decide which nests to occupy (in the wild, a nest is often a hole in a tree). Previous students designed a structure, or “arena,” that contained multiple nests and paths connecting them. They then put a group of ants into the arena and recorded video of where the ants went. They also painted regions of interest (ROIs) in red around key areas, such as just outside the nests and on the bridges connecting them. This way, we can focus on the ROIs instead of watching the entire video.


Figure 1. The arena. The nests are the red tubes with blue bases, and the ROIs are the red rectangles and central pentagon.


One small issue with these videos is that there is simply too much footage to watch by hand, so we started developing software to automate the video-watching process. Our software currently works in two main steps: detecting the ROIs, then tracking the ants through those ROIs. For the past couple of semesters, I have been working to improve the first step, ROI detection. This semester, however, something changed: I got a partner to work with!


Figure 2a. Good ROI detection. Figure 2b. Less good ROI detection.


Having a partner has been not only more fun but also more productive. Most obviously, she is a different person and naturally takes a different approach to our work. For example, we noticed that the current ROI detection sometimes confuses the ROIs with the nests, since both are red. After looking at the pixel HSV values, we noticed that the red in the ROIs tended to have a much lower saturation, i.e., a more washed-out, whiter color, since the ROIs are just red paint on the white arena. So, we implemented a maximum saturation for what we consider to be an ROI. We then ran a few experiments to see whether this approach could give us better results. My initial approach was to fine-tune the maximum saturation to fix a case where the detector merged a real ROI and the nearby nest into one large detected ROI (figure 3). However, I started feeling disheartened about this approach because I could not easily find a maximum saturation that fixed the issue.
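The saturation-cap idea can be sketched in a few lines of numpy. This is only an illustration of the concept, not our actual pipeline: the threshold values, the OpenCV-style HSV ranges, and the extra brightness check are all assumptions chosen for the example.

```python
import numpy as np

def roi_mask(hsv, sat_cap=120):
    """Mask pixels that look like painted ROIs: red in hue but washed out
    (low saturation), as opposed to the deeply saturated red nests.

    hsv: H x W x 3 uint8 array in OpenCV-style HSV (hue 0-179, sat/val 0-255).
    sat_cap: illustrative maximum saturation, not our tuned parameter.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    red_hue = (h <= 10) | (h >= 170)  # red wraps around 0 in the 0-179 hue range
    low_sat = s <= sat_cap            # keep only pale reds (paint on white arena)
    bright = v >= 100                 # ignore very dark pixels (an added assumption)
    return red_hue & low_sat & bright

# Tiny synthetic example: one "nest" pixel (fully saturated red) and one
# "ROI" pixel (pale, washed-out red).
frame = np.zeros((1, 2, 3), dtype=np.uint8)
frame[0, 0] = (0, 255, 200)  # nest: saturated red -> should be rejected
frame[0, 1] = (0, 80, 200)   # ROI paint: pale red -> should be kept
mask = roi_mask(frame)
print(mask)  # the nest pixel is rejected, the ROI pixel is kept
```

In a real frame you would convert BGR to HSV first (e.g. with OpenCV's `cv2.cvtColor`) and then pass the result to a mask like this one.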

Figure 3a. Detected ROI without saturation cap. Figure 3b. My best result with the saturation cap.


On the other hand, my partner decided to look at an image that had many different kinds of issues, including the one I focused on, as well as a nest detected on its own as an ROI and random pixels detected as part of an ROI (figure 4). Using the same approach of setting a maximum saturation, she was able to get an image that resolved many of these issues. It was not perfect, but it indicated that setting a maximum saturation is a promising way to improve our results.


Figure 4a. My partner’s image without saturation cap. Figure 4b. My partner’s image with saturation cap; the nests are no longer detected, and the quality of the detected ROIs is much better.


At this point, we decided to move forward with implementing the saturation cap, but we had to overcome some issues, mainly that a lower saturation maximum sometimes causes the software to stop detecting ROIs that really are there (ROI 1 in Figure 4a). It seemed like we might have to fine-tune the saturation cap for each ROI individually, because different ROIs may be under different lighting conditions: an ROI in direct light may need a higher saturation cap than one in shadow. We brainstormed some ideas and narrowed them down to two. The first takes advantage of the fact that many of our videos are taken from the same point of view. For each ROI, we would define an area in which we expect it to appear across all videos from that point of view, with some extra room in case the arena or camera shifted slightly between videos. We would then find a good saturation cap for that area, using a heuristic such as the average saturation there, and try to detect the ROI using that cap. Our other idea was to begin by automatically detecting the ROIs as we have been, without a saturation cap. Then we would crop out each detected ROI and a small region around it, heuristically find a good saturation cap, and finally re-detect the ROI with the stricter parameter.
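The second idea, re-thresholding each loose detection locally, could look something like the sketch below. Everything here is an assumption for illustration: the function names, the bounding-box format, and the heuristic (the mean saturation in a padded crop, plus a little slack) are placeholders, not the lab's actual rule.

```python
import numpy as np

def local_sat_cap(hsv, box, margin=10, slack=20):
    """Pick a per-ROI saturation cap from the pixels around one detection.

    hsv: H x W x 3 uint8 array in OpenCV-style HSV (hue 0-179, sat/val 0-255).
    box: (row, col, height, width) of a loosely detected ROI.
    Heuristic (an assumption): mean saturation of the padded crop plus slack,
    so the cap adapts to the local lighting around that ROI.
    """
    r, c, hgt, wid = box
    rows, cols = hsv.shape[:2]
    crop = hsv[max(0, r - margin):min(rows, r + hgt + margin),
               max(0, c - margin):min(cols, c + wid + margin)]
    return float(crop[..., 1].mean()) + slack

def redetect(hsv, boxes):
    """Re-threshold each loose detection with its own local saturation cap."""
    masks = []
    for box in boxes:
        cap = local_sat_cap(hsv, box)
        red_hue = (hsv[..., 0] <= 10) | (hsv[..., 0] >= 170)
        masks.append(red_hue & (hsv[..., 1] <= cap))
    return masks

# Synthetic check: a uniformly pale frame should pass its own local cap.
hsv = np.zeros((20, 20, 3), dtype=np.uint8)
hsv[..., 1] = 50  # uniform saturation well below mean + slack
masks = redetect(hsv, [(5, 5, 5, 5)])
```

The appeal of this version is that it needs no per-viewpoint configuration, since each cap is derived from the frame itself; the cost is that a bad initial detection also produces a bad local cap.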


Having a partner helped me slow down to brainstorm options more thoroughly and evaluate which is best. We naturally find slightly different reasons to prefer one option over another and weight each factor differently, so I have to articulate why I find a particular approach more favorable, and then we need to reach a consensus on which approach is better. If I were working on my own, I would be less inclined to question my own intuitions and might choose an option without considering all of the implications of that decision.

In conclusion, it has been very beneficial for me to have a partner who takes a different approach to our work. Together, we did a better job of testing our saturation-cap idea, and we made sure to carefully consider our options moving forward and thoroughly justify our choices between them.



Further Reading


HMC Bee Lab Blog Post: “Finding Regions of Interest in Ant Footage: Automating Work with Computer Vision” by Jarred Allen, July 2019.


HMC Bee Lab Blog Post: “MATLAB: Saving Students One Blob at a Time,” May 2018.


HMC Bee Lab Blog Post: “Painting a Picture of Color Spaces” by Catherine, May 2021.


Programming Design Systems’ article: “Color models and color spaces”


Media credits

[1] This image was taken directly from a video from our experiments from 2019.


[2a, 2b, 3a, 3b, 4a, 4b] These images were created using our pipeline on three different videos from our experiments from 2019.

