
Tuesday, May 2, 2023

A Bee's-Eye View! Rethinking Image Thresholding to Save the Bees

        Computers may not have eyes, but through computer vision methods like image segmentation and classification, they can effectively recognize and process images and videos. Scientists and engineers create models that mimic certain human skills and abilities in order to solve problems more accurately and efficiently at extremely large scales. For example, human visual intelligence can effortlessly pick out specific objects from complex, cluttered scenes. We can look at our surroundings and know that the object on the table is a bowl of fruit, and we can just as easily distinguish the different fruits within it.


[1] Humans can easily identify very specific objects
and details in a given scene using our sense of sight!

        To do this, our brains use contextual information, such as relative size or object orientation. Prior knowledge and experience also play a significant role in how we perceive a given object or landscape. Whenever we face a multilayered visual stimulus, a wave of brain activity allows us to segment the scene into more easily recognizable and distinguishable sections. This ability of the brain serves as a model for the deep learning algorithms that divide images into distinct objects in image segmentation.

        Segmentation modeling is a computer vision method that takes a divide-and-conquer approach, breaking an image into multiple regions that can then be analyzed more easily. Segments can be formed using similarity or discontinuity: the former groups neighboring pixels that resemble one another, while the latter places segment boundaries where pixel intensity values change abruptly within an image.
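As a minimal sketch of these two ideas, consider a single hypothetical scanline of pixel intensities (the values and the one-boundary assumption are made up for illustration): a large jump between neighbors signals discontinuity, while pixels close to a region's mean illustrate similarity.

```python
import numpy as np

# A toy 1-D "scanline" of pixel intensities with two regions.
row = np.array([12, 14, 13, 15, 200, 205, 198, 202], dtype=float)

# Discontinuity: a large jump between neighboring pixels marks
# a boundary between segments.
jumps = np.abs(np.diff(row))
boundary = int(np.argmax(jumps)) + 1  # index where the new segment starts

# Similarity: pixels on each side cluster tightly around their
# region's mean intensity.
left, right = row[:boundary], row[boundary:]
print(boundary, left.mean(), right.mean())
```

Real segmenters apply the same logic in 2-D, but the principle is identical: split where neighbors differ, merge where they agree.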

        One simple technique, and the first step in many segmentation algorithms, is image thresholding. Thresholding algorithms separate the foreground from the background by assigning each pixel to one or the other based on a provided threshold value. Pixels are categorized according to some color space property, such as light intensity (grayscale), RGB values, or HSV color ranges.
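In grayscale, binary thresholding reduces to a single comparison per pixel. A minimal NumPy sketch, using a tiny made-up image and an arbitrary threshold of 128:

```python
import numpy as np

# A tiny hypothetical grayscale image: intensities in [0, 255].
image = np.array([
    [ 10,  20, 200, 210],
    [ 15,  25, 220, 230],
    [ 30, 180, 190,  40],
    [ 50,  60,  70, 240],
], dtype=np.uint8)

# Binary thresholding: pixels above the threshold become
# foreground (1); everything else becomes background (0).
threshold = 128
binary = (image > threshold).astype(np.uint8)

print(binary)
```

Libraries like OpenCV wrap this same operation (with variants such as inverse and adaptive thresholding), but the core is just this elementwise comparison.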


    
[2] Grayscale image (top left), binary thresholding (top right),
visualizations of HSV (middle) and RGB (bottom) color spaces

        By integrating the specific color properties we understand, such as hue, brightness, chromaticity, and saturation, with the RGB color space through which we see, thresholding techniques transform raw images so they are easier to analyze in image segmentation models. Our thresholding techniques are based on how humans visually perceive their environment, which made me wonder what an object classification model would look like if it were based instead on how bees interact with the flowers in their environment. How does the visual information they gather allow them to optimize flower detection?
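Hue-based thresholding in HSV space is one way those perceptual properties get used. A small sketch using Python's standard-library `colorsys` on a made-up 2×2 RGB image; the "green" hue range chosen here is an arbitrary assumption, not a standard:

```python
import colorsys
import numpy as np

# A toy 2x2 RGB image (channel values in [0, 1]):
# two reddish pixels on top, two greenish pixels below.
pixels = np.array([
    [[0.9, 0.1, 0.1], [0.8, 0.2, 0.1]],
    [[0.1, 0.8, 0.2], [0.2, 0.9, 0.1]],
])

def is_green(rgb, lo=0.2, hi=0.45):
    """Keep pixels whose hue falls in an (arbitrary) green band."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return lo <= h <= hi

# Thresholding on hue rather than intensity: the mask marks
# pixels that match the target color range.
mask = np.array([[is_green(px) for px in row] for row in pixels])
print(mask)
```

Thresholding on hue rather than raw intensity makes the mask robust to brightness changes, which is why color-range segmentation is usually done in HSV rather than RGB.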


[3] Bee on a lavender flower!

        While smell acts as the main initial attractant for bees, scientists have determined that bees also rely heavily on their color vision. This ability to perceive colors uses the absorption and reflection of light to determine specific colors and is independent of light intensity. Humans also rely significantly on color vision and can see a wider range of wavelengths than bees, but bees have much more specialized visual abilities. Both humans and bees are trichromatic, but while humans build their color combinations from red, green, and blue photoreceptors, bees have ultraviolet receptors in place of the red ones. This means that bees can’t actually see the color red!
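A toy sketch of that receptor swap, assuming a hypothetical image stored as four stacked channels (UV, blue, green, red). Ordinary cameras don't capture UV, so the UV values here are entirely made up for illustration:

```python
import numpy as np

# Hypothetical 2x2 image with four channels. The UV channel is
# invented for illustration; real RGB cameras do not record it.
uv    = np.array([[0.9, 0.0], [0.7, 0.1]])
blue  = np.array([[0.2, 0.3], [0.1, 0.4]])
green = np.array([[0.1, 0.5], [0.2, 0.6]])
red   = np.array([[0.8, 0.6], [0.9, 0.2]])

# Human trichromatic view: built from red, green, and blue.
human_view = np.stack([red, green, blue], axis=-1)

# Bee trichromatic view: UV takes the place of red, so the
# red channel is simply invisible to the bee.
bee_view = np.stack([uv, green, blue], axis=-1)

print(human_view.shape, bee_view.shape)
```

Both observers end up with three channels per pixel; they just sample different slices of the spectrum.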


[4] Overlap and contrast in the color ranges of humans and bees

        Their ability to see ultraviolet light is an evolutionary advantage that allows them to see specific patterns on flowers, called nectar guides, that are invisible to many other animals. (See a whole database of images of flowers in ultraviolet vs. RGB here!) Many flowers use these guides to attract bees, so both flower patterning and color play an important role in a bee’s decision to pollinate a particular flower. Bees also have a very high critical flicker-fusion (CFF) threshold: the frequency above which an animal perceives a flickering light source as a continuous stream rather than a series of flashes. This high CFF value allows bees to see each individual flower even when they fly by at high speeds!
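The CFF idea can be captured in a few lines. The threshold values below are rough, commonly cited ballpark figures (roughly 60 Hz for humans, roughly 200 Hz for honeybees), used here only as assumptions for illustration:

```python
# Approximate critical flicker-fusion (CFF) thresholds in Hz.
# These are rough ballpark estimates, not precise measurements.
CFF_HZ = {"human": 60, "honeybee": 200}

def looks_continuous(flicker_hz, observer):
    """A light flickering faster than an observer's CFF appears
    as a continuous stream rather than individual flashes."""
    return flicker_hz > CFF_HZ[observer]

# A 100 Hz flicker looks smooth to us, but a bee would still
# resolve the individual flashes.
print(looks_continuous(100, "human"), looks_continuous(100, "honeybee"))
```

The same comparison explains the flower-spotting advantage: visual scenes that blur together for a slower visual system remain temporally sharp for the bee.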


[5] UV nectar guide on a Mimulus flower!


        Ultimately, bees’ distinct color vision capabilities allow them to see each individual flower and distinguish it from the surrounding plants. Using UV-specific color properties like iridescence, bees can perceive petals as more or less “shiny” and associate shinier petals with more pollen.



        Even more interestingly, bees have structural features that allow them to analyze depth and judge distance. Bees have two different types of eyes: the ocelli and the larger compound eyes. The three ocelli are much smaller, single-lens organs located at the center-top of the bee’s head that help bees navigate and detect light intensity. The two compound eyes are made up of thousands of tiny lenses called facets, each of which perceives a small part of what the bee sees. The bee’s brain then combines these fragments into a mosaic-like depiction.


[6] A depiction of the mosaic property of bee eyesight

        Bees’ extraordinary sight abilities are what allow them to be the super-pollinators that they are. As we attempt to maintain populations of these super-pollinators, it may be helpful to create deep learning models that mimic a bee’s sight receptors and signaling pathways. Beginning with color thresholding algorithms that simulate how bees optimize flower detection, we may be able to create image segmentation models better equipped to capture bees’ impressive visual and structural abilities. While such models have yet to be fully developed, engineers have created “bee eye” cameras that use a combination of lenses and mirrors to simulate bees’ complex faceted vision system. Ultimately, by looking at the world through the eyes and mind of a bee, we may be able to characterize floral resources on a much larger scale, which could allow us to protect existing bee habitats, grow current populations, and support bees on their essential quest to pollinate!


Media Credits:

[1] Photo from My Uncommon Slice of Suburbia – Contemporary kitchen.
[2] Photos by Yining Shen, Color Threshold and Image Processing Basics.
[3] Photo by Arthur Harrow, https://flic.kr/p/KSXnnn. License by CC BY-NC-ND 2.0.
[4] Figure by Museum of the Earth, https://www.museumoftheearth.org/bees/biology.
[5] Photo by Plantsurfer from Wikimedia Commons, https://commons.wikimedia.org/wiki/File:Mimulus_nectar_guide_UV_VIS.jpg. License by CC BY-SA 3.0.
[6] Figure by University of Exeter, Bees use patterns – not just colours – to find flowers, using data created by Natalie Hempel de Ibarra.



Further Readings:

Hempel de Ibarra, Natalie, et al. “The Role of Colour Patterns for the Recognition of Flowers by Bees.” Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 377, no. 1862, 2022, https://doi.org/10.1098/rstb.2021.0284.

Kahn, Jeremy. “Why Honeybees May Be the Key to Better Robots and Drones.” Fortune, Fortune, 21 Mar. 2023, https://fortune.com/2022/05/27/honeybees-biomimicry-ai-autonomous-opteran/.

Werner, Annette, et al. “Object Recognition in Flight: How Do Bees Distinguish between 3D Shapes?” PLOS ONE, vol. 11, no. 2, 2016, https://doi.org/10.1371/journal.pone.0147106.

Leonard, Anne S., et al. “Flowers Help Bees Cope with Uncertainty: Signal Detection and the Function of Floral Complexity.” Journal of Experimental Biology, vol. 214, no. 1, 2011, pp. 113–121. https://doi.org/10.1242/jeb.047407.

“Bees Use Patterns – Not Just Colours – to Find Flowers.” University of Exeter, https://news-archive.exeter.ac.uk/homepage/title_928631_en.html.


