[1] A picture of the resin tree we use in the lab with attached food platforms and nests at the tips. Nodes are highlighted in blue for emphasis.
This is the physical tree that we are using. To be brief, we let an ant loose on the tree, beginning at the middle, and see where it goes. As the ant walks, we record every node it crosses over.
I personally have been working on a computational model of these kinds of experiments, simulating an ant walking along and making decisions about which way to turn. Thus, I have been thinking extensively about how to represent a tree, like the one above, in a mathematical sense. The easy choice, and a common one across other research teams, is to represent the tree as a network, with the junctions between branches represented as nodes and the branches themselves as edges. This significantly abstracts the tree: it ignores geometric properties such as angles and branch lengths, but preserves left/right turns and the widths of the branches.
[2] A network representation of the tree. Branch color represents the width of the branches, and tips are labeled for ease of reading results from the model.
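To make the abstraction concrete, here's a minimal sketch in plain Python (the real figure above was made with networkx, but a dictionary is enough for illustration). All node names and branch widths below are made up:

```python
from collections import deque

# Hypothetical fragment of a tree-as-network: each junction (node) maps to its
# neighbors, with branch width stored on the edge. Names and widths are invented.
tree = {
    "root": {"n1": 6.0, "n2": 5.5},
    "n1": {"root": 6.0, "tipA": 2.0, "tipB": 1.5},
    "n2": {"root": 5.5, "tipC": 3.0},
    "tipA": {"n1": 2.0},
    "tipB": {"n1": 1.5},
    "tipC": {"n2": 3.0},
}

def path_to(tree, start, goal):
    """Breadth-first search: the sequence of nodes an ant crosses from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for neighbor in tree[path[-1]]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # unreachable

print(path_to(tree, "root", "tipB"))  # ['root', 'n1', 'tipB']
```

Because the tree is abstracted this way, "where the ant went" is just a list of node labels, which is exactly what we record in the physical experiments.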
Two things draw me to working with networks. For one, it is a blast: I'm a big fan of anything discrete, it just seems to click in my brain (I'll come back to this later). For another, networks are ubiquitous in life. So many things are networks, and you don't really realize it until you begin working with them. An obvious real-life example is a transportation network, such as a subway system or airport connections, where destinations are connected by paths. Look at computer networks, or social networks, and you'll find many more examples.
Why are networks so present in our lives? Is it just that they're the easiest way to represent things? They certainly aren't something present in nature, right? Surely they must be some human abstraction of real-life things, like language?
Well, for a moment, let’s talk about language. When learning a language, one may find it difficult to memorize each word. Instead, language learners often learn rules: ways to get from one word to another. That is, languages aren’t random word associations—there are rules within sentences that operate in non-random ways. For instance, you infer what the next word of this sentence will be, based on the previous words, even if I don’t finish the ______.
Now imagine that languages form a network, where associated words are connected by edges. Adjective-noun connections would often be strong: blue sky, hot sun, green grass. Verb-adjective connections would be strong as well: running late, falling asleep, going away. These direct connections are said to have distance 1, because they are connected by one edge. Some less direct connections may have larger distances, like some verb-object connections—forgot my phone, missed your call—which have a distance of 2.
Here's a fun hypothetical: if you put the entirety of the English language into a network like this, what would be the average distance between two randomly chosen words? Of course, there would be many paths between any two words, so what we're after is the average minimum distance: the shortest path, averaged over pairs of words. Turns out, it's not just a hypothetical! The answer has been shown to be around 2–3.
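To make "minimum distance" concrete, here's a toy sketch: a made-up fragment of a word network (invented associations, not real corpus data), with a breadth-first search counting edges between two words:

```python
from collections import deque

# Invented word associations for illustration only.
edges = [("blue", "sky"), ("hot", "sun"), ("sky", "sun"),
         ("forgot", "my"), ("my", "phone"), ("my", "call"),
         ("missed", "your"), ("your", "call")]

# Build an undirected adjacency structure from the edge list.
graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def distance(graph, start, goal):
    """Minimum number of edges between two words (breadth-first search)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        word, d = queue.popleft()
        if word == goal:
            return d
        for nxt in graph[word]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # the words are not connected

print(distance(graph, "forgot", "phone"))  # 2, via "my"
```

Finding that this kind of minimum distance averages out to 2–3 over the whole language is the surprising part.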
One thing that English and other languages share is their network structure. Languages have the properties of small-world networks, namely low average minimum distances between words. Small-world networks are ones that form small, close-knit neighborhoods of densely interconnected nodes, which are then linked together by shortcuts, like social circles. It makes sense that languages would develop this way, because small-world networks are known to make it easier for humans to anticipate future events based on previous ones.
[3] An example small-world network. Note that nodes have multiple connections, and there are 'shortcuts' allowing for low average minimum distances between nodes.
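A quick way to see what shortcuts buy you is to compare a plain ring lattice with the same ring plus a couple of added long-range edges. This is a stdlib-only sketch (not the full Watts–Strogatz rewiring construction, just the shortcut idea):

```python
from collections import deque

def avg_min_distance(graph):
    """Average shortest-path length over all node pairs (BFS from every node)."""
    total, pairs = 0, 0
    for source in graph:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            node = queue.popleft()
            for nxt in graph[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring(n, shortcuts=()):
    """Ring lattice: each node linked to its two neighbors, plus optional shortcut edges."""
    graph = {i: set() for i in range(n)}
    for i in range(n):
        graph[i].add((i + 1) % n)
        graph[(i + 1) % n].add(i)
    for a, b in shortcuts:
        graph[a].add(b)
        graph[b].add(a)
    return graph

print(avg_min_distance(ring(20)))                      # ~5.26: long paths around the ring
print(avg_min_distance(ring(20, [(0, 10), (5, 15)])))  # noticeably smaller with two shortcuts
```

Just two shortcut edges pull the average distance down, which is the defining trick of small-world networks.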
Okay, tangent over: what can this tell us about networks' place in the overall human psyche? Well, it really makes sense that we prefer small-world networks in our languages if that is literally how our brains are shaped. And yes, our brains are networks (neurons connected by axons) which demonstrate small-world properties. This of course brings up new questions, such as why we (and other animals) evolved this way. Likely, it's the most efficient way to process information, but it leads me to wonder whether the same is true in every environment.
Cephalotes varians (the turtle ants in our lab), for example, are arboreal ants, and thus are constrained by the geometry of the trees in which they live. As I mentioned at the beginning, trees can easily be represented as networks. However, because tree junctions are bifurcations, and branches don’t often reconnect to the tree, trees are most analogous to binary tree networks. In networks such as this, there aren’t shortcuts like there are in small-world networks—to get to a specific node you must pass through a specific path. To get from one end of the tree to another requires walking along the length of the whole tree. Necessarily, binary trees such as this cannot be small-world networks.
[4] An example of a turtle ant network observed in the wild. Note how the network has no small-world properties: paths are long chains of nodes, and node clusters are uncommon.
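The "no shortcuts" point is easy to check: in a perfect binary tree, the only route between two leaves on opposite sides runs all the way up to the root and back down, so their distance is twice the tree's depth. A small sketch, using the standard indexing where node i has children 2i+1 and 2i+2:

```python
from collections import deque

def binary_tree(depth):
    """Adjacency structure of a perfect binary tree: node i has children 2i+1, 2i+2."""
    n = 2 ** (depth + 1) - 1
    graph = {i: set() for i in range(n)}
    for i in range(n):
        for child in (2 * i + 1, 2 * i + 2):
            if child < n:
                graph[i].add(child)
                graph[child].add(i)
    return graph

def distance(graph, start, goal):
    """Minimum number of edges between two nodes (breadth-first search)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        node, d = queue.popleft()
        if node == goal:
            return d
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))

depth = 5
tree = binary_tree(depth)
leftmost = 2 ** depth - 1         # first leaf
rightmost = 2 ** (depth + 1) - 2  # last leaf
print(distance(tree, leftmost, rightmost))  # 2 * depth = 10
```

Unlike the ring-with-shortcuts example, there is no edge you could hope to find that bypasses the root for these two leaves; the tree's geometry forces the long walk, just as it does for the ants.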
Humans, along with many other animals, have brains that evolved as small-world networks. It makes me wonder whether a species such as Cephalotes varians, having evolved in such a different kind of environment, has formed a different kind of efficient network in its colony structure. Small-world networks simply aren't possible for these ants' networks, but having evolved in trees for so long, they have probably already figured out the most efficient alternative.
The way ant colonies operate is very analogous to how our brains function and to networks in our lives, like transportation networks. But if they operate under different network constraints, they will produce different results. This is one of the reasons that we are so interested in ant networks in our lab. They’ve lived and thrived in trees for a very long time, so I think we’ve got a thing or two we could learn from them.
Further Readings
Lynn, Christopher W., and Danielle S. Bassett. “How Humans Learn and Represent Networks.” Proceedings of the National Academy of Sciences 117, no. 47 (November 24, 2020): 29407–15. https://doi.org/10.1073/pnas.1912328117.
Ferrer i Cancho, R., and R. V. Solé. “The Small World of Human Language.” Proceedings of the Royal Society B: Biological Sciences 268, no. 1482 (November 7, 2001): 2261–65. https://doi.org/10.1098/rspb.2001.1800.
Bassett, Danielle S., and Edward T. Bullmore. “Small-World Brain Networks Revisited.” Neuroscientist 23, no. 5 (2017): 499–516. https://doi.org/10.1177/1073858416667720.
“Ants Swarm Like Brains Think.” Nautilus. https://nautil.us/issue/23/dominoes/ants-swarm-like-brains-think-rp
Media Credits
[1] Photo taken by Simon Woodside, edited by me
[2] Figure made by me, using the networkx and matplotlib Python libraries
[3] Small-world network by Wikipedia user Schulllz: Small-world_network
[4] Turtle ant network by Valentin Lecheval, from data collected by the lab in the Florida Keys: how-we-found-and-collected-turtle-ants