Researchers who study belief dynamics often use analogies to understand and model the complex cognitive-social systems that underlie why we believe the things we do and how those beliefs can change over time. Ideas can be transmitted like a virus, for instance, “infecting” a population as they spread from person to person. We might be drawn — like magnets — to others with a similar worldview. A society’s beliefs can shift slowly before reaching a tipping point that thrusts it into a new phase.
In a new paper in Trends in Cognitive Sciences, SFI Professor Mirta Galesic and External Professor Henrik Olsson, both also faculty members at the Complexity Science Hub, explore the benefits — and potential pitfalls — of several common analogies used to model belief dynamics.
It’s quite common, particularly at SFI, for researchers in one field to draw on the analogies provided in other domains. For example, researchers have used ideas from physics to understand economic processes, and tools from ecology to understand how scientists work. In the past century, computers were used as analogies to understand the human mind, while now, in a reversal of roles, the human mind is used to understand the workings of large language models. “All analogies can be useful, but all will eventually break. The trick is to recognize when an analogy has been pushed too far,” says Galesic.
One of the most common analogies for belief dynamics is the Susceptible-Infected-Recovered (SIR) model, a tool developed in epidemiology. The SIR model can describe how a single contagion moves through a population, and the analogy can be expanded to more complex situations, such as when holding one belief increases the chances that a person will adopt another, just as a flu or cold infection can increase a person’s chances of developing pneumonia.
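The basic SIR dynamics can be sketched in a few lines. The code below is a minimal discrete-time illustration, not taken from the paper; the transmission and recovery rates are arbitrary values chosen for demonstration.

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One discrete-time SIR step on population fractions.
    beta = transmission rate, gamma = recovery rate (illustrative values)."""
    new_infections = beta * s * i   # susceptibles "catching" the idea
    recoveries = gamma * i          # infected individuals losing interest
    return s - new_infections, i + new_infections - recoveries, r + recoveries

def run_sir(steps=200, i0=0.01):
    """Seed a small infected fraction and track the peak of the outbreak."""
    s, i, r = 1.0 - i0, i0, 0.0
    peak = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r)
        peak = max(peak, i)
    return s, i, r, peak
```

With these rates the "idea" spreads to a large share of the population before dying out, which is the basic pattern the epidemiological analogy lends to belief dynamics.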
While analogies can provide “conceptual mileage” by helping researchers notice properties they might have otherwise missed, they also come with “conceptual baggage” that can lead to inaccurate inferences. Adopting an analogy — and the model that goes with it — without recognizing its shortcomings can lead to bad policy or ineffective action.
One limitation of the SIR model is that beliefs can spread quite differently from viruses. Simple exposure doesn’t always lead to an idea taking hold. With ideas, repetition can be ineffective and even counterproductive when they are radically different from a person’s existing beliefs. And, ideas spread more easily when people share other relevant beliefs and characteristics.
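Those departures from viral spread can be captured by making adoption depend on more than exposure. The toy rule below is our own illustration of that point, not a model from the paper; the similarity cutoff and rate constants are invented for demonstration.

```python
def adoption_prob(similarity, exposures, base=0.4):
    """Toy adoption rule: 'similarity' in [0, 1] measures overlap between
    the new idea and a person's existing beliefs (illustrative parameters)."""
    effect = base * similarity
    if similarity < 0.3:
        # Idea clashes with existing beliefs: repetition backfires.
        effect -= 0.05 * exposures
    else:
        # Compatible idea: extra exposures help, with diminishing returns.
        effect += 0.02 * min(exposures, 3)
    return max(0.0, min(1.0, effect))
```

Unlike in the SIR model, here repeated contact with a radically dissimilar idea lowers the chance of adoption rather than raising it.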
The authors explore the mileage and baggage of other analogies for belief dynamics, including ferromagnetism, thresholds, forces, evolution, weighted additive models, and Bayesian learning. Each analogy, together with its associated models, provides different useful concepts and methodologies, yet none is adequate on its own.
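The threshold analogy, for instance, captures the tipping-point behavior mentioned above: each person adopts a belief once enough others have. The sketch below is a minimal Granovetter-style threshold model of our own construction, included only to illustrate the concept.

```python
def threshold_update(beliefs, thresholds):
    """One synchronous step: an agent adopts (1) if the current fraction
    of adopters in the population meets their personal threshold."""
    frac = sum(beliefs) / len(beliefs)
    return [1 if b == 1 or frac >= t else 0 for b, t in zip(beliefs, thresholds)]

def run_cascade(thresholds, steps=20):
    """Start with no adopters and count how many adopt after `steps` rounds."""
    beliefs = [0] * len(thresholds)
    for _ in range(steps):
        beliefs = threshold_update(beliefs, thresholds)
    return sum(beliefs)
```

A classic result of such models is their fragility: a population with an unbroken ladder of thresholds cascades to full adoption, while removing a single intermediate threshold can halt the cascade almost immediately.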
“We need to confront analogies seriously — what can be used, what cannot, and what we can learn from them — in order to construct models that can actually be used in predicting and explaining real-world dynamics in beliefs,” says Olsson.
Better than using one simple analogy might be to draw insights from multiple sources while recognizing each one’s baggage. “In the end, of course, what matters is the result that helps you explain the natural phenomena you want to explain,” says Olsson.
“We provide some guidelines to using analogies to develop models of belief dynamics. First, map them, then implement in quantitative models. And equally important, conduct empirical tests and comparisons to see whether the models inspired by a particular analogy are useful and realistic,” says Galesic.