Credit: fizkes on Shutterstock
In A Nutshell
- Groups that allow members to go neutral, rather than forcing a binary choice, reach consensus faster and reverse it more easily when needed.
- A “de-escalation” dynamic, where people disengage after encountering opposing views rather than digging in, is orders of magnitude more efficient at shifting group opinion than staying fully polarized.
- Researchers found the same pattern in two very different systems: locusts switching march direction and humans playing an online voting game, suggesting the mechanism is broadly applicable.
- For real-world persuasion, moving committed opponents to a neutral stance may be more effective than trying to convert them directly.
Fence-sitters get a bad reputation. In politics, at work, even among friends, the person who refuses to pick a side is often seen as wishy-washy or just unhelpful. But a new study argues that neutral actors, the abstainers, the undecideds, the people who quietly step back from a fight, may help groups avoid getting stuck in decisions they struggle to reverse.
For anyone trying to shift an entrenched opinion, the math also points toward a counterintuitive strategy. Instead of fighting to convert committed opponents, the more efficient move may be getting them to step back into a neutral stance first.
Published in the journal Advanced Science, the research builds a mathematical model of group decision-making and tests it against two real-world systems: marching locusts and humans playing an anonymous online voting game. In both cases, the ability to go neutral wasn’t a sign of indecision. It was what allowed groups to reach agreement and revise it when circumstances changed.
Why Neutrality Is Central to Healthy Group Decision-Making
Consider what happens in a group where everyone is always fully committed. People push for their side, others push back, and the group tips toward a majority position. That part works fine. The problem comes when the group needs to change course.
Many earlier mathematical models assumed everyone stayed “in,” always advocating for one option or the other. Those models can explain how agreement forms, but they struggle to explain how it gets reversed: once a group locks in, they predict it will stay locked in for a very long time. That is a mathematical description of an echo chamber, a system that can form a consensus but cannot break out of one.
Adding a third option, staying neutral, resolves that problem. When individuals step back rather than double down, the number of people actively fighting for either side shrinks temporarily. A smaller active group is more sensitive to random shifts, which makes the whole system more likely to tip in a new direction. Neutrality creates a window for change that pure two-sided conflict never could.
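A minimal toy simulation can make that mechanism concrete. Everything below, including the interaction rules, the noise rate, and the group size, is an assumption chosen for illustration; it is not the paper's actual model or its parameters.

```python
import random

def simulate(n=40, steps=20000, allow_neutral=True, seed=1):
    """Toy three-state opinion model (illustrative sketch only).

    Agents hold state -1, +1, or 0 (neutral). In a pairwise encounter,
    a committed agent recruits a neutral one; when two opposing committed
    agents meet, one "de-escalates" to neutral. With allow_neutral=False,
    the loser converts directly instead, so no agent is ever neutral.
    Returns (number of majority reversals, final states).
    """
    rng = random.Random(seed)
    agents = [rng.choice([-1, 1]) for _ in range(n)]
    switches, last_sign = 0, 0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        a, b = agents[i], agents[j]
        if a != 0 and b == 0:
            agents[j] = a  # recruitment of a neutral agent
        elif a != 0 and b != 0 and a != b:
            agents[j] = 0 if allow_neutral else a  # de-escalate vs. convert
        # weak noise: an occasional spontaneous change of mind
        if rng.random() < 0.01:
            k = rng.randrange(n)
            agents[k] = rng.choice([-1, 0, 1] if allow_neutral else [-1, 1])
        total = sum(agents)
        sign = (total > 0) - (total < 0)
        if sign != 0:
            if last_sign != 0 and sign != last_sign:
                switches += 1
            last_sign = sign
    return switches, agents
```

Running both variants and comparing the counts of majority reversals gives a rough feel for the idea in the paragraph above: when clashes send agents neutral, the pool of committed agents shrinks and the group becomes easier to tip.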
How Fence-Sitting Unlocks Faster Group Decision-Making
Researchers at the University of Bath identified two distinct ways that groups use neutrality to shift from one consensus to another. In the first, individuals drift neutral on their own, then get pulled back into the debate by others. In the second, which the team calls “de-escalation,” individuals go neutral specifically because they encounter someone with an opposing view. Rather than digging in, they disengage.
That second mechanism can be far more efficient. As the paper states, “systems in which individuals’ preferences are reinforced by in-group dynamics are found to spend orders of magnitude more time stuck in one consensus system state than those in which encounters with opposing opinions prompt the adoption of a more neutral stance.”
Put simply, a group where clashing leads to disengagement, rather than entrenchment, can reverse its collective position far faster than one where every disagreement just hardens existing camps.
What This Means for Changing Minds at Scale
The researchers draw an explicit connection to politics. In modern elections, enormous effort goes into winning over persuadable voters. But the paper suggests that energy might be better spent elsewhere: “efforts might be better expended on moderating the positions of those holding strong opinions, so that they temporarily adopt a neutral stance.” Once someone steps back from a committed position, they become more open to arriving at a new one on their own terms.
Getting a committed opponent to say “I’m not sure anymore” may ultimately be more powerful than pushing for an immediate conversion. It bears noting that the paper proposes this as a strategic direction based on the model’s findings; it does not test real elections directly, and how well the dynamic scales to large, complex societies remains an open question.
Locusts and Human Voters Follow the Same Playbook
To test whether the math holds up in real life, the researchers looked at two very different populations.
On the animal side, they reanalyzed footage from earlier experiments involving groups of locust nymphs placed in a ring-shaped arena. Locusts in that setting tend to march together, all clockwise or all counterclockwise, occasionally flipping direction. Prior research had documented these switches but couldn’t fully explain them. When the Bath team looked more carefully at the tracking data, they noticed something previously overlooked: individual locusts frequently stop. They go neutral. And in the moments just before a direction switch, the number of stopped locusts spikes sharply. Neutrality played a central role in those transitions.
On the human side, the team recruited 256 English-speaking adults from the UK, between the ages of 18 and 75, to play an online voting game through the platform Prolific. Groups ranged from 12 to 33 participants. Each person played 120 rounds, choosing between two options or abstaining each round, with financial rewards going to those who voted with the majority. After each round, participants saw a small, randomized sample of how others had voted, creating a feedback loop similar to reading a public opinion poll.
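The round structure of the game can be sketched roughly as follows. The payoff value, the sample size, and the function name here are assumptions made for illustration; the article does not reproduce the study's exact implementation.

```python
import random

def play_round(choices, sample_size=5, reward=1):
    """One round of a majority-vote game with abstention (sketch only).

    choices: dict mapping player id -> 'A', 'B', or None (abstain).
    Players who voted with the strict majority earn a reward; abstainers
    and minority voters earn nothing. Each player then sees a small
    random sample of other players' votes, like a noisy opinion poll.
    Returns (payoffs, feedback) keyed by player id.
    """
    votes = [c for c in choices.values() if c is not None]
    tally = {'A': votes.count('A'), 'B': votes.count('B')}
    majority = max(tally, key=tally.get) if tally['A'] != tally['B'] else None
    payoffs = {p: (reward if c is not None and c == majority else 0)
               for p, c in choices.items()}
    feedback = {}
    for p in choices:
        others = [c for q, c in choices.items() if q != p]
        feedback[p] = random.sample(others, min(sample_size, len(others)))
    return payoffs, feedback
```

The sampled feedback, rather than a full tally, is what creates the poll-like loop described above: each player reacts to a partial, noisy picture of where the group stands.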
Groups in the main experiment regularly shifted consensus, swinging from one dominant option to the other, with abstentions rising before each switch. In follow-up experiments where abstaining was not allowed, groups either failed to form a stable consensus at all or locked into one and became resistant to change. Removing the option to step back made the group more rigid.
Notably, the same mathematical framework fit both the locusts and the humans, suggesting this isn’t a quirk of one species or one situation. Groups don’t change their minds the way individuals do. They shift through dynamics that resemble systems studied in physics, where the share of committed participants at any given moment determines how frozen or flexible the whole group is.
For those concerned about political polarization and entrenched public opinion, the research points to a concrete mechanism rather than just a diagnosis. Getting people off the extremes and into a state of genuine uncertainty may not be a failure of persuasion. According to the math, it may be one of the most effective ways to get the process started.
Paper Notes
Limitations
The study uses a mathematical model built for simplicity and broad applicability, meaning some real-world complexities are intentionally excluded. It assumes symmetric behavior between the two opinion groups and focuses on pairwise interactions rather than more elaborate social dynamics. Human experiments involved only UK-based participants, limiting how broadly the findings translate to other cultural or political contexts. Group sizes in the voting experiments were relatively small, ranging from 12 to 33 participants, and the task was a structured game rather than a real-world decision. Locust data came from a previously published dataset re-analyzed for this study. Researchers also note the model does not account for network topology, meaning it does not capture how the structure of social connections might affect consensus formation and change.
Funding and Disclosures
This research was supported by the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa) under the project EP/S022945/1, the Institute for Mathematical Innovation at the University of Bath, and the University of Bath Alumni Fund. The authors declare no conflicts of interest.
Publication Details
The study was authored by Andrei Sontag of the University of Bath and University College London, Janina A. Hoffmann, a psychologist at the University of Bath, Tim Rogers of the University of Bath, and Christian A. Yates of the University of Bath, who served as the corresponding author. The paper, titled “Consensus Formation and Change are Enhanced by Neutrality,” was published in Advanced Science in 2026. DOI: https://doi.org/10.1002/advs.202512301. Raw data and analysis code are available at https://doi.org/10.15125/BATH-01478.