Image: "Tipping Point" spelled in wooden alphabet letters. Credit: Josie Elias on Shutterstock

The Hidden Math Behind Why People Finally Change

In A Nutshell

  • Everyone has a personal threshold, a level of peer adoption that typically needs to be reached before they will change their own behavior, and researchers have found a way to estimate it from survey data.
  • Social pressure works very differently depending on the behavior: about 61 percent of person-product pairings were influenced by peer signals in a messaging app study, compared to only 20 percent in an energy policy study.
  • Targeting the most popular or well-connected people in a network is not always the best strategy for spreading behavior change. Knowing who is already close to their tipping point, and who surrounds them, often matters more.
  • The method is promising but has limits: it relies on survey choices rather than real-world behavior, and results depend on specific network conditions and cost assumptions.

At some point, almost everyone has caved to peer pressure. Maybe it was finally downloading an app after the fifth friend raved about it, or starting to compost after most of the neighbors already had bins out front. That moment may not be entirely random. According to a new study published in Nature Human Behaviour, it may reflect something that can be estimated: a personal adoption threshold, a specific level of social buy-in that typically needs to be reached before a given individual is likely to follow along.

Researchers at the University of Zurich have built and validated a method to estimate that threshold using survey data, and their simulations show that targeting campaigns built around this information can outperform traditional strategies under certain conditions. For governments, public health officials, and climate advocates trying to nudge populations toward better behaviors, that is a meaningful development.

“Many sustainable technologies and behaviors are well known, but triggering their large-scale adoption remains elusive,” the authors write. Estimating when each person tips from resistant to willing may be one of the missing pieces.

Why Tipping Points Are the Missing Key in Behavior Change Campaigns

Not all behaviors spread the same way. A viral video travels through a population one share at a time. Behaviors that carry personal cost, social risk, or sustained effort work differently. Switching to an electric vehicle or supporting an unfamiliar policy usually requires seeing multiple people in one’s own immediate circle do it first. Researchers call this “complex contagion,” and it is the leading framework for understanding how behaviors actually spread.

Each person’s threshold determines how much social reinforcement they need. Some people tip early, after just one or two friends make a move. Others hold out until nearly everyone around them has already changed. And some people are not responsive to peer behavior at all: they either adopt regardless of what others do, or they never will. For a long time, researchers could describe this variation in theory but had no reliable way to measure it. The Zurich team, led by Manuel S. Mariani, developed a cleaner approach.
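The threshold idea can be made concrete with a small simulation. Below is an illustrative sketch, not the authors' code: a fractional-threshold model of complex contagion on a tiny made-up friendship network, where each person adopts once the share of their friends who have adopted meets their personal threshold. All names, edges, and threshold values are invented for illustration.

```python
# Adjacency list: who counts as a peer for whom (invented example).
network = {
    "Ana": ["Ben", "Cho", "Dev"],
    "Ben": ["Ana", "Cho"],
    "Cho": ["Ana", "Ben", "Dev"],
    "Dev": ["Ana", "Cho"],
}

# Each person's threshold: the minimum fraction of friends who must
# already have adopted. A value above 1.0 means they never follow peers.
thresholds = {"Ana": 0.3, "Ben": 0.5, "Cho": 0.6, "Dev": 1.1}

def spread(seeds):
    """Iterate until no one new crosses their threshold."""
    adopted = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, friends in network.items():
            if person in adopted:
                continue
            share = sum(f in adopted for f in friends) / len(friends)
            if share >= thresholds[person]:
                adopted.add(person)
                changed = True
    return adopted

# Seeding Ana tips Ben (1 of 2 friends adopted), whose adoption then
# tips Cho (2 of 3 friends); Dev's threshold is never reachable.
print(sorted(spread({"Ana"})))  # → ['Ana', 'Ben', 'Cho']
```

The loop runs until no further adoptions occur, so the final set does not depend on the order in which people are checked. Dev illustrates the "not responsive to peers" case the study highlights.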

Scientists can now estimate the exact social tipping point that nudges people to change behavior. The findings could reshape public health campaigns. Credit: andy0man on Shutterstock

How Scientists Measured Each Person’s Behavior Change Threshold

Rather than digging through historical adoption records, the researchers designed controlled choice experiments. Participants were shown a series of hypothetical options and asked to choose among them. Each option came with a visible social signal, a percentage showing how many peers had already chosen it. By varying that signal systematically, the team could estimate how much peer adoption was needed to influence each person’s choice.

Two experiments covered different contexts. One had 296 participants choosing among energy policies related to carbon-capture technology, generating thousands of individual observations. A second had 300 participants choosing among fictional messaging apps, producing more than 4,100 observations. A statistical modeling technique then extracted a personal threshold estimate for each participant, representing the minimum share of peers already on board before that person would be likely to say yes. When tested against choices participants made in tasks not used to build the model, the estimates predicted behavior significantly better than chance.
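To make the estimation step tangible, here is a deliberately crude sketch of one way a threshold could be read off choice data. The paper fits a full statistical choice model; this simplification just brackets the threshold between the highest peer share a participant rejected and the lowest they accepted. The observations are invented, not study data.

```python
# Each tuple: (peer adoption share shown to the participant, chose option?).
# These observations are fabricated for illustration.
observations = [
    (0.10, False), (0.25, False), (0.40, False),
    (0.55, True), (0.70, True), (0.90, True),
]

def estimate_threshold(obs):
    """Midpoint between the highest share rejected and the lowest accepted.

    Returns None for non-responsive patterns: participants who accept
    (or reject) at every peer share, or whose answers are inconsistent.
    """
    rejected = [share for share, chose in obs if not chose]
    accepted = [share for share, chose in obs if chose]
    if not rejected or not accepted or max(rejected) > min(accepted):
        return None
    return (max(rejected) + min(accepted)) / 2

print(estimate_threshold(observations))  # midpoint of 0.40 and 0.55
```

A real analysis would model choice probabilities rather than draw a hard line, but the logic is the same: find the level of peer adoption at which a given person flips from no to yes.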

Peer Pressure Works for Some Behaviors and Not Others

How much social influence actually matters depends heavily on the behavior being promoted, and the gap between the two experiments was notable. In the messaging app study, roughly 61 percent of individual person-and-product pairings were influenced by peer behavior, meaning the social signal genuinely shifted those choices. In the energy policy study, that figure fell to about 20 percent.

As the authors note, “the number of individuals who choose independently of social signals may differ substantially” depending on the behavior being promoted, something “often neglected in extant social influence maximization studies.” Campaigns built around social norm messaging, the kind that tell people most of their neighbors support a given policy, may accomplish very little when most of the target audience simply is not responsive to that kind of influence. The messaging app context worked differently because the product itself becomes more valuable as more people use it, giving people a concrete, practical reason to care what their peers are doing.

The strength of social influence varies widely depending on the topic or behavior. Peer signaling appeared less influential for energy policies, for example. Credit: Nicole Glass Photography on Shutterstock

Smarter Targeting Starts With Knowing Who Is Close to Tipping

With individual threshold estimates loaded into simulations using Add Health, a large national database of real social networks, the team tested several targeting strategies head to head. Traditional approaches target the most connected people on the assumption that their reach amplifies the initial signal. Connectivity alone is often not the decisive factor, though.

When the cost of persuading an early adopter depended on that person’s own resistance rather than their social status, the best strategy was neighborhood susceptibility: finding someone at the center of a cluster of neighbors who are already close to their own tipping points. Convincing that one person can set off a cascade through the surrounding group.
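A minimal sketch of that "neighborhood susceptibility" idea, assuming a toy network and invented thresholds (this is not the authors' algorithm, just the intuition): score each candidate seed by how close their immediate circle already is to tipping, then pick the best-scoring person.

```python
# Invented friendship network and thresholds for illustration.
network = {
    "Ana": ["Ben", "Cho"],
    "Ben": ["Ana", "Cho", "Dev"],
    "Cho": ["Ana", "Ben"],
    "Dev": ["Ben"],
}
thresholds = {"Ana": 0.2, "Ben": 0.5, "Cho": 0.3, "Dev": 0.4}

def susceptibility_score(person):
    """Average 'closeness to tipping' (1 - threshold) among neighbors."""
    friends = network[person]
    return sum(1 - thresholds[f] for f in friends) / len(friends)

# The best seed is the person surrounded by near-tipping neighbors,
# not necessarily the most persuadable person themselves.
best_seed = max(network, key=susceptibility_score)
print(best_seed)  # → Ben
```

Here Ben scores highest because Ana, Cho, and Dev all have low thresholds, so converting Ben is the most likely to trigger a cascade through his cluster. A fuller version would also weigh the cost of converting each candidate, which is exactly the cost structure the simulations varied.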

When targeting costs scaled with social prominence, as in influencer marketing where high-profile people charge more and are harder to move, a different approach performed best. That method only outperformed traditional strategies when it was supplied with actual individual threshold data. Without it, the approach did no better than picking targets at random.

Across the scenarios tested, the best-performing strategies all used individual threshold data. Knowing who is popular is useful. Knowing who is already nearly ready to change, and who surrounds them, was often more useful in the simulations.

Some caveats apply. Both experiments used hypothetical survey choices rather than observed behavior, participants were recruited online and may skew younger and more digitally engaged, and the simulations assumed thresholds stay fixed even as social norms shift over time. Threshold estimates are also model-derived rather than directly observed, meaning the results reflect the assumptions built into the statistical approach.

Awareness alone has never reliably moved populations. The gap between knowing something is a good idea and actually doing it at scale is one of the central frustrations of public health, climate policy, and social advocacy. A tool that estimates who is already close to the edge, and where the first push is most likely to matter, is worth paying attention to.


Paper Notes

Limitations

Both experiments relied on hypothetical choices made in online surveys rather than real-world behavior. While conjoint survey methods have shown reasonable predictive validity in some prior research, how well these threshold estimates hold up in live behavioral settings remains to be tested. Participants were recruited through the Prolific platform and may not fully represent older or less digitally engaged populations. Sample sizes were modest at 296 and 300 participants per experiment, and only two behavioral domains were studied. Threshold estimates are model-dependent and should not be interpreted as directly observed quantities. The simulations also assumed thresholds remain stable over time and that the effectiveness of strategies depends on specific cost structures and network configurations, conditions that may not hold uniformly in real-world applications.

Funding and Disclosures

Radu Tanase and Manuel S. Mariani received financial support from the URPP Social Networks of the University of Zurich. Mariani also received funding from the Swiss National Science Foundation (grant nos. 100013-207888 and 100013-236802). Funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript. No competing interests were declared.

Publication Details

This study was authored by Radu Tanase, Rene Algesheimer, and Manuel S. Mariani, all affiliated with the Department of Business Administration at the University of Zurich in Zurich, Switzerland. It was published online on March 16, 2026, in Nature Human Behaviour under the title “Integrating behavioural experimental findings into dynamical models to inform social change interventions.” The DOI is https://doi.org/10.1038/s41562-026-02417-4. Data and code are publicly available via Zenodo at https://doi.org/10.5281/zenodo.17841193.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as, or even better than, field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.
