In A Nutshell
- Bias doesn’t need politics: Even randomly assigned teams made people more likely to believe claims that favored their group.
- It’s not about intelligence: People didn’t lose their ability to tell true from false. They just became more permissive toward flattering claims.
- Truth and falsehood both slip through: Lowering the bar means accepting more in-group claims, both the true ones and the false ones.
- Facts alone may not fix it: Bias can persist even when everyone starts with the same information.
Two people can look at the same news story and walk away with opposite conclusions. Same article, same set of facts, same amount of time spent reading. One person calls it proof of a problem; the other calls it overblown. It happens at dinner tables, in group chats, and across social media feeds every single day, and it happens even when both people are perfectly smart and well-informed.
A common explanation for this kind of disagreement boils down to a knowledge gap: one side simply doesn’t have the right facts. If only people consumed better information, the thinking goes, the bias would disappear.
However, a new study published in Psychological Science suggests something different. When researchers at the University of Texas at Austin assigned people to completely random, meaningless teams and then asked them to evaluate true and false statements, participants were more willing to accept claims that flattered their assigned group, true or false alike. And because assignment was random, no team could have entered the study with a systematic group-level edge in background knowledge.
In the first experiment, participants assigned to one of the two teams showed a statistically reliable bias favoring their group, though the effect was lopsided in a way the researchers later addressed. The results suggest that identity protection, not just a knowledge gap between groups, may play a role in shaping these judgments. That matters because it points to a limitation in approaches to fighting misinformation that focus exclusively on closing knowledge gaps.
How Do You Build Bias From Scratch?
Tyler J. Hubeny and his colleagues at UT Austin designed an experiment that cut straight to the heart of a long-running debate in psychology. For years, researchers have argued over two competing explanations for partisan bias, the tendency to accept claims that support one’s own side and reject claims that don’t.
One camp says the bias is driven by motivated reasoning: people want to protect their identity, so they bend their evaluations accordingly. The other camp says it’s simpler than that. Partisans just happen to know more about topics that support their side, thanks to years of selective news consumption and like-minded social circles. Because Democrats and Republicans have genuinely different information diets, any study comparing them can never fully rule out the knowledge explanation.
Random assignment to brand-new groups solves that problem: Team A and Team B don’t enter the study with different background knowledge as groups, the way real partisans do, so any bias that appears can’t be chalked up to one side simply knowing more.
Hubeny’s team operationalized this logic with an elegant workaround. Instead of studying Democrats and Republicans, they invented brand-new groups that carried no political baggage and no built-in knowledge advantages. In the first experiment, 563 U.S. adults recruited through the online platform Prolific were randomly sorted into one of three conditions: Team UK, Team France, or no team at all. Participants first completed a short personality quiz, which was actually a decoy. Those in the team conditions were then told the quiz had determined their personality aligned with either the United Kingdom or France.
To make the identity feel real, the researchers framed the study as being about “differences between people from countries that have experienced conflict,” showed participants images of the British and French flags, and gave a brief description of each nation. This was a stack of small cues intended to encourage identification with a randomly assigned group. Everyone then evaluated 60 statements about the two countries, some true, some false, some flattering to the UK, some flattering to France, and judged whether each claim was accurate.
Because the team assignments were entirely random, neither team had a systematic reason to know more pro-UK or pro-France facts than the other. Any bias that showed up couldn’t be blamed on a group-level knowledge gap. That makes identity protection a plausible candidate for explaining whatever bias emerged, even if the study alone cannot pin down the exact mechanism.
What Does ‘Lowering the Bar’ Actually Look Like?
And bias did appear. In the first experiment, participants assigned to Team UK were measurably more willing to accept statements that made the UK look good, compared with statements favoring France. The effect for Team France, however, was weaker and did not reach statistical significance, an asymmetry the researchers traced to an unexpected baseline preference among Americans for the UK over France.
Their ability to tell true from false didn’t change; they weren’t getting smarter about their own team’s claims or dumber about the other team’s. The reason those two things can coexist is intuitive once you see it: if you’re quicker to believe anything that praises your team, you’ll correctly accept more of the flattering claims that happen to be true, but you’ll also accidentally accept more of the flattering claims that happen to be false. The hits and the false alarms rise together, leaving overall accuracy about the same even as your judgments tilt in your group’s favor.
Instead of sharpening or dulling their lie-detection ability, participants simply shifted their threshold (the internal cutoff for saying “true” versus “false”) downward when a claim flattered their group and upward when it flattered the rival group. Think of it less like losing the ability to read a scale and more like nudging the zero point: everything on your team’s side of the ledger gets a little easier to believe, and everything on the other team’s side gets a little harder.
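To see why hits and false alarms must rise together, it helps to run the numbers. The sketch below is a minimal illustration using the standard signal-detection model, not an analysis from the paper; the sensitivity value and the size of the criterion shift are made up for the example. Claims produce normally distributed "evidence," anything above a cutoff gets called true, and lowering the cutoff inflates acceptance of both true and false claims while leaving measurable discrimination (d') untouched.

```python
# Illustrative signal-detection sketch (not from the paper): a "lowered bar"
# is a criterion shift. It raises hits and false alarms together while the
# recovered sensitivity, d', stays fixed.
from scipy.stats import norm

D_PRIME = 1.5  # assumed, fixed ability to discriminate true from false claims

def acceptance_rates(criterion):
    """Hit and false-alarm rates for a given decision criterion.

    Evidence for true claims ~ N(d', 1); for false claims ~ N(0, 1).
    Any claim whose evidence exceeds the criterion is judged 'true'.
    """
    hits = 1 - norm.cdf(criterion, loc=D_PRIME)      # true claims accepted
    false_alarms = 1 - norm.cdf(criterion, loc=0.0)  # false claims accepted
    return hits, false_alarms

def recovered_d_prime(hits, false_alarms):
    """Recover sensitivity from observed rates: d' = z(H) - z(FA)."""
    return norm.ppf(hits) - norm.ppf(false_alarms)

neutral = D_PRIME / 2    # unbiased criterion, midway between distributions
lowered = neutral - 0.4  # the bar drops for in-group-flattering claims

for label, c in [("neutral bar", neutral), ("lowered bar", lowered)]:
    h, fa = acceptance_rates(c)
    print(f"{label}: hits={h:.0%}, false alarms={fa:.0%}, "
          f"d'={recovered_d_prime(h, fa):.2f}")
```

Run it and both acceptance rates climb (roughly 77% hits and 23% false alarms at the neutral bar versus 87% and 36% at the lowered one) while d' prints as 1.5 in both cases: bias in what gets accepted, with no change in the ability to discriminate.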
Critically, people who weren’t assigned to any team showed no such pattern. Their acceptance of pro-UK and pro-France statements was virtually identical, suggesting that the bias wasn’t baked into the statements themselves.
Did a Cleaner Test Change the Picture?
To address this asymmetry, the team ran a second, larger experiment with 848 new participants, this time using Spain and Greece as the two countries. Pilot testing had confirmed that Americans had no particular lean toward either nation. Everything else about the setup remained the same: random assignment, a bogus personality quiz, and 60 true and false statements to evaluate, with only slight changes to the framing instructions to fit the new country pairing.
This time, the results lined up cleanly on both sides. Participants on Team Spain lowered their acceptance bar for pro-Spain claims, and participants on Team Greece did the same for pro-Greece claims. Once again, people not assigned to a team showed no preference. And once again, accuracy in distinguishing true from false statements was unaffected by team membership. Participants weren't becoming worse at spotting lies; they were becoming more willing to accept in-group-flattering claims across the board, true ones and false ones alike. The study suggests a risk for misinformation, but the bias observed was not a selective vulnerability to falsehoods; it was a blanket shift in willingness to believe anything that made their team look good.
Across both experiments, with a combined 1,411 participants, the pattern held: group identity was associated with a shift in people’s willingness to accept favorable information without changing their underlying ability to detect truth. Participants still demonstrated strong knowledge of the facts, with accuracy scores comparable to those found in studies of real political misinformation. They weren’t guessing. They showed selective patterns in what they accepted as true.
What This Doesn’t Explain
The bias effects, while consistent and statistically reliable, were modest in size, a point the researchers themselves acknowledge. They argue that randomly assigned identities naturally produce weaker loyalty than deeply held political affiliations developed over a lifetime. If a coin-flip team assignment can produce measurable bias, the effects of actual partisan identity, shaped by years of emotional investment, social belonging, and personal history, might be stronger. In that sense, these results may represent the lower bound of identity-driven bias, not the upper bound.
The study also drew exclusively from U.S. adults on the Prolific platform, so whether the same dynamics hold across cultures or demographics remains an open question. And the research does not address a related theory suggesting that bias may intensify when people think harder about an issue, a possibility that future work could explore.

One Notable Takeaway Appears in the Study’s Final Pages
As the authors put it in the study’s final pages: “Although it is often implied that acceptance of misinformation is due to the other side not ‘having the facts,’ our results demonstrate that this is not the full story.”
Much of the current infrastructure for fighting misinformation (fact-check labels, media literacy programs, source transparency tools) operates on the assumption that giving people better information will lead to better judgments. Those tools aren’t worthless. Knowledge gaps do exist, and closing them matters. But this research shows that even when the knowledge gap is minimized, bias can still occur. People don’t need to be ignorant to be biased. They just need to care about belonging to a group. That means interventions designed solely to inform may have limitations.
If bias persists even when everyone starts with the same information, then giving people better facts alone may address the knowledge problem without affecting the motivation problem.
The detail that lingers from these experiments is how little the group assignment actually meant. Participants had no real reason to care about Spain or Greece, the UK or France. Nobody’s livelihood, friendships, or sense of self depended on which team a computer randomly assigned them to. And still, that flimsy sense of belonging was enough to tilt the scales. A label slapped on by an algorithm did that. Decades of political identity, the kind forged through family traditions, community bonds, and deeply personal convictions, might be expected to produce stronger effects.
Correcting false claims may not be fully effective if the person evaluating the correction is unconsciously motivated to protect their team. The researchers argue that future efforts should address not just what people know, but also what they are motivated to defend.
Paper Notes
Limitations
The study acknowledges several limitations. First, the generalizability of the findings is potentially limited to Prolific workers in the United States. Second, Experiment 1 showed an unexpected baseline identification asymmetry (stronger identification with the UK than France), which may have suppressed the induced bias effect in the Team France condition. The authors also note that the observed effect sizes are relatively small and plausibly represent lower bounds, as the randomly assigned minimal group identities are likely weaker than existing identities with personal significance. Finally, the findings do not speak to an alternative conceptualization of motivated reasoning involving cognitive elaboration (the motivated-system-2 hypothesis).
Funding and Disclosures
This research was supported by the National Science Foundation (Grant No. BCS-2040684) and the Swiss National Science Foundation (Grant No. P500PS_214298). The authors declared no conflicts of interest. The authors state that any opinions, findings, and conclusions expressed in the material are their own and do not necessarily reflect the views of the funding agencies.
Publication Information
Hubeny, Tyler J., Lea S. Nahon, and Bertram Gawronski. “Understanding Partisan Bias in Judgments of Misinformation: Identity Protection Versus Differential Knowledge.” Psychological Science, January 7, 2026. DOI: 10.1177/09567976251404040. The study used data from two preregistered online experiments with a final analyzed sample of 1,411 adult U.S. Prolific workers. All authors are affiliated with the Department of Psychology, University of Texas at Austin.