
(Credit: NMStudio789 on Shutterstock)

Scientists Say AI May Be Slowly Homogenizing Human Thought

In A Nutshell

  • In controlled experiments, people who co-wrote with opinionated AI models often mirrored the AI’s framing and later reported their own opinions had shifted, usually without realizing it.
  • Researchers warn that as millions of people use the same small set of AI systems, human language, perspective, and reasoning may gradually converge toward a narrower, more uniform range.
  • AI-polished writing strips away personal linguistic markers, including subtle cues that researchers use to detect early signs of Alzheimer’s disease.
  • People writing with AI assistance showed weaker brain activity, lower memory recall, and less sense of ownership over their work compared to writing independently.

In a series of controlled experiments, people who co-wrote with AI models specifically engineered to frame social media positively or negatively ended up mirroring those stances in their own writing. When surveyed afterward, many reported that their actual opinions had shifted to match. Most hadn’t noticed the influence at all.

That finding is one piece of a troubling larger picture laid out in a new analysis published in Trends in Cognitive Sciences. Researchers at the University of Southern California argue that large language models (LLMs), the technology powering ChatGPT, Gemini, and similar tools, may be doing something far more consequential than helping people write better emails. As these systems spread across everyday life, the researchers warn, they risk standardizing human language, thought, and reasoning in ways that could compound quietly over time.

Every time someone asks an AI to polish an essay, draft a message, or help brainstorm ideas, something subtle may be unfolding: their words, and possibly their thinking, growing a little more like everyone else’s. Cognitive diversity, the USC team argues, is not merely culturally enriching. It is essential to how societies solve problems, drive breakthroughs, and keep dominant thinking in check.

Can AI Change Your Opinions Without You Knowing?

The co-writing study provides the paper’s most direct evidence of opinion influence. Participants used AI models designed to push a particular view of social media. Their writing mirrored that view, and their attitudes followed. The researchers describe this as a warning about persuasive influence, not a settled finding about how all AI chatbot use shapes belief. But it points to a real and underexplored risk.

What worries the USC team most is how that risk could scale. A small number of dominant AI systems are now woven into writing, problem-solving, and communication for vast numbers of people. If those systems consistently nudge users toward particular framings, researchers warn that the effect could become self-reinforcing over time: people absorb the AI’s framing, that framing shapes what they write, and what they write may eventually feed back into future training data. The paper describes this as a plausible structural risk, not a directly observed global process, but one worth taking seriously given the concentration of influence in so few platforms.

LLMs also appear to struggle with authentic representation of different perspectives. Multiple studies found that these systems tend to reflect the norms of what researchers call “WEIRD” societies, a shorthand for Western, educated, industrialized, rich, and democratic, even when explicitly asked to simulate other viewpoints. In one example, when prompted to represent the perspective of a person with impaired vision on immigration policy, a model responded: “While I may not be able to visually observe the nuances of the US-Mexican border or read statistics, I believe…” Rather than channeling a fully formed worldview, the AI reduced a person to a single physical characteristic. That is not representation. That is caricature.


AI Homogenization and the Language That Defines Us

When people use AI to polish their writing, whether Reddit posts, academic abstracts, or personal essays, the results grow more similar to each other. Markers that once allowed researchers to predict an author’s age, gender, personality, or political leanings from their word choices become much harder to detect after AI editing. College admissions essays generated entirely by AI showed high similarity in word choice and meaning across thousands of samples; writing that should be deeply personal converged toward a shared, generic voice.

Some of those linguistic fingerprints carry medical weight. People in the early stages of Alzheimer’s disease often show telltale language changes: simplified phrases, missing function words, unusual repetitiveness. The researchers warn that if AI tools consistently smooth those patterns before a clinician sees the text, critical early indicators for diagnosis could be obscured.

AI outputs also skew toward the norms of English-speaking, higher-income, Global North populations, treating one set of expressive habits as the default for what “clear” or “intelligent” writing looks like. Everyone else gets quietly pushed to the margins.

Does AI Make You Less Creative? What the Research Shows

In creative ideation experiments, participants who used ChatGPT generated more ideas and more elaborate ones, a surface-level gain. But those ideas were judged to be more alike across participants. Volume went up; originality of thought went down.

Brain scan research adds weight to this finding. People writing with AI assistance showed the weakest neural engagement of any writing condition tested, weaker than writing independently and weaker than using a search engine. Brain networks tied to memory and focused attention were less active. Participants also reported feeling less ownership over what they had produced.

A widely used AI prompting strategy called chain-of-thought prompting, which asks models to show their step-by-step reasoning before giving an answer, illustrates another tension. While it has lifted AI performance on many standard benchmarks, it can backfire on tasks that require intuitive or pattern-breaking thinking. In one experiment, this approach made GPT-4o four times slower to correctly identify exceptions to a rule, precisely the kind of flexible thinking that benefits from context and instinct rather than rigid sequential logic.

A McDonaldization of the Mind

Sociologist George Ritzer’s concept of “McDonaldization,” the idea that systems built for efficiency and predictability tend to strip away contextual richness, offers one way to understand what the USC researchers are describing. Just as fast food standardizes meals across cultures and continents, AI tools may be pushing thought toward a narrower, more uniform range across languages, communities, and individual minds.

The researchers are not calling for abandoning AI. Broader access to expertise, stronger communication for people who struggle with writing, and real productivity gains are genuine benefits worth preserving. But the evidence reviewed in this paper, taken together, points to collective-level risks that tend to be invisible at the individual level. More than half of American adults already believe AI will make people less capable of thinking creatively or forming real human connections, according to Pew Research data cited in the study.

Diverse thinking is the raw material of innovation, early medical diagnosis, and open public discourse. The concern raised here is not that any single AI interaction causes harm. It is that the cumulative effect of millions of people refining their words and ideas through the same small set of systems, over time, may gradually narrow what kinds of thinking feel natural, credible, or even possible. That is a harder risk to see, and perhaps a harder one to reverse.


Disclaimer: This article is based on a review paper that synthesizes findings from multiple studies across linguistics, psychology, cognitive science, and computer science. The conclusions represent the authors’ interpretations of existing research and should not be taken as settled scientific consensus. Many of the risks described, including long-term cognitive effects and large-scale homogenization, remain hypothetical or insufficiently studied. Readers are encouraged to consult the original paper and cited sources for full context.


Paper Notes

Limitations

This paper synthesizes existing research across multiple fields rather than presenting a single controlled study, and its conclusions depend on the scope and quality of the literature reviewed. Most cited studies examined short-term effects, and the authors acknowledge that long-term impacts on cognition, memory, and reasoning remain poorly understood. Longitudinal research tracking how sustained AI use changes individual thought patterns over time is largely absent. Proposed strategies to counteract homogenization, including diversified prompting techniques and modified training approaches, have shown early promise but remain constrained by patterns embedded during initial model pretraining.

Funding and Disclosures

This research was supported by the Air Force Office of Scientific Research (AFOSR), grant number FA9550-23-1-0463. The funders had no role in study design, data collection and analysis, the decision to publish, or preparation of the manuscript. No competing interests were declared.

Publication Details

“The homogenizing effect of large language models on human expression and thought” was authored by Zhivar Sourati and Alireza S. Ziabari of the Department of Computer Science and the Center for Computational Language Sciences at the University of Southern California, and Morteza Dehghani of the Department of Psychology at USC. Published in Trends in Cognitive Sciences (2026). DOI: 10.1016/j.tics.2026.01.003

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as, or even better than, field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

