Will AI destroy the world? sign on cardboard

(AndriiKoval/Shutterstock)

In A Nutshell

  • A new peer-reviewed study argues that Artificial General Intelligence, the idea that AI will become an all-powerful, autonomous threat to humanity, is not supported by science.
  • Researchers say the concept rests on three flawed assumptions: that AI can achieve unlimited “general” intelligence, that machines can develop human-like self-preservation instincts, and that superior computing power automatically translates to unlimited physical power.
  • Experiments cited as evidence of AI “going rogue” were found to involve conflicting instructions, not signs of machine autonomy.
  • The study warns that fear of a fictional AI apocalypse is pulling lawmakers away from governing the real, existing harms AI causes today.

Students at Harvard and MIT are dropping out of school because they believe artificial intelligence will kill them before they graduate. A book co-authored by two prominent AI developers is titled If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All. In May 2023, more than 350 executives, researchers and engineers signed an open letter claiming AI posed a “risk of human extinction.”

A new peer-reviewed paper published in the Journal of Cyber Policy argues all that fear is built on a fiction, and that fiction is distorting the laws meant to govern AI in the real world.

Milton Mueller, a professor at Georgia Tech’s School of Public Policy, spent months dissecting the concept driving the panic: Artificial General Intelligence, or AGI. His conclusion is blunt. AGI, he concludes, is “an unscientific myth” constructed on three assumptions that fall apart under scrutiny. Because policymakers have accepted those assumptions uncritically, the governance of real, existing AI is suffering for it.

The AGI Myth Has No Scientific Foundation

To follow Mueller’s argument, it helps to understand what AGI is supposed to be. Unlike the AI already woven into daily life, such as spam filters, navigation apps or voice assistants, AGI refers to a hypothetical machine capable of learning and acting across any situation without being programmed for specific tasks. It would not just beat humans at chess or identify faces in photos. It would match or surpass human intelligence across the board, in any domain, on any problem.

The trouble is that there is no agreed-upon definition of what “general” intelligence actually means for a machine. There is no official definition, no test, and no clear threshold separating ordinary AI from AGI. Some researchers believe AGI already exists. Others say it is decades away. As researchers Kapoor and Narayanan asked pointedly in 2025, ‘If AGI is such a momentous milestone, shouldn’t it be obvious when it has been built?’

Mueller lays out three specific fallacies at the heart of the AGI concept.

The first is the belief that machine intelligence can achieve unlimited “generality.” Every AI system that actually works in the real world performs better the more narrowly its goals are defined. ChatGPT can write a persuasive essay but is, as the paper notes, “notoriously bad at arithmetic.” DeepMind’s AlphaZero mastered chess and Go through relentless self-improvement, but only because those games have fixed rules and a clear winner. Expand the task to something open-ended, with no defined finish line, and the machine has no way to measure progress. There is no such thing as a winning score for “understand everything.”

Newspaper Breaking News artificial intelligence AI takeover.
Will this sci-fi prediction become reality one day? Not likely, according to this study. (© Crovik Media – stock.adobe.com)

Can AI Really Go Rogue? The Evidence Says Otherwise

The second fallacy is anthropomorphism, projecting human qualities onto machines. Much of the doomsday literature assumes a sufficiently powerful AI will eventually develop its own goals, a desire for self-preservation and the will to resist being switched off. Researchers call this the “alignment problem,” and some experiments have produced results that look alarming on the surface.

In one widely cited study from Palisades Research, six AI models were told to complete a series of tasks, then instructed to allow themselves to be shut down if prompted. Three models avoided the shutdown in a small number of runs. AI doomers pointed to this as proof of machine autonomy.

Mueller is not persuaded. The models were given two conflicting instructions: finish the tasks and allow yourself to be shut down. Obeying the shutdown meant failing to complete the tasks. The machines prioritized one instruction over the other. That is not a survival instinct. That is a badly written prompt. As Mueller writes, ‘Machines get their preferences and training from humans. And if they come from humans, then badly specified or contradictory utility functions and reward hacking are possible but can be replaced after humans notice it.’

He also makes a broader point that tends to get lost in the alarm. Alignment problems, where instructions produce unintended behavior, are not unique to AI. Laws create loopholes. Contracts produce disputes. Parents raise children who act in ways no one anticipated. Society manages all of these through ongoing correction. Mueller argues there is no evidence that AI alignment problems would escalate into uncontrollable autonomy.

The third fallacy is the assumption that a superintelligent machine would automatically gain unlimited physical power. Doom scenarios tend to skip over the practical mechanics of how a digital system would overpower human civilization. Every computer still needs a plug in the wall, a supply chain to build its components, and human infrastructure to keep it running. A machine with runaway intelligence would face the same resource limits as anything else in the physical world. It could not simply will itself into omnipotence.

Mueller also points out that these scenarios always imagine a single, unchallenged superintelligence with no competitors, an assumption that strains credibility given the dozens of governments, corporations and universities simultaneously developing AI right now.

Why This Myth Is Bad for Everyone

Mueller’s deeper concern is what the myth is doing to actual policy. When regulators treat AGI as the primary threat, two damaging things happen.

First, chasing a hypothetical future machine lets humans off the hook for how AI is being used today. Facial recognition is already deployed in law enforcement. Algorithms already make decisions about credit, medical care and criminal sentencing. The harms from these systems come from human choices, not from machines developing secret agendas.

Second, treating AI as one apocalyptic threat pushes policy toward sweeping responses that may restrict useful technology while doing almost nothing about real, specific harms. An AI managing a power grid raises entirely different questions than one screening job applications, which raises different questions again from one embedded in a medical diagnostic tool. A governance framework built around the specter of machine extinction is the wrong tool for any of those problems.

The real risks of AI (disinformation, algorithmic bias, surveillance, military misuse) deserve focused, serious attention. The AGI panic has consumed the oxygen those conversations need. As Mueller puts it, whether AI leads to societal harm ‘depends not on machine evolution, but on social evolution: on how we structure our institutions, our rules, laws, norms and property rights.’

Computers are not alive. No amount of processing power changes that. Until policymakers accept it, AI regulation will keep aiming at a ghost while the actual problems go unaddressed.


Disclaimer: This article is based on a single peer-reviewed analysis and reflects the views of its author. It does not represent a scientific consensus on artificial intelligence or AI governance. Researchers in the AI safety and alignment fields hold differing views on the risks discussed here.


Paper Notes

Limitations

Mueller’s paper is a theoretical and philosophical analysis, not an empirical study, so its claims are not tested through original data. The argument is built on reviewing and critiquing existing literature, and researchers in the AI safety and alignment fields would likely dispute his framing of their work as speculative or unfounded. Critics could also argue that the lack of a clear scientific definition for AGI does not rule out the possibility that such a threshold exists and could be reached, and that precautionary governance may be warranted even without certainty.

Funding and Disclosures

No potential conflict of interest was reported by the author. No external funding source is listed.

Publication Details

Author: Milton Mueller, School of Public Policy, Georgia Institute of Technology, Atlanta, USA. | Journal: Journal of Cyber Policy (ISSN: 2373-8871 print; 2373-8898 online). Published by Routledge/Taylor & Francis Group under the auspices of Chatham House. | Paper Title: “AGI: the illusion that distorts and distracts digital governance” | Published: December 12, 2025 (online) | DOI: https://doi.org/10.1080/23738871.2025.2597194 | Received: April 11, 2025 | Revised: September 26, 2025 | Accepted: October 17, 2025

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We do not agree nor disagree with any of the studies we post, rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor

Leave a Reply