
Credit: Prostock-studio on Shutterstock

Nutrition Posts Don’t Have To Lie To Be Dangerous

In A Nutshell

  • Diet misinformation doesn’t have to be false to be dangerous. Omitting key risks or selectively framing accurate facts can be just as harmful as outright lies.
  • Researchers built a five-tier risk rating tool for nutrition content, borrowing from the same hazard-assessment logic used for chemical and biological threats.
  • The tool was validated across five rounds of testing with dietitians, nutrition students, experienced professionals, and ChatGPT, all of whom aligned closely with expert benchmarks.
  • ChatGPT, given no task-specific training, outperformed every human group in matching expert risk assessments, pointing to a potential path for scalable AI-assisted misinformation screening.

A viral post about the carnivore diet might get every nutritional fact about protein exactly right. It might simply neglect to mention what such a high intake does to cardiovascular health, or that very high consumption can sharply raise cholesterol levels, in some cases leading to visible skin changes. An extreme fasting guide might cite legitimate research on weight loss and skip over bone and muscle wasting, cognitive impairment, and the heightened risk of triggering disordered eating. Neither post is lying. Under the systems currently used to police online health content, neither might be flagged at all.

That gap is a public health problem. Roughly 23,000 Americans land in emergency rooms each year because of herbal and dietary supplements. During the early COVID-19 pandemic, viral videos prompted some people to wash produce with bleach. One reported case involved a fatal outcome linked to a water-only fasting regimen found online. In each case, the content doing the damage wasn’t necessarily made up. It was incomplete, selectively framed, or stripped of the context that would have made the danger clear.

Researchers at University College London set out to build something a standard fact-check cannot produce: a tool that rates how dangerous diet and nutrition content is, even when it isn’t outright false. Published in Scientific Reports, the resulting Diet-Nutrition Misinformation Risk Assessment Tool (Diet-MisRAT) scores content across a five-tier scale, from very low to very high risk, using the same risk-grading logic public health officials apply to chemical and biological threats. A piece of diet content, the researchers argue, doesn’t have to be false to be dangerous.

When Nutrition Misinformation Hides in Plain Sight

Most current systems for catching misleading health content work on a simple yes-or-no basis: true or false, real or fake. Diet-MisRAT was built on a different premise. Misleading diet content functions more like a toxic substance than a simple lie, with harm that varies in degree depending on what a post contains, how it’s framed, and what it quietly leaves out.

Lead researcher Alex Ruani and colleagues drew on scientific literature, case reports, regulatory enforcement records, and existing misinformation research to identify recurring risk patterns in nutrition content. Those included dangerous omissions, deceptive framing, emotionally manipulative language, and posts that selectively highlight accurate information while burying health risks. Each became a potential question in the final tool.

From there, the team organized those warning signs into four categories of risk: inaccuracy, incompleteness, deceptiveness, and health harm. Items were weighted by severity, so a post omitting a serious drug interaction scores higher than one with a minor factual slip. Totaling those scores produces one of five risk ratings, a graded picture of how dangerous a piece of content actually is rather than a simple ruling on whether it is technically true.
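The weighted-checklist logic described above can be sketched in a few lines. This is a minimal illustration only: the item names, severity weights, and tier cut-offs below are assumptions for demonstration, not the published instrument's actual items or thresholds.

```python
# Illustrative weighted-checklist scoring in the spirit of Diet-MisRAT.
# Items, weights, and tier cut-offs are hypothetical, not the real tool.

# Each flagged warning sign carries a severity weight (category noted
# for reference: inaccuracy, incompleteness, deceptiveness, health harm).
ITEM_WEIGHTS = {
    "omits_drug_interaction": 5,    # incompleteness, severe
    "minor_factual_error": 1,       # inaccuracy, mild
    "emotional_manipulation": 3,    # deceptiveness
    "promotes_unsafe_practice": 5,  # health harm
}

# Five-tier scale from very low to very high risk (cut-offs assumed).
TIERS = [
    (0, "very low"),
    (2, "low"),
    (5, "moderate"),
    (9, "high"),
    (13, "very high"),
]

def risk_rating(flagged_items):
    """Sum the severity weights of flagged items, then map the total
    score to the highest tier whose threshold it meets."""
    score = sum(ITEM_WEIGHTS[item] for item in flagged_items)
    rating = TIERS[0][1]
    for threshold, label in TIERS:
        if score >= threshold:
            rating = label
    return score, rating

# A post omitting a serious drug interaction outranks one with a minor
# factual slip, matching the severity weighting described above.
print(risk_rating(["omits_drug_interaction"]))  # (5, 'moderate')
print(risk_rating(["minor_factual_error"]))     # (1, 'very low')
```

The point of the structure, not the particular numbers: severity-weighted items accumulate into a graded rating rather than a binary true/false verdict.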

Diet misinformation doesn’t have to be false to be dangerous. Scientists built a tool to rate how dangerous nutrition content really is. (Credit: Iryna Imago on Shutterstock)

Testing a Nutrition Misinformation Scale Across Human Experts and AI

Before releasing the tool, the researchers ran it through five rounds of real-world testing. Round one brought in two senior academic experts with more than 45 years of combined teaching experience. Both matched the developer’s benchmark almost perfectly.

Rounds two through four expanded testing to seven trainee dietitians, 33 postgraduate nutrition students, and 15 highly experienced nutrition professionals, with each group independently scoring the same sample article. Experienced professionals came closest of any human group to the expert benchmark, and trainee dietitians also scored well. Students showed the most variability, particularly those without an undergraduate background in nutrition.

Round five produced the most arresting result. ChatGPT, given only the tool’s instructions and the same article, with no task-specific training or access to benchmark answers, showed the highest alignment with the expert benchmark of any group tested. The GPT-4o model reached a mean accuracy of 93.9%; the o3 model followed at 84.4%. Both rarely flagged content as risky when it wasn’t.

Why Rating Nutrition Misinformation by Danger Level Matters More Than Fact-Checking

That result carries weight for a specific reason. Most AI misinformation detectors depend on large, pre-labeled training datasets, which are expensive to build and scarce in specialized fields like nutrition. Diet-MisRAT gave ChatGPT a structured, expert-validated framework to reason from, and that framework alone was enough to match human expert performance. A prompt-based approach like this could potentially work at scale without the enormous labeled datasets that conventional systems require.
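A prompt-based screen of this kind can be sketched without any training data: embed the rubric in the prompt and parse a structured reply. The rubric wording and `RISK: <tier>` response format below are assumptions for illustration (the published tool's instructions are not reproduced here), and the model reply is mocked in place of a real chat-completion call.

```python
# Sketch of prompt-based misinformation screening: the full rubric goes
# into the prompt, so no labeled training dataset is needed. Rubric text
# and reply format are illustrative assumptions.

RUBRIC = (
    "Rate the diet/nutrition content below on a five-tier risk scale "
    "(very low, low, moderate, high, very high), considering inaccuracy, "
    "incompleteness, deceptiveness, and potential health harm. "
    "Reply with a single line: RISK: <tier>."
)

VALID_TIERS = {"very low", "low", "moderate", "high", "very high"}

def build_prompt(content: str) -> str:
    """Attach the content to be screened beneath the fixed rubric."""
    return f"{RUBRIC}\n\nCONTENT:\n{content}"

def parse_reply(reply: str) -> str:
    """Extract the tier from a 'RISK: <tier>' line; reject anything else."""
    for line in reply.splitlines():
        if line.strip().upper().startswith("RISK:"):
            tier = line.split(":", 1)[1].strip().lower()
            if tier in VALID_TIERS:
                return tier
    raise ValueError("no valid RISK line in model reply")

# Mocked model reply, standing in for a real LLM API call:
print(parse_reply("RISK: high"))  # high
```

Because the expert-validated framework lives in the prompt itself, swapping in a different model or an updated rubric requires no retraining, which is what makes the approach attractive at scale.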

A five-tier risk scale also creates options a yes-or-no system cannot support. Platforms could triage their most dangerous content for immediate action while handling moderate-risk posts differently. Educators could use risk scores as a teaching framework. Regulators could match their response to actual severity rather than treating a mildly misleading wellness post the same as something that could put a reader in the hospital. The tool is available under license at misinformation.science, and its underlying model was designed to extend beyond diet and nutrition to medicine, mental health, and food safety.

Some of the most dangerous diet content circulating online tells no outright lies. Accurate facts and dangerous omissions often travel together in the same post, and the credibility that comes with getting some things right makes what gets left out far harder to catch. Diet-MisRAT was built for exactly that gray zone: content that clears a fact-check and still sends someone to the hospital.


Disclaimer: This article is based on a single published study and is intended for general informational purposes only. It does not constitute medical or dietary advice. The Diet-MisRAT tool described has not yet been tested at scale in real-world platform or clinical settings, and its broader effectiveness remains to be established. Readers with specific health or nutrition concerns should consult a qualified healthcare professional.


Paper Notes

Limitations

The validation study carries several constraints worth noting. Testing was conducted in English and predominantly with UK-based participants, which limits how broadly results apply across other languages and cultures. Sample sizes in each round were relatively small, though participants were specialized and engaged throughout. Users with weaker subject knowledge or lower English proficiency showed more variability in their scores, suggesting that wider rollouts may benefit from onboarding support. The study did not measure whether using the tool made people better at spotting misinformation over time. Formal statistical reliability measures were not applied across all rounds, and how well results hold up across different AI systems or future model versions remains to be tested.

Funding and Disclosures

According to the paper, this research was undertaken without financial support. The authors declare no competing interests. Ethical approval was obtained under the Ethics Review Procedures at the Institute of Education, University College London, with data registration number Z6364106/2018/06/67. Participation in all human study rounds was voluntary.

Publication Details

Authors Alex Ruani, Michael J. Reiss, and Anastasia Z. Kalea, all affiliated with University College London, conducted this research. Ruani holds positions at both the UCL Institute of Education and The Health Sciences Academy, London. Kalea is based in the Faculty of Medical Sciences, Division of Medicine, University College London. The study, “Development and validation of a tool for detecting misinformation risk in diet, nutrition, and health content (Diet-MisRAT),” was published in Scientific Reports (2026), Volume 16, Article 9207. DOI: https://doi.org/10.1038/s41598-026-40534-2. Received June 13, 2025; accepted February 13, 2026; published online March 27, 2026.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better) than field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor
