MakeUsWell

All of Us

Why We Can’t Trust Most Nutrition Headlines

By Michael J. Critelli | MakeUsWell Newsletter


We all know that much of today’s reporting—whether in mainstream outlets or alternative media—fails to meet even minimum journalistic standards. Too often, writers start with a preferred narrative and then cherry-pick or “force-fit” data to support it. Nowhere is this more pervasive than in nutrition reporting.

1. Confounding Variables and Reverse Causation

A favorite media trope is the “diet soda causes diabetes” story. One headline in Eating Well declared: “Diet Sodas May Actually Be Raising Your Diabetes Risk, New Study Says.” The fine print reveals that researchers merely found an association: participants who drank the most diet soda had a 129% higher relative risk of Type 2 diabetes than those who drank the least.
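Relative-risk figures like "129% higher" sound alarming on their own, but the absolute change depends entirely on the baseline rate, which headlines rarely report. A minimal sketch of the arithmetic, using a purely hypothetical 4% baseline (the study's actual baseline rate is not given here):

```python
# Hypothetical illustration of relative vs. absolute risk.
# The 4% baseline is an assumption for illustration only.
baseline_risk = 0.04

# "129% higher relative risk" means the risk is multiplied by 2.29.
relative_risk = 2.29
elevated_risk = baseline_risk * relative_risk

# The absolute increase is what the headline never states.
absolute_increase = elevated_risk - baseline_risk

print(f"Baseline risk:     {baseline_risk:.1%}")
print(f"Elevated risk:     {elevated_risk:.2%}")
print(f"Absolute increase: {absolute_increase:.2%} percentage points")
```

Under that assumed baseline, a dramatic-sounding 129% relative increase works out to roughly five percentage points of absolute risk, a very different impression from the headline.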

But an association is not causation. People who choose diet sodas may already be struggling with weight gain or insulin resistance—precisely the people at higher risk for diabetes. In other words, drinking diet soda may be an effect, not a cause, of underlying problems.

Even when studies attempt to control for confounding variables (age, gender, existing health status, income, exercise), unmeasured differences remain. Observational studies can suggest relationships worth investigating, but on their own they cannot establish cause and effect.
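The confounding problem is easy to see in a toy simulation. In the sketch below (all numbers are invented for illustration), a hidden factor such as pre-existing insulin resistance drives both diet-soda consumption and diabetes; soda has zero causal effect, yet a strong association appears anyway:

```python
# Toy simulation: a hidden confounder produces an association
# between diet soda and diabetes with NO causal link.
# All probabilities below are illustrative assumptions.
import random

random.seed(0)
n = 100_000
drinkers = nondrinkers = 0
drinkers_diabetes = nondrinkers_diabetes = 0

for _ in range(n):
    # Hidden confounder: already struggling with weight/insulin resistance.
    at_risk = random.random() < 0.3
    # People at risk are far more likely to have switched to diet soda.
    drinks_diet_soda = random.random() < (0.7 if at_risk else 0.2)
    # Diabetes depends ONLY on the confounder, not on the soda.
    diabetes = random.random() < (0.15 if at_risk else 0.03)
    if drinks_diet_soda:
        drinkers += 1
        drinkers_diabetes += diabetes
    else:
        nondrinkers += 1
        nondrinkers_diabetes += diabetes

rr = (drinkers_diabetes / drinkers) / (nondrinkers_diabetes / nondrinkers)
print(f"Observed relative risk: {rr:.2f}")  # well above 1.0 despite no causation
```

An observational study run on this simulated population would report a large elevated relative risk for diet-soda drinkers, even though, by construction, the soda causes nothing.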

2. Overinterpreting “Ultra-Processed” Food Studies

Few terms are as loosely defined as “ultra-processed.” Media coverage of these studies often treats the category as if it were a single, toxic substance. Yet foods labeled “ultra-processed” range from protein bars and yogurt drinks to frozen dinners and sodas—vastly different products with different nutritional profiles.

The broadness of this label makes it difficult to isolate cause and effect. A diet high in ultra-processed foods may indeed correlate with poor health, but that doesn’t prove that “processing” itself is the culprit. Lifestyle, income, education, and access to fresh food are usually intertwined. Headlines implying a clear causal link ignore that complexity.

3. Meta-Analyses Built on Weak Foundations

To overcome the limitations of small studies, researchers often perform meta-analyses—statistical reviews combining data from multiple studies. In principle, this should yield stronger conclusions. In practice, if the underlying studies are weak, the combined result simply multiplies their flaws.

A Cornell University analysis titled “Evaluation of a Meta-Analysis of the Association Between Red and Processed Meat and Selected Human Health Effects” concluded bluntly that the base papers used “do not provide evidence for the claimed health effects.” The authors noted that questionable research practices and small sample sizes likely explained the weak statistical significance of the findings.

As another 2017 review observed, stacking poorly controlled nutritional studies on top of each other doesn’t magically make them reliable. Biases, overinterpretation, and selective reporting can be amplified rather than corrected.
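The point can be made concrete with a small sketch. In the simulation below (all parameters are invented), twenty small studies share the same modest systematic bias; pooling them shrinks the confidence interval around the *biased* estimate, making a nonexistent effect look precise and statistically significant:

```python
# Toy sketch: pooling many small, equally biased studies does not
# remove the bias -- it just makes the biased answer look precise.
# All numbers are illustrative assumptions.
import math
import random

random.seed(1)
true_effect = 0.0   # there is no real effect
shared_bias = 0.2   # each study carries the same small systematic bias
study_se = 0.15     # standard error of each individual small study

estimates = [random.gauss(true_effect + shared_bias, study_se)
             for _ in range(20)]

# Fixed-effect pooling with equal weights (equal standard errors):
pooled = sum(estimates) / len(estimates)
pooled_se = study_se / math.sqrt(len(estimates))

print(f"Pooled estimate: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
# The pooled result sits near the shared bias (0.2), not the true 0,
# with a narrow interval that excludes zero.
```

The meta-analysis here "confirms" an effect that does not exist, because averaging removes random noise but leaves shared bias untouched.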

4. Extrapolating Animal and Cell Studies to Humans

Animal studies can be useful for exploring biological mechanisms, but they’re routinely overhyped in headlines. A chemical that harms a mouse’s liver at enormous doses may have no effect on humans at realistic levels.

As the National Academies Press summarized: “The human population, because of its extremely diverse genetic, environmental, nutritional, and disease status, is far more variable in response to chemicals than are populations of experimental animals.”

In short, human biology is far too complex for such one-to-one extrapolation. Yet “mouse study proves X causes cancer” remains a media staple because it generates clicks.

5. Selective or Sensational Reporting

Journalists face relentless pressure to produce stories that attract attention. When two studies appear on the same day—one confirming an old finding, another suggesting a surprising new one—guess which gets coverage? The “new” result, even if weak, becomes news; the replication study, which might be more trustworthy, is ignored as “not newsworthy.”

This structural bias ensures that the most surprising, least reproducible findings dominate headlines. When later studies fail to confirm them, few outlets bother to follow up. There’s no penalty for publishing sensational claims that quietly collapse under scrutiny.

What We’re Trying to Do Differently

Our purpose in launching this newsletter—and the browser-based product to follow—is to change that dynamic. We aim to provide balanced, intellectually honest commentary on nutrition and food research. We’re not chasing ad revenue or “clickbait” engagement. Our goal is to help readers become better critical thinkers about health information.

We recognize that most people don’t have time to read full research papers or evaluate study design. The average person sees dozens of conflicting headlines every month—coffee is good for you, then bad; eggs are poison, then miracle food; red wine extends life, then shortens it. It’s no wonder people give up trying to make sense of it all.

Our mission is to help you separate signals from noise. That means explaining not just what a study found, but what it didn’t—and what questions remain unanswered. We won’t tell you what to eat or drink. We will, however, help you understand how to think about the evidence.

We also acknowledge the limits of our role. We don’t replace healthcare professionals or your own lived experience. The most powerful health data often come from your body’s feedback, not a lab report.

Still, better understanding of research can transform everyday decisions—what you buy, how you cook, how you respond to marketing claims, and which health advice deserves your trust.

By building a community of readers who think critically and share insights, we can create a healthier culture of information—one where curiosity and humility replace clickbait certainty. Nutrition science will always evolve, but if we stay skeptical, informed, and open-minded, we can evolve with it.