8 min read · Updated December 2024

How to Read Health Claims

A BS detector for wellness marketing

"Clinically proven." "Backed by science." "Studies show." These phrases are everywhere in health and wellness marketing. But what do they actually mean? Often, less than you'd think.

The wellness industry is full of exaggerated claims, cherry-picked studies, and outright misinformation. This guide teaches you to read health claims critically—so you can separate interventions that might actually work from expensive placebos and potentially harmful snake oil.

The Hierarchy of Evidence

Not all evidence is created equal. Here's a rough hierarchy from strongest to weakest:

Strongest:

  1. Systematic reviews and meta-analyses — Pool data from multiple studies. The gold standard.
  2. Randomized controlled trials (RCTs) — Participants randomly assigned to treatment or placebo. The most rigorous way to establish causation.
  3. Cohort studies — Follow groups over time. Can show associations but not causation.

Weaker:

  4. Case-control studies — Compare people with a condition to those without. Prone to bias.
  5. Case reports — Single patient stories. Interesting but not proof of anything.
  6. Animal studies — Useful for mechanisms but don't always translate to humans.
  7. In vitro (cell) studies — Even further from human reality.

Not actually evidence:

  • Testimonials and anecdotes
  • Expert opinion (without supporting data)
  • Traditional use ("used for centuries")
  • Mechanistic speculation ("should work because...")

When evaluating a claim, ask: What's the actual evidence? A supplement backed by a few cell studies and testimonials is very different from one backed by multiple RCTs.

The gap between cell studies and human trials is massive. Many things kill cancer cells in a petri dish—including bleach and gunfire. That doesn't make them cancer treatments.

Red Flags to Watch For

These patterns should trigger skepticism:

1. "Studies show..." without specifics

Which studies? In what population? With what methodology? If the claim doesn't cite actual research you can look up, be suspicious. Vague references to "science" are often cover for no real evidence.

2. Dramatic claims with weak evidence

"Reverses aging!" based on a mouse study. "Cures depression!" based on 12 people without a control group. The more dramatic the claim, the more robust the evidence should be.

3. Single study cited repeatedly

One study doesn't prove anything—it's a data point. Especially if it's old, small, or hasn't been replicated. Look for multiple independent studies reaching similar conclusions.

4. Mechanism-only arguments

"This compound increases BDNF, which is important for brain health, so it must improve cognition." Logical, but skips the part where you actually test if it works. Many things that should work in theory don't work in practice.

5. Cherry-picking positive results

If 10 studies exist and 2 show benefits while 8 show nothing, citing only the 2 positive studies is misleading. Look for systematic reviews that assess all available evidence.

6. Testimonials as primary evidence

"Changed my life!" is not evidence. Placebo effects are real, regression to the mean exists, and people selling products have incentives to share only positive experiences.

7. Appeal to nature

"Natural" doesn't mean safe or effective. Arsenic is natural. So is cobra venom. The naturalistic fallacy is pervasive in wellness marketing.

8. Conspiracy framing

"Big Pharma doesn't want you to know..." is often used to explain away lack of evidence. Real treatments get studied because there's money in things that work.

Questions to Ask

When evaluating any health claim, run through these questions:

About the evidence:

  • What type of study is this? (RCT > observational > anecdotes)
  • How many participants? (Larger is generally better)
  • Was there a control group? (Essential for knowing if it actually works)
  • Was it blinded? (Participants, and ideally the researchers too, not knowing who got treatment or placebo)
  • Has it been replicated? (One study is a starting point, not proof)
  • Who funded it? (Industry-funded studies often have more favorable results)

About the claim:

  • Is the effect size meaningful? (Statistically significant ≠ clinically meaningful)
  • What population was studied? (Does it apply to you?)
  • What dose was used? (Is that what the product contains?)
  • What's the time frame? (Long-term vs. acute effects)
  • What were the side effects? (Everything has trade-offs)

About the source:

  • Who's making this claim? (Scientist vs. salesperson)
  • Do they have something to sell? (Conflict of interest)
  • Are they citing primary sources? (Or just making assertions)
  • What do independent experts say? (Not affiliated with the product)

When in doubt, check what Examine.com says. They review supplements based on actual research, grade the evidence quality, and have no financial ties to supplement companies.

How to Read Studies Yourself

You don't need a PhD to evaluate basic research. Here's how to read a study:

The Abstract

This summarizes the study. Read it first for the overview. But don't stop there—abstracts often oversell results.

The Methods Section

This tells you how the study was done. Look for:

  • Sample size (how many people)
  • Study design (RCT, cohort, etc.)
  • Duration (how long did they follow people)
  • What exactly was measured
  • The control or comparison group

The Results

What did they actually find? Look for the following; a short worked example comes after the list:

  • Effect sizes (how big was the difference)
  • Confidence intervals (the range of likely true effect)
  • P-values (though these are overemphasized)
  • Dropout rates (did people leave the study)
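
To make effect sizes and confidence intervals concrete, here's a minimal Python sketch using invented numbers for two hypothetical trial arms. It uses a simple normal approximation; a real analysis of samples this small would use a t-distribution.

```python
import math
import statistics

# Invented outcome scores for two hypothetical trial arms (illustration only)
treatment = [4.1, 2.8, 5.0, 3.3, 4.7, 2.9, 3.8, 4.4, 3.1, 4.9]
placebo = [2.2, 3.0, 1.8, 2.7, 3.4, 2.1, 2.9, 2.5, 3.2, 2.0]

# Effect size here is just the raw difference in group means
diff = statistics.mean(treatment) - statistics.mean(placebo)

# Standard error of that difference (allowing unequal variances)
se = math.sqrt(
    statistics.variance(treatment) / len(treatment)
    + statistics.variance(placebo) / len(placebo)
)

# Rough 95% confidence interval via the normal approximation
ci_low, ci_high = diff - 1.96 * se, diff + 1.96 * se

print(f"Mean difference: {diff:.2f}, 95% CI: ({ci_low:.2f}, {ci_high:.2f})")
```

An interval that excludes zero suggests a real difference; its width tells you how precisely the effect was estimated.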

The Limitations Section

Good studies acknowledge their weaknesses. This is often the most honest part of the paper. If there's no limitations section, be concerned.

The Funding/Conflict of Interest

Who paid for this? Industry-funded research isn't automatically wrong, but it should be viewed more critically.

Understanding Statistical Tricks

Some common ways statistics can mislead:

Relative vs. Absolute Risk

"50% reduction in risk!" sounds impressive. But if the baseline risk was 2%, a 50% reduction means going from 2% to 1%. That's a 1% absolute difference—much less dramatic.

Always ask for absolute numbers, not just percentages.
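
If you want to check that arithmetic yourself, here's a tiny Python sketch (the function name and numbers are just for illustration). It also computes the number needed to treat: how many people must take the treatment for one person to benefit.

```python
def risk_summary(baseline_risk, treated_risk):
    """Contrast the relative and absolute framing of the same result."""
    arr = baseline_risk - treated_risk  # absolute risk reduction
    rrr = arr / baseline_risk           # relative risk reduction
    nnt = 1 / arr                       # number needed to treat
    print(f"Relative risk reduction: {rrr:.0%} (the headline number)")
    print(f"Absolute risk reduction: {arr:.1%} (the honest number)")
    print(f"Number needed to treat: {nnt:.0f} people for one to benefit")

# The example above: baseline risk 2%, treated risk 1%
risk_summary(0.02, 0.01)  # 50% relative, 1.0% absolute, NNT of 100
```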

P-Hacking

If you measure 20 different outcomes, one will likely show p<0.05 by chance alone. That's how you get "studies" showing eating chocolate makes you smarter. Look for pre-registered studies that stated their hypothesis before running the analysis.
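
A quick simulation shows how easily this happens. The sketch below assumes, for simplicity, that the 20 outcomes are independent; it relies on the fact that under the null hypothesis a p-value is uniformly distributed, so each test is just a random draw.

```python
import random

random.seed(42)
ALPHA = 0.05
N_OUTCOMES = 20     # outcomes measured per simulated "study"
N_STUDIES = 10_000  # studies where the treatment truly does nothing

# Count the studies where at least one outcome looks "significant"
hits = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_STUDIES)
)

print(f"Null studies with a 'significant' finding: {hits / N_STUDIES:.0%}")
print(f"Analytic expectation: {1 - (1 - ALPHA) ** N_OUTCOMES:.0%}")  # ~64%
```

Roughly two thirds of studies that measure 20 things will find "something", even when nothing works at all.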

Surrogate Endpoints

"Raises HDL cholesterol" doesn't mean "prevents heart attacks." Sometimes interventions improve markers without improving outcomes. Real endpoints (death, disease) matter more than lab values.

Subgroup Analysis

"It didn't work overall, but in left-handed men over 60 who ate breakfast, it worked!" Slicing data finely enough will always find something. Pre-specified subgroups are valid; post-hoc fishing is not.

Correlation vs. Causation

Observational studies can only show correlation. People who take vitamin D have lower cancer rates—but is vitamin D preventing cancer, or do healthy people just happen to take more vitamins?

A famous example: People who own horses live longer. But buying a horse won't extend your life—wealthy people are both more likely to own horses and to have access to good healthcare.
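
That horse example is easy to reproduce. Below is a toy Python simulation (all numbers invented) in which wealth raises both the chance of owning a horse and life expectancy, while horses themselves do nothing:

```python
import random
import statistics

random.seed(7)

def simulate_person():
    """Wealth is the confounder: it drives both variables."""
    wealthy = random.random() < 0.2
    owns_horse = random.random() < (0.30 if wealthy else 0.02)
    lifespan = random.gauss(82 if wealthy else 76, 5)  # wealth, not horses
    return owns_horse, lifespan

people = [simulate_person() for _ in range(100_000)]
owners = [life for owns, life in people if owns]
others = [life for owns, life in people if not owns]

print(f"Horse owners: {statistics.mean(owners):.1f} years")
print(f"Non-owners:   {statistics.mean(others):.1f} years")
```

Owners come out several years ahead even though the model gives horses zero causal effect: exactly the pattern an observational study would report.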

Practical Heuristics

Some quick rules of thumb when you don't have time for deep analysis:

The More Dramatic the Claim, the More Skeptical You Should Be

"May modestly improve X in some populations" is much more credible than "Revolutionary breakthrough that changes everything!"

If It Sounds Too Good to Be True, It Usually Is

Weight loss without effort, reversed aging, cured diseases—these claims should require extraordinary evidence. They rarely have it.

New ≠ Better

The newest supplement isn't necessarily superior to boring basics. Often it just means less safety data and more marketing hype.

Your Body Already Has These Processes

Claims about "boosting metabolism" or "supercharging immunity" often ignore that your body already regulates these things. Adding more of something isn't always helpful.

Follow the Money

Who profits if you believe this? The person selling the product has different incentives than an independent researcher. Factor this in.

Consensus Matters

If the scientific community broadly agrees on something, that's meaningful. One dissenting voice doesn't carry the same weight as an entire field's consensus.

Time Tests Claims

Every few years there's a new miracle supplement. Most fade away. The basics—exercise, sleep, diet, not smoking—remain consistent because they actually work.

Closing Thoughts

Critical thinking about health claims isn't about being cynical—it's about being accurate. Some supplements work. Some medications are genuinely beneficial. Some interventions are worth your time and money.

The goal is to identify which ones, rather than accepting claims at face value or rejecting everything equally.

Key takeaways:

  • Understand the hierarchy of evidence—RCTs beat testimonials
  • Watch for red flags that signal weak claims
  • Ask the right questions about evidence and sources
  • Be more skeptical of dramatic claims
  • Remember that conflict of interest matters
  • Default to proven basics when in doubt

The wellness industry has strong incentives to oversell and under-deliver. Your defense is informed skepticism combined with genuine openness to evidence. That's what we try to practice on this site, and it's what we encourage you to develop for yourself.

When we write about interventions on this site, we try to clearly state the evidence level and our confidence. "Strong evidence from multiple RCTs" is very different from "theoretically interesting but unproven." Reading those qualifiers matters.

Tags: basics, critical-thinking, evidence