
Who to Believe: A Practical Framework for Evaluating Sources in the Modern Information Environment

Professors, certificates, confident tones, and large social followings are weak predictors of accuracy. Here's the hierarchy of evidence and the specific signals that predict whether someone is actually right.

The information environment has a specific failure mode: the signals people use to evaluate source credibility correlate poorly with the signals that actually predict accuracy.

High-confidence delivery — the assured tone of a person who has no doubt — correlates with persuasiveness and with large Twitter/TikTok followings. It does not correlate with accuracy. Calibrated uncertainty — expressing confidence in proportion to the actual evidence — is rarely rewarded by social media engagement algorithms, and is therefore rare among high-visibility communicators.

The Signals That Predict Nothing

  • Confidence of delivery — poorly calibrated people are systematically more confident than calibrated people (Dunning-Kruger effect)
  • Social following size — engagement algorithms optimize for emotional activation, not factual accuracy
  • Credential type alone — a PhD in one field doesn't predict accuracy in an adjacent field; credential relevance matters
  • Anecdotal testimonial volume — large numbers of individual stories do not constitute generalizable evidence; publication bias and survivorship bias systematically inflate anecdote quality

> 📌 A 2018 study in Science found that false news spread 6× faster than accurate news on social platforms — driven not by bot amplification but by human engagement choices prioritizing novelty and emotional activation — demonstrating that virality is a reliable negative indicator of accuracy, not a neutral or positive one.[1]

The Evidence Hierarchy (Applied Practically)

| Evidence type | Weight | Note |
|---|---|---|
| Systematic review / meta-analysis | Highest | Aggregates evidence across multiple studies |
| Randomized controlled trial (RCT) | High | Controls for confounders; appropriate for causal claims |
| Prospective cohort study | Moderate | Observational; identifies associations, not causation |
| Case series / case reports | Low | Not representative; insufficient for inference |
| Expert opinion | Lowest scientific weight | Can guide when no data exists; expert consensus > individual opinion |
| Anecdote / testimonial | Not evidence | Selection bias, placebo effect, regression to the mean |
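The hierarchy above is an ordering, and orderings can be made mechanical. The sketch below encodes it as a ranked lookup so two claims can be compared by evidence type. The numeric ranks and the `stronger_evidence` helper are illustrative assumptions for this sketch, not values or terminology from the table itself:

```python
# Illustrative sketch: the evidence hierarchy as a comparable ranking.
# The numeric ranks are assumptions chosen to preserve the table's ordering;
# only the relative order matters, not the numbers themselves.
EVIDENCE_RANK = {
    "anecdote": 0,        # not evidence: selection bias, placebo, regression to the mean
    "expert_opinion": 1,  # lowest scientific weight; consensus > individual opinion
    "case_series": 2,     # not representative; insufficient for inference
    "cohort_study": 3,    # observational: association, not causation
    "rct": 4,             # controls for confounders; supports causal claims
    "meta_analysis": 5,   # aggregates evidence across multiple studies
}

def stronger_evidence(a: str, b: str) -> str:
    """Return whichever of two evidence types sits higher in the hierarchy."""
    return a if EVIDENCE_RANK[a] >= EVIDENCE_RANK[b] else b

print(stronger_evidence("rct", "anecdote"))                # -> rct
print(stronger_evidence("cohort_study", "meta_analysis"))  # -> meta_analysis
```

The point of the exercise is that a deliberate comparison like this, however crude, beats the default heuristic of weighting sources by confidence or follower count.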

The Practical Questions

Before accepting a claim:

  1. What's the evidence type? Anecdote or RCT? If the claim comes from an observational study, is causation being inferred where only association was demonstrated?
  2. Who is the claim serving? Does the person have a financial or ideological interest in the claim being accepted?
  3. What would change their mind? A person who cannot identify what evidence would change their mind is not reasoning from evidence — they're reasoning toward a conclusion.
  4. Is the confidence level proportional to the evidence strength? High certainty on weak evidence means poor calibration. Weight such sources accordingly for high-stakes decisions.

The minimum viable filter: Does the claim link to peer-reviewed research? Does that research actually support the specific claim, or is it a related finding being extrapolated?
