A guide to thinking clearly

The Scientific Process: Humanity’s Best Tool for Finding Truth.

Science is not perfect. It is not a religion. It is the best error-correction system humans have built for separating what feels true from what survives reality.

01 — The problem

Why people are confused

The honest reason most people feel lost about health, nutrition, supplements, religion, politics and everything in between is not that they lack information. It is that nobody taught them how to weigh it.

Open any feed and you will be told that a peptide will rebuild your knees, that a podcast guest has cracked the code on longevity, that one nutrient is the secret, that another is poison, that a supplement is “science-backed,” that an institution is corrupt, and that a single study changes everything. Some of these claims are real. Most are exaggerated. A few are dangerous.

Underneath the noise, the situation is genuinely hard:

  • science changes over time
  • experts disagree
  • some studies are funded by companies with a financial stake in the result
  • nutrition advice has shifted across decades
  • podcasts, influencers, supplements, peptides and wellness trends all claim to be “evidence-based”
  • religion, ideology, tradition and social media all compete for the same belief-shaped slot
  • most people have never been taught how to decide what is probably true

The problem is not that people lack information. The problem is that people lack a method for judging information.

This page is the method — the one I would have wanted at twenty. It is not a manifesto for blind trust in “the science,” and it is not a vibes-based rejection of institutions either. It is a working toolkit for adults who have to make decisions without certainty.

02 — What it actually is

Science is a process, not a fixed book of facts

When most people say “the science,” they mean “the current best summary of what has survived testing so far.” The summary is not the thing. The process underneath it is.

The scientific method is a disciplined loop — a way of asking questions that forces ideas to put themselves at risk before they can be called knowledge.

Observe

Notice something in the world that wants explaining.

Ask a question

Frame it precisely enough that an answer can be wrong.

Form a hypothesis

A specific guess that predicts what should happen.

Define variables

Decide what is being changed, measured and held constant.

Design a test

Set up a procedure that could prove the hypothesis wrong.

Collect data

Run the test honestly, without nudging the result.

Analyse results

Use statistics carefully. Distinguish noise from signal.

Invite criticism

Publish, share methods and data, and let others attack the work.

Replicate

A different team should be able to find the same result.

Update beliefs

Hold conclusions provisionally. Revise as evidence accumulates.

Science is not truth itself. Science is a disciplined process for reducing error.
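The loop above can be compressed into a toy simulation. Everything here is illustrative and the names are my own: the "experiment" is just a random draw around an assumed true effect, standing in for any honest measurement.

```python
import random
import statistics

def experiment(true_effect, n, noise=1.0, seed=None):
    """One honest test: n noisy measurements around an assumed true effect."""
    rng = random.Random(seed)
    return [true_effect + rng.gauss(0, noise) for _ in range(n)]

def scientific_loop(true_effect=0.5, n=100, teams=3):
    """Hypothesise, test, replicate, update -- as a toy loop."""
    # Falsifiable hypothesis: a null or negative result would refute it.
    hypothesis = "the effect is greater than zero"
    estimates = []
    for team in range(teams):              # replication: independent teams
        data = experiment(true_effect, n, seed=team)
        estimates.append(statistics.mean(data))
    # "Update beliefs": hold the conclusion only as strongly as replications agree.
    agree = all(e > 0 for e in estimates)
    return hypothesis, estimates, agree
```

With a real effect of 0.5 and a hundred measurements per team, the three simulated teams land near the true value and agree; set `true_effect=0.0` and agreement usually collapses, which is exactly why replication is in the loop.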

03 — Why it changes

Changing your mind is the feature, not the bug

People often say “they keep changing their minds, so it must be fake.” The opposite is closer to the truth. A field that never updates is a field that has stopped looking.

When recommendations shift, it is usually because at least one of these has happened:

  • new evidence arrived
  • better tools or measurement made old data look noisy in retrospect
  • bigger, longer, more representative studies were finally completed
  • previous assumptions were challenged and found weak
  • old recommendations were oversimplified for the public and the simplification didn’t survive
  • incentives, politics or funding distorted early conclusions, and later work corrected them

A worked example: nutrition. The food pyramid of the early 1990s mixed real evidence with public-policy choices, agricultural lobbying and a mid-century bet against dietary fat that turned out to be too simple. Decades later, large reviews and reassessments have moved the consensus toward whole foods, fibre and unsaturated fats, with caution around ultra-processed foods. That movement is not science failing. It is science doing the only thing it is allowed to do: respond to better evidence.

Two things can be true at the same time. Nutrition science is genuinely difficult — humans are messy, diets are complex, long-term studies are expensive, and food guidelines are partly public policy. And the modern recommendations are still better grounded than what they replaced. Hard fields move slowly. Slow movement is not the same as no movement.

04 — When it’s wrong

When science gets it wrong

Honest defenders of science don’t pretend it has a clean record. The argument is not that science is always right. The argument is that science has the only built-in mechanism for catching its own mistakes.

Case 01

Semmelweis and handwashing

In the 1840s, Ignaz Semmelweis showed that doctors washing their hands between the dissection room and the maternity ward dramatically cut deaths from childbed fever. He had no germ theory to explain why it worked. The medical establishment rejected him for decades. The data was right; the community wasn’t ready. Eventually, germ theory caught up and his protocol became standard.

Case 02

Smoking and lung cancer

Doll and Hill’s 1950 case-control study and the long-running British Doctors Study built up a clear link between smoking and lung cancer. The tobacco industry funded counter-research, manufactured doubt, and delayed regulation. Evidence eventually overwhelmed the noise; the 1964 U.S. Surgeon General’s report consolidated the case. It took decades because the system that funded the wrong answer was powerful, not because the science couldn’t see.

Case 03

Replication crisis

Starting around 2010, large efforts in psychology and biomedicine showed that many flagship results could not be reproduced by independent teams. The response was not denial; it was reform: pre-registration, registered reports, sharing data, larger samples, and stricter statistics. The crisis itself is the error-correction mechanism doing its job out loud.

Case 04

Industry-funded research bias

A Cochrane methodology review of pharmaceutical and device studies found that industry-sponsored research is more likely to report results favourable to the sponsor than independent research, even when methodological quality looks similar. A separate JAMA Internal Medicine analysis traced how the sugar industry, in the 1960s, shaped the framing of dietary fat versus sugar in heart-disease research. Funding doesn’t make a finding wrong, but it shifts the prior.

Science can be wrong at one point in time. The scientific process has a built-in mechanism to expose that wrongness. Most belief systems do not.

05 — Why it’s still the best

Compared to what?

Science is not in competition with religion, tradition or intuition for the same job. They answer different questions. But when the question is “is this claim about the physical world likely to be true?”, science wins on the only metric that matters — how it treats its own mistakes.

Science (at its best) asks

  • Can this claim be tested?
  • Can someone else reproduce the result?
  • What evidence would prove it wrong?
  • How big is the effect, in real units?
  • What are the alternative explanations?
  • Who funded this, and who benefits if I believe it?
  • Has it survived criticism by competent skeptics?

Most other belief systems ask

  • Does this fit what I already believe?
  • Does the source have authority or charisma?
  • Has this been said for a long time?
  • Does it feel emotionally true?
  • Does my group accept it?
  • Does it tell a satisfying story?
  • Does rejecting it have a social cost?

This is not arrogance about science. Religion, philosophy, ethics and lived experience answer questions science cannot — how to live, what to value, how to grieve, what we owe each other. But for empirical claims about the physical world, the comparison is one-sided.

Most belief systems protect their conclusions. Science, at its best, puts its conclusions at risk.

06 — Evidence ladder

Not all “science-backed” is equal

The phrase “science says” can mean anything from a single mouse study to a decade of replicated human trials. Always ask which level of evidence is being used. The ladder below is the rough hierarchy clinicians, regulators and methodologists actually use.

L1

Anecdote

A single person’s experience. Useful for generating questions, almost worthless for proving general claims.

Example: “It worked for me.”

L2

Mechanism or theory

A plausible biological or physical reason something should work. Necessary, but not enough — bodies don’t care about elegant mechanisms.

Example: “This compound binds to that receptor, therefore…”

L3

Animal or lab study

Useful for safety screens and biology, but mice are not small humans. Most things that work in mice fail in human trials.

Example: “In rats, the molecule extended lifespan.”

L4

Observational human study

Tracks people in the real world. Can show patterns and correlations, but confounding (other things that travel with the variable) is a major issue.

Example: “People who eat X tend to live longer.”

L5

Small human trial

Better than observation because the researcher controls who gets the intervention. But small samples produce noisy results — underpowered trials over- and under-state effects routinely.

Example: “In 30 volunteers, a marker improved.”
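The "small samples are noisy" point is easy to demonstrate. The sketch below uses invented numbers, not any real trial: it reruns the same simulated study many times at two sample sizes and compares how far the estimates wander from run to run.

```python
import random
import statistics

def trial_estimates(true_effect, n, runs=200, noise=1.0, seed=42):
    """Rerun the same simulated trial many times; return the spread of its estimates."""
    rng = random.Random(seed)
    means = [
        statistics.mean(true_effect + rng.gauss(0, noise) for _ in range(n))
        for _ in range(runs)
    ]
    return statistics.stdev(means)

small = trial_estimates(true_effect=0.2, n=30)     # a 30-volunteer trial
large = trial_estimates(true_effect=0.2, n=3000)   # a 3,000-person trial
# The small trial's estimates scatter roughly 10x wider (sqrt(3000/30) = 10),
# which is why a single small trial so often over- or under-shoots the truth.
```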

L6

Large randomised controlled trial

A big sample, randomly assigned, ideally double-blinded and pre-registered. The closest most fields ever get to clean cause-and-effect in humans.

Example: a multi-thousand-person blinded trial of a drug versus placebo.

L7

Multiple independent replications

Different teams, different settings, similar findings. Replication is the part most consumer headlines skip and the part that matters most.

Example: three independent labs find the same effect within a similar range.

L8

Systematic review or meta-analysis

Pools many studies under explicit rules. Strong only when the underlying studies are strong; garbage in, garbage out is real here too.

Example: a Cochrane review on intervention X.

L9

Guideline-level consensus

A formal recommendation built on transparent evidence review using systems like GRADE. Strongest when the process is clean — weaker when politics, funding or rushed timelines shape it.

Example: a national clinical guideline citing graded evidence.

Watch for the level mismatch

Most viral “science-backed” claims are sitting at L2 or L3 while implying L7 or L8. The single most useful question you can ask is: at what rung is this evidence, really?
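Because the ladder is an ordered scale, the mismatch check can be written down directly. The rung names and the sample claim below are my own illustration of the idea, not a real classification tool.

```python
# Evidence rungs from the ladder above, weakest to strongest.
LADDER = [
    "anecdote",            # L1
    "mechanism",           # L2
    "animal_or_lab",       # L3
    "observational",       # L4
    "small_trial",         # L5
    "large_rct",           # L6
    "replications",        # L7
    "systematic_review",   # L8
    "consensus_guideline", # L9
]
RUNG = {name: i + 1 for i, name in enumerate(LADDER)}

def level_mismatch(evidence_cited: str, confidence_implied: str) -> int:
    """How many rungs a claim's tone outruns its actual evidence (0 = honest)."""
    return max(0, RUNG[confidence_implied] - RUNG[evidence_cited])

# A viral post cites a mouse study (L3) but talks like settled consensus (L9):
gap = level_mismatch("animal_or_lab", "consensus_guideline")   # a 6-rung overreach
```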

07 — A practical checklist

How to think when someone says “science says”

Run any strong claim through this list before letting it shape what you do, what you buy or what you tell someone you love. None of these questions require a PhD.

  • Was this in humans, or in animals or cells?
  • How many people were studied?
  • Was there a control group?
  • Was it randomised?
  • Was it blinded? Single, double, or not at all?
  • Is the result clinically meaningful, or only statistically significant?
  • Has it been replicated by independent researchers?
  • Who funded the study?
  • Are the people promoting it selling something?
  • What is the downside if the claim turns out to be wrong?
  • Is the claim being made stronger than the evidence supports?
  • Is the outcome a real-world endpoint, or just a biomarker?
  • Is this a broad consensus, or one exciting paper?

You will not get clean answers to all thirteen questions on every claim. The point is not perfection. The point is that running the questions, even partially, blocks most of the bad claims that fly past ordinary attention.

08 — A grounded look

Why peptides, supplements and biohacking are hard to judge

Wellness culture is the highest-volume area where the words “science-backed” are used and the lowest-density area where actual high-rung evidence exists. Treat it accordingly.

A few honest points to hold in mind:

  • Some peptides are legitimate, regulator-approved medicines (e.g. insulin, GLP-1 agonists). Many others, including widely promoted ones, are sold as “research chemicals” outside any approval pathway.
  • A plausible biological mechanism does not prove real-world benefit. The body is not a single pathway.
  • Early-stage evidence is interesting, not conclusive. Most early signals fade once the studies get bigger.
  • Safety is a separate question from effectiveness. A compound can “work” on a marker and still hurt you elsewhere.
  • Long-term effects of most novel compounds are simply unknown. Absence of harm in 8-week trials is not absence of harm at 8 years.
  • Influencers earn from belief, not from being right ten years later. Their incentive structure rewards strong claims now.

Australia’s TGA, the U.S. FDA and other regulators have publicly warned about unapproved peptides being marketed for cosmetic, performance or anti-ageing use. That isn’t the regulators being slow. That is the regulators noticing what the wellness market is doing and saying: we have not seen the evidence you’re implying.

The question is not “does it sound scientific?” The question is: how strong is the evidence, how safe is it, who benefits from the belief, and what is the cost if I’m wrong?

09 — Belief framework

A better way to form beliefs

The goal is not to never be wrong. The goal is to be wrong less often, and to update faster when you are.

Hold beliefs with confidence levels, not certainty. “Probably true,” “leans true,” “genuinely uncertain,” “leans false” are real categories.

Separate what is known, what is likely, what is unknown and what is speculative. Almost no one does this in conversation, and it is the single biggest upgrade.

Update when better evidence arrives, not when louder voices arrive.
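"Update when better evidence arrives" has a standard arithmetic behind it: Bayes' rule, easiest to use in odds form. The numbers below are invented to show the mechanics, not drawn from any real study.

```python
def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Bayes in odds form: posterior odds = prior odds x likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start at "leans false": 20% that a supplement works.
# A solid trial reports a result 4x likelier if it works than if it doesn't.
p1 = update(0.20, 4.0)    # 0.50: now genuinely uncertain
# An independent replication of the same strength:
p2 = update(p1, 4.0)      # 0.80: now "probably true"
# Note what did NOT move the number: volume, charisma, repetition.
```

The point of the exercise is the shape, not the decimals: evidence moves the number in proportion to how surprising it would be if the claim were false, and a louder voice has a likelihood ratio of exactly 1.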

Be more skeptical when someone is selling something — a product, a worldview, a side.

Be more careful when the downside is large or irreversible. The bar for “probably fine” should scale with what happens if you’re wrong.

Prefer claims that have survived criticism over claims that have only survived applause.

Do not confuse confidence with truth. Loud certainty is a vibe, not evidence.

Do not confuse skepticism with intelligence. Reflexive doubt of everything is just contrarianism with better branding.

Do not reject all institutions just because some institutions failed. The replacement for a flawed institution is a better-functioning one, not no institution.

10 — The thesis

Why this is worth taking seriously

Science is not perfect because humans are not perfect. But it remains the best system we have because it does not ask us to believe forever. It asks us to test, criticise, replicate and update.

The goal is not to worship science. The goal is to become harder to fool.

11 — Further reading

References & resources

Primary sources where possible. If a claim on this page felt strong, the source it leans on is here. Read these; they are better than any single summary.

  1. National Academies of Sciences, Engineering, and Medicine — Reproducibility and Replicability in Science (2019). The definitive recent treatment of what reproducibility means and how seriously to take the so-called replication crisis. (nap.nationalacademies.org)
  2. Cochrane Handbook for Systematic Reviews of Interventions (current edition). The methodological reference for how systematic reviews should be conducted. (training.cochrane.org/handbook)
  3. GRADE Working Group — rating the certainty of evidence. The framework most modern guidelines use to grade evidence and recommendations. (gradeworkinggroup.org)
  4. Ioannidis JPA — Why Most Published Research Findings Are False, PLoS Medicine (2005). A foundational paper on why a published “positive” result is often less reliable than people assume.
  5. Lundh A et al. — Industry sponsorship and research outcome, Cochrane Database of Systematic Reviews (2017). The methodological review showing industry-funded studies are more likely to favour the sponsor.
  6. Kearns CE, Schmidt LA, Glantz SA — Sugar Industry and Coronary Heart Disease Research, JAMA Internal Medicine (2016). The historical analysis of internal sugar-industry documents and their influence on heart-disease research.
  7. Open Science Collaboration — Estimating the reproducibility of psychological science, Science (2015). The large multi-lab effort that brought the replication crisis into the open.
  8. Center for Open Science — Registered Reports. A practical reform: peer-review the study design before the data is collected, so a null result still gets published. (cos.io/initiatives/registered-reports)
  9. Doll R, Hill AB — Smoking and Carcinoma of the Lung, BMJ (1950); and the British Doctors Study (1951–2001). The original case-control paper and the long cohort study that built the smoking–lung-cancer evidence base.
  10. U.S. Surgeon General — Smoking and Health (1964), and the history of the Surgeon General’s reports on smoking. The consolidation of the evidence and a useful study in how scientific consensus crosses into policy. (CDC archive)
  11. Therapeutic Goods Administration (Australia) — consumer guidance on personal importation and unapproved therapeutic goods. A useful primer on what “unapproved” means in practice, including for many marketed peptides. (tga.gov.au)
  12. U.S. Food and Drug Administration — statements on compounded peptides and unapproved use. Background on why “sold as a research chemical” is not a safety claim. (fda.gov)
  13. World Health Organization, U.S. NIH and Australia’s NHMRC — evidence-based guidance portals. Useful starting points when you want to check what consensus actually says about a specific health topic. (who.int · nih.gov · nhmrc.gov.au)