What ‘Evidence-Based’ Actually Means

If you spend any time reading about wellbeing, mental health, or psychology, you will encounter the phrase “evidence-based” often enough that it begins to feel like furniture. It appears on therapy directories, supplement labels, corporate wellness programs, self-help books, and app store descriptions. It is used so frequently and so loosely that it has lost much of its meaning, which is a problem, because the meaning matters.

Understanding what evidence-based actually means in psychology is not a technical exercise. It is a practical skill. It helps you evaluate the claims you encounter, spend your time and money more wisely, and distinguish between what research genuinely supports and what someone hopes you will believe.

What the Term Is Supposed to Mean

In clinical and research psychology, “evidence-based” has a specific meaning that developed through decades of effort to ground practice in something more reliable than clinical intuition or tradition. The American Psychological Association defines evidence-based practice as the integration of the best available research with clinical expertise in the context of patient characteristics, culture, and preferences.

That first element, “best available research,” is doing a lot of work. Not all research is equal. A single study, however well designed, carries less weight than a replication. A replication carries less weight than a series of independent replications across different populations and settings. And a collection of studies can be synthesised in a meta-analysis, a statistical method that pools results across many studies to produce a more reliable estimate of an effect.
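
The pooling step behind a meta-analysis can be sketched in a few lines. This is a minimal fixed-effect meta-analysis using inverse-variance weighting, with study numbers invented purely for illustration:

```python
# Minimal fixed-effect meta-analysis via inverse-variance weighting.
# Each tuple is (effect estimate, standard error) -- invented numbers,
# purely for illustration.
studies = [
    (0.30, 0.15),
    (0.10, 0.08),
    (0.22, 0.12),
]

# More precise studies (smaller standard errors) get proportionally more weight.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"pooled effect ~ {pooled:.3f}, standard error ~ {pooled_se:.3f}")
# pooled effect ~ 0.164, standard error ~ 0.061
```

Notice that the pooled estimate sits between the individual study results, and its standard error is smaller than any single study’s. That is the point of pooling: the combined estimate is more reliable than any one study on its own.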

Effect size matters too. A study might show that an intervention produces a statistically significant improvement, meaning the result is unlikely to be due to chance, but the actual size of that improvement might be small enough to be clinically irrelevant. Evidence-based claims are stronger when they can point to meaningful effect sizes, not just statistical significance.
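
The distinction can be made concrete with a toy simulation, using invented numbers and nothing beyond Python’s standard library. With a large enough sample, even a shift of one twentieth of a standard deviation comes out “statistically significant”:

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical wellbeing scores: mean 50, SD 10. The "intervention" adds
# half a point -- an effect of d = 0.05, far too small to matter in practice.
n = 100_000
control = [random.gauss(50.0, 10.0) for _ in range(n)]
treated = [random.gauss(50.5, 10.0) for _ in range(n)]

mean_diff = statistics.mean(treated) - statistics.mean(control)
pooled_sd = statistics.pstdev(control + treated)  # rough pooled SD, fine here
cohens_d = mean_diff / pooled_sd  # standardised effect size

# Two-sample z-test; at this sample size the normal approximation is fine.
se = pooled_sd * math.sqrt(2 / n)
z = mean_diff / se
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"significant (p < 0.001): {p < 0.001}, Cohen's d ~ {cohens_d:.3f}")
```

The test comes back highly significant, yet Cohen’s d is around 0.05, a difference no one would notice in daily life. Statistical significance tells you a result is unlikely to be chance; it says nothing about whether the effect is big enough to care about.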

Peer review is another layer. Research published in peer-reviewed journals has been evaluated by independent scientists in the field before publication. That process is imperfect, but it is substantially more rigorous than a blog post, a white paper published by the company selling the product, or a claim that “studies show” without any citation.

Key Insight: A 2012 analysis in the journal Psychological Science in the Public Interest found that many popular self-improvement techniques, including commercial memory training programs and some widely marketed learning strategies, were not supported by the level of evidence their promotion implied. The researchers, led by Hal Pashler and colleagues, noted that the gap between popular claims and actual research findings was often substantial.

The Common Ways the Term Gets Stretched

Once you know what the term should mean, it becomes easier to spot how it gets misused.

“Studies show” without a citation is a flag worth noticing. Which studies? Published where? With what methodology? Without answers to those questions, the phrase functions as rhetorical decoration rather than evidence.

Testimonials are not evidence, in the technical sense, no matter how compelling or numerous they are. Individual accounts of improvement are valuable as human experience. They can generate hypotheses worth studying. But they cannot establish that an intervention caused the change, because there is no control group, no accounting for placebo effects, no way to rule out the dozens of other things that might have shifted in a person’s life at the same time.

Citing a single study to support a broad claim is another common pattern. One study is a starting point. It becomes evidence-based practice when that finding has been tested, challenged, and replicated. The replication crisis in psychology, which began receiving serious attention around 2011, is a reminder that even peer-reviewed, published research can be wrong: in 2015, a large collaborative effort called the Reproducibility Project attempted to replicate one hundred published psychology findings and succeeded for well under half of them.

Appeals to neuroscience deserve particular caution. Brain imaging is genuinely fascinating and increasingly useful, but it has also been applied in ways that overstate what the technology can tell us. A correlation between neural activity and a psychological state does not, by itself, tell us that an intervention caused a meaningful change. The term “neuroplasticity” in particular has been stretched far beyond what most research supports.

Key Insight: Psychologist Timothy Wilson, in his book Redirect, makes the point that some intuitively appealing interventions have turned out to be ineffective or even harmful when tested rigorously, while some interventions that sound almost too simple have robust support. The research does not always match common sense. That is precisely why it matters.

What to Look For When Evaluating a Claim

When you encounter a wellbeing claim, a few questions are worth asking.

Is there a specific citation? Not just “research shows,” but an actual named study, author, or publication? The absence of any citation is informative.

Who conducted the research, and is there a conflict of interest? Research funded by the company selling the product is not automatically invalid, but it warrants more scrutiny. Independent replication carries more weight.

What kind of study was it? A randomised controlled trial, where people are assigned to intervention or control conditions at random, is better positioned to support causal claims than an observational study. Both are useful, but they answer different questions.

Has the finding been replicated? A single study is a provisional finding. It becomes more credible when independent research groups have reproduced it across different populations.

What is the effect size? A statistically significant improvement might still be practically small. Larger, well-replicated effects in diverse populations are stronger grounds for confidence.

Being a Better Consumer of Wellbeing Information

None of this means approaching every claim with cynicism. There is genuine, rigorous, replicated research supporting a range of psychological interventions and wellbeing practices. Cognitive-behavioural therapy, behavioural activation for depression, mindfulness-based stress reduction, and several positive psychology interventions have substantial empirical support. The research base is real and growing.

The goal of asking these questions is not to dismiss everything, but to calibrate trust appropriately. High-quality evidence deserves more confidence than anecdote. Replicated findings deserve more weight than single studies. Transparent citations deserve more credibility than vague appeals to science.

This connects to why the framing of positive psychology as a rigorous academic discipline, distinct from the self-help tradition, is worth understanding. The post on why positive psychology is not self-help goes into that distinction in more depth, because the difference affects how we evaluate the claims made within the field.

It also matters practically. If you are deciding whether to invest time in a particular approach to wellbeing, the question of whether it has genuine research support is the same as asking whether it is likely to help. The PERMA model, for example, draws on decades of research across multiple areas, not a single study or a persuasive theory.

Becoming a more discerning reader of wellbeing research is not about becoming harder to persuade. It is about reserving persuasion for claims that have earned it, and the Upward Spiral program is built on that same commitment.

Upward Spiral is a 52-week program grounded in positive psychology and neuroscience, designed for people who are functioning but not flourishing. Each week builds on the last. Learn more and start your free trial.
