Not evidence-based: Finding evidence to support your argument does not make an evidence-based argument


The unidirectional argument is not evidence-based

There are a lot of blogs, articles and books that at first sight appear to be evidence-based. They are entertaining, interesting and make sense. Some call it evidence-based writing. It is not evidence-based, and here’s why…

As editor of The Oxford Review I read a lot. A lot of research and a lot of articles and books. And there is one thing that constantly irks me: the unidirectional argument. It runs like this: “I have a theory. I have found evidence that supports my theory. My theory is evidence-based.” The logic looks like this:

Theory + evidence = proof = evidence-based assertion = truth/reality

A lot of authors have got very rich using this formula. The formula works especially well if it is wrapped up in a story or case study. The book version looks like this:

  1. Tell a story based on reality.
  2. Present some evidence from academic research, preferably from researchers at prestigious institutions (Oxford, Harvard, Stanford, Cambridge etc.), that backs up the author’s/teller’s theory.
  3. Advocate a theory / principle or explanatory law.

It’s neat, it’s interesting and it makes sense. But neat and plausible is not the same as evidence-based.




The fallacy of incomplete evidence

There is a big difference between an evidence-based thesis and cherry-picking data or evidence that supports your argument or point of view. They are not the same thing.

The fallacy of incomplete evidence is a logical illusion or misinterpretation that occurs when we are only given, or only subject ourselves to (through selective reading, or by only approaching experts who agree with our viewpoint, for example), evidence that backs up one point of view or proposition. Because the evidence is incomplete, it creates a bias and a set of assumptions that may be false, or different from those we might otherwise hold if we knew the full facts.

As a result, any thinking or conclusions drawn are also likely to be biased. The fallacy of incomplete evidence refers to the phenomenon whereby people tend to be swayed into believing a proposition or argument if there appears to be evidence for it, even if that evidence is incomplete or biased. This effect is particularly strong if the argument put forward agrees with the individual’s values. In effect, people are unlikely to realise that the evidence they are being presented with is incomplete and biased.





Evidence suppression, reference mining and cherry-picking

Cherry-picking evidence (also known as evidence suppression or quote/reference mining) is a form of manipulation. Finding only evidence that supports your argument, and either ignoring or even actively misinterpreting, suppressing or hiding evidence that does not support, or disagrees with, your argument, distorts the picture. The reason people engage in such practices is to bolster their argument; if people were made aware of the counter-evidence, they might well question it.

Evidence suppression can be done for a variety of reasons; however, the effect is the same – to promote and enhance the claims made by a particular argument.




Real research is different 

Real research, on the other hand, is not there to promote a particular argument or theory, but rather to find out the truth of a situation. For example, is this theory or idea/hypothesis acceptable, or does it have flaws? Real research actively tries to disprove its own theory and argument, so as not to fall into this trap. When researchers test a hypothesis or theory, they ‘stress test’ it by testing or trying to prove the opposite theory, or what is known as the null hypothesis. 


The null hypothesis

When researchers test a hypothesis, an assertion that two things are related, say coffee drinking and increased heart problems, they actually test the opposite: that the two things aren’t related. This is known as the null hypothesis. They only accept a hypothesis if they can reject the null hypothesis and rule out alternative explanations. This is similar to the presumption in law that an accused is innocent until proven guilty ‘beyond all reasonable doubt’. In the case of science and research, we only ‘accept’ a hypothesis or argument when, and if, we can show that any and every alternative explanation has been exhausted.
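The logic of null-hypothesis testing described above can be sketched with a simple permutation test. The data below are entirely made up for illustration (they are not real coffee or health figures): we ask how often random re-labelling of the two groups would produce a difference at least as large as the one we observed. If shuffled labels produce it often, the null hypothesis (no relationship) survives and we cannot accept the claimed link.

```python
import random
import statistics

random.seed(42)

# Hypothetical data (illustrative only): 1 = heart problem, 0 = none
coffee_drinkers = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
non_drinkers = [0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0]

# Observed difference in heart-problem rates between the groups
observed = statistics.mean(coffee_drinkers) - statistics.mean(non_drinkers)

# Null hypothesis: coffee drinking and heart problems are unrelated.
# If that is true, the group labels are arbitrary, so shuffling them
# should often produce differences as large as the observed one.
pooled = coffee_drinkers + non_drinkers
n = len(coffee_drinkers)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

# Fraction of shuffles at least as extreme as the real data
p_value = extreme / trials
print(f"observed difference: {observed:.3f}, p-value: {p_value:.3f}")
```

The point of the sketch is the direction of the test: the code tries to make the *opposite* claim (no relationship) look plausible, and only a small p-value, meaning the data would be very surprising under the null hypothesis, licenses provisional acceptance of the link.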

This is the direct opposite of what many authors, blog writers, purveyors and dealers of argument and many ‘experts’ do. They gather evidence to support their view, as opposed to testing their ideas to see if they are true. 

This means that one or two stories, experiences or anecdotes do not prove a theory. That is not rigorous testing; it is just an idea with some circumstantial evidence.


Confirmation bias

Confirmation bias is a form of cognitive favouritism, whereby we naturally tend to filter and actively search for data and evidence that supports our beliefs. You can see this in action. Say, for example, that you are about to buy a new car and you are attracted to a BMW 3 series. You then start to notice BMW 3 series cars all over the place. You hadn’t particularly noticed them before, but now they are everywhere. This is your brain searching for and filtering for the object of your desire. We do this with our beliefs, which is how prejudices grow. We develop a theory about blacks, whites, people with red hair, people driving Volvos, old people, fat people, thin people etc. Pretty soon our brain starts serving up confirmatory evidence to support our belief. 

Peer-reviewed research

One of the reasons for the peer-review process in academic journals (opening your research to other researchers and scientists to critique) is to try to counteract biased research that is trying to prove a case. The intention (at least) is that peers will spot someone who is cherry-picking or suppressing evidence in support of an argument. That’s the idea. However, it is a sad fact that some journals are better at this than others. I frequently come across evidence-mining in published papers. Whilst it is usually a bit more subtle than in many books and blogs, it is still the same confirmatory process, and it will inevitably end in rejection for inclusion in a research briefing. Just because it’s in an academic journal does not automatically mean it’s truly evidence-based.

One of the things we do find is that the impact factor of a journal is usually a good indicator of the quality of its peer-review process. It’s not foolproof by any means, but there is a strong likelihood that papers in higher-rated journals have been properly peer-reviewed. For a description of the impact factor, and why the impact factor of the Harvard Business Review is so low, click here.



The point – not evidence-based

The main point here is that a lot of what can often appear to be evidence-based writing actually isn’t. Rather, these are often reference-mined arguments used to put forward and ‘prove’ someone’s theory, and they are not evidence-based in themselves.




David Wilkinson

David Wilkinson is the Editor-in-Chief of The Oxford Review. He is also acknowledged to be one of the world’s leading experts in dealing with ambiguity and uncertainty and in developing emotional resilience. David teaches and conducts research at a number of universities, including the University of Oxford (Medical Sciences Division), Cardiff University, Oxford Brookes University School of Business and many more. He has worked with many organisations as a consultant and executive coach, including Schroders, where he coaches and runs their leadership and management programmes, Royal Mail, Aimia, Hyundai, the RAF, the Pentagon, and the governments of the UK, US, Saudi Arabia, Oman and Yemen, for example. In 2010 he developed the world’s first and only model and programme for developing emotional resilience across entire populations and organisations, which has since become known as the Fear to Flow model and is the subject of his next book. In 2012 he drove a 1973 VW across six countries in Southern Africa whilst collecting money for charity and conducting on-the-ground charity work, including developing emotional literacy in children and orphans in Africa, among a number of other activities. He is the author of The Ambiguity Advantage: What great leaders are great at, published by Palgrave Macmillan.