
Skewed inspection and malleable hypotheses


Science isn’t perfect, and research findings often stray from the truth [1]. Researchers miss the bullseye for a number of reasons, but one explanation might be that we let our intuitions give way to cognitive biases.

Below, I’ve summarised a news article in Nature [3] that touches on why even the most rigorous thinkers are influenced by cognitive biases.

Problem 1: Skewed inspection

People have a tendency to focus their attention on information that supports their preconceived ideas or beliefs. This phenomenon is called “confirmation bias” [4], and it affects most people, including scientists [5]. A study in 2004 gave a stark demonstration of just how susceptible scientists are to confirmation bias. Researchers recorded and analysed the conversations of a group of biologists working in a molecular biology lab (a bit like The Truman Show). They found that when the biologists were confronted with findings that were inconsistent with their preconceived notions, they almost always (9 times out of 10) searched for possible explanations to justify the inconsistent finding, and they usually found one, typically by pointing the finger at a limitation in their methodology. Rather than admit that their theory might be wrong, these scientists refused to consider inconsistent findings as “real”, and were more likely to look closely at “where they might have gone wrong”.

It seems that researchers are less likely to scrutinise results that “align” with their theories, but when faced with results that appear out of the ordinary, intuition tells them to take another look. Some will even re-analyse the data to find plausible explanations for the unexpected result. This “asymmetric attention” to research findings leaves the door open for biased interpretations and unnecessary secondary analyses that skew accurate representations of the data. The phenomenon can, of course, also work in the opposite way. Sometimes researchers find results that challenge the status quo to be more interesting! When this is the case, disconfirmation bias can encourage researchers to wholeheartedly accept the unexpected finding without subjecting it to much scrutiny.

The bottom line is that we often pay unequal attention to findings that seem incongruent (or congruent) with our expectations. It might be OK to question the findings and check for major blunders in the methods, but a problem arises when this process is skewed by preconceived notions or vested interests in the findings.

Problem 2: Malleable hypotheses

Coming up with a hypothesis is like baking a cake. First you gather the ingredients – this is when you’re sitting on the bus, or in the shower, accumulating bits and pieces of information and random thoughts. Then you start mixing the ingredients together into a runny mixture – this is when we have corridor discussions and meetings to build the hypothesis. Then you slide the cake mix into the oven and it bakes – this is when we lock our hypothesis into a protocol, never to be touched again. The difference is that while you can’t deconstruct a cake, you can deconstruct a hypothesis.

Scientific communication can take on a sour taste when we start tinkering with our original hypothesis after having been exposed to the data. This is why it’s often advised to take a deductive approach and to lock down an a priori hypothesis before data collection, and stick to the recipe when analysing the data.

Some have taken alternative views on this. For example, in 1987 Daryl Bem, a distinguished social scientist, wrote: “There are two possible articles you can write: (1) the article you planned to write when you designed the study; or (2) the article that makes the most sense now that you have seen the results”. He argues that option 2 is the correct approach, especially when the study is exploratory [6].

A decade later, Norbert Kerr criticised Bem’s lenient perspective in a very thoughtful paper and coined the term HARKing – Hypothesising After the Results are Known [7]. Kerr defines HARKing as “presenting a post hoc hypothesis in the introduction of a research report as if it were an a priori hypothesis”. Surveys showed that HARKing was becoming more prevalent, and even journal editors promoted HARKing to facilitate a “coherent story”. Kerr became concerned about the effects of HARKing and outlined a list of harmful consequences, including that it 1) encourages scientists to retain broad and disconfirmable old theories, and 2) inhibits the discovery of plausible alternative hypotheses (the full list can be found in his original paper [7]). The trouble is that it is so easy to HARK! For now, open protocol registration is the working solution to this problem, and BiM has followed this approach for some time.

This quote from Daniel Kahneman’s book Thinking, Fast and Slow illustrates a key motive that seems to drive cognitive biases.

“We see a world that is vastly more coherent than the world actually is. The confidence that people have in their beliefs is not a measure of the quality of evidence, but is a judgement of the coherence of the story that the mind has managed to construct. When there is little evidence, no conflict, the story is good. People tend to have great belief, great faith in stories that are based on little evidence. It generates what Amos and I call ‘natural assessments’.”

Everyone loves a good story, but maybe the pressure to tell a good story (in a journal article or a grant application) is fuelling intuitive minds to HARK. Sadly, there are also lucrative incentives to HARK. But not to worry – there are some innovative developments that have opened up honest discussions among scientists, and new approaches that incentivise scientists not to HARK. Hopefully, in due course, these efforts will overcome the problem of half-baked, skewed science.

This is the third and final part of a series of posts looking at what cognitive bias is, and how cognitive bias influences our clinical practice and research.

About Hopin Lee

Hopin is part of the PREVENT team at Neuroscience Research Australia and is diving into the final stages of his PhD. Hopin’s research primarily focuses on understanding causal mechanisms that underlie the development of chronic pain, and the interventions that aim to treat it. On the side, Hopin is also interested in the use of social media data and the utility of mobile apps in musculoskeletal health.

Hopin catastrophizes about caffeine and chilli, loves football, tries to perfect the Spaghetti alla Puttanesca, mindfully regulates his OCD for vinyl, and tries to attend as many gigs as he can in Sydney while avoiding the heat. Oh, and he tweets… sporadically… @hopinlee

References

[1] Ioannidis JPA. Why most published research findings are false. PLoS Med 2005;2:e124. doi:10.1371/journal.pmed.0020124

[2] Open Science Collaboration. Estimating the reproducibility of psychological science. Science 2015;349:aac4716. doi:10.1126/science.aac4716

[3] Nuzzo R. How scientists fool themselves – and how they can stop. Nature News 2015;526:182. doi:10.1038/526182a

[4] Wason PC. On the failure to eliminate hypotheses in a conceptual task. Q J Exp Psychol 1960;12:129–40. doi:10.1080/17470216008416717

[5] Fugelsang JA, Stein CB, Green AE, et al. Theory and data interactions of the scientific mind: evidence from the molecular and the cognitive laboratory. Can J Exp Psychol 2004;58:86–95. doi:10.1037/h0085799

[6] Bem DJ. Writing the empirical journal article. In: The Compleat Academic: A Practical Guide for the Beginning Social Scientist. 1987. p. 171.

[7] Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev 1998;2:196–217. doi:10.1207/s15327957pspr0203_4

 
