
Decoding Fake Pain


How easy is it to fake that you are in pain? This question has interested people for a long time, particularly those who have to make decisions about compensation claims related to chronic pain (where large amounts of money are often at stake) and anybody who feels a disconnect between other people’s pain expression and objective evidence for the presence of pain (or better, the lack thereof).

Marian Stewart Bartlett and colleagues tested a computer system devised to differentiate genuine from faked facial expressions of pain (published in Current Biology, open access). In the first of two experiments, healthy participants watched short video clips showing the faces of individuals who either experienced genuine pain while holding their arm in ice water or pretended to be in pain, and indicated whether they thought each facial expression was genuine or faked. In a second experiment, participants were shown pairs of videos of the same person: in one video the person was faking the expression of pain; in the other, the pain was real. This time, participants were told which of the two videos showed the faked expression, to train them in telling genuine and faked expressions apart.

The same videos were also shown to a computer vision system called the Computer Expression Recognition Toolbox (CERT), which can analyse facial expressions from video in real time. How does it do that? In short, the system analyses facial muscle movements that are characteristic of the expression of a given emotion (in this case, pain). Using a pattern-recognition approach, the computer “learns” which muscular activity is characteristic of each condition, and subsequent video sequences are compared against these “templates”.
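For readers curious what “learning templates from muscular activity” looks like in practice, here is a minimal sketch of the general idea, not the actual CERT pipeline: each video is reduced to a vector of facial action-unit intensities, and a standard classifier learns which patterns distinguish the two conditions. The data below are entirely synthetic, and the “mouth opening” feature is a hypothetical stand-in for illustration only.

```python
# Sketch of a pattern-recognition classifier in the spirit of CERT
# (assumed workflow, not the published system): feature vectors of
# facial action-unit intensities -> supervised classifier.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_videos, n_action_units = 200, 20
# Synthetic feature vectors: one row per video, one column per action unit.
genuine = rng.normal(0.0, 1.0, size=(n_videos // 2, n_action_units))
faked = rng.normal(0.0, 1.0, size=(n_videos // 2, n_action_units))
# Hypothetical assumption for this toy data: faked expressions show a
# stronger "mouth opening" action unit (column 0) on average.
faked[:, 0] += 1.5

X = np.vstack([genuine, faked])
y = np.array([0] * (n_videos // 2) + [1] * (n_videos // 2))  # 0=genuine, 1=faked

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# A linear SVM learns a weighting over action units ("template")
# that separates the two classes; held-out accuracy estimates how
# well that template generalises to unseen videos.
clf = SVC(kernel="linear").fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the sketch is the division of labour: feature extraction turns video into numbers, and the classifier does the discriminating. Any systematic difference between conditions, here deliberately planted in one feature, is what the “template” captures.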

The study produced two remarkable findings. First, humans are remarkably bad at telling genuine and faked expressions of pain apart: with an accuracy of 51% (which increased only to a meagre 54% with training), their judgements were no better than chance. Second, the computer system outperformed the human participants by far, categorising 85% of the expressions correctly. Needless to say, this result is all grist to the mill of those who advocate the need for objective markers of pain in clinical settings and courtrooms. Instead of telling their story, patients would simply have to express their pain in front of CERT and we would know whether they were faking it or not. Really? As much as I share the authors’ enthusiasm for CERT as a research tool, and find it fascinating that there are subtle but detectable differences between real and faked pain expression, I fear for those who express their (real) pain in an unusual way. As in most decoding approaches, idiosyncrasies are likely to be disregarded, and individuals with an unusual presentation run the risk of being misclassified. However, such techniques are gaining ground and have recently stirred up fierce discussions in the context of “brain reading” (i.e., the application of pattern-recognition approaches to brain imaging data), which might be a great topic for another blog post …

Let us know your thoughts about “automatic detection of deceit” in clinical and/or legal settings in the comment section below. And just in case you were wondering which movement in particular reveals that an expression is faked: it was the all-too-common mouth opening, a finding which led the authors to propose “training physicians to specifically attend to mouth-opening dynamics to improve physicians’ ability to differentiate real pain from fake pain”. At a time when many patients complain that their doctors rarely look at them during consultations, focusing on patients’ mouths seems an odd starting point to remedy this shortcoming.

About Katja Wiech

Katja Wiech is Associate Professor at the Pain & Mind group, Nuffield Dept. of Clinical Neurosciences & FMRIB Centre, John Radcliffe Hospital, University of Oxford. She is also Section Editor here at BiM!

References

Bartlett MS, Littlewort GC, Frank MG, & Lee K (2014). Automatic decoding of facial movements reveals deceptive pain expressions. Curr Biol 24(7): 738–743. Published online 2014 Mar 20. doi: 10.1016/j.cub.2014.02.009. Open Access!
