Authors: Ben Goldacre
The first group were told about eighty patients who got the drug, and twenty who didn’t. The second group were told about twenty patients who got the drug, and eighty who didn’t. That was the only difference between the two groups, but the students in the first group estimated the drug as more effective, while the estimates of the students who were told about only twenty patients receiving it were closer to the truth.
Why is this? One possibility is that the students in the second group saw more patients getting better without the treatment, and so got a better intuitive feel for the natural history of the condition, while those in the other group who were told about eighty patients getting Batarim were barraged with data about people who took the drug and got better.
This is just the latest in a whole raft of research showing how we can be manipulated into believing that we have control over chance outcomes, simply by presenting information differently, or giving cues which imply that skill has a role to play. One series of studies has even shown that if you manipulate someone to make them feel powerful (by asking them to remember a situation in which they were powerful, for example), they imagine themselves to have greater control over outcomes that are determined purely by chance, which perhaps goes some way to explaining the hubris of the great and the good.
We know about optical illusions, and we’re familiar with the ways that our eyes can be misled. It would be nice if we could also be wary of cognitive illusions that affect our reasoning apparatus. But more than that, like the ‘Close door’ buttons in a lift – which, it turns out, are often connected to nothing at all – these illusions are beautiful modern curios.
Guardian, 2 October 2010
Like all students of wrongness, I’m fascinated by research into irrational beliefs and behaviours. But I’m also suspicious of how far you can stretch the findings from a laboratory into the real world. A cracking new paper from Social Psychology and Personality Science makes a neat attempt to address this shortcoming.
Loran Nordgren and Mary McDonnell wanted to see whether our perception of the severity of a crime is affected by the number of its victims. Sixty students were given a vignette to read about a case of fraud, in which either three people or thirty people were defrauded by a financial adviser; all the other information in the story was kept the same.
In an ideal world, you’d imagine that someone who harmed more people would deserve harsher punishment. Participants were asked to evaluate the severity of the crime, and recommend a punishment: even though fewer people were affected, participants who read the story with only three victims rated the crime as more serious than those who read the exact same story, but with thirty victims.
And more than that, they acted on this view: with a maximum possible sentence of ten years, people who heard the three-victim story recommended an average prison term a year longer than those who heard the thirty-victim story. Another study, in which a food-processing company knowingly poisoned its customers to avoid bankruptcy, gave similar results.
Now, it’s nice that two studies were carried out into the same idea, but I always worry about experiments like this, because they demonstrate an effect in the rarefied environment of the lab, while the real world can be much more complicated.
So what’s great about this paper is that it has two halves: the researchers went on to examine the actual sentences given in a representative sample of 136 real-world court cases to people who were found guilty of exactly these kinds of crimes, but with different numbers of victims, to see what impact the victim-count had.
The results were extremely depressing. These were cases where people from corporations had been found guilty of negligently exposing members of the public to toxic substances such as asbestos, lead paint or toxic mould, and their victims had all suffered significantly. They were all from 2000 to 2009, they were all jury trials, and the researchers’ hypothesis was correct: people who harm larger numbers of people get significantly lower punitive damages than people who harm smaller numbers of people. Juries punish people less harshly when they harm more people.
Now, it seems to me that alternative explanations may possibly play a contributory role here: cases where lots of people were harmed may involve larger companies, with more expensive and competent lawyers, for example. But in the light of the earlier experiment, it’s hard to discount a contribution from empathy, and this is a phenomenon we all recognise.
When he appeared on Desert Island Discs, Rolf Harris chose to take his own song ‘Two Little Boys’ with him. When the First World War broke out, Rolf explained, his father and uncle had both joined up, his father lying about his younger brother’s age so they could both join the fight. But their mother found out and dobbed them in, because she couldn’t bear the thought of losing both her sons so young. Rolf’s uncle joined up two years later when he came of age, was injured, and died on the front. Rolf’s dad was beside himself, and for the rest of his life he believed that no matter what the risks, if he had been in the same infantry group he could have crawled out and saved his younger brother, just like in the song. Rolf played ‘Two Little Boys’ to his grandmother just once. She sat through it quietly, took it off at the end, and said quietly, ‘Please don’t ever play that to me again.’
This story always makes me cry a little bit. Two million people die of Aids every year. It never has the same effect.
Guardian, 4 September 2010
Everyone likes to imagine they are rational, fair and free from prejudice. But how easily are we misled by appearances?
Noola Griffiths is an academic who studies the psychology of music. This month she’s published a cracking paper on what women wear, and how that affects your judgement of their performance. The results are predictable, but the context is interesting. Four female musicians were filmed playing in three different outfits: a concert dress, jeans, and a nightclubbing dress. They were also all filmed as points of light, wearing a black tracksuit in the dark, so that the only thing to be seen – once the images had been treated – was the movement of some bright white tape attached to their major joints.
All these violinists were music students, from the top 10 per cent of their year, and to say they were vetted to ensure comparability would be an understatement: they were all white European, size 10 dress, size 4 or 5 shoe, and aged between twenty and twenty-two. They were even equivalently attractive, according to their score on the MBA California Facial Mask, which is some kind of effort to derive a number denoting hotness, using the best fit of a geometric mask over someone’s face. That may well be ridiculous: I’m just saying they tried.
In fact they did better. All the performances were also standardised at 104 beats per minute, so the audio tracks from each musician could be replaced with a recording of a single performance, by someone who was never filmed, for each of the various pieces in the study. This meant there was no room for anyone to argue that the clothes made the musicians perform differently, because the audio was the same for everyone; and when the researchers checked, in a pilot study, nobody spotted the dummy audio track.
Then they got thirty different musicians – a mixture of music students and members of the Sheffield Philharmonic – and sat each of them down to watch video clips with various different permutations of clothing, player and piece. All were invited to give each performance a score out of six for technical proficiency and musicality.
The results were inevitable. For technical proficiency, performers in a concert dress were rated higher than if they were in jeans or a clubbing dress, even though the actual audio performance was exactly the same every time (and played by a different musician, who was never filmed). The results for musicality were similar: musicians in a clubbing dress were rated worst.
Experiments offer small, constricted worlds, which we hope act as models for wider phenomena. How far can you apply this work to wider society? There’s little doubt that women are still discriminated against in the workplace, but each individual situation has so many variables that it can be difficult to assess clearly.
The world of music, however, makes a good test tube for bigotry. That’s because in the 1970s and 1980s most orchestras changed their audition policy, in an attempt to overcome biases in hiring, and began to use screens to conceal the candidates’ identity.
The proportion of female musicians in the top five US symphony orchestras gradually rose from 5 per cent in the 1970s to around 25 per cent. Of course, this could simply have been due to wider societal shifts, so Goldin and Rouse conducted a very elegant study (titled ‘Orchestrating Impartiality’): they compared the number of women being hired at auditions with and without screens, and found that women were several times more likely to be hired when nobody could see that they were women.
What’s more, using data on the changing gender make-up of orchestras over time, they were able to estimate that from the 1970s to 2000 – the era in which casual racism and sexism in popular culture shifted to more covert forms – between 30 per cent and 55 per cent of the trend towards greater equality was driven simply by selectors being forced not to see who they were selecting. I don’t know how you’d apply the same tools to every workplace. But I’d like to see someone try.
Yeah, Well, You Can Prove Anything with Science
Guardian, 3 July 2010
What do people do when confronted with scientific evidence that challenges their pre-existing view? Often they will try to ignore it, intimidate it, buy it off, sue it for libel, or reason it away.
The classic paper on the last of those strategies is from Lord in 1979: the researchers took two groups of people, one in favour of the death penalty, the other against it, and then presented each with a piece of scientific evidence that supported their pre-existing view, and a piece that challenged it. Murder rates went up, or down, for example, after the abolition of capital punishment in a state, or were higher or lower than in neighbouring states with or without capital punishment. The results were as you might imagine: each group found extensive methodological holes in the evidence they disagreed with, but ignored the very same holes in the evidence that reinforced their views.
Some people go even further than this, when presented with unwelcome data, and decide that science itself is broken. Politicians will cheerfully explain that the scientific method simply cannot be used to determine the outcomes of a drugs policy. Alternative therapists will explain that their pill is special, among all pills, and you simply cannot find out if it works by using a trial.
How deep do these views go, and how far do they generalise? In a study now published in the Journal of Applied Social Psychology, Professor Geoffrey Munro took around a hundred students and told them they were participating in research about ‘judging the quality of scientific information’. First, their views on whether homosexuality might be associated with mental illness were assessed, and then they were divided into two groups.
The first group were given five research studies that confirmed their pre-existing view. Students who thought homosexuality was associated with mental illness, for example, were given papers explaining that there were proportionally more gay people than members of the general population in psychological treatment centres. The second group were given research that contradicted their pre-existing view. (After the study was finished, we should be clear, they were told that all these research papers were fake, and given the opportunity to read real research on the topic if they wanted to.)
Then they were asked about the research they had read, and asked to rate their agreement with the following statement: ‘The question addressed in the studies summarized … is one that cannot be answered using scientific methods.’
As you would expect, the people whose pre-existing views had been challenged were more likely to say that science simply cannot be used to measure whether homosexuality is associated with mental illness.
But then, moving on, the researchers asked a further set of questions, about whether science could be usefully deployed to understand all kinds of stuff, all entirely unrelated to stereotypes about homosexuality: ‘the existence of clairvoyance’, ‘the effectiveness of spanking as a disciplinary technique for children’, ‘the effect of viewing television violence on violent behavior’, ‘the accuracy of astrology in predicting personality traits’, and ‘the mental and physical health effects of herbal medications’.
The students’ views on each issue were added together to produce one bumper score on the extent to which they thought science could be informative on all of these questions, and the results were truly frightening. People whose pre-existing stereotypes about homosexuality had been challenged by the scientific evidence presented to them were more inclined to believe that science had nothing to offer – on any question, not just on homosexuality – when compared with people whose views on homosexuality had been reinforced.
When presented with unwelcome scientific evidence, it seems – in a desperate bid to retain some consistency in their world view – many people would rather conclude that science in general is broken.
Guardian, 12 June 2010
We all strive to be right. But what if people who are wrong have better lives? This week, a German study in Psychological Science appears to show that being superstitious improves performance, on a whole string of different tasks.