Non-Religious People Are More Compassionate Than Religious People. Right?
By Dave Hitt on May 2, 2012 in Atheism, Junk Science
We all like to believe our group is better than their group, and atheists and skeptics are not immune, so this article, which claims that religious people are less compassionate than the non-religious,  didn’t surprise me.  Read it and see how many times your Bullshit Meter goes off.
Done? If you’d like to continue testing your meter here’s a more thorough and slightly less loaded report on the same study.
Your BSM should have gone off several times. Here are the points that set off mine. (If I missed any please smartenize me in the comments.)
First off, we can’t look at the actual study, because it hasn’t been published yet. “Science by press release” always pins my BSM needle.
The first experiment is based on a survey of people reporting on themselves. This is one of the least accurate ways to collect data, so it also pins the needle. The study was fairly small, including only 1,300 American adults. There was no mention of the demographics of these people. Were they all Berkeley students? Old people? Young people? Were they from the upper, middle, or lower economic class? One-legged lesbian Morris dancers? (Probably not that.) We’re never told.
There is also no indication any confounders were considered.
So the experiment’s conclusions don’t matter, because the data is useless.
The second experiment had a sample size of 101, very small, and again, no demographics or confounders were mentioned. Bzzzt! (I’ve rigged my BSM with a buzzer, just in case I don’t notice the needle moving.)
In the final experiment they at least tell us that we’re dealing with college students. But a significant part of the conclusion was based on how compassionate the subjects felt when they started the experiment. What? How do you measure such a thing, and how valid is it when you base your conclusions on self-reported emotion?
There are a plethora of people whose profession is knocking our bullshit meters out of whack. Some of them do it by providing conclusions we’d like to believe. We need to be just as skeptical of studies with results we like as we are of studies that immediately raise our hackles.  Personally, I’d like to believe we are more compassionate than our superstitious neighbors.  We might be, but this study doesn’t prove that.  It doesn’t prove much of anything.
Emotions are one of the best ways to bypass an otherwise well tweaked BSM. Knowing that, when a study makes us want to do a little superior dance we need that emotional response to trigger the BSM self-critical sub-circuit and apply extra stringent skepticism to it instead of just assuming it’s legitimate. If, unlike this one, it turns out to be legit, we can do a superior dance if we want to.
But we might want to do it briefly and in private, because even when sanctimony is justified it still makes us look like dicks.
From my perspective, people are people. For the most part, when the going gets tough they’re as likely to knock you down as pick you up. I think a level of self-control is about all that can be measured. What’s your boiling point? What’s the point at which you switch from defense mode to offense mode? How fast do you switch to herd mentality?
Clinton | May 2, 2012 | Reply
As I noted in a Facebook comment, the study IS published. It’s behind a pay wall, but it’s available to those with access. I’ve read the study and although you’ve identified some of the problems, only reading the article will show you the others.
“Confounders” are covered. The study itself isn’t actually a bad design at all. The problems are mostly in the reporting of the study. Your second link, the press release, is not inaccurate like the first one (did you note the blatant error in the first paragraph?), but it spins the findings into a speculative mess.
The scientists may or may not have contributed to the spin; the quotes in the press release could easily have been cherry-picked after questioning the authors. The truth is that you don’t know and neither do I. The best that we can do is provide a good analysis of the study itself (by reading it).
I’m working on an analysis now, so I’ll just say, please don’t bash a study that you haven’t read. That doesn’t help. Instead, question the reporting of it and try not to make up the answers as you go.
badrescher | May 2, 2012 | Reply
Thanks for the additional info, and please stop back and post a link to your analysis when you’ve completed it.
Dave Hitt | May 2, 2012 | Reply
My bsm went off when I saw where the study was done. Good point about being extra careful of research that seems to support what you believe.
jbatch | May 3, 2012 | Reply
Mine too, but I didn’t mention that because it was too obvious to any smartenized person (like yourself).
The name of that institution should always be pronounced as if there were a question mark on the end, and perhaps followed with a “pffft.”
Dave Hitt | May 4, 2012 | Reply
All studies that involve correlation are bad studies. The best they can ever do is suggest a line of research that MIGHT tell you an underlying cause. A Relative Risk/Relative Benefit below 3 is rarely an indicator of an actual correlation. All you ever potentially find in a study is an indicator that you should set up an experiment to identify something.
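For reference, relative risk is just the ratio of outcome rates between two groups. A minimal sketch with made-up numbers (the counts below are purely illustrative, not from any study):

```python
# Hypothetical 2x2 table: an "exposed" group and an "unexposed" group,
# counting how many in each experienced some outcome of interest.
exposed_with_outcome = 30
exposed_total = 1000
unexposed_with_outcome = 20
unexposed_total = 1000

risk_exposed = exposed_with_outcome / exposed_total        # 0.03
risk_unexposed = unexposed_with_outcome / unexposed_total  # 0.02

# Relative risk: how many times more likely the outcome is when exposed.
relative_risk = risk_exposed / risk_unexposed

print(relative_risk)  # 1.5
```

A relative risk of 1.5 is the kind of weak signal the commenter is warning about: well below the threshold of 3 he treats as the minimum worth following up.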
Statistical studies like this suffer from a fundamental flaw in analysis that can be traced back to Statistics 101. We learn in that first statistics class to answer the following question: “What is the probability that you roll at least one six in five rolls of a die?” The answer is reached by calculating the chance that you don’t roll a six. Think of the six as “death” or “compassion.”
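The complement trick the commenter describes can be sketched in a few lines: instead of summing the probabilities of one, two, three, four, or five sixes, compute the chance of rolling no sixes at all and subtract from one.

```python
from fractions import Fraction

# P(no six on one roll of a fair die) = 5/6, so over five
# independent rolls P(no sixes) = (5/6)**5, and
# P(at least one six) = 1 - (5/6)**5.
p_no_six_per_roll = Fraction(5, 6)
p_at_least_one_six = 1 - p_no_six_per_roll ** 5

print(p_at_least_one_six)         # 4651/7776
print(float(p_at_least_one_six))  # ~0.598
```

Using `Fraction` keeps the arithmetic exact; the answer works out to just under 60%.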
The statisticians in these studies are evaluating the death side. The problem with evaluating death is that you sometimes ignore life. Looking at the ratios of death, you attempt to get an idea of what is good and bad. The problem is, life is still there. Smokers manage to live despite their evil ways. The religious manage to do good things even though they are irrational. Keep track of the living. Never lose sight of the living.
The problem with keeping track of the living (and this goes for studies of compassion) is that you can’t publish much… the real data to be found in any study is the data that shows nothing is there. It is the studies with negative results that teach us real science. They delete the parts of the picture not worth looking into. Ironic that negative studies are the ones that are thrown away.
brad.tittle | May 4, 2012 | Reply