Wednesday, 6 November 2013

Are you a nanokiller?

I have been intrigued for some time by Ronald A. Howard's idea of micromorts: a way of putting the risks we take in our lives on a human scale. The idea is that one micromort is a 1-in-a-million chance of dying. So, for example, if we say doing a skydive carries a risk of 7 micromorts (as the Wikipedia page I've linked to claims), that means 7 jumps out of each million lead to somebody dying. Or, in other words, if you jump from a plane there's a 7/1,000,000 chance you'll die (assuming you've used a parachute - without the parachute I suspect the chances are far worse). And since we'll all die one day, just being alive carries a background risk of more than 30 micromorts a day, as the article also explains.


Anyway, I wondered if we might apply a similar principle to road deaths, as a way of making salient a very important point: each time you drive a motor vehicle, there's a small chance someone will die. I've long thought about how, each time I drive, I am effectively killing a tiny fraction of a person because I'm complicit in the overall number of deaths that take place. Today I realised that something analogous to the micromort concept provides a useful way of quantifying this.


So let's find some statistics! The Department for Transport statistics web page reveals that in the United Kingdom in 2012, motor traffic travelled 302.6 bn miles and led to 1754 deaths. Let's do the maths:


  • 302.6 bn / 1754 = 172,519,954.39 miles for each death
  • 1 billion / 172,519,954.39 = 5.8 deaths per billion miles

And so, ladies and gentlemen, I present you with the nanokilling. Every mile you drive, you commit 5.8 nanokillings. Drive 12,000 miles in a year and you've committed 69,600 nanokillings, or 0.0000696 killings.


So clearly, the typical individual is fairly unlikely to kill over the course of their driving career. Let's say someone drives 10,000 miles per year for 50 years. 50 * 10,000 * 5.8 = 2,900,000 nanokillings, or 0.0029 killings. This means you'd need to get together with about 344 other people before, collectively, you'd expect to have killed somebody.
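For anyone who wants to check the arithmetic, here's a minimal sketch in Python. The traffic and casualty figures are the 2012 DfT numbers quoted above; the annual and lifetime mileages are just the illustrative assumptions used in this post.

```python
# Minimal sketch of the nanokilling arithmetic, using the 2012 DfT
# figures quoted above; the mileage assumptions are illustrative only.

MILES_DRIVEN = 302.6e9   # total motor traffic in 2012, miles
DEATHS = 1754            # road deaths in 2012

# One nanokilling = one billionth of a killing
nanokillings_per_mile = DEATHS / MILES_DRIVEN * 1e9
print(f"{nanokillings_per_mile:.1f} nanokillings per mile")   # ~5.8

annual = 12_000 * nanokillings_per_mile        # ~69,600 per year
career = 50 * 10_000 * nanokillings_per_mile   # ~2,900,000 per 50-year career

# Number of such driving careers that add up to one expected death
print(f"about {1e9 / career:.0f} drivers")                    # ~345
```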


But that's the thing, isn't it? 345 people isn't really that many. There's probably that many within a few streets of you. And there are a lot of streets in the country, aren't there?


Obviously the nanokilling would need to be recalibrated from time to time as new statistics emerge on the number of deaths and the amount of travelling that took place to cause them, but of course that's also kind of beside the point. The point is that as long as there is motorized travel and there are deaths on our roads, the number of nanokillings will never be zero, which means the fundamental point of this article will endure - when we use a motor vehicle, we commit nanokillings. Unless you forswear motoring (and the products of motoring) and push for alternatives, you're to some extent complicit in causing little bits of a death. I know I am, even if I'm not happy about it.

Monday, 12 November 2012

A thought on SMIDSYs

As I was out cycling yesterday, I found myself thinking about SMIDSYs ("Sorry Mate, I Didn't See You" - the seemingly standard response from inattentive motorists to users of two-wheeled vehicles; I have heard this very phrase, word for word, myself). I was riding in a predominantly black outfit on a black bicycle, and I started to wonder what might happen if the worst came to the worst and some distracted or inattentive driver failed to see me and there was a collision.

Now, let's be clear: it was a bright, sunny day, and I was perfectly visible for anybody who cared to look, but I knew from years of observing these things that, should a driver pull a SMIDSY, there was every chance that they, any police who got involved, and any media who might report on it, would automatically and unthinkingly shift the blame to me, saying or implying that it was my fault that the driver didn't see me because I didn't take sufficiently extreme steps to attract that person's attention (and, of course, no actions could ever be extreme enough to overcome the societal inertia on this point...!).

But then I realized this: if there were a SMIDSY event that day, there would actually be hard data to which I could point to show that my appearance wasn't in fact the issue. This idea could potentially be very useful! Specifically, I had by that point been out cycling for about 2 hours; hundreds of motorists had successfully seen me and dealt appropriately with me in a range of streets, junctions and country lanes. Given that hundreds of other motorists had had no problem with my appearance, if somebody did hit me, the data would show that my appearance wasn't the problem: it had been tested, and had passed the test, in hundreds of separate encounters. If somebody overlooked me that day, clearly the first thing we should look at to explain the event is them, not my appearance, even if only for reasons of Occam's Razor.

So my thought for today is that this issue, of "Why did this particular driver and none of the others have a problem seeing me?" should perhaps be raised more often in SMIDSY discussions.


Wednesday, 16 May 2012

Thinking out loud: Are cyclists the new weather?

You find yourself standing next to a stranger, perhaps in a pub or a Post Office queue whilst waiting an unbelievable time for a simple stamp. You decide to strike up a conversation with this person to pass the time. But what do you talk about? The list of topics one can raise with a stranger is quite short. You can hardly start out with "Isn't the current Prime Minister an incompetent buffoon?" as you risk upsetting their political sensitivities. Sport is also risky - they might support a team you do not. So what's safe? What can you be sure they'll not get offended by? Two topics that never fail are weather and traffic.

"Traffic's bad today, isn't it?" is as safe a conversation opener as you can find. The traffic might be light, but don't worry: this won't go challenged. What else is a safe opening line with a stranger? How about "It's hard to find a parking space, isn't it?" or "Cyclists are a nuisance aren't they? Always riding through red lights and on pavements?" Nod nod nod. Safe. Nobody's going to be offended here. We all agree, just as we all agree that winters aren't what they used to be.

These statements about cyclists, of course, raise the hackles of cyclists a great deal. One only needs to look at yesterday's drama about a survey of red-light jumping behaviour and how it was reported. The old saw about cyclists and red lights is one of a family of statements that are so often repeated I recently suggested somebody should make a bullshit bingo card: red lights, pavements, no tax, no insurance, license plates, helmets, lycra...

And this got me thinking. Yes, these statements are repeated an AWFUL lot, aren't they? I've been hearing them regularly for at least 8 years. They crop up in the comments on almost every article about cycling that gets published online (they'll appear below this, no doubt). Yes, they recur suspiciously often. Hmm...

The thing is, what should we take from these statements? Should we take them as evidence of endemic anti-cyclist feeling? I'm starting to doubt that. It's the fact these statements are repeated SO OFTEN, and practically verbatim, from a hundred thousand different mouths and keyboards that got me thinking. Because they appear almost as a reflex, and because so many people who don't know one another repeat exactly the same phrases, I suspect that these aren't true opinions; I reckon they are merely memes. They are cultural conventions that have grown up over the years.

I'd like therefore tentatively to suggest that all these statements such as "Cyclists? They all ride through red lights, don't they?" are fundamentally NOT ABOUT CYCLISTS and should not really be taken as such. I believe they are really a set of social conventions that serve the same role as conversations about the weather: They allow a socially acceptable and safe way to find common ground with strangers. They are (in many people's minds) as uncontroversial as statements about how gravity still seems to be working fine, or how politicians can't really be trusted. They are not intended to challenge or provoke; they are intended to provide comfort through the repetition of a familiar and long-standing ritual, not unlike a religious service.

So perhaps we should not make the mistake of thinking that such statements are the product of considered thought, or that they really represent people's true opinions. People have not looked into these matters closely enough to hold deep-seated opinions about them. If people really studied the weather and climate data they'd stop saying that winters aren't what they used to be. If they studied the traffic behaviour and accident data, they'd stop pointing fingers at cyclists.

Because these beliefs aren't really being examined in depth, people take evidence as it comes rather than going and looking for it, and when this happens one usually sees confirmation bias: the tendency to pay attention to information that confirms what we already believe and ignore information that challenges it. So a person doesn't really notice 25 cyclists stopping at a red light and 50 riding on the road, but spots the one who cuts the light and the one who rides on the pavement because these are what they expect to see.

Of course, the notion that a subgroup of society is a menace could not have taken hold were that subgroup not relatively small and perceived as outsiders. The context in which these social norms arose is fascinating and something I've also thought about, but would be a digression here. The main point I want to explore is that perhaps these statements we see so often are merely conventions that are repeated as part of the social glue that holds society together, and do not necessarily reflect people's true opinions about cyclists.

At first glance, the idea that these incorrect views about cyclists are not deeply examined convictions might suggest they will be easy to change. But if I'm correct in what I'm thinking here, we'd have to suggest the opposite: these views will be difficult to change - they came to hold the position they do in our society precisely because they seemed so self-evident. Perhaps challenging the idea that cyclists are all law-breakers is like challenging the idea that winters aren't what they used to be.

Thursday, 22 October 2009

Why I hate pedestrians

You know what I hate? Pedestrians. That self-satisfied, striding, boot-bedecked bunch of scum. Is it just me, or does the country suddenly seem to be full of them? I've never tried walking anywhere myself -- why would I? I'm a successful adult -- but it seems I can hardly travel down the street these days without one of them stepping off the pavement in front of me without looking, their face set in a holier-than-thou expression as they jump out of the way of my car in a burst of expletives. Something clearly needs to be done, and it's good that the government are starting to realise this.

The thing is, it's not just that pedestrians are all smug and annoying when they bang on about "health" and "pollution". That's sickening enough, but if their smugness was the only problem I could just ignore them - after all, they and their silly 'shoes' flash past quick enough when I get going, and their smugness can't penetrate my car's tinted windows. But the thing is there's more to it than that, because have you noticed that even though pedestrians walk millions of miles on our road system every single day, they contribute nothing at all to the cost of that road system? They have thousands and thousands of miles of dedicated pedestrian-only travel routes -- pavements, they're called, or sidewalks if you're that way inclined -- which they don't pay a penny for! Whilst honest motorists are taxed left, right and centre, they don't pay anything at all for all these facilities they enjoy. It beggars belief.

And recently, of course, it's got worse. As I'm driving up the street I constantly come across pedestrians walking across my part of the road to get from one of these pavements to another. I mean, what the hell...? Do they want the shirt off my back as well? They've been given vast tracts of pedestrian-only routes, where I'm certainly not allowed to drive, but apparently this isn't enough for them. Oh no, they want to keep encroaching into my space as well. Sure, we've all heard these walking zealots who say that it's because the 'pavements' don't form a joined-up network, meaning they can't walk to where they want to go without having to step onto the road from time to time. Aw, bless their little hearts. To pedestrians I say this: get off my part of the road. If you walk there when I'm coming along then I'll happily run you down, that's all.

In the long term there's clearly only one solution to all this. If pedestrians want to walk on our streets, which we pay for with all our driving taxes, then they need to pay their share and take their part of the responsibility. Anybody who walks anywhere should undergo training, should have to pay an annual tax towards the facilities they enjoy, should display a license plate so they can be identified, and should each be made to carry insurance in case they are ever involved in any accidents. Until then, they can sod off back to Shoeville or wherever it is they go when they aren't freeloading off the rest of us.

Friday, 24 July 2009

Open source, open razors and how I learnt to love Microsoft (sort of)


Photograph by Andrew Dyer.

I've made two little lifestyle changes in the past few months. The first was that I started using Linux as my main operating system. I've long been a Mac user, and used Windows at work, but decided out of pure curiosity to see how Linux had advanced since I last used it, during my first job, about 10 years ago. Well, all I could say was 'wow'. I tried a few different Linux flavours - Ubuntu, Linux Mint (best choice for newcomers, I'd say), Crunchbang (not for newbies!), and, above all, Kubuntu, where I have found a happy home. When I saw there was so much excellent software out there - for free! - I realized I couldn't really justify continuing to pay for software as I had been doing, and practically overnight I made the switch to running Kubuntu as my main operating system.

The second big change was that I started shaving with an open razor. A friend had made this switch some time ago and pointed out all the advantages: it gives you a better shave than even the most expensive Gillette-type blades and is a one-off purchase, with no ongoing costs. The two of us were at a meeting in Hanover and spotted a shop which specialized in razors. I snapped one up and spent many happy hours chopping my face to ribbons, whilst basking in the warm glow that comes from knowing I'll never again spend money on razor blades.

Whilst mopping up the blood one day, I realized there was actually quite an interesting parallel between open source software and open razors. Now that might sound a bit weird, but bear with me. Both these changes - the new razor and the new OS - involved a lifelong shift to no longer paying for products, and no longer supporting large and cynical corporations (look up the origins of Gillette if you want to know why I use the word 'cynical'). Moreover, both these changes involved learning new skills, and both were initially a little bit difficult. But most interestingly - and this is the point I'm working towards - both gave me a new respect for the mass-producers of razors and software. It was only when I started using the open razor that I saw just how amazing my previous razor was: I could carelessly flick it around my face in moments, without worrying about cuts, in a way I never could with the open razor. Yes, it didn't shave quite so close, but it did a really quite impressive job, all things considered. It works just fine for a lot of people. There are better things out there, but why would most people need to look for them when they're pretty well served with the standard fare?

And it was just the same with the operating system. It was only after switching to Linux, and seeing just how difficult it is for the people who write an operating system to make it work with the thousands of different computers that exist, that I realized just what a clever job the folks at Apple and - particularly - Microsoft have done. Windows isn't perfect. It doesn't shave as close as Linux, to push the metaphor too far, but it does a really quite impressive job, considering. It works just fine for a lot of people. There are better things out there, but why would most people need to look for them?

So in summary: learning to shave with an open razor stopped me being a Microsoft basher. I'm sure there's a lesson there somewhere.

Tuesday, 30 June 2009

Public advice: we need more information

I keep finding myself pondering government advice, and how we really need more information if the genuine aim of this advice is to change people's behaviour for the better. Take the UK government's advice on drinking alcohol safely, where men are advised to drink no more than 4 units of alcohol a day (so 28 per week; the corresponding figure for women is 21). Now, I don't believe for a second that this particular figure is based on anything more than guesswork and the perceived need to provide some (any) figure, but it is useful for illustrating my wider point, which is that with any advice like this, how exactly are the public supposed to translate the number into action? You see, I can think of at least 4 completely plausible interpretations of this 28-units-per-week advice:

  • If I drink 28 units per week then I will definitely come to no harm
  • If I drink 28 units per week then there is a 95% chance I will come to no harm
  • If I drink 28 units per week then there is a 50% chance I will come to no harm
  • If I drink any more than 28 units per week I will definitely come to harm

So which is it? This really matters, because each interpretation would lead me to respond in a totally different way. This is something I keep finding myself thinking about with any sort of official advice on behaviour: we need more facts about how the advice is arrived at if we are to make sensible decisions about whether and how to change our behaviour. Or at least I do: others might be happy to follow dogma ;o)

Finally, big cheers to Google. When I searched for "five a day" its top result was "five a day = 5.78703704 × 10⁻⁵ hertz". Superb!

Friday, 29 May 2009

Hello, Orange!

I just sent this message to the Orange mobile phone company through their website:

Hello! As you'll notice I've chosen the 'I'm not an Orange customer' option at the top of this form. In fact, I haven't been an Orange customer for over a year. However, I'm really pleased to see that my not being an Orange customer hasn't deterred you from sending me regular quarterly statements saying I owe you £0.00. Thanks for keeping me informed! It's nice to know that, as a non-user of your services, I don't owe you any money. I was already pretty certain that I don't owe you any money - what with not being an Orange customer and all - but it's nice to be reassured. Presumably you send similar letters to the 5.95 billion other people around the world who aren't your customers, to reassure them too?

On the off-chance you would like to stop sending me these statements - saving yourself some money, my postman some effort and our planet some trees - the statements come with the account number xxxxxxxx written on them. I have telephoned you about these statements at least three times before now and have, on each occasion, been assured that my account is definitely definitely definitely closed - definitely! - and I would not get another statement, so I don't know if this number will be of any use to you. I offer it for what it's worth, with the knowledge it might be as random and meaningless as your telephone operators' assurances.

Best wishes, and have a good weekend,

Ian Walker

Wednesday, 15 April 2009

Research with People published

It's a red-letter day in the Walker household. My textbook, Research with People: Theory, Plans and Practicals, has been published. This is a practically based introduction to the issues involved in research with human participants. It is written for anyone who needs to collect information from people - medics, psychologists, sociologists, management types, etc. - and also works for readers who don't carry out research themselves but who want to understand the research process so they can better make sense of what they read. Enjoy!

Monday, 16 March 2009

Bicycle overtaking and rebuttals

Something I've become slightly infamous for is my 2007 work on drivers overtaking bicyclists. A couple of weeks ago I was alerted to a US website where somebody called Dan Gutierrez posted a surprisingly angry critique of my findings as well as some data from his own replication of parts of the study (the main document is here [pdf]). Dan found some different results to me, which is great as I've long expected there would be differences in driver behaviour between the UK and the US, particularly because of differences in road design between the two countries. However, rather than simply conclude our countries are different, Dan seems to conclude I'm either a big numpty who can't do research, or a deliberate liar. Either way: ouch, Dan.

Two weeks ago I emailed Dan to try to clear things up, but haven't had a reply, so I thought I'd reproduce my email here. Given he's been quite so stinging about my work in a public forum I feel I should have some right to reply. And, more critically, I spent ages writing this email and at least by posting it here the effort is less wasted. Again: ouch, Dan.

Dear Dan,

Hello! Someone recently pointed me to your online article discussing the findings of the bicycle overtaking study I conducted a couple of years ago. I had a look at your document and I have to say, I was slightly surprised by the general tone, and by the use of words like 'deceptive' when referring to the way I presented my findings. But hey! I've been called worse things than that. I hope you don't mind my writing to try and clear up one or two things, as having read that document I almost feel I've offended you somehow?

First, the graphs. I'm a big fan of How to Lie with Statistics too, but I really wasn't trying to hide anything with those graphs. They were intended primarily for use by my colleagues, who regularly use graphs of this sort and who would be totally familiar with the practice of truncating the y-axis. It's done simply to make the differences that exist clearer, to facilitate discussion, not to hide the overall magnitude of an effect - I'd fully expect people to look at the bottom of the axis and see it doesn't start at zero. I'm also satisfied I didn't build up any insignificant differences into significant ones by plotting the graphs this way - I can give you more details to explain this if you're interested.

Moreover, you're dead right that the overall mean passing distance is about 4 feet, but I think by focusing only on the average passing distances you overlook the really important thing, which is all the variation in the data. As you'd expect, the gaps drivers leave when passing cyclists vary a great deal. The distribution of gaps wasn't far off being a Gaussian distribution - a bell curve - which means most drivers left an amount of space somewhere near the average, a few left a massive amount of space and, critically, a few down in the left-hand tail-end of the distribution left very little space indeed (in fact, two of the drivers I encountered left less than zero space).

This last point, about the small number of drivers who leave very little space, is probably important. Every day, many cyclists are passed by motor vehicles. And we know that some of these events end with the cyclists being hit, sadly. Given there are drivers who leave very little space indeed (to the extent some leave less than none), I'd say that it probably does matter if the average gap left by drivers gets smaller. If the average gap gets smaller, this very likely means the whole distribution is shifting along to the left. It's probable - but by no means proven, certainly - that if the average gap declines by one inch, the very near misses shift by an inch too, so all the vehicles that would have just missed the bicycle by a whisker (which we know happens fairly often) instead become vehicles that just hit by a whisker. And the bigger the shift in average passing distance, the more near-misses might become hits. No one can prove this, but given there are so many near-misses already every day, I really wouldn't want to see drivers doing anything to decrease the gaps they leave, even by only a centimetre on average. I don't know what your thoughts are on this tail-end issue?

You went on to mention that I didn't also look at riding in a position to command the lane. There is a cultural misunderstanding here! British roads are quite small compared to yours (typically an urban lane is about 2.5m wide; often half that size in the countryside). The 1.25m riding position is pretty much in the centre of the lane, so you can take those 1.25m data as being the centre-of-the-lane commanding data you were looking for. Also, you're totally correct to say that drivers did change their behaviour in response to changes in bicycle lane position - I think I said something to that effect in the paper's discussion - but the key point remains: as the bicycle moved further towards the centre of the road, the gap between it and passing vehicles tended to decline. That's why I said 'to a first approximation' vehicles don't respond to changes in the bike's position: I know they do respond, but they don't respond enough! As you say, a 1 foot move by the bike led to a 0.75 foot response in my data; the further out the bike was, the smaller the gap between it and the passing vehicles. Hence my saying 'to a first approximation': that statement was worded to convey my surprise at this finding, not to ignore it. (Incidentally, I suspect the strange disappearance of the helmet effect at the 1m riding position in my data is something to do with this position forcing motorists to approach the centreline of the road; at 1.25m they definitely have to cross it, but at 1m I suspect they had just enough space that they tried to stay entirely within the lane. Or something like that. You have to remember this was the first study looking at such things, and it wasn't clear in advance that the centreline would be an issue. Research builds over time.)

The difference between our countries' road systems, which I mentioned above, is the key to the final part of your paper. I'm honestly really impressed by the lengths you've gone to in collecting those data. I'm on record saying that I'd expect other countries (especially in North America) to see different results to those I found in the UK, and that I'd love it if people were able to test this. That was why I was quite surprised to see that when somebody finally did do this test, it was in a document that seems distinctly hostile to me! It's great that you found something different to what I found - we now have concrete data showing there's a difference between our countries. But might it not have been fairer simply to describe this as what it is - a difference between our countries - rather than suggest I don't know what I'm talking about?

Anyway, this was only going to be a short email and it's grown into a lengthy one. I write in a genuine spirit of friendship, rather than to moan, although I fear it won't come across that way. I doubt either of us likes the idea of cyclists being struck from behind by passing cars (which, in the UK's accident data at least, has a really high probability of killing the cyclist). Any information we can gather which might make this less likely is incredibly valuable. I hope in future we might work together, rather than in opposition, towards this goal.

With good wishes,

Ian

Quite reasonable, I hope you'll agree. It's a shame we cyclists can't get along more. Goodness knows, we should be united against the common enemy.
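A postscript on the tail-end argument in that email: it is easy to illustrate numerically. Here's a minimal sketch in Python, assuming a normal distribution of passing distances with purely illustrative parameters (not the figures from the actual study), showing how even a one-inch shift in the mean changes the proportion of passes that leave no space at all.

```python
# Illustrative sketch of the "shifting distribution" argument.
# The mean and standard deviation below are invented for illustration;
# they are NOT the parameters from the 2007 overtaking study.
from statistics import NormalDist

mean_ft = 4.0   # assumed mean passing distance, feet
sd_ft = 1.5     # assumed standard deviation, feet

def hit_rate(mean, sd):
    """Proportion of passes leaving zero or less space, i.e. contact."""
    return NormalDist(mean, sd).cdf(0.0)

before = hit_rate(mean_ft, sd_ft)
after = hit_rate(mean_ft - 1 / 12, sd_ft)   # mean reduced by one inch

print(f"contact rate with a {mean_ft} ft mean gap: {before:.4%}")
print(f"contact rate after a one-inch shift:       {after:.4%}")
# The whole left-hand tail moves with the mean, so even a one-inch
# reduction noticeably increases the (small) probability of contact.
```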

Sunday, 1 March 2009

Science stories: the devil IS in the detail but you've got to look for it

One of my colleagues, Chris Ashwin, made a bit of a splash in the news this last week after being involved in a study on the genetics of optimism and pessimism. Looking at the Guardian article, which the link takes you to, I saw something in the readers' comments which was painfully familiar:

100 people from God knows where isn`t really representative of humanity. Was is a nice day? How healthy were the volunteers? How old? What were they being paid? Its all in the detail folks, or is that my cynicism gene kicking in?

Oh goodness me, but aren't you clever? I saw an awful lot of this sort of thing when my bicycle overtaking work was being reported a couple of years ago: again and again people would write comments which essentially said "This three-paragraph newspaper report I've just read doesn't mention X. I can't believe this researcher didn't consider X! The whole thing is clearly bollocks!" An entire study is dismissed because some detail isn't immediately in front of the reader's nose. You can see this behaviour whenever science is reported in the media.

Of course these details matter in scientific research, which is why researchers write up their procedures with painstaking care in journal articles so the details are there for everyone to see, examine and judge. Typically, this sort of information runs to several closely-typed pages for any given study (I've looked at Chris's paper and the procedure and results run to over 1200 words). So why on earth do people expect to find the same level of reporting in a newspaper story or blog post? Do people think scientists spend years doing research and then, faced with the challenge of reporting their findings, dash off a quick press release in five minutes before moving on to the next project? Or is it that they believe the details don't exist, and that teams of professional researchers manage routinely to overlook undergraduate-level design issues when doing their jobs?

It's great people are thinking about what they read and showing some critical thought, but can't they take the next logical step and actually look for answers to their questions rather than simply assuming the answers don't exist because they're not in a newspaper report? Am I wrong to find this dismissal of people's efforts so irritating?

Wednesday, 25 February 2009

Invading Poll-land

Did you see the BBC's report on how, contrary to what we all thought, the UK actually loves religious values and wants more religious influence on our lives? Now I should start out by saying, for the record, I'm not a big fan of religion and don't think people should have a say in law-making, or how I live my life, simply because they feel more comfortable living by the moral standards of, say, an Iron-Age Middle-Eastern society. I fully understand why people would feel more comfortable with the simple black-and-white morals of an ancient and distant society, which save one dealing with the scary complexities of a modern pluralist society, but then I also fully understand why other historical re-enactors like to dress up as Vikings at weekends.

Anyway, as I wasn't sure I believed the BBC's conclusions - which claimed the majority of Britons wanted much more religious influence over their lives - I thought I'd delve into the source of the data a little, to see if they would persuade me. It turns out the survey was conducted by a polling organization called Comres. Comres, on their website, boast all sorts of big-name clients and proudly declare they are a member of the British Polling Council and the Association for Qualitative Research. Sounds like a group of researchers who know what they're doing.

Looking at their portfolio of 'Social polls' is interesting. Most of their recent 'social polls' have been about religion, and all these were conducted at the behest of Christian-interest groups. Hmm. Why might Christian-interest groups give so much business to Comres? I wondered.

I then went back and looked at the results of the Comres/BBC poll [PDF link]. Gosh, what detailed analysis! The data are there, broken down in minute detail by gender, age, social grade and region. Big pages full of scary numbers: this looks like a thoroughly rigorous and scientific study!

But let's look at these numbers in a little more detail. Down on page 3 we can see how the sample of 1045 people breaks down into religious groups. Of the 1045 people surveyed, it turns out 639 were Christians and 279 were of no religion. Now this immediately sent alarm bells ringing. To show you why, let me digress slightly into some introductory sampling theory...

In human research, an early step is to identify the population of interest. This is the group about which you want to reach a conclusion. For example, if you want to learn something about the opinions of all the people in the UK, your population is 'all the people in the UK'. In an ideal world you would then conduct a census, whereby you speak to every member of this population. At the end of such research you know exactly what the population's opinions are.

Of course, when you're dealing with really large populations - like the population of the UK - conducting a census becomes logistically difficult, so you instead use a sample. A sample is a subset of your population which you hope will behave exactly like the population whilst being of a manageable size: the population in miniature, a microcosm. If it does behave like the population, that's great: you've learnt something about a big population from studying a convenient number of people. But if your sample is in some way biased, or behaves differently to the population as a whole, you will reach false conclusions about the population. The only information you have about the population comes from your sample, so every effort must be made to ensure that sample isn't biased in some way. Ideally this is done by keeping the sample large, and by using methods such as random sampling to choose the people included.
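As a toy illustration of that last point, here is a minimal Python sketch. The population proportions and the recruitment bias are entirely invented for illustration; they are not real figures. It simply shows how a random sample of the same size as Comres's tracks its population, whereas a biased recruitment method does not.

```python
# Toy illustration of sampling bias; all figures invented for illustration.
import random

random.seed(1)

# A pretend population of 100,000 people, half of whom are "religious".
population = ["religious"] * 50_000 + ["non-religious"] * 50_000

def proportion_religious(sample):
    return sum(1 for person in sample if person == "religious") / len(sample)

# A simple random sample of 1,045 people looks like the population...
random_sample = random.sample(population, 1045)
print(f"random sample: {proportion_religious(random_sample):.1%}")   # ~50%

# ...but recruiting in places where religious people congregate (here each
# religious person is five times as likely to be picked) does not.
weights = [5 if person == "religious" else 1 for person in population]
biased_sample = random.choices(population, weights=weights, k=1045)
print(f"biased sample: {proportion_religious(biased_sample):.1%}")    # ~83%
```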

And this is what first worries me with Comres's sample. This survey is being used to represent the views of the UK population (it certainly is in the BBC article). For it to have any validity, then, the sample has to be a smaller version of the UK population - it has to look just like the UK population, in miniature, or else we can't meaningfully generalize from it. But here's the thing: the UK population isn't 64% Christian and 28% non-religious (I'm ignoring Comres's 'weighted' numbers as they haven't bothered to report what they were weighted by). Nor is the population of the UK 0.02% Muslim, and nor does it have exactly 10 times more Muslims than Jews. This sample is clearly biased. With the majority self-identifying as Christians, whatever the sample 'says' is simply going to represent Christian views (at least to the extent Christians all agree with one another on things). Strange - you'd expect a member of the British Polling Council to be a bit more careful than that.

Where did this sample bias come from? Critically, we cannot know. Any sort of proper scientific report would contain full details of how the sample was recruited, so we could read the report fully informed and judge its findings according to the strengths or weaknesses of its methodology. But Comres's report doesn't bother to say how the sample was recruited. Given the massive skew towards representing Christians, I'm tempted to suspect they did a lot of their polling outside churches, at religious group meetings, or something similar. But I don't know, because they don't tell us. Nor do we know how they defined groups like 'Christian', 'non-religious' and so on. This lack of detail really matters: you're going to see very different results if you define 'Christianity' as 'I actively go to church at least once a week, am born-again and believe Jesus Christ is my personal saviour' or if you define it to include all those people who say they're Church of England as a sort of 'default' option because they don't feel very strongly one way or the other (like my mother), or who choose that option because they feel 'spiritual, like there must be something bigger' and so won't choose the non-religious tag. In this case I suspect Christianity was defined somewhat like the first of these options, and the wishy-washy undecided made up the non-religious group. But again, I can't tell because these crucial details aren't reported.

Moving on from the sampling, let's look at the survey itself. It included questions like "The media reports my religion fairly and accurately (agree/disagree/don't know)". From experience of similar surveys I can tell you this is a very strange question to ask someone who is not religious. It simply doesn't make sense - not having a religion is not a religious position, except possibly for some of the more hard-line atheists. Asking someone who isn't religious about their religion is like asking someone who doesn't own a hat about their hat: what are they to answer other than "Huh? I don't have one"?

But it seems Comres have something of a history here. Let's look at their questions in other surveys. How about their "Rescuing Darwin" survey, conducted at the behest (i.e., payment) of Theos, a Christian think-tank? Here we see questions like "Young Earth Creationism is the idea that God* created the world sometime in the last 10,000 years. In your opinion is Young Earth Creationism: definitely true, probably true, probably untrue or definitely untrue". With 11% saying this is definitely true, again alarm bells are ringing about which particular evangelical church they got their sample from (and again, they don't tell us), but let's ignore that for a moment as we're looking at the questions. How about question 3: "Atheistic evolution is the idea that evolution makes belief in God unnecessary and absurd. In your opinion is Atheistic evolution: definitely true, probably true, probably untrue or definitely untrue" with 30% saying 'definitely untrue'. I'm sorry, is that question dispassionate and scientific, carefully designed to elicit opinion, as it should be, or is it emotive and written in the language of fundamentalist Christianity? There's plenty more of this sort of thing in Comres's oeuvre.

(* 'God' you notice. Not '...the idea that a powerful entity created the world' but 'God', with a capital letter.)

So what's the conclusion? Basically, it rather looks as though Comres have established themselves as the polling organization of choice for religious groups wanting to find the 'right answers' in national opinion polls. With dubious questions which only make sense to a subset of those questioned (seriously: go and read the rest of the Rescuing Darwin questions), and apparently biased samples (which we can't even properly evaluate, without information on where they came from), they seem always to support exactly what the paying customer wants to find - which is nice, as that's a good way of getting repeat business.

I'm tempted to call for important organizations such as the General Medical Council to stop giving their business to such a polling organization, but I think the bigger question here is why on earth the supposedly dispassionate BBC News commissioned this particular organization - with their track-record of questionable polling in the interests of religious bodies - to conduct their snapshot survey of religious feeling in the UK. And I'm also curious as to why the BBC didn't notice the rather flagrant sample bias in the data they eventually received. I would be very very interested in knowing the religious background of the individual who commissioned this 'research'. Very interested indeed.

And this is where I turn all Ben Goldacre: this doesn't really bother me because of its religious aspects, but rather because it is the sort of thing which gets proper and effective researchers a bad name. Public opinion polls can play an important role in testing the Zeitgeist, and also contribute a great deal to our modern discourse about society. But for them to have any use they have to be done properly, and reported transparently. This sort of thing not only breeds distrust of opinion polls in general, but is also a classic example of how you can't just believe any sort of research reported in the media but rather need to go back to the source of the data and evaluate where they came from. I know this for a fact: I learnt it from a rigorous survey of me.

EDIT: Here's something I just typed in the comments to this post. Oh, and there's another issue I forgot to mention in the post, which is a shame as it was one of the things that really bothered me. One of the questions was "Our laws should respect and be influenced by UK religious values (agree/disagree)". Surely that's two questions rolled into one! It lumps people who think the law should respect religion in with people who think the law should be influenced by religion. And those are really quite separate ideas: personally I wouldn't be too worried by the first part of the question (as I think the law should respect our right to believe what we want), but I'd vehemently oppose the second part. Tricksy, I'd say. I do wish I'd remembered to put that into the main article!

Monday, 2 February 2009

Population: at last someone's mentioned it

I've been thinking for quite a while now that in environmental debate, the one thing nobody mentions is the impact of having children. Thank goodness someone's mentioned it in public at last. Now I'm no biologist, but I'm fairly sure having children is the only way we can hang around as a species. But that said, we can't really have them without there being an environmental impact. It's something we really need to talk about more.