Monday, November 15, 2010

A quick note on testing research

"So when you use a test that has no bias whatsoever, i,e, a double blind randomized placebo controlled trial, homeopathy always fails. Always."

Uh. This is from a comment on Andrew Collins's distressingly misinformed blog post on Gillian McKeith. Throughout the comment thread he produces comments which indicate a complete misunderstanding of science and alternative medicine (the short version: individual doctors may be bad, but alternative therapists are almost always worse). I've mostly agreed with the people who disagree with him, but I have to disagree with the commenter here.

Whenever we conduct a double blind randomised placebo controlled trial, we must be aware of certain things. First of all, while the scientists involved will do their best to randomise, they will not have done so perfectly. They probably recruited from a sample space that wasn't the entire country, and there will always be a non-compliance issue: some people will refuse to take part in the trial at all, and some will fail to follow the drug regime properly. The former is typically better controlled than the latter, which is much more difficult to measure. A blinded placebo should theoretically protect us against most of these effects, but it's true to say that no study is a truly random sample of the population.
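To make that concrete, here's a quick sketch (my own illustration, with made-up numbers for the effect size and compliance rate) of how non-compliance alone can water down a perfectly real effect, even in a trial that is otherwise flawless:

```python
# Illustrative simulation: non-compliance dilutes a measured effect.
# The effect size and compliance rate below are assumptions, not data.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000               # participants per arm
true_effect = 0.5        # benefit for those who actually take the drug
compliance = 0.6         # fraction of the treatment arm who comply

placebo = rng.normal(0.0, 1.0, size=n)
complied = rng.random(n) < compliance
# Non-compliers in the treatment arm get no benefit at all.
treatment = rng.normal(np.where(complied, true_effect, 0.0), 1.0)

print(f"true effect among compliers: {true_effect}")
print(f"measured effect across the arm: {treatment.mean() - placebo.mean():.3f}")
```

With 60% compliance the measured effect comes out at roughly 0.6 × 0.5 = 0.3, which is exactly why non-compliance is worth measuring even when it's hard.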

Now, even if we ignore those issues (which, to be fair, we often can), every single study we run is designed to have a fixed false positive rate. That is, when we calculate what's called our test statistic, we obtain the probability that we would have seen a result at least that extreme by chance alone. Usually, if there was a 5% or less chance of that occurring, we say that it's likely there is some difference between placebo and the active treatment. This means that if I were to conduct a perfect study 20 times on a homeopathic remedy, even if it was ineffective I would expect it to look effective in, on average, one of those studies. Be aware that sample size shouldn't affect this, because I have explicitly designed my study to HAVE a 5% false positive rate. Of course, the fact that I needed to run 20 studies to get a positive is rather telling, but it does contradict the quote above.
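If you want to see that false positive rate in action, here's a short simulation (the sample size and the use of a t-test are my own illustrative choices, not anything from a real trial): twenty studies of a remedy that does nothing, each tested at the usual 5% level:

```python
# Illustrative simulation: repeated trials of an utterly ineffective
# treatment still "succeed" about 5% of the time, by design.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_trials = 20       # independent "perfect" studies
n_per_arm = 100     # participants in each arm
alpha = 0.05        # the designed false positive rate

false_positives = 0
for _ in range(n_trials):
    # Both arms are drawn from the same distribution: no real effect.
    placebo = rng.normal(0.0, 1.0, size=n_per_arm)
    remedy = rng.normal(0.0, 1.0, size=n_per_arm)
    _, p_value = stats.ttest_ind(placebo, remedy)
    if p_value < alpha:
        false_positives += 1

print(f"{false_positives} of {n_trials} null studies looked 'effective'")
```

On average one in twenty such studies will come up positive, though any particular batch may contain zero, one, or several.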

But here's what's worse, and this is something that the excellent Mr Goldacre often forgets. When I run my experiments I make certain assumptions. In particular, I usually assume that the mean of my outcome is normally distributed (effectively, the probability of seeing a particular value is shaped like a bell curve, so I'll see fewer and fewer values as I move away from the mean). Now this is not always true. Unless we set up our experiment very carefully, it's unlikely that we'll actually get a perfect normal distribution (height is a stereotypical example of a normally distributed variable, but a normal distribution allows negative values, and one cannot have negative height). This means the theoretical probabilities we've calculated are not going to be exactly right in practice. That said, there's a theorem, the central limit theorem, which says that the more observations we get, the closer our sample means get to being normally distributed. So the bigger our sample gets, the happier we can be with our assumptions.
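Here's a small demonstration of the central limit theorem at work (again my own sketch; the exponential distribution is just a convenient stand-in for a skewed, strictly positive outcome):

```python
# Illustrative demo of the central limit theorem: individual values are
# drawn from a skewed, strictly positive distribution (definitely not
# normal), but the sample means look more normal as the samples grow.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (5, 30, 200):
    # 2000 samples of size n from an exponential distribution,
    # reduced to 2000 sample means.
    means = rng.exponential(scale=1.0, size=(2000, n)).mean(axis=1)
    print(f"sample size {n:3d}: skewness of the means = "
          f"{stats.skew(means):+.3f}")
```

The skewness of the means shrinks towards zero (the skewness of a normal distribution) as the sample size grows, which is the theorem doing its job.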

Yet this comes with an even bigger disclaimer. For most drugs it's not actually hard to show a difference. The standard statistical test starts from the assumption that the outcome is identical for placebo and drug. If the treatment and the placebo are physically different in some way, this will almost never be exactly the case (indeed, I suspect some difference almost always exists, though that's a probabilistic statement we don't need to worry about too much here). So what we really need to worry about is clinically significant differences: that's the standard most drugs need to meet (actually, they usually need to beat a competing drug).
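And here's why "a difference exists" is such a weak standard (an illustrative sketch; the tiny effect size is made up): with enough participants, an utterly trivial difference becomes statistically significant:

```python
# Illustrative simulation: a clinically meaningless difference becomes
# statistically significant once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
tiny_effect = 0.05   # 5% of a standard deviation: real, but trivial

for n in (50, 500, 50_000):
    placebo = rng.normal(0.0, 1.0, size=n)
    drug = rng.normal(tiny_effect, 1.0, size=n)
    _, p = stats.ttest_ind(placebo, drug)
    print(f"n per arm = {n:6d}: p = {p:.4f}")
```

The p-value collapses as the sample grows even though the effect never stops being trivial, which is why trials are judged against a pre-specified clinically significant difference rather than a bare p-value.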

So what's my point here? My point is that while double blinded placebo controlled trials are a good standard for the industry to have, there's an argument to be had that they can be misinterpreted, and caution must always be used when applying their results. In particular, pointing at any one study with an impressive result is a terrible idea. We can only become confident when several such independently conducted studies get the same results. The fact that these trials can be misinterpreted is why homeopaths can mislead by pointing to studies where they have succeeded, when looking over the whole body of research shows the trend running the other way.
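One standard way of pooling that body of research, for the curious, is to combine the p-values of the independent studies; Fisher's method is the classic approach (the numbers below are hypothetical, purely to illustrate):

```python
# Illustrative use of Fisher's method: one flashy result among many
# unremarkable ones washes out when the studies are pooled.
from scipy import stats

# Hypothetical p-values from ten independent trials of a remedy,
# including one "impressive" result at p = 0.03.
p_values = [0.03, 0.40, 0.62, 0.55, 0.71, 0.48, 0.90, 0.35, 0.66, 0.82]

stat, combined_p = stats.combine_pvalues(p_values, method="fisher")
print(f"combined p across all ten studies: {combined_p:.3f}")
```

The lone p = 0.03 looks persuasive in isolation, but taken together the ten studies give no reason at all to reject the null, which is rather the point of looking at the whole body of research.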
