A Conversation for Things to Consider when Reading Medical Research
Wombling on Started conversation Apr 28, 2003
One additional "acid test" for deciding if some weirdo research result is worth worrying about is the number of participants in the test (n). If it's less than a thousand, then you can safely disregard it. This is from the book "Everyday Math for Dummies" by, um, a mathematician whose name escapes me.
Z Posted Apr 28, 2003
I'm sorry, but sometimes in medicine that isn't possible; it doesn't mean you can't get useful results, and it depends on what you are analysing. There is much good research that is done with fewer than a thousand participants. For instance, if a particular operation was done 500 times and 478 times the patient died, would you say that the operation should stop if there was a safer alternative, or would you ask for it to be done another 500 times so that n would equal 1000?
Wombling on Posted Apr 28, 2003
I suppose since I got the original info from a Dummies book, then I've shown myself up to be a dummy!
However, in fairness to the original mathematician who wrote Everyday Math for Dummies, I didn't include his whole piece - his intention was to advise readers about the kind of "medical" finding along the lines of "baldness is linked to cheese toasties" (taken from the article), which is the kind of story that very commonly reaches people as they're digesting their breakfasts in front of the TV.
In other words, your point is well taken, especially for the more serious and life threatening stuff.
Z Posted Apr 28, 2003
I understand what you're saying as well. Knowing when a piece of research is significant and when it isn't is one of the most difficult parts of reading it.
The advice to ignore it if n is less than 1000 is pretty good advice for a lot of subjects, certainly for the average "new drug cures disease" rubbish that is published in some newspapers.
There is a "rule of three" for estimating the rate of a side effect of a particular drug. If drug X has been tested on n people and side effect Y has never been observed, then with roughly 95% confidence the true rate of Y is somewhere between 0 and 3/n.
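For what it's worth, the rule of three can be sketched in a few lines. It falls out of solving (1 - p)^n = 0.05 for p, which gives p ≈ 3/n; the numbers below (a drug trialled on 1500 people) are made up for illustration:

```python
import math

# Rule of three: if an event is never observed in n independent trials,
# an approximate 95% upper confidence bound on its true probability is 3/n.
# Derivation: solve (1 - p)^n = 0.05 for p, giving p ≈ -ln(0.05)/n ≈ 3/n.

def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper bound on the event rate after n event-free trials."""
    return 3.0 / n

def exact_upper_bound(n: int) -> float:
    """Exact bound from solving (1 - p)^n = 0.05."""
    return 1.0 - 0.05 ** (1.0 / n)

n = 1500  # hypothetical: drug tested on 1500 people, side effect never seen
print(rule_of_three_upper_bound(n))  # 0.002, i.e. at most about 1 in 500
print(exact_upper_bound(n))          # very close to the 3/n approximation
```

So "no side effects seen in 1500 patients" still leaves room for a side effect rate as high as about 1 in 500.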
I admit I'm biased because I'm in the middle of working on a study for my degree where n is around 100, but that is all the relevant cases in a year. I do still hope that the results will be of use!
Joe Otten Posted Jun 16, 2003
The "ignore if n < 1000" rule is unnecessary, because results with p > 0.05 will generally not get published. (Note that this is not the probability that a positive result is false, but the probability that a false (ineffective) treatment will give a positive result.)
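The claim that the 0.05 threshold holds regardless of sample size can be checked by simulation: give two groups an identical true event rate (the "treatment" does nothing) and count how often a test still comes out "significant". This is my own sketch, with group size and event rate chosen arbitrarily:

```python
import math
import random

def two_sided_p(successes_a: int, successes_b: int, n: int) -> float:
    """Two-sided p-value for a difference in proportions (pooled z-test)."""
    pa, pb = successes_a / n, successes_b / n
    pooled = (successes_a + successes_b) / (2 * n)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)
    if se == 0:
        return 1.0
    z = abs(pa - pb) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

random.seed(42)
n = 200          # per group -- well under the "magic" 1000
rate = 0.3       # same true event rate in both groups: treatment does nothing
trials = 2000
false_positives = 0
for _ in range(trials):
    a = sum(random.random() < rate for _ in range(n))
    b = sum(random.random() < rate for _ in range(n))
    if two_sided_p(a, b, n) < 0.05:
        false_positives += 1

print(false_positives / trials)  # roughly 0.05, whatever n is
```

Raising n to 20,000 changes nothing here: about 1 in 20 null trials still clears the p < 0.05 bar, which is exactly the point about false positives above.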
What large n will give you is a low probability of false negatives. But negative results are not published, so you do not have to worry about n here either.
Yes, small trials are often pretty pointless, but the statistical machinery takes care of this.
If you are still not convinced, consider the trial of my longevity pill on 1,000,000 subjects. In the 500,000-strong test group 100 died, and in the 500,000-strong control group 101 died. Does that prove the pill works? Not at all: if I calculated p, it would be about 0.5 and I wouldn't get published. But it passes the n test.
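The arithmetic on those made-up numbers does check out. A quick pooled z-test sketch (one-sided, asking whether the test group genuinely did better):

```python
import math

def one_sided_p(deaths_test: int, deaths_control: int, n_per_group: int) -> float:
    """One-sided p-value that the test group's death rate is truly lower."""
    p1 = deaths_test / n_per_group
    p2 = deaths_control / n_per_group
    pooled = (deaths_test + deaths_control) / (2 * n_per_group)
    se = math.sqrt(pooled * (1 - pooled) * 2 / n_per_group)
    z = (p2 - p1) / se  # positive if the test group did better
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))  # 1 - Phi(z)

# 100 deaths in the test group, 101 in the control, 500,000 per group
p = one_sided_p(100, 101, 500_000)
print(round(p, 2))  # about 0.47 -- nowhere near significance
```

One fewer death among half a million people is indistinguishable from chance, no matter how enormous n is.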
Wombling on Posted Jun 17, 2003
If you reread my original postings above, you'll see that the "ignore if n<1000" advice does NOT apply to serious research publications, but rather to tiny articles found in the corners of dodgy newspapers; the kind of "research findings" that we read about while we're having our breakfast. Proper scientific publications are written to advance our knowledge (and to increase the number of publications on the scientist's CV). Newspaper articles on the other hand are written specifically to entertain and/or scare us and to make us buy more copies of that newspaper. Do newspaper editors have our best interests at heart when they tell us that eating more aubergines "can" lead to lower incidence of heart disease? Do they bunnies!! (Another tip: where you see the word "can", replace it with "probably won't".) (Incidentally I made up the example about the aubergines.)
Summary: I am talking about statistics in irresponsible reporting! Not REAL scientific research! Big difference!
Joe Otten Posted Jun 18, 2003
OK fair enough.
And if n>1000? Probably still worth ignoring.
I agree with you about the appalling standard of journalistic reporting of science. There frequently is new information or evidence of some sort behind each new report, but it is irritating having to find the source before I have any idea what it is.
There is an interesting parallel I heard on the radio with the interpretation of scientific advice by civil servants for the benefit of politicians. A civil servant was expressing irritation at the failure of scientists to give definitive answers to their questions, and saw it as his duty to work out what the scientist really believed and report this to the politician. Explains a lot.
Wombling on Posted Jun 26, 2003
An excellent example of irresponsible statistics is quoted below from the Telegraph, 10 February 1989. The context is the Great Egg Scare (there were widespread fears of salmonella at that time). The official statistic is exaggerated in the report by a factor of over one hundred.
"Last November Mr Lacey said the number of people who fell ill with salmonella poisoning in 1988 was 24,500, but because not all cases were reported the actual number was probably 250,000 [...] the total figure could be as high as 2,500,000."
Comment: Yes, and it could be as low as 30,000, but that wouldn't sell as many copies of the Telegraph, would it? I wonder what advanced mathematical techniques Mr Lacey was using. Did scientific calculators back in 1989 have a random number generator function? I can't remember.
Monsignore Pizzafunghi Bosselese Posted Aug 16, 2003
I've got another example of such nonsense science: The study revealed that babies, if taken to the swimming pool in their very early childhood on a regular basis, would grow up to be 'significantly' more intelligent than others that were not.
I wonder how they managed to exclude all other influences and to track down a single cause. And all that in a study that ran for a couple of weeks, not years. Of course, they didn't bother to disclose alpha, n, p, sigma, or any other of those weird math thingies.