What Do Scientists Know Anyway? Systematic Review and Meta-analysis
If you fancy doing a scientific experiment, the first thing you have to do, before you get stuck into cloning a sheep or colliding a large Hadron, is to find out whether anyone else has done it already. A good way to do this is to find a paper in a journal that summarises and critiques the literature. The most useful sort is one that carefully examines all the literature, making sure it doesn't miss any papers, even ones that disagree with its conclusions, and then also pulls the numbers together: a Systematic Review.
The History of Systematic Reviews
At first, scientists worked in isolation from each other. If they were lucky, they wrote letters to other scientists who found their results interesting and wrote back. If they were very lucky, they lived near other scientists and could meet up. Soon they formed learned societies and clubs. The societies held conferences and published journals. You joined the society relevant to your field of science, read a few journals a year, went to a conference, and you'd find that you were up to date. Everyone would be aware of all the other work done in their field.
The more work that was done, the harder it became to keep up, so more and more scientific journals began to publish reviews – articles where a learned expert surveyed what was already published on a subject. Everyone assumed that a learned expert would know about all the relevant papers in the field and come to an unbiased conclusion. Often, however, the learned expert would only refer to papers that agreed with his or her pet theory and would ignore the ones that contradicted it. If the contradictory papers couldn't be ignored, the expert would find flaws in them and dismiss their conclusions.
Systematic reviews save lives
One of the first systematic reviews looked at a treatment for heart attacks. A heart attack is caused by a blood clot blocking an artery in the heart. In the 1960s and 70s, doctors started treating heart attacks by thrombolysis: giving drugs that break down blood clots. Usually it helped people, but in some cases it caused a bleed in the brain, which was often fatal. There were a number of studies which seemed to contradict each other: some showed that the treatment saved lives, and some showed that it killed people. The doctors who believed it worked found plenty of evidence to support their belief, and so did those who believed it was dangerous.
Things got clearer when Dr Salim Yusuf and his colleagues looked at all the evidence and analysed all the numbers together. Looking at the evidence as a whole, it became very clear that the treatment was a good thing. Because of this, a much larger study was done which showed that the treatment was effective, and many, many lives were saved.
How to do a Systematic Review and Meta-analysis
It all starts with a question. Let's imagine you want to answer this one:
- Does eating chocolate cause heart disease?
The first thing to do is to ask if it's a reasonable question based on your knowledge of the world. Some questions are so silly that you don't need to go to the trouble of looking up the answers in the scientific literature, questions such as:
- Will drinking lavender oil cure my broken leg?
- Will cutting off my finger reduce the frequency of my migraines?
You don't need to do an experiment to know that there is no plausible means by which drinking lavender oil could heal a broken leg, or amputating a finger could help migraines.
Our question does seem reasonable: we know that chocolate contains lots of fat, sugar and calories; the fat may cause high cholesterol and the sugar may cause diabetes, and high cholesterol and diabetes both cause heart disease. On the other hand, cocoa contains antioxidants which may protect against certain types of heart disease.
How could you answer this question?
If you google 'does chocolate cause heart attacks' you'll get all sorts of different answers. Most of those blogs, tweets and newspaper articles are going to cite a research study, while some of them are going to cite the ancient knowledge of the Incas, or the Big Food/Pharma conspiracy.
Let's assume that you're taking a more conventional and rational approach to your question, and you decide that you really only want to find out what the scientific papers have to say on the subject. The next step would be to go to a specialist search engine such as Medline or PubMed and pop your search terms in there.
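If you'd rather script the search than click through the web interface, PubMed can be queried through the NCBI Entrez utilities. Here's a minimal sketch using Biopython's Entrez module; the email address and search term are placeholders, and the count you get back will depend on when you run it.

```python
# Minimal sketch of querying PubMed via the NCBI Entrez utilities,
# using Biopython. The email and the query string are placeholders.
from Bio import Entrez

Entrez.email = "you@example.org"  # NCBI asks for a contact address

query = "(cardiac OR heart OR coronary) AND (chocolate OR cocoa)"
handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print("Total matching records:", record["Count"])
print("First few PubMed IDs:", list(record["IdList"])[:5])
```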
What if you find that some papers say that chocolate is good, and others say that it's bad?
Well, you could simply pick the argument that you would like to be true and only cite supporting studies: this is called cherry picking. There's bound to be one argument that you prefer. Perhaps you love chocolate and want to prove your doctor wrong when she says you should cut down, or perhaps you believe that Big Chocolate is an evil conspiracy dedicated to destroying the world. Either way, you cite the studies that support your argument and ignore the ones that don't. If the studies that don't support your argument are so big that they are hard to ignore, you could point out some of their potential problems – and then ignore them.
On the other hand you might actually want to find out the truth
A Systematic Review is one where you have systematically searched the literature, taking care that you haven't missed anything, and recording what you have done. The first step is to clear your mind. You have to accept that it might give you an answer that you don't like, and that'll be rather annoying.
First you need to write a plan. It's important to do this before you actually start searching; otherwise you could peek at the results first and then write a plan that conveniently excludes the ones you don't want to find. A plan for an experiment is called a protocol.
If you want to answer the question about heart disease and chocolate, the first step would be to look at the question more closely. What sort of heart disease do you mean? Angina? Heart attack? Using which diagnostic criteria? You may want to define exactly what you mean by chocolate consumption: does that include the use of a chocolate face balm? The important thing is that you do it before you find the studies, so you can't fiddle your results by making up a reason to reject inconvenient ones. During this process you'll probably need to take some advice from someone who knows what they are doing, in order to establish which studies you are going to include and which you are going to exclude.
Inclusion criteria:
- The subjects had their intake of chocolate measured.
- The number of heart attacks among the subjects was measured.

Exclusion criteria:
- Studies on animals.
- Studies where the amount of chocolate is not recorded, just whether the subjects did or didn't eat chocolate.
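To see how criteria like these turn into careful records, here is a hypothetical sketch in Python of screening a pile of candidate studies, noting a reason for every exclusion. The study records and field names are invented for illustration.

```python
# Hypothetical sketch: screen candidate studies against pre-defined
# inclusion/exclusion criteria, keeping a reason for every exclusion
# so each decision can be justified later. Field names are invented.
candidates = [
    {"id": "Smith 2008", "species": "human", "amount_recorded": True},
    {"id": "Jones 2009", "species": "mouse", "amount_recorded": True},
    {"id": "Brown 2010", "species": "human", "amount_recorded": False},
]

included, excluded = [], []
for study in candidates:
    if study["species"] != "human":
        excluded.append((study["id"], "animal study"))
    elif not study["amount_recorded"]:
        excluded.append((study["id"], "amount of chocolate not recorded"))
    else:
        included.append(study["id"])

print("Included:", included)
print("Excluded:", excluded)  # each exclusion carries its reason
```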
Next you'll have to decide on your search strategy. You've got to be more thorough than just going to Google. Take the advice of your local academic librarian. Searching might look easy, but if you're not careful it's easy to miss huge parts of the literature. For instance, if you are searching for heart disease you'll need to include the terms 'cardiac' and 'coronary' as well as 'heart'.
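As a small illustration of why the synonyms matter, here is a sketch of expanding synonym lists into the sort of boolean query a search engine expects; the term lists are just the ones from our example.

```python
# Sketch: expand synonym lists into a boolean search string, so that
# papers saying 'cardiac' or 'coronary' instead of 'heart' aren't missed.
heart_terms = ["heart", "cardiac", "coronary"]
chocolate_terms = ["chocolate", "cocoa"]

def any_of(terms):
    """Bracket a synonym list and join it with OR."""
    return "(" + " OR ".join(terms) + ")"

query = any_of(heart_terms) + " AND " + any_of(chocolate_terms)
print(query)  # (heart OR cardiac OR coronary) AND (chocolate OR cocoa)
```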
What data are you going to get from each paper? You'll need to decide this now, not after you do your search – if you wait until afterwards you'll end up getting distracted by other interesting points. For this review, you might want to find out the number of subjects who ate chocolate, how much chocolate, and how many had heart attacks.
You might also want to record where the study was carried out, who the control group were, and whether there were any obvious flaws. You would define in advance what you thought were obvious flaws.
The actual search
You need to be able to show what you searched, using what terms, how many results you got, and how many papers you included and excluded. In your final write-up you'll need a sentence saying 'We searched Medline and Embase on the 4th of October 2010 using the terms ('cardiac' OR 'heart' OR 'coronary') AND ('chocolate' OR 'cocoa'); this found 2,100 results, of which we excluded 1,987.' You need to be able to justify every single exclusion, which is where the careful records come in.
Once you've done your search and identified the relevant articles, you need to read each one carefully and extract the data you are looking for. You might find that there are 20 studies, 18 of them showing that chocolate is associated with more heart attacks, and two showing that it is not – and that both of those had some problems. Then you can quite safely say that chocolate does in fact cause heart attacks. Alternatively you might find that 15 studies show that it doesn't, three show that it does, and two aren't very sure. Or it might be that there were no good studies at all, and all the ones you could find were poor. In that case, you'd better try to get some funding to do a study of your own.
You might want to do a Meta-analysis
If enough of the studies are sufficiently similar then you can join all the numbers together from all the studies and analyse them as one big dataset. This is called a Meta-analysis, and its results are usually displayed as a forest plot: one horizontal line per study showing that study's estimate and confidence interval, with a diamond at the bottom showing the combined result.
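As a rough illustration of what 'joining all the numbers together' means, here is a minimal sketch of one common approach, fixed-effect inverse-variance pooling, on made-up numbers: each study is reduced to a log odds ratio and a standard error, and bigger, more precise studies get more weight.

```python
# Sketch of a fixed-effect, inverse-variance meta-analysis on made-up
# data. Each study contributes a log odds ratio and its standard error;
# studies are weighted by 1/SE^2 and combined into one pooled estimate.
import math

# (log odds ratio, standard error) for three hypothetical studies
studies = [(0.25, 0.12), (0.10, 0.20), (0.31, 0.15)]

# Inverse-variance weights: precise studies count for more
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval on the pooled log odds ratio
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled odds ratio: {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```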
Then?
You'll probably want to write everything up in a paper, publish it in a journal and take it to a conference or two. This is when you'll be glad you've got those careful records, as some of the people at those conferences will also have written articles on the subject, and they'll want to know why you didn't include their research in your review.
If you're lucky you won't have to go to all this trouble. Hopefully someone else will do all the work for you. If you find a well-conducted systematic review then you can be fairly sure that the conclusions are going to be close to the truth.
Problems with Systematic Review
No technique of finding out the truth is perfect, and there are a number of problems with Systematic Review.
Publication bias: Studies that show a positive result are more likely to be published than those that show a negative result. Journal editors are more interested in positive results, and scientists are less likely to submit negative results to journals. This could mean that your search doesn't find all the results, just the positive ones. There are a number of ways to test for this mathematically, the best known being to check whether small studies report systematically different results from large ones.
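One such check, sketched here on invented numbers rather than as the definitive method, is Egger's regression test: regress each study's standardised effect on its precision and see whether the intercept strays from zero, which would suggest the small studies are telling a different story from the big ones.

```python
# Sketch of Egger's regression test for funnel-plot asymmetry, one of
# the standard checks for publication bias. The study data are invented.
from scipy.stats import linregress

# (log odds ratio, standard error) for six hypothetical studies
studies = [(0.42, 0.30), (0.35, 0.25), (0.28, 0.18),
           (0.20, 0.12), (0.18, 0.10), (0.15, 0.08)]

precision = [1 / se for _, se in studies]          # x: 1/SE
standardised = [lor / se for lor, se in studies]   # y: effect/SE

res = linregress(precision, standardised)
# In Egger's test it's the intercept that matters: an intercept far
# from zero suggests small studies behave differently from large ones.
# (linregress's p-value tests the slope, so it isn't reported here.)
print(f"Egger intercept: {res.intercept:.2f} "
      f"(standard error {res.intercept_stderr:.2f})")
```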
Rubbish in – rubbish out: Meta-analyses are really useful if there are lots of well-conducted small studies that individually just aren't big enough to show an effect. If instead you find plenty of small studies that are all poorly conducted, then it doesn't make sense to combine them. You would be better off writing up the research and saying that the studies were not of high enough quality to draw a conclusion.