I recently re-examined an article that is widely cited by tobacco control groups as supporting the contention that chronic exposure to secondhand smoke causes heart disease -- a claim that, let me acknowledge right off the bat, I believe is scientifically sound. In re-evaluating the article, however, I noticed something disturbing: the study shows evidence of a severe investigator bias, one that raises serious questions about the scientific objectivity of the tobacco control movement.
The paper, published in 1997 in the British Medical Journal (BMJ), is entitled "Environmental tobacco smoke exposure and ischaemic heart disease: An evaluation of the evidence" (see Law MR, Morris JK, Wald NJ. BMJ 1997; 315:973-980).
The article presents a meta-analysis of 19 "acceptable" published studies of the risk of heart disease among nonsmokers living with smokers compared with nonsmokers not living with smokers, and reports a pooled relative risk of 1.30 (95% confidence interval, 1.22-1.38).
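For readers who want to see the mechanics behind a figure like that: a pooled relative risk of this kind is typically computed by inverse-variance weighting of the individual studies' log relative risks. Here is a minimal sketch in Python, using made-up numbers for illustration rather than the actual data from the 19 studies:

```python
import math

# Hypothetical (RR, 95% CI lower, 95% CI upper) values for three studies.
# These are made-up numbers for illustration, not data from the paper.
studies = [(1.25, 1.05, 1.49), (1.40, 1.10, 1.78), (1.15, 0.90, 1.47)]

weights, weighted_logs = [], []
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    # Recover the standard error of the log RR from the 95% CI width.
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    w = 1 / se ** 2                # inverse-variance weight
    weights.append(w)
    weighted_logs.append(w * log_rr)

pooled_log = sum(weighted_logs) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"pooled RR = {math.exp(pooled_log):.2f} "
      f"(95% CI {math.exp(pooled_log - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log + 1.96 * pooled_se):.2f})")
```

Nothing about the pooling arithmetic itself is controversial; the question is which studies get into the pool.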
The Rest of the Story
I was curious as to what constituted an "acceptable" study. It turns out that 21 studies were identified that met the inclusion criteria for the meta-analysis. However, the authors deemed two of them (references 35 and 36 in the paper) not acceptable and excluded them from the meta-analysis.
Nowhere in the methods section of the article do the authors state why these two studies were excluded, nor do they offer any objective criteria by which the inclusion or exclusion decision was to be made.
Instead, in the discussion section of the paper, the following explanation is offered for the decision to exclude these two studies:
"A separate analysis of one of the studies of environmental tobacco smoke exposure and ischaemic heart disease in the set of 19 studies (fig 1), and of two data sets not published elsewhere (from the US National Center for Health Statistics and the American Cancer Society) has been published by Layard and LeVois, consultants to the tobacco industry. They reported a combined relative risk estimate from the three studies of 1.00, with a narrow 95% confidence interval (0.97 to 1.04). This negative result is statistically inconsistent with the estimate of 1.30 (1.22 to 1.38) from the above analysis of 19 studies (P<0.001). The difference is too great for the two groups of studies to be combined as separate valid estimates; one must be flawed. We took the estimate from the 19 studies as valid and rejected that of Layard and LeVois, since there is no reason to reject an analysis based on 19 independent studies in favour of one from a single group with a vested interest."
To be completely honest, this reasoning was quite shocking to me. Essentially, what the paper is saying is that an "acceptable" study is one that contributes to finding an effect of secondhand smoke on heart disease, while an "unacceptable" study is one that finds no such effect.
In other words, what the paper is explaining is that these two studies were excluded specifically because they failed to find a significant increase in heart disease risk associated with secondhand smoke exposure!
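As an aside, the arithmetic behind the quoted P<0.001 checks out: the two estimates really are statistically inconsistent. Here is a minimal sketch in Python that reproduces the comparison from the confidence intervals quoted in the paper; the problem is not the calculation, but the inference drawn from it:

```python
import math

def log_rr_and_se(rr, lo, hi):
    """Convert a relative risk and its 95% CI to the log scale."""
    return math.log(rr), (math.log(hi) - math.log(lo)) / (2 * 1.96)

# The two estimates as quoted in the paper
log_19, se_19 = log_rr_and_se(1.30, 1.22, 1.38)  # pooled, 19 studies
log_ll, se_ll = log_rr_and_se(1.00, 0.97, 1.04)  # Layard and LeVois

# z-test for the difference between two independent log relative risks
z = (log_19 - log_ll) / math.sqrt(se_19 ** 2 + se_ll ** 2)
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # two-sided

print(f"z = {z:.1f}, p = {p:.1e}")  # z comes out around 7.3, p far below 0.001
```

A significant difference between two pooled estimates tells you only that the two sets of studies disagree; it says nothing about which set is wrong.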
This paper has committed perhaps the cardinal sin of meta-analysis: deciding which studies to include or exclude after the fact, based on their results rather than on their methodologic quality. If you first conduct your meta-analysis among studies you know to support your pre-determined conclusion, and then exclude any studies that you know do not support that conclusion, you are guaranteed to reach your pre-determined conclusion!
I don't know how something like this passed peer review. Mistakes like this are sometimes made, however, and the point is not how the mistake got through but that the article would use this type of reasoning in the first place.
Think about it this way: you have a collection of studies, most of which support your hypothesis but several of which do not. You have two choices. First, you can combine all the studies; if you do, there is a risk that the negative studies will wash out (negate) or dilute the positive ones. Second, you can combine only the positive studies to derive an estimate of the effect, show that the negative studies are inconsistent with that estimate, and then argue against including the negative studies because they are inconsistent. This is precisely the reasoning used in this paper.
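To make the circularity concrete, here is a quick simulation (my own illustration, not anything from the paper): generate studies of a truly null effect, throw away the ones that happen to point in the negative direction, and pool the rest. The exclusion rule here is a deliberately simplified caricature of the paper's "inconsistency" criterion, but the effect is the same:

```python
import math, random

random.seed(0)  # reproducible illustration

def simulate_study():
    """One hypothetical study of a truly null effect (true RR = 1.0)."""
    se = random.uniform(0.05, 0.25)  # assumed range of standard errors
    log_rr = random.gauss(0.0, se)   # observed log RR scatters around zero
    return log_rr, se

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooled relative risk."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled_log = sum(w * lr for w, (lr, _) in zip(weights, studies)) / sum(weights)
    return math.exp(pooled_log)

studies = [simulate_study() for _ in range(21)]

# Honest analysis: pool all 21 studies; the result hovers near the true RR of 1.0.
print(f"all studies:    pooled RR = {pooled_rr(studies):.2f}")

# Circular analysis: first discard every study whose point estimate is
# negative (a simplified stand-in for excluding "inconsistent" studies),
# then pool what remains.
positive_only = [(lr, se) for lr, se in studies if lr > 0]
print(f"positives only: pooled RR = {pooled_rr(positive_only):.2f}")
```

Even though the true relative risk is exactly 1.0, the positives-only pool comes out above 1.0 by construction; the conclusion is baked into the selection procedure.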
Before discussing the implications, let me first make several points to head off objections that I know some readers will raise:
1. I am not arguing here that secondhand smoke does not cause heart disease. Even with the two excluded studies included, a look at Figure 1 in the paper still shows an overall finding of a small increased heart disease risk among those exposed to secondhand smoke.
2. I am not arguing here that the two excluded studies necessarily belonged in the analysis. There may be valid methodologic reasons to exclude them. However, methodologic flaws are not the reason the paper gives for their exclusion.
3. That the studies in question were published by authors with tobacco industry ties is not the reason given by the paper for excluding them. Even if it were, the source of the funding would probably not be sufficient reason to exclude the studies. But the point is moot, because the article makes clear that the two studies were excluded not because they were commissioned by the industry, but because they failed to find positive results.
This paper was a revelation to me, because it is something I had previously failed to examine closely: I assumed that a paper of this nature, published in a journal like the BMJ, would naturally have used reasonable criteria for study inclusion and exclusion. It would never have occurred to me that the decision to exclude studies was made after the positive studies were combined, and that the decision was based on whether or not a study found an effect.
There are several important implications of this revelation.
First, it suggests that there is a serious bias inherent in the tobacco control movement, one that raises grave concerns about the movement's scientific objectivity.
I have already expressed similar concerns based on the fallacious claims being made by a large number of anti-smoking groups about the acute cardiovascular effects of secondhand smoke. But this is the first instance in which I have found blatant scientific bias in the literature on the chronic effects of secondhand smoke.
Second, it suggests that the tobacco control movement has a crisis of scientific integrity on its hands. We need to respond to this crisis immediately and definitively in order to reclaim our scientific integrity and prevent a loss of credibility.
Third, it suggests that the loss of objectivity in tobacco control is not restricted to the past eighteen months - a period during which I have been documenting numerous examples of misrepresentation of the science by anti-smoking groups, including by the Surgeon General. This particular story would have made my blog headlines in 1997, if I had had a blog at that time (or even knew what a blog was).
Finally, it reminds us that the peer review process, while essential and usually effective, is not perfect. As I teach my students, one must always take the time to critically evaluate any published study, no matter how prestigious the journal. There is no substitute for careful evaluation of the published literature, and the scientific review process must not end when the journal's peer review ends.