Last week, an Institute of Medicine (IOM) committee released a report whose major conclusion was that smoking bans have a significant short-term effect on heart attacks, reducing the incidence of acute coronary event admissions to hospitals due in part to reduced secondhand smoke exposure. The press release headline read: "Smoking bans reduce the risk of heart attacks associated with secondhand smoke."
Although the report's conclusions have received widespread publicity, a little-noticed but severe flaw in the report's basic epidemiologic/biostatistical foundation renders its central conclusion invalid.
In this commentary, I attempt to explain the nature of that flaw.
The Rest of the Story
The report was very clear in asserting that the committee could draw no conclusion about the magnitude of the effect of smoking bans in reducing heart attack admissions. In fact, the report made it clear that the committee had no confidence in even estimating the magnitude of this effect.
According to the report: "However, because of the weaknesses discussed above and the variability among the studies, the committee has little confidence in the magnitude of the effects and, therefore, thought it inappropriate to attempt to estimate an effect size from such disparate designs and measures."
In other words, the committee is saying that it has no confidence in making any estimate of the size of the effect of smoking bans on heart attack rates.
In epidemiology/biostatistics, we call this the "point estimate." The point estimate is the estimate of the magnitude of a particular association or effect. In other words, what is the estimate of the percentage by which smoking bans reduce heart attacks? Is it 4%, 10%, 20%, 47%?
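To make the concept concrete, here is a minimal sketch in Python using entirely invented numbers (not drawn from any study): the point estimate is simply the percentage reduction in admissions.

```python
# Invented illustration: the point estimate is the percentage
# reduction in heart attack admissions after a hypothetical ban.
admissions_before = 1000  # annual admissions before the ban (invented)
admissions_after = 800    # annual admissions after the ban (invented)

point_estimate = 100 * (admissions_before - admissions_after) / admissions_before
print(f"Point estimate: {point_estimate:.0f}% reduction")  # -> 20% reduction
```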
The report makes it clear that we have no idea and that the studies are plagued with weaknesses such that there is no confidence in even making an estimate of the effect of smoking bans on heart attacks.
Note that I am not drawing my own conclusion about the quality of the studies. I am merely repeating what the report itself concludes.
Now, despite being unable even to guess what the point estimate might be, the report nevertheless clearly concludes that smoking bans cause a significant decrease in heart attacks. What this means is that the committee is certain that the 95% confidence interval around the point estimate for the effect of smoking bans does not include zero.
Suppose that the point estimate was 20%. Can one conclude that the effect on heart attacks is significant? It depends on the variability of the point estimate, which we express through a confidence interval: with 95% certainty, what range are we sure the true effect falls into?
Suppose that the 95% confidence interval goes from 15% to 25%. Then, while we're not exactly sure whether the true effect is 15%, 20%, or 25%, we are sure that the effect is no lower than 15%, and we certainly know that it is greater than 0%. In other words, we can conclude that there is a significant effect of smoking bans on heart attacks.
In contrast, suppose that the 95% confidence interval goes from -5% to 50%. While we think the best estimate of the effect is a reduction of 20%, it could be anywhere between an increase of 5% and a reduction of 50%. The confidence interval includes zero (0%), meaning that we cannot conclude that there is a significant effect of smoking bans on heart attacks, because it is possible that the true effect is zero.
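To make these two scenarios concrete, here is a minimal Python sketch using the hypothetical numbers above (a reduction is expressed as a positive percentage, an increase as a negative one):

```python
def ci_includes_zero(lower, upper):
    # A 95% confidence interval that spans zero cannot support
    # a claim of a statistically significant effect.
    return lower <= 0 <= upper

# Scenario 1: point estimate 20%, 95% CI from 15% to 25%
print(ci_includes_zero(15, 25))   # False -> a significant effect

# Scenario 2: point estimate 20%, 95% CI from -5% to 50%
print(ci_includes_zero(-5, 50))   # True -> cannot rule out zero effect
```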
I hope readers see that in order to conclude that there is a significant effect of smoking bans in reducing heart attacks, one would have to derive a confidence interval and that confidence interval could not include zero. Another way of saying this is that the "lower bound" of the confidence interval would have to be greater than zero.
In essence, what the IOM report is concluding is that we have no idea what the point estimate for the reduction of heart attacks by smoking bans is, but we are nevertheless sure that the lower bound of the confidence interval around that point estimate does not go down as far as zero.
But there are two problems here.
First, you can't estimate the confidence interval unless you make some guess about the variability around the point estimate of the purported effect. If you haven't estimated a confidence interval, then you can't possibly conclude that the confidence interval doesn't include zero.
Second, if you can't even take a guess at a point estimate, then even if you know the variability around that estimate, you can't figure out the lower bound of the confidence interval, because you don't know where to start counting down from.
In other words, if you are not able to make a point estimate, then you have no way of knowing what the lower bound of the confidence interval is.
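To put the arithmetic plainly: under the usual normal approximation (my assumption here; the report supplies none of these numbers), the lower bound of a 95% confidence interval is the point estimate minus 1.96 standard errors. Both inputs are required, as this sketch with invented values shows:

```python
def lower_bound_95(point_estimate, standard_error):
    # The lower bound of a 95% confidence interval under a normal
    # approximation: you need BOTH the point estimate (where to start
    # counting down from) and the variability around it.
    return point_estimate - 1.96 * standard_error

# With a hypothetical point estimate of 20% and standard error of 2.5%:
print(lower_bound_95(20.0, 2.5))  # 15.1 -> lower bound above zero

# With no point estimate at all, the lower bound is simply undefined:
# lower_bound_95(None, 2.5) would raise a TypeError, because there is
# nothing to subtract from.
```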
Do you see that the IOM report's conclusion is based on a complete leap of faith? What the report is saying is that we have no idea what the lower bound of the confidence interval is; however, we are nevertheless certain that it does not extend down as far as zero.
This is equivalent to drawing a pre-determined conclusion. If you are not willing to actually see what the true confidence interval is before drawing a causal conclusion, then you might as well just draw your conclusion prior to doing the actual review. It is a leap of faith, because it is accepting something without empirical evidence.
The committee provides no evidence about what the lower bound of the confidence interval is and it makes no attempt to estimate either a point estimate or a lower bound of the confidence interval around that estimate. Nevertheless, despite the complete absence of any empirical evidence of what that lower bound is, they are quite certain that the lower bound is greater than zero.
Note that I am not, in this commentary, even getting into the issue of how the failure to consider unpublished, but highly reliable, meaningful, relevant, and population-representative data from Scotland, England, Wales, Denmark, and the United States as a whole would lead to a biased point estimate. I am just noting that even taking the report's conclusions as a given, its ultimate conclusion is not supported and appears to be a leap of faith rather than a serious attempt to use the evidence to derive a lower bound for the confidence interval of any possible effect.
Finally, I have to say that if one of my students handed in a paper that performed an epidemiologic analysis of a potential causal relationship and concluded that it was impossible even to guess at the point estimate of the purported effect, I would hope that the student would not end the paper by stating: "While we have no idea what the point estimate is or the variability around that point estimate, I conclude nevertheless that the confidence interval must not cross zero." I suspect I would give the student a failing grade on the paper.
Just to be clear, I would love to be able to take a leap of faith and conclude that all of my efforts over the past 24 years have resulted in policies that produced dramatic declines in heart attacks within one to two years. But as scientists, our role is not to take leaps of faith. It is to consider the empirical evidence and base our conclusions on that scientific evidence.
It is not the IOM committee's fault that the underlying studies are fraught with severe weaknesses, that only a few of them employed comparison groups, and that the existing evidence is simply not sufficient to even guess as to the point estimate for a purported effect. But you aren't required to draw a causal conclusion from weak data. You could also come out and say: "The evidence is suggestive of an effect, but we simply don't have enough evidence to draw a definitive causal conclusion at this point. We just can't rule out the possibility that random variation in heart attacks, especially in small communities, and the existing secular trend of decreasing heart attacks due to substantial advances in medical treatment for heart disease during the time period of these studies are a plausible alternative explanation for the observed declines in heart attack rates in these studies."
Instead, it appears that the report's authors felt it necessary to draw a definitive causal conclusion, despite what is essentially their own admission that there is insufficient evidence to support one.