A new meta-analysis that is in press at Preventive Medicine purports to show that smoking bans result in a 19% immediate decline in heart attack admissions. The meta-analysis pools data from 8 published studies which have examined changes in heart attack admissions following implementation of smoking bans. The study, funded by the National Cancer Institute, concludes that smoking bans result in an immediate reduction in heart attacks, estimating the drop to be 19%, with a 95% confidence interval of 14% to 24% (see: Glantz SA. Meta-analysis of the effects of smokefree laws on acute myocardial infarction: an update. Preventive Medicine 2008 [in press]).
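To make concrete what "pooling" eight studies means, here is a minimal sketch of standard inverse-variance (fixed-effect) meta-analysis on relative risks. The study numbers below are invented for illustration; they are not the eight studies actually pooled in the paper.

```python
import math

# Hypothetical per-study results: relative risk of heart attack admissions
# after a ban, with 95% confidence limits. Illustrative numbers only.
studies = [
    (0.84, 0.70, 1.01),
    (0.89, 0.81, 0.98),
    (0.73, 0.58, 0.92),
]

# Pooling is done on the log scale: each study's standard error is
# recovered from its CI width, each study is weighted by 1/SE^2, and the
# weighted mean is transformed back to a relative risk.
num = den = 0.0
for rr, lo, hi in studies:
    log_rr = math.log(rr)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
    num += log_rr / se**2
    den += 1 / se**2

pooled = math.exp(num / den)
se_pooled = math.sqrt(1 / den)
ci = (math.exp(num / den - 1.96 * se_pooled),
      math.exp(num / den + 1.96 * se_pooled))
print(f"pooled RR = {pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

Note what this arithmetic does and does not do: the weighting tightens the confidence interval as studies accumulate, but it inherits every bias of the inputs. Pooling invalid studies just produces a precise-looking invalid number.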
The Rest of the Story
Unfortunately, a meta-analysis is only as good as the studies that go into it. Since the studies used in this meta-analysis have each been discredited, the meta-analysis is not of particular value. A meta-analysis cannot legitimately show that smoking bans dramatically and immediately reduce heart attacks if the individual studies have no validity in drawing such conclusions because they are based on shoddy science.
So it troubles me that tobacco control researchers would even think of conducting a meta-analysis at this early point in time, when we don't even have a single convincing study to suggest that there is a causal relationship between smoking bans and immediate, dramatic reductions in heart attacks.
But more troubling to me is the fact that researchers would include in a meta-analysis (of the effects of smoking bans on heart attacks) studies in which there is no control or comparison group to determine whether observed changes in heart attacks are merely mirroring trends that are occurring everywhere, despite the smoking ban.
If smoking ban opponents produced economic impact studies using the same methodology (showing that there was a decline in restaurant sales or a decrease in the number of restaurants in a particular location, but not employing a control or comparison location), we in tobacco control would trash those studies, pointing out that without a comparison group, one cannot legitimately demonstrate that the smoking ban was what caused the change in restaurant business.
Professor Glantz certainly understands the importance of a comparison group because in his own study (the Helena study), he used a comparison group of non-Helena residents to make sure that the observed changes in heart attacks that occurred in Helena did not also occur outside Helena.
The authors of the studies in Pueblo and Bowling Green also understood the importance of a comparison group because their conclusions were largely based on the finding of a reduction in heart attacks in those cities which did not occur in the comparison areas (El Paso County and Kent, respectively).
Unfortunately, only 3 of the 8 studies used in the meta-analysis employed a comparison group (Helena, Pueblo, and Bowling Green). The other 5 studies (two from Italy, plus Ireland, Saskatoon, and New York State) did not have a comparison group.
There is simply no way that the studies without any comparison group should have been included in this meta-analysis. I don't believe that an objective scientific approach would allow one to use such studies. How can one possibly know whether the observed changes in heart attacks were simply a reflection of changes that were taking place everywhere, or at least in similar, neighboring areas?
You can't possibly know that unless you specifically check for it. And 5 of these studies failed to do that.
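The point can be made with a toy before/after calculation. Suppose heart attack admissions are falling about 5% a year everywhere (say, because of better cardiac care), and a ban has no true effect. All the admission counts below are invented for illustration.

```python
# A city that enacted a ban, and a comparison area that did not.
# Both experience the same secular 5% decline; the ban does nothing.
ban_city_before, ban_city_after = 400, 380
control_before, control_after = 800, 760

# Naive before/after estimate (what a study with no comparison group
# reports): it attributes the entire secular trend to the ban.
naive = (ban_city_after - ban_city_before) / ban_city_before
print(f"naive change in ban city: {naive:.1%}")   # -5.0%

# Difference-in-differences: subtract the change observed in the
# comparison area, which strips out the shared trend.
control_change = (control_after - control_before) / control_before
did = naive - control_change
print(f"difference-in-differences: {did:.1%}")    # 0.0%
```

The naive estimate "finds" a 5% drop that has nothing to do with the ban; the comparison group correctly reports no effect. This is exactly the check that five of the eight pooled studies never performed.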
The more I examine these smoking ban and heart attack studies, the more I am realizing that tobacco control science has become a highly biased field. The bias is so apparent in these studies that it is practically dripping off the pages. Why the journals' peer reviewers do not pick this up is a mystery to me. One possibility, however, is that it is the same set of also-biased tobacco control researchers who are reviewing these articles.
I would be very interested to see how a statistician or econometrician - someone not associated with the tobacco control movement in any way - would review these studies.
Another major problem with the meta-analysis is that it fails to address the very strong possibility of publication bias. It is very likely that tobacco control researchers have only written manuscripts about this issue when they have found or suspected a decline in heart attacks. There has so far been no systematic study of changes in heart attacks in a number of locations to see objectively whether or not this hypothesis is correct. All the studies have been single-site studies.
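Publication bias of this kind leaves a detectable signature: if only the studies that happened to find declines get written up, small imprecise studies will report implausibly large effects. One standard check is an Egger-style regression of the standardized effect on precision; in an unbiased literature the intercept should be near zero. The study data below are invented to mimic a selectively published literature, and this is a sketch of the diagnostic, not an analysis of the actual eight studies.

```python
# Hypothetical (log relative risk, standard error) pairs, constructed so
# that the smallest, least precise studies report the largest declines,
# as one would expect under publication bias.
effects = [(-0.35, 0.20), (-0.28, 0.15), (-0.18, 0.10),
           (-0.12, 0.07), (-0.10, 0.05)]

# Egger-style regression: standardized effect (y/SE) against precision
# (1/SE), via ordinary least squares. A large nonzero intercept is the
# classic small-study / publication-bias signal.
xs = [1 / se for _, se in effects]
ys = [y / se for y, se in effects]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx
print(f"Egger intercept: {intercept:.2f} (far from zero -> asymmetry)")
```

A funnel plot of the same data would show the asymmetry visually. None of this can be done from a single-site study, which is why a systematic, multi-site analysis is the only way to settle the question.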
I myself have examined the heart attack data throughout the U.S. in a systematic way, and based on my review of these data, I was not able to find any evidence that statewide smoking bans led to a dramatic, immediate decline in heart attack admissions. I suspect that if a systematic study were conducted, it would not find any dramatic effect. I think that publication bias is a severe problem in this situation.
The bottom line is that I think it is far too premature to be conducting a meta-analysis in the first place. But if you are going to conduct one, at least have some decent criteria for inclusion of studies.
The rest of this story is not so much about whether smoking bans affect heart attacks or not. It is more about how investigator bias is creeping into tobacco control research these days. And I'm not sure what can be done to stop it.