Over at Reason Magazine's Hit & Run blog, Jacob Sullum has criticized the Institute of Medicine (IOM) report on smoking bans and heart attacks for failing to consider relevant data and for reaching a pre-determined conclusion. In a post which he wittily but aptly entitles "Myocardial Infractions," Sullum describes how the IOM report not only failed to consider unpublished data which did not support the report's conclusion, but also drew conclusions that were inconsistent with the assertions made in the report itself.
While I had previously criticized the report for failing to consider unpublished, but relevant and significant data from Scotland, England, and Wales, Sullum points out that there are additional data from California, New York, Florida, Oregon, and the United States as a whole that were not considered and which do not support the report's conclusion that smoking bans result in substantial, short-term declines in heart attacks.
Sullum writes: "a closer look at the IOM report, which was commissioned by the U.S. Centers for Disease Control and Prevention, suggests its conclusions are based on a desire to promote smoking bans rather than a dispassionate examination of the evidence. Thousands of jurisdictions around the world restrict smoking. Some of them are bound to see significant drops in heart attacks purely by chance, while others will see no real change or significant increases. Focusing on the first group proves nothing unless it is noticeably bigger than the other two groups. The largest study of this issue, which used nationwide data instead of looking at cherry-picked communities, concluded that smoking bans in the U.S. 'are not associated with statistically significant short-term declines in mortality or hospital admissions for myocardial infarction.' It also found that 'large short-term increases in myocardial infarction incidence following a workplace ban are as common as the large decreases reported in the published literature.' That study, published by the National Bureau of Economic Research (NBER) in March, suggests that publication bias—the tendency to report positive findings and ignore negative ones—explains the 'consistent' results highlighted by the IOM committee. But even though the panelists say they tried to compensate for publication bias by looking for relevant data that did not appear in medical journals, they ignored the NBER paper, along with analyses that found no declines in heart attacks following smoking bans in California, Florida, New York, Oregon, England, Wales, Scotland, and Denmark."
Sullum also points out that although the report emphasized that few (only two) of the studies actually looked separately at heart attacks among smokers and nonsmokers, it nevertheless concludes that smoking bans specifically reduce heart attacks among nonsmokers by reducing acute secondhand smoke exposure. As Sullum writes: "If smoking bans reduce heart attacks, the effect could be due to declines in smoking, declines in secondhand smoke exposure, or both. The IOM report settles on that last explanation, quite a leap given that 'only two of the studies distinguished between reductions in heart attacks suffered by smokers versus nonsmokers.'"
Sullum concludes: "Siegel, who faults the IOM committee's 'sensationalistic' approach, is a longtime supporter of smoking bans who nevertheless tries to separate his political advocacy from his scientific analysis. It's too bad the authors of the IOM report, who immediately used it as an excuse to demand strict smoking regulations throughout the country, did not follow his example."
The Rest of the Story
Occasionally, a review of data on an epidemiologic issue will miss one or two studies. But in the case of the IOM report on smoking bans and heart attacks, the report fails to consider relevant, objective, population-based data from all of the following, each of which fails to find any short-term effect of smoking bans on heart attacks:
1. California
2. Florida
3. Oregon
4. England
5. Wales
6. Scotland
7. Denmark
8. New York
9. United States
The IOM committee states that it did not consider the data from these nine states or countries because they were unpublished. In an email, a committee member states: "The data from England, Scotland and Wales and their analyses referred to in your email are not found in the peer-reviewed literature and, therefore, were not reviewed in the committee’s report. It was beyond the scope of our study to seek out data available from all the municipalities, counties, states or countries that might be relevant to smoking bans and to conduct our own original studies."
In another email, a committee member acknowledges that he had not even seen the relevant data referred to above.
It seems clear that the committee did not consider these unpublished data in its report.
However, the press release states: "The IOM committee conducted a comprehensive review of published and unpublished data and testimony on the relationship between secondhand smoke and short-term and long-term heart problems."
If the report failed to consider the unpublished data and at least one of the committee members acknowledges not having even looked at that data, then why does the press release untruthfully state that the committee conducted a comprehensive review of unpublished data? How comprehensive a review is one in which the data are apparently not even examined? How comprehensive a review is it if the unpublished data are not even mentioned in the report?
The committee is of course free to restrict its analysis to published data, but you can't have it both ways. You can't restrict your analysis to published data and then lie to the public by telling them that you comprehensively reviewed the unpublished data as well.
Why not just tell the truth and state that you examined only published data, not unpublished data? Why is it that tobacco control groups seem to have so much trouble these days simply telling the truth?
The rest of the story is that the IOM report failed to consider important, significant, and relevant unpublished data from nine different states or countries which do not support the conclusion of a significant short-term reduction in heart attacks from smoking bans. Because of publication bias, this omission severely skews the report: clearly, only the positive studies are being published. This throws the report's conclusions into serious doubt.
But the other aspect to the rest of the story is that rather than simply acknowledge that they failed to examine unpublished data, the committee has essentially lied to the public by claiming that it conducted a comprehensive review of the unpublished data. Bias is one thing, but dishonesty is quite another.