A new study conducted by researchers at Mississippi State University concludes that smoking bans in Starkville and Hattiesburg resulted in a significant decline in heart attacks. The study was announced via a press release issued by the University.
The study methods were as follows: "These studies applied a controlled observational approach to objectively examine the hypothesized impact of smoke-free laws on hospital admissions for heart attacks in Starkville and in Hattiesburg. The Starkville Study examined the number of heart attack admissions between July 29, 2004 and April 7, 2009; while the Hattiesburg Study examined admissions between April 21, 2005 and June 30, 2009. Each study compared the number of heart attack admissions among people living within the city limits to those living in the local hospital catchment area, but outside of the city limits (and thus not protected by the smoke-free law)."
The results for Starkville were reported as follows: "During the 1,053 day period following the implementation of the smoke-free ordinance in Starkville, there were 38 heart attack admissions among Starkville residents, compared to the standardized rate of 52.57 admissions prior to the implementation. Outside of Starkville, there were 19 heart attack admissions, compared to the standardized rate of 22.30 admissions prior to the implementation. Thus, Starkville residents experienced a 27.7% reduction in heart attack admissions compared to the 14.8% reduction observed among those who did not live in Starkville."
For Hattiesburg, the reduction in heart attacks among residents was 13.4%, compared to a 3.8% reduction in heart attacks among non-residents.
Based on the finding that the decline in heart attacks among Starkville residents was greater than that among non-Starkville residents, the study concludes that the difference was due to the smoking ban and that during the study period, the ban resulted in a cost savings of $288,270. Using a similar calculation for Hattiesburg, the report claimed that the smoking ban resulted in a savings of $2.4 million.
The Rest of the Story
This study violates the most fundamental principle of epidemiology and biostatistics: you must evaluate any scientific hypothesis to see whether the results could be explained by chance. In other words, you must determine whether your results are statistically significant.
No scientific journal will publish findings such as these without some test to see if the difference in the reduction in heart attacks between Starkville and non-Starkville residents is statistically significant. Nor should they. Without such a statistical test, we cannot determine whether the observed difference in the decline in heart attacks reflects a true difference, or whether it is merely due to chance.
I will focus on the Starkville findings, but the same arguments apply to the Hattiesburg results as well.
The study reports that heart attacks in the Starkville hospital declined by 27.7% after the smoking ban among Starkville residents, but by only 14.8% among non-Starkville residents. Based on the fact that the decline among Starkville residents was greater, the study concludes that the smoking ban was the cause of the difference.
Now suppose that instead of a 14.8% decline among non-Starkville residents, the decline had been 26.8%. Would the paper still conclude that there was a significant difference between the degree of decline among Starkville and non-Starkville residents? Presumably not. With such small sample sizes (there were only a total of 104 heart attacks among the entire study population during the entire study period), it is not possible to conclude that a decline of 27.7% and a decline of 26.8% are statistically different.
What if the decline among non-Starkville residents was 25.4%? Would that be statistically different from a decline of 27.7%? Most people looking at the small sample size would think: "probably not."
OK, then. At what level of decline would the difference become statistically significant?
The answer is: you can't tell just by looking at the data. You have to analyze the data using some statistical test to assess the significance of the difference in the change in heart attacks. This is a most basic principle of scientific analysis.
The rest of the story is that, shockingly, this study presents absolutely no statistical analysis. No test is conducted to determine whether the observed 27.7% decline in heart attacks among Starkville residents is statistically different from the observed 14.8% decline among non-Starkville residents.
This would be equivalent to a pollster conducting a poll with a sample size of 300 showing that 50.1% of likely voters intend to vote for Harry Reid and 49.9% prefer Sharron Angle, and then concluding that Harry Reid can be penciled in as the winner. You have to know the margin of error. With a sample size of just 300, the margin of error is about +/- 6 percentage points. Clearly, Reid and Angle would be in a statistical dead heat, and you'd have to be devoid of any scientific integrity to claim that your poll showed a significant difference in preference for these two candidates.
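For the record, the polling arithmetic can be checked with the textbook formula for the margin of error of a sample proportion, z * sqrt(p(1-p)/n), at its worst case p = 0.5. The 95% figure for n = 300 comes out to roughly +/- 6 points, which only strengthens the point: a 0.2-point gap between the candidates is buried deep inside that margin.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

moe = margin_of_error(300)
print(f"n=300: +/- {100 * moe:.1f} percentage points")  # about +/- 5.7 points
```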
It is therefore shocking that this study makes no attempt to assess the significance of the difference between a 27.7% and a 14.8% decline in heart attacks. Even more shocking is that the study is willing not only to draw a causal conclusion in the absence of any such statistical comparison, but to go so far as to calculate the exact number of dollars saved as a result of the smoking ban, without first determining whether there is any real statistical difference between a 27.7% and a 14.8% decline based on only 57 post-ban heart attacks.
There are various ways one can statistically compare the difference in the declines of 27.7% and 14.8%. However, in this study, no method was used at all.
I did my own calculations based on the results reported in the study. Even using a conservative approach that maximizes the likelihood of finding a statistically significant difference, I found that the difference between the two rates of decline was not even close to statistically significant.
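For illustration, here is one such test (my sketch, not necessarily the calculation a journal reviewer would choose): compare the two rate ratios on the log scale, treating the four heart attack counts as Poisson, so that the log of the ratio of rate ratios is approximately normal with variance equal to the sum of the reciprocals of the counts.

```python
import math

# Heart attack counts reported in the study
stark_pre, stark_post = 33, 38    # Starkville residents
out_pre, out_post = 14, 19        # non-Starkville residents
days_pre, days_post = 660, 1053

def rate_ratio(pre, post):
    """Post-ban rate divided by pre-ban rate (a ratio below 1 is a decline)."""
    return (post / days_post) / (pre / days_pre)

# The log of the ratio of the two rate ratios is approximately normal,
# with variance equal to the sum of reciprocals of the four Poisson counts
log_rrr = math.log(rate_ratio(stark_pre, stark_post) / rate_ratio(out_pre, out_post))
se = math.sqrt(1/stark_pre + 1/stark_post + 1/out_pre + 1/out_post)
z = log_rrr / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, two-sided p = {p:.2f}")
```

On these numbers the test gives a z of roughly -0.4 and a p-value of roughly 0.7, nowhere near the conventional 0.05 threshold.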
This is not surprising when you look at the actual numbers. Among Starkville residents, there were 33 heart attacks during the 660 days prior to the smoking ban and 38 heart attacks during the 1053 days post-ban. Among non-Starkville residents, there were 14 heart attacks during the 660 days pre-ban, and 19 heart attacks during the 1053 days post-ban.
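These raw counts reproduce the study's own figures: scaling each pre-ban count up to the 1,053-day post-ban window gives the "standardized rates," and comparing the observed post-ban counts against them gives the reported percent declines. A quick sketch of the arithmetic:

```python
days_pre, days_post = 660, 1053

def standardized_decline(pre_count, post_count):
    """Pre-ban count scaled to the post-ban period, and the percent decline."""
    expected = pre_count * days_post / days_pre
    return expected, 100 * (1 - post_count / expected)

exp_stark, dec_stark = standardized_decline(33, 38)   # Starkville residents
exp_out, dec_out = standardized_decline(14, 19)       # non-Starkville residents

print(f"Starkville: expected {exp_stark:.2f}, observed 38, decline {dec_stark:.1f}%")
print(f"Outside:    expected {exp_out:.2f}, observed 19, decline {dec_out:.1f}%")
```

This yields expected counts of 52.65 and 22.34 and declines of 27.8% and 14.9%, essentially matching the study's 52.57, 22.30, 27.7%, and 14.8% (the small gaps presumably reflect the study's exact standardization).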
Suppose that instead of 19 post-ban heart attacks among non-Starkville residents, there had been just 16 post-ban heart attacks. That's a difference of just 3 heart attacks!
Had that been the case, then there would have been a greater decline in heart attacks among non-Starkville residents than among Starkville residents!
This demonstrates how these findings have no statistical significance. A difference of just 3 heart attacks would have completely negated the study findings.
The same thing is true if there had been 3 more pre-ban heart attacks among non-Starkville residents. Had that occurred, the decline for non-Starkville residents would have been 30.0%, compared to 27.7% for Starkville residents.
In fact, had there been just 1 fewer pre-ban heart attack for Starkville residents, 1 more pre-ban heart attack for non-Starkville residents, 1 more post-ban heart attack for Starkville residents, and 1 fewer post-ban heart attack for non-Starkville residents, the results would have completely reversed, and by its own logic, the study would have had to conclude that the smoking ban resulted in an increase in heart attacks (the decline in heart attacks among Starkville residents would have been only 24%, compared to 25% among non-Starkville residents).
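This reversal is easy to verify with the same arithmetic: shifting a single heart attack in each cell flips which group shows the larger decline.

```python
days_pre, days_post = 660, 1053

def decline(pre_count, post_count):
    """Percent decline in the heart attack rate, pre-ban vs. post-ban."""
    return 100 * (1 - (post_count / days_post) / (pre_count / days_pre))

# As reported: Starkville shows the larger decline
as_reported = (decline(33, 38), decline(14, 19))     # about 27.8% vs. 14.9%

# Shift one heart attack per cell: Starkville 32 pre / 39 post,
# non-Starkville 15 pre / 18 post, and the comparison reverses
shifted = (decline(32, 39), decline(15, 18))         # about 23.6% vs. 24.8%

print(as_reported, shifted)
```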
Findings this exquisitely sensitive to the shift of a single heart attack here and there deserve no confidence in their statistical meaning. Clearly, the possibility that these are just chance differences cannot be ruled out given the small sample size. Nevertheless, the study goes as far as telling us the exact cost savings from the heart attacks averted due to the smoking ban.
Had the study provided a simple additional piece of information - the confidence intervals around the key estimates in the study (i.e., the decline in heart attacks for Starkville and non-Starkville residents) - it would have been readily apparent that the study findings are not statistically significant. By my calculations, the confidence intervals around the 27.7% and 14.8% point estimates overlap, meaning that one cannot conclude that the 27.7% and 14.8% figures are statistically different from one another.
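For what it's worth, approximate confidence intervals can be computed on the log rate-ratio scale, treating the heart attack counts as Poisson (again my own sketch, not the study's method). Both intervals are wide, overlap almost entirely, and each actually includes zero decline:

```python
import math

days_pre, days_post = 660, 1053

def decline_ci(pre, post, z=1.96):
    """Percent decline in heart attack rate with an approximate 95% CI,
    from the log rate ratio with Poisson variance 1/pre + 1/post."""
    rr = (post / days_post) / (pre / days_pre)
    half_width = z * math.sqrt(1 / pre + 1 / post)
    rr_hi = math.exp(math.log(rr) + half_width)
    rr_lo = math.exp(math.log(rr) - half_width)
    # A higher rate ratio means a smaller decline, so the bounds swap
    return 100 * (1 - rr), 100 * (1 - rr_hi), 100 * (1 - rr_lo)

for name, pre, post in [("Starkville", 33, 38), ("non-Starkville", 14, 19)]:
    est, lo, hi = decline_ci(pre, post)
    print(f"{name}: {est:.1f}% decline, approx 95% CI ({lo:.1f}%, {hi:.1f}%)")
```

On these counts the Starkville interval runs from roughly -15% to +55%, and the non-Starkville interval from roughly -70% to +57%. Neither decline is even distinguishable from no change at all, let alone from the other.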
For the life of me, I cannot understand why studies of the relationship between smoking bans and heart attacks seem to bring out the weakest science. I can't think of another field of inquiry in which I've seen conclusions like these drawn without any statistical analysis whatsoever.
It certainly appears that we are dealing with a pre-determined conclusion and that the research is being done solely for the purpose of proving that pre-determined conclusion. The interest in addressing the research question as a legitimate scientific one is simply not there. What is the point of doing the research, however, if we are not going to actually objectively analyze the data?
(Thanks to Michael J. McFadden for the tip.)