Tuesday, May 22, 2007

New Study Concludes that Bowling Green Smoking Ban Reduced Heart Disease Admissions by 47%; Unfortunately, Science is Weak and Conclusions Unjustified

A new study published online ahead of print in the journal Preventive Medicine concludes that a smoking ban in Bowling Green, Ohio resulted in a 47% decline in hospital admissions for heart disease. The study compared the standardized monthly rates of hospital admissions for heart disease in Bowling Green (which implemented a smoking ban in March 2002) and Kent (which did not implement a smoking ban) from January 1999 through June 2005 (Khuder SA, Milz S, Jordan T, Price J, Silvestri K, Butler P. The impact of a smoking ban on hospital admissions for coronary heart disease. Preventive Medicine 2007 [in press]).

The study concludes: "A reduction in admission rates for smoking-related diseases was achieved in Bowling Green compared to the control city. The largest reduction was for coronary heart disease, where rates were decreased significantly by 39% after 1 year and by 47% after 3 years following the implementation of the ordinance. ... The findings of this study suggest that clean indoor air ordinances lead to a reduction in hospital admissions for coronary heart disease, thus reducing health care costs."

The Bowling Green ordinance eliminated smoking in public places, including restaurants, but exempted free-standing bars. Bar areas of restaurants were also exempted, as long as they were isolated in an enclosed room.

The Rest of the Story

Unfortunately, the study's conclusions are, in my view, unsupported by the data, and the science behind the study is quite poor. Like the other studies that have claimed to find a drastic reduction in heart attack admissions attributable to a smoking ban, this one is yet another example of shoddy science making its way into tobacco control research.

The chief flaw of the study is that it is unable to rule out the very likely possibility that the observed changes in heart disease admissions in Bowling Green during the study period are due primarily to random variation, rather than to the smoking ban.

To see what I mean, let's look at the actual data. Here are the annual standardized heart disease admission rates for Bowling Green during the study period. Note that the ordinance went into effect in March 2002:

1999: 35
2000: 24
2001: 24
2002: 36
2003: 22
2004: 26

The paper presents data for 2005, but since only the first six months of data are available, it is not valid to compare the 2005 figures with preceding years. The paper simply doubles the admission rate from the first six months, but that is invalid given the well-established seasonal variation in these rates.

You can see several important things by examining these data.

First, there is tremendous natural (random) variation in the heart disease admission rates in Bowling Green. Because we are dealing with such small numbers of admissions, the percentage change in admissions from one year to the next is very high, even without any smoking ban. For example, from 1999 to 2000, there was a 31% decline in admissions. From 2001 to 2002, there was a 50% increase in admissions. Clearly, these changes were not due to the smoking ban. They reflect, at least in part, the underlying random variation in these data.
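These year-to-year swings can be computed directly from the annual rates listed above; here is a quick sketch in Python (using the rates as reported in the paper):

```python
# Annual standardized heart disease admission rates for Bowling Green,
# as listed above (the ban took effect in March 2002).
rates = {1999: 35, 2000: 24, 2001: 24, 2002: 36, 2003: 22, 2004: 26}

years = sorted(rates)
for prev, curr in zip(years, years[1:]):
    change = (rates[curr] - rates[prev]) / rates[prev] * 100
    print(f"{prev} -> {curr}: {change:+.0f}%")
```

The output shows swings of -31%, +0%, +50%, -39%, and +18% in successive years. Note that the "39% decline following the ban" is simply the 2002-to-2003 change, and it is no larger than the swings that occurred with no ban at all.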

Given the fact that annual changes in heart attack admission rates of between 30% and 50% are common in Bowling Green, it is completely unjustified to conclude that the observed 39% decline in the first year following the smoking ban was attributable to the smoking ban.

More likely, it was simply random variation that led to an "abnormally" high heart disease admission rate in 2002. The rate was bound to fall in 2003 due simply to this pattern of random variation.

Does the study conclude that the absence of a smoking ban in Bowling Green caused the 31% decline in admissions in Bowling Green from 1999 to 2000? Of course not. That "drastic" decline was due to absolutely nothing. Just random variation in the data.

Does the study conclude that the whopping 50% increase in heart disease admissions between 2001 and 2002 was due to the implementation of the smoking ban? Of course not. That would be an unfounded conclusion given the high degree of random variation in these data.

The second very important thing to notice is that the rate of heart disease admissions in Bowling Green was exactly the same before the smoking ban as after the smoking ban. In 2001, the year prior to the smoking ban, the rate was 24. In 2003, the year after the ban, the rate was 22. In 2004, two years after the smoking ban, the rate was 26. Thus, the average rate in the first two years following the smoking ban was 24 - exactly the same as it was prior to the ban.

How can one possibly conclude that the Bowling Green smoking ban decreased heart disease admissions by 47% when the rate in 2000 and 2001 (prior to the ban) was 24, and the rate in 2004 (after the ban) was 26?

The conclusion of the study, it turns out, rests heavily on the low admission rate during the first six months of 2005. You can't possibly draw any valid conclusion until the full 2005 data are in. And you certainly shouldn't simply double the early-2005 data in order to present the appearance of a very low annual rate for the entire year.

The study findings also depend heavily on whether you categorize the "abnormally" high rate of 36 observed in 2002 as pre-ban or post-ban. If you treat the heart attacks in 2002 mostly as post-ban observations, then the high 2002 rate is attributed mainly to the post-ban period, creating the appearance of an increase, not a decrease, in heart disease admissions immediately following the ban.

On the other hand, if you treat the heart attacks in 2002 mostly as pre-ban observations, then the high 2002 rate is attributed to the pre-ban period, creating the appearance of a decrease, not an increase, in heart disease admissions following the ban.

It is quite interesting, then, to note how the paper treated the 2002 data. Although the smoking ban went into effect early in 2002, the study treats the 2002 data as being pre-ban. In reporting the change in heart disease admissions rates during the first year of the ban, the study compares the rate of 36 in 2002 (which it considers pre-ban) with the rate of 22 in 2003 (post-ban). This yields an estimate of a 39% decline in admissions.

The problem is that most of that rate of 36 in 2002 is actually post-ban, since it went into effect in March of that year. If you want a true idea of the pre-ban rate, go back to 2001 which is unequivocally pre-ban. The rate in 2001 was 24. The rate in 2003, which is unequivocally post-ban, was 22. That represents a decrease of 8%, not 39%.

It seems odd that the paper uses the 2002 rate of 36 as the pre-ban baseline for both the one-year change and the three-year change (for which the paper reports a drop of 47% -- from 36 to 19). The reason I say it is odd is that this rate was, for the most part, actually a post-ban rate.
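The sensitivity of the headline figures to this choice of baseline is simple arithmetic. A sketch using the rates discussed above (with 19 as the figure the paper uses for three years post-ban):

```python
# Rates discussed above.
rate_2001 = 24  # unequivocally pre-ban
rate_2002 = 36  # treated by the paper as pre-ban, though the ban began March 2002
rate_2003 = 22  # unequivocally post-ban
rate_3yr = 19   # the figure the paper uses for three years post-ban

def pct_change(before, after):
    """Percentage change from a baseline rate to a later rate."""
    return (after - before) / before * 100

# Using 2002 as the baseline reproduces the paper's headline figures...
print(pct_change(rate_2002, rate_2003))  # about -39%
print(pct_change(rate_2002, rate_3yr))   # about -47%

# ...while using the unequivocally pre-ban 2001 rate nearly erases the effect.
print(pct_change(rate_2001, rate_2003))  # about -8%
```

In other words, the entire reported effect hinges on assigning the mostly post-ban 2002 rate to the pre-ban period.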

The paper justifies its characterization of the 2002 data as pre-ban by arguing that a full six months of enforcement are necessary before health effects from reduction in secondhand smoke exposure or reduction in smoking prevalence or smoking intensity would be observed: "Due to the novelty of the ban, initial resistance by its opponents and legal wrangling over its enforcement, we believed that several months of consistent enforcement would be needed before citizens would actually change their behavior."

It sounds to me like this is more of a convenient excuse for treating the high observed post-ban heart disease admission rate in 2002 as pre-ban than an objective way to conduct the analysis. I'm not suggesting that this was intentionally done to try to make it appear that there was a decline in heart disease rates; I'm just pointing out that this type of manipulation is highly subjective, and given the actual data in question, is unacceptable. It basically throws the entire analysis into question.

Given the question about what points exactly should be considered pre-ban versus post-ban, the most objective way to handle this would be to use the 2001 data as pre-ban and the 2003 data as post-ban. These categorizations are unequivocal. And if you do that, you find a drop from 24 to 22 -- hardly evidence of any substantial effect of the smoking ban.

And if you go to the second year post-ban, you find an increase from 24 to 26. Again, this is hardly evidence of a decline in heart disease admissions due to the smoking ban.

I should also point out that the assumption that it takes several months of consistent enforcement before people change their behavior is unsupported. The data I have published from Boston's smoking ban demonstrate that within the first few days of the ban, there was essentially 100% compliance. The change in secondhand smoke exposure was almost immediate.

There are several other serious flaws in the study.

The study provides no evidence to support its assertion that there was a drastic drop in secondhand smoke exposure, cigarette consumption, and smoking prevalence in Bowling Green in response to the smoking ban. If you are going to conclude that the smoking ban was responsible for the observed changes in heart disease rates, you ought to document that there actually was a dramatic reduction in secondhand smoke exposure, cigarette consumption, and smoking prevalence. The paper documents none of these things.

The study also makes another serious mistake. It uses a sophisticated statistical model (an ARIMA, or autoregressive integrated moving average, model) to estimate the change in monthly heart disease admission rates in the months in which the smoking ban was in force. It finds that in Bowling Green, the rate was 1.7 admissions per month lower when the ban was in force, and that estimate is statistically significant. In Kent, the comparison community, the rate was 1.1 admissions per month lower over the same period, an estimate that was not statistically significant.

The paper then concludes that since the 1.1 per month decline in admission rate in Kent was not statistically significant, these data do not show a parallel significant change in heart disease admission rates in Kent following the implementation of the smoking ban in Bowling Green.

However, this is not the appropriate way to conduct this analysis. The proper way to compare these estimates is to statistically determine whether the observed decline of 1.7 per month in Bowling Green is different from the observed decline of 1.1 per month in Kent. Since we know the standard errors of each of these estimates, we can determine whether these estimates are statistically different from each other.

Since the paper does not provide the standard errors, I cannot conduct that analysis. However, given the levels of statistical significance for these estimates in the paper, I was able to make a rough estimate of what the standard errors likely were. Based on my calculations, it is highly likely that the decline in heart disease admission rate of 1.7 per month in Bowling Green is not statistically different from the decline of 1.1 per month in Kent. In other words, it is likely that the actual analysis in the paper confirms that there was no significant decline in heart disease admission rates in Bowling Green due to the smoking ban.

I recognize that this may be a difficult point to understand, so let me give an example to illustrate it. Suppose I want to determine whether the smoking ban in Massachusetts resulted in an increase in average temperatures in Massachusetts compared to temperatures in New Hampshire. We therefore want to compare the change in annual mean temperatures in Massachusetts with the change in annual mean temperatures in New Hampshire. Suppose we find that in Massachusetts, the mean temperature increased by an average of 0.5 degrees per year, with a standard error of 0.1; thus the increase was statistically significant. In New Hampshire, the mean temperature also increased by an average of 0.5 degrees per year, but with a standard error of 0.3, so the increase was not statistically significant.

By the reasoning provided in the paper, one would conclude that there was no parallel significant increase in temperatures in New Hampshire; thus, the change in Massachusetts must have been due to the smoking ban. However, it is readily apparent from these data that there was exactly the same observed increase in temperature in the two states. The correct way to do this analysis is to compare the two estimates of the average annual temperature change and see whether they are statistically different. In this case, the two estimates are 0.5 and 0.5, which are clearly not different. This shows how one can draw the wrong conclusion by conducting the analysis the wrong way.
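The comparison I am describing is a standard z-test on the difference between two independent estimates: divide the difference by the square root of the sum of the squared standard errors. A minimal sketch in Python, using the hypothetical temperature figures from the example above (this assumes the two estimates are independent):

```python
import math

def z_for_difference(est1, se1, est2, se2):
    """z-statistic for the difference between two independent estimates."""
    return (est1 - est2) / math.sqrt(se1**2 + se2**2)

# Hypothetical temperature example from above: both states warmed by
# 0.5 degrees/year, with standard errors of 0.1 and 0.3 respectively.
z = z_for_difference(0.5, 0.1, 0.5, 0.3)
print(z)  # 0.0 -- the estimates are identical, so there is no difference
```

A |z| below roughly 1.96 would not be significant at the 0.05 level; here z is exactly zero. Applied to the 1.7 and 1.1 monthly declines in Bowling Green and Kent, the same test would require the standard errors that the paper does not report.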

The final point that deserves mention is that the smoking ban in Bowling Green was a partial smoking ban. It exempted free-standing bars and bars within restaurants. Thus, it is much less plausible that such a ban would have had a dramatic effect on smoking prevalence. Smokers likely chose to go out to restaurants or bars that continued to allow smoking. There is little evidence that partial smoking bans result in significant smoking cessation.

The rest of the story is that upon closer examination, the study which purports to demonstrate that a smoking ban in Bowling Green resulted in a massive decline in heart disease admissions demonstrated nothing of the sort, and possibly demonstrated that there was no significant decline in admissions attributable to the smoking ban. Like its predecessors (e.g., Helena and Pueblo), this is another example of shoddy science that apparently now passes as acceptable in tobacco control research.
