On Friday, I criticized an ANR document which purports to share with the public a list of the U.S. studies examining the short-term effects of smoking bans on heart attack rates. Under the category of "United States," ANR lists just one study - a study by Lippert and Gustat which concluded that smoking bans do lead to immediate reductions in heart attacks. I note in my commentary that ANR omitted from its list the two largest studies conducted in the U.S., both of which failed to find a short-term effect of smoking bans on heart attacks.
In response to my commentary, ANR wrote me and accused me of acting unprofessionally by sharing these opinions on my blog. ANR wrote: "Your approach to 'discussing' these sorts of issues continues to be exceedingly unconstructive and unprofessional."
The Rest of the Story
To translate ANR's message to me: "It is unprofessional of you to share your dissenting opinion publicly on your blog. You are free to disagree, but not to express your disagreement with others. That is unprofessional behavior."
This is a strange interpretation of academic freedom, free speech, and scientific integrity.
By the way, I don't begrudge ANR's disagreement, as it may exist, with my opinions regarding the scientific evidence related to the short-term effects of smoking bans on heart attacks. I would have been happy to discuss with ANR the scientific evidence and its strength, as well as the analytic and statistical issues involved with the scientific interpretation of these studies.
However, ANR's note was not simply a statement of scientific disagreement. It was an accusation of unprofessional behavior on my part. In other words, it was an attack on my personal character and integrity.
As I have noted before, this is a common tactic in the anti-smoking movement for dealing with dissent. Rather than deal substantively with the scientific issues, you attack the dissenter, trying to discredit him personally. This is also a tactic that I observed tobacco companies using in years past.
This is mildly ironic, as it was my disagreement with and discomfort with this very tactic that led to my resigning from the ANR Board of Directors in the first place. I now see that they haven't come very far since that time.
This story simply reinforces (and demonstrates) the argument I made in my commentary: "This is not science, it is politics. ANR has ceased being a science- or policy-based organization and has entered the political realm. ... It's sad for me to see the deterioration of the scientific integrity of the tobacco control movement, and it is particularly disheartening to see our organizations adopting many of the same tactics that we attacked the tobacco companies for using in years past."
While it would not be surprising to see ANR using this tactic against its "opponents" (and I have documented how ANR indeed uses this tactic against opposing groups), it is unfortunate that the organization has to resort to using this tactic against its own colleagues in the tobacco control movement.
ANR's response also demonstrates how it is cherry-picking the studies with favorable results and intentionally excluding those without them. The response argued that the Shetty et al. study should not be included because of several methodological weaknesses, including the fact that it defined all smoking restrictions as "smoking bans," even if they were only partial bans. That's fine, but if one is going to exclude studies that have methodological weaknesses, then one has to do that with all studies, not just with the ones that have unfavorable findings.
In fact, the Lippert and Gustat study, which ANR cites as its only U.S.-based multi-state study, is the weakest of all the studies on smoking bans and heart attacks. As I have pointed out previously, this study has two major flaws which render its conclusion invalid.
1. There is no control group.
The study simply compares changes in the self-reported prevalence of heart attacks in states with smoking bans from approximately 2006 to 2009. The study finds that in some states, there was a significant decline during this three-year period. However, without knowing what happened in states without a smoking ban, it is impossible to attribute this change in heart attack prevalence to the smoking ban. One needs to know the change in heart attack prevalence from 2006 to 2009 in states that did not enact smoking bans.
The study does not report this information. However, from the Health Care Utilization Project (HCUP) data, we can obtain the changes in hospital discharges with a primary diagnosis of heart attack (i.e., incident heart attacks) in states without smoking bans between the years 2006 and 2009. Here are the data for all states without smoking bans in the HCUP database for which there are data for these years (the last column shows the percentage change from 2006 to 2009):
From this table, one can see that in every state without a smoking ban for which HCUP data are available during the study period, there was a substantial decline in heart attacks, ranging from a decline of 3.1% in Kentucky to a decline of 11.3% in West Virginia. Overall, the decline in heart attacks in these 7 states without smoking bans was 6.1% from 2006 to 2009.
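The percentage changes in the table come from a simple before-and-after comparison of discharge counts. As a minimal sketch of that arithmetic (the discharge counts below are hypothetical placeholders, not actual HCUP figures; only the calculation is illustrated):

```python
# Percentage change in heart attack hospital discharges, 2006 to 2009.
# The counts used in the example call are made up for illustration;
# the real analysis would use each state's HCUP discharge counts.
def pct_change(count_2006, count_2009):
    """Return the percentage change from the 2006 count to the 2009 count."""
    return (count_2009 - count_2006) / count_2006 * 100

# Hypothetical state with 10,000 discharges in 2006 and 9,390 in 2009:
print(round(pct_change(10000, 9390), 1))  # -> -6.1 (a 6.1% decline)
```

Note that the overall decline across states should be computed from the pooled totals (summing the 2006 counts and the 2009 counts across states), not by averaging the individual state percentages, since the states differ in size.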
Therefore, how can this study conclude that the decline in self-reported heart attacks in the 17 smoking ban states from 2006 to 2009 was different from what would have been observed in the absence of these bans? Clearly, there is a secular trend of declining incident heart attacks in the United States that is independent of statewide smoking bans.
Given this baseline secular trend, the study cannot conclude that the declines in self-reported heart attacks observed in the 17 study states were attributable to the smoking bans in those states, as opposed to simply reflecting underlying secular trends, which are readily observable in states without such smoking bans.
2. The study conducts the wrong statistical analysis.
The study's conclusion that the smoking bans led to a significant reduction in heart attacks is based on the observation that in 10 of the 17 states, the prevalence of heart attacks declined. Of course, another way to look at this is to say that in 7 of the 17 states, the prevalence of heart attacks increased. The real question is this: if there were no true change in heart attacks, what percentage of the time would 10 out of 17 states show a decrease in heart attacks by chance alone?
Think of it this way. Suppose you flip a coin 17 times and come up with 10 heads. Can you conclude that this is not a fair coin, and that it must be weighted more heavily towards heads?
Well, one can calculate the probability of obtaining 10 or more heads out of 17 coin tosses with a fair coin. Using the binomial distribution, one can determine that if one flips a fair coin 17 times, the chance of getting at least 10 heads is about 31.5%.
Thus, by chance alone, if one were to examine changes in heart attack prevalence in 17 states, one would find that heart attacks decreased in at least 10 of those 17 states 31.5% of the time (if there were actually no true change in heart attacks). A p-value of 0.315 is far above any reasonable threshold for statistical significance (which is usually set at about 5%).
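The coin-flip calculation above can be verified directly from the binomial distribution; this short sketch computes the probability of 10 or more "declines" out of 17 states under the null hypothesis that a decline and an increase are equally likely (p = 0.5):

```python
# Tail probability of the binomial distribution: P(X >= 10) with
# n = 17 trials and success probability p = 0.5, i.e., the chance of
# seeing 10 or more heads in 17 fair coin flips by chance alone.
from math import comb

n, k, p = 17, 10, 0.5
tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
print(round(tail, 3))  # -> 0.315
```

Since 0.315 is far above the conventional 0.05 significance threshold, the observed pattern of 10 declines out of 17 states provides no evidence against chance as the explanation.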
If one is going to exclude any study because of methodological weaknesses, it would have to be the Lippert and Gustat study.
My impression remains that ANR is not objectively analyzing the methodology of these studies and excluding those whose methods are not scientifically solid. Instead, ANR is finding reasons to exclude the unfavorable studies while not applying the same standards to studies with favorable results.
The rest of the story is that this adds to the evidence that ANR's omission of all studies with unfavorable findings is an intentional effort to deceive the public about the scientific evidence by hiding negative studies and sharing only those that support the organization's pre-determined conclusions.