Briefly, as I reported last week: A new study published online ahead of print in the journal Tobacco Control purported to demonstrate that a smoke-free bar and restaurant law implemented in São Paulo, Brazil in August 2009 resulted in an 11.9% decline in the heart attack death rate for the first 17 months after the law was in effect (through December 2010). The paper used a time-series analysis to compare the monthly rate of heart attack deaths prior to the smoking ban to the rate after the ban was implemented. The baseline period was January 2005 through July 2009. The implementation period was August 2009 through December 2010. Thus, the researchers had data for approximately 5 1/2 years before the ban and for 17 months after the ban. The paper concluded: "In this study, a monthly decrease of almost 12% was observed in mortality rate for myocardial infarction in the first 17 months after the enactment of the comprehensive smoking ban law in São Paulo city."
The problem is that if you take the time to look at the actual data, you find a striking increase in heart attack deaths in the year following the smoking ban:
Christopher Snowdon over at Velvet Glove, Iron Fist has graphed out the monthly data, and the picture looks the same:
From: Christopher Snowdon. Velvet Glove, Iron Fist. "Brazilian Smoking Ban Miracle," December 1, 2016.
You can see that there was a seasonal decline in heart attack deaths late in 2009, but a striking increase in heart attack deaths in 2010 that was sustained throughout the year. In fact, the number of heart attack deaths for each month in 2010 was higher than the number of heart attack deaths during the same month in any of the previous years in the study period!
As Snowdon explains the actual data: "Before the ban, the number of deaths hardly ever exceeded 600 per month and was often below 500. Within a few months of the ban, there were never fewer than 700 deaths per month."
In my post last week, I struggled to understand how an error like this could have escaped the attention of the investigators, the reviewers, and the journal and speculated that: "It appears that either nobody looked at the actual data or that they looked but ignored it. Either way, this demonstrates a severe bias on the part of the investigators, reviewers, and editorial team. Had the study found no effect of a smoking ban, you can rest assured that everyone would have scoured over the paper for hours, trying to find some explanation for why the results came out 'wrong.' But here, since the results were 'right' (that is, favorable), it appears that there was no desire to sincerely 'review' the paper."
However, in the back of my mind, I wondered whether this was all just a mistake. Perhaps there was just a typographical error and the data presented in Table 2 were mistaken. Perhaps the 2010 data presented were an anomaly and were not transcribed correctly from the original manuscript to the typeset paper. I was actually "hoping" that this would be the case and that I would have to write a retraction and correction.
The Rest of the Story
It was not to be so. The lead author of the paper confirmed that the data in Table 2 are correct. In other words, the statistical analysis miraculously turned a clear and rather striking increase in heart attack deaths into a nearly 12% decline: a true miracle. Rather than being blindly reported, this inconsistency should have led the investigators to figure out what went wrong, and it should have prompted the reviewers and journal editors to question the analysis and interpretation of the data.
This might seem like a rather obvious epidemiological point to make, but you cannot have a decline in the heart attack death rate when the number of heart attack deaths rises while the population grows at a constant rate.
Before anyone suggests as a possible explanation that perhaps the population rose drastically in 2010, thus making the high number of deaths in 2010 translate into lower death rates, let me emphasize that this explanation is impossible. The paper did not use the actual annual populations but simply used a geometric progression to interpolate the populations based on censuses conducted in 2000 and 2010. Thus, the rate of population growth throughout the study period was constant, by definition.
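To see why geometric interpolation rules out the population explanation, here is a minimal sketch. The census figures and death counts below are hypothetical, chosen only to illustrate the arithmetic; they are not the paper's actual numbers. With a constant monthly growth factor built in by construction, a jump in monthly deaths can only mean a jump in the rate.

```python
# Illustrative sketch (hypothetical numbers, not the paper's figures):
# interpolating population by geometric progression between two censuses
# fixes the growth rate, so more deaths necessarily means a higher rate.

P_2000 = 10_400_000  # hypothetical census population, 2000
P_2010 = 11_250_000  # hypothetical census population, 2010

def interpolated_population(months_since_jan_2000):
    """Geometric interpolation: one constant monthly growth factor."""
    monthly_factor = (P_2010 / P_2000) ** (1 / 120)  # 120 months between censuses
    return P_2000 * monthly_factor ** months_since_jan_2000

def death_rate_per_100k(deaths, months_since_jan_2000):
    return deaths / interpolated_population(months_since_jan_2000) * 100_000

# ~500 deaths in a pre-ban month vs ~700 in a post-ban month (as Snowdon
# describes) can only translate into a higher rate under this scheme:
pre = death_rate_per_100k(500, 100)   # a pre-ban month
post = death_rate_per_100k(700, 125)  # a post-ban month
assert post > pre
```

The population denominator grows only about 0.07% per month under these assumptions, so a roughly 40% jump in the numerator overwhelms it by orders of magnitude.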
It also needs to be remembered that even if the paper had found an actual decline in heart attack deaths in 2010, this would not justify the conclusion that the smoking ban caused a decrease in heart attacks. Another critical and fatal methodological flaw of this paper is that there is no comparison group. It is very possible that heart attack death rates were declining during the study period anyway, even in the absence of smoking bans. We actually know this to be the case from abundant international data. To conclude that the smoking ban had an effect on heart attacks, one would need to first control for secular trends in heart attack mortality that were occurring anyway, independent of the smoking ban. The paper could easily have done this by including some comparison group -- such as a nearby city, the county, the state, or the country. But there needs to be some control for secular trends.
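The need for a comparison group can be made concrete with a small sketch on entirely synthetic data. The numbers below are invented for illustration: if heart attack mortality was already declining everywhere, a simple before/after comparison in the ban city mistakes that secular trend for a ban effect, while a difference-in-differences comparison against a no-ban city subtracts the trend out.

```python
# Synthetic illustration (invented rates, per 100,000) of controlling for
# a secular trend with a comparison group.

ban_city   = {2008: 60.0, 2009: 58.0, 2010: 56.0}  # city with the smoking ban
comparison = {2008: 61.0, 2009: 59.0, 2010: 57.0}  # nearby city, no ban

# Naive before/after comparison in the ban city alone:
naive_effect = ban_city[2010] - ban_city[2008]       # -4.0: looks like a decline

# Difference-in-differences: subtract the decline seen without any ban
secular_trend = comparison[2010] - comparison[2008]  # -4.0: same decline anyway
did_effect = naive_effect - secular_trend            # 0.0: no ban effect left

print(naive_effect, did_effect)
```

In this toy example the entire "decline" in the ban city is accounted for by the trend visible in the comparison city, which is exactly the confound the paper's design cannot exclude.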
The rest of the story is that we have here another example of the severe bias in modern-day tobacco control research. The zealotry has reached such a level that you can report spurious findings that are not even consistent with visual observation of the data and still publish your paper without questioning from peer reviewers or journal editors, as long as the findings you are reporting are "favorable" to the cause. You can rest assured that had the paper reported "unfavorable" findings, it would have received critical scrutiny.
For a similar take on this study, see Christopher Snowdon's commentary.