Drawing on arguments I made in a commentary here at The Rest of the Story, the article explains why relying on a single data point to draw a causal conclusion about an observed decrease in heart attacks is problematic.
According to the article: “A Boston public health professor and anti-smoking advocate is taking issue with a Saskatoon Health Region study saying the City of Saskatoon's smoking ban could be responsible for a dip in the number of heart attacks. 'I think it's really shoddy science to just attribute that automatically to a smoking ban,' said Dr. Michael Siegel, a professor at Boston University School of Public Health. 'These are random phenomena that happen and these statistics go up and down,' he said. 'To just catch it at one time, and say, "Oh, it went down," and attribute that to the smoking ban, it could just be a chance variation or it could just reflect the natural trends in heart attacks.'
At a meeting of the Saskatoon Regional Health Authority on April 19, deputy medical health officer Johnmark Opondo presented preliminary results of a region study on the smoking ban. He said in the year after the ban took effect on July 1, 2004, the heart attack rate dropped by 10 per cent, which translates into 32 fewer heart attacks in one year. …
'What appears to be happening here is that anti-smoking advocates are so eager to demonstrate an impact of their policies on disease rates that they are jumping the gun in concluding that any changes in disease rates are due to their policies,' he wrote on the blog. He adds that while all public health practitioners would love to see such changes, 'we need to hold our eagerness in check in order to ensure that we are relying upon sound science before reporting such conclusions widely to the public.'
Siegel said he's also concerned the health region would release such results without also making public the data and methodology that led them to their conclusions. … 'I'm really concerned the credibility of the anti-smoking movement will suffer because I don't think the public knows the difference,' he said.
Neudorf fired back last week, questioning how the critic could assemble such a detailed analysis of the health region's results from a news report, rather than seeing the data or contacting them directly for more information. While Neudorf has submitted the heart attack data in hopes of publication in the British Medical Journal … When the health region presents some of their data to the city's administration and finance committee on May 15, a copy of the study may still not be publicly available, Neudorf said -- the medical journal's submission rules stipulate it must be kept under wraps. However, if the journal rejects the paper, Neudorf said he'll release the data to the public and try other journals.”
The Rest of the Story
The major problem here is that a single data point does not and cannot establish a trend. It is virtually impossible to demonstrate that a change observed in a single year's data is different from what would have occurred in the absence of a particular intervention. One has to account both for secular trends in the phenomenon of interest and for random variation in the underlying data. And in a city as small as Saskatoon, where the observed change amounts to a few dozen heart attacks, random year-to-year variation alone could easily produce a swing of that size.
The same problem plagues the Helena study, in which a causal conclusion was drawn from only six months of heart attack admission data following the smoking ban. It also plagues the Pueblo study, in which the change in heart attacks between the 18 months before the smoking ban and the 18 months after it was attributed to the ban.
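To see just how large that chance variation can be, here is a minimal simulation sketch of my own; it is an illustration, not the health region's analysis. It assumes yearly heart attack counts follow a Poisson distribution with a baseline mean of about 320 per year, the figure implied by the article's numbers (a 10% drop equalling 32 fewer heart attacks), and asks how often two identical years differ by that much through chance alone.

```python
# Illustrative sketch only: assumes yearly heart attack counts are Poisson,
# with an assumed baseline mean of ~320 (implied by "10% drop = 32 fewer").
import numpy as np

rng = np.random.default_rng(0)
baseline_mean = 320      # assumed pre-ban annual heart attack count
n_sims = 100_000

# Simulate pairs of years with NO intervention effect and no secular trend.
before = rng.poisson(baseline_mean, n_sims)
after = rng.poisson(baseline_mean, n_sims)

# How often does chance alone produce an apparent drop of 32 or more (>=10%)?
p_drop = np.mean(after <= before - 32)
print(f"Apparent 10% drop by chance alone in ~{p_drop:.0%} of simulated year-pairs")
```

In this simulation, roughly one pair of years in ten shows an apparent 10% drop with no intervention at all, and allowing for a declining secular trend in heart attacks would make such a drop even more likely. The point is not that the ban had no effect, only that a single year-over-year comparison cannot distinguish a real effect from noise.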
But the larger problem here is what I would refer to as the phenomenon of “science by press release.” I have noticed that anti-smoking groups are increasingly using the media to report scientific findings to the public before those findings have been published in a peer-reviewed journal, and without releasing the underlying data so that the conclusions can be critically evaluated.
I don’t think it is necessarily inappropriate to release findings to the public before they have been peer-reviewed. Publishing a paper takes many months, and it may not be worth waiting that long to let the public know about important findings. However, when sweeping scientific claims are made (such as attributing a single observed data point to a specific intervention), I think it is essential that if you go to the media with your conclusions, you be willing to publicly release your data so that those conclusions can be critically evaluated. Otherwise there is no opportunity for critical review, and the public is left with no option but to blindly accept a claim that has not been peer-reviewed and may well be unwarranted.
Now I understand that in this case, a paper was submitted for publication, and the journal's submission rules precluded the findings from being publicly released. But if that's the case, then the findings should not have been presented publicly in the first place. It's too late. The cat is already out of the bag. As I pointed out two weeks ago, the Saskatoon Star-Phoenix has already publicly reported the findings of the study and its basic conclusion: that the smoking ban was responsible, in part, for the observed 10% reduction in heart attacks in Saskatoon.
I don’t know the details of how these data came to be released, but it hardly matters. The point is that the most essential findings of the study and its basic conclusion were made public. It sounds as though a reporter was present at the meeting where these data were discussed; if so, then the study was in fact publicly released. If you're going to keep the data under wraps and withhold the study, then you shouldn't announce its findings publicly. You can't have it both ways.
Essentially, what happened here is that the basic study findings and conclusion were released publicly but the underlying data and reasoning behind the conclusion were not released. What this means is that the public is being asked to believe the conclusion of the Saskatoon Health Region, but without any opportunity to critically evaluate, assess, review, or question the claim.
Blind acceptance of the conclusion is all that is possible for the public in the interim between the release of the conclusion and the eventual publication (or rejection) of the study. But that could take many months, and by then it won't matter. The “damage” will already have been done: the message is out there and is being disseminated widely.
Luckily, I’m not willing to settle for blind public acceptance of an anti-smoking claim that I believe, on its face, is without scientific merit. So I'm pleased that the Saskatoon Star-Phoenix has publicized my perspective on the issue, which at least gives the public the opportunity to use their own judgment and think critically about the scientific issues involved, rather than blindly accepting a conclusion that I think is likely unwarranted.