Tuesday, October 03, 2006

Piedmont Study Methodology is Similar to Studies Showing Adverse Economic Effects of Smoking Bans; Junk Science Cuts Both Ways

A careful examination of the methodology used in the Piedmont study, which purported to show that the smoking ban in Italy reduced heart attack admissions by 11%, reveals that it is essentially the same as the methodology used by a number of studies, attacked by anti-smoking groups, which found that restaurant smoking bans have an adverse economic impact on businesses.

The Piedmont study compared age-standardized heart attack admission rates of Piedmont residents during the 5-month period February-June 2005, which immediately followed the smoking ban (implemented in January 2005), with heart attack admission rates during the same period (February-June) for the previous 4 years.
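To make the design concrete, here is a minimal sketch in Python of the kind of before-and-after calculation this amounts to. The admission rates below are hypothetical placeholders, not figures from the Piedmont study; the point is only that the comparison reduces to a percent change between one post-ban window and the average of the same window in prior years, with nothing in the calculation to account for an underlying trend.

```python
# Hypothetical illustration of the Piedmont-style before/after comparison.
# The rates below are made-up placeholders, not data from the actual study.

# Age-standardized heart attack admission rates (per 100,000) for the
# February-June window in four pre-ban years and the post-ban year.
pre_ban_rates = [92.0, 94.5, 91.8, 93.7]   # Feb-Jun 2001-2004 (hypothetical)
post_ban_rate = 83.1                        # Feb-Jun 2005 (hypothetical)

# Baseline = average of the same 5-month window over the previous 4 years.
baseline = sum(pre_ban_rates) / len(pre_ban_rates)

# Percent change attributed to the ban under this design.
percent_change = (post_ban_rate - baseline) / baseline * 100
print(f"Change vs. prior-year average: {percent_change:+.1f}%")
```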

The Rest of the Story


Here is a brief review of studies that used methodology similar to the Piedmont study's and concluded that smoking bans had an adverse economic impact on restaurants.

1. Laventhol & Horwath, 1990

Laventhol and Horwath conducted a study that was quite comparable to the Piedmont study in terms of the methodology used to assess the effect of a smoking ban (Laventhol & Horwath. Preliminary Analysis of the Impact of the Proposed Los Angeles Ban on Smoking in Restaurants. Los Angeles: Laventhol & Horwath, 1990).

This study compared restaurant sales in Beverly Hills during a 3-month period (2nd quarter, 1987) following the implementation of a restaurant smoking ban to the comparable period of the previous year (2nd quarter, 1986). As in the Piedmont study, the authors excluded data from the first month in which the ordinance was in effect, considering this to be a transition period.

The authors found a 6.7% decline in restaurant sales in Beverly Hills, which they attributed to the smoking ban. Unlike the authors of the Piedmont study, these authors did go to the trouble of including a comparison group: they examined changes in restaurant sales in Los Angeles during the same period and found a 10.3% increase.
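Here is a similarly minimal sketch of how a comparison group changes the arithmetic, using the percent changes reported above. Subtracting the comparison area's change is just an illustrative difference in percent changes, not necessarily the exact calculation Laventhol & Horwath performed.

```python
# Illustration of why a comparison group matters, using the percent changes
# reported above (Beverly Hills vs. Los Angeles, 2nd quarter 1987 vs. 1986).
beverly_hills_change = -6.7   # % change in restaurant sales, ban area
los_angeles_change = +10.3    # % change in restaurant sales, comparison area

# The naive before/after reading looks only at the ban area.
print(f"Ban area alone: {beverly_hills_change:+.1f}%")

# With a comparison group, the relevant figure is the gap between the two,
# i.e. how the ban area fared relative to the background trend.
relative_gap = beverly_hills_change - los_angeles_change
print(f"Relative to comparison area: {relative_gap:+.1f} percentage points")
```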

2. Masotti & Creticos, 1991

Masotti and Creticos also examined the effects of a smoking ban using nearly the same methodological approach as in the Piedmont study (Masotti L, Creticos P. The Effects of a Ban on Smoking in Public Places in San Luis Obispo, California. Northwestern University and Creticos & Associates, Inc., 1991).

They compared sales tax receipts, indicating the level of restaurant sales in San Luis Obispo (California), during a 5-month period following the smoking ban's implementation (exactly the same length of time as in the Piedmont study) to restaurant sales during the comparable period of the previous year.

Compared to the 3rd and 4th quarters of 1989, there was a drop of 3% and 26%, respectively, in restaurant sales in the 3rd and 4th quarters of 1990 (the smoking ban went into effect during the 3rd quarter of 1990).

Unlike the Piedmont study, this study included a comparison group (two, actually). It found no corresponding decline in restaurant sales for the same time periods in San Luis Obispo County as a whole or in the state as a whole. In fact, there were slight increases in restaurant sales in both comparison areas (2.6% and 4.2% for the 3rd quarter and 2.0% and 2.4% for the 4th quarter).

3. Lilley & DeFranco, 1999

Lilley and DeFranco studied the change in the number of bars and number of bar employees in California before and after the statewide bar smoking ban went into effect (Lilley W, DeFranco L. The Impact of Smoking Restrictions on the Bar and Tavern Industry in California. Washington: InContext Inc., October 26, 1999).

The ban was implemented on January 1, 1998. So Lilley and DeFranco compared the number of bars and bar employees on January 1, 1997 to the corresponding figures for January 1, 1999. They reported a 7.4% decline in the number of bars and a 12.7% decline in the number of bar employees, which they attributed to the smoking ban.

As a comparison, the report examined overall retail trade trends, finding that during the same period, retail trade employment increased by 4.2% and total employment was up by 5.7%.

Like the Piedmont study, this report stratified the data to look specifically at effects among the bars which the authors expected to suffer the most from a smoking ban: smaller bars. In fact, they found that the decrease in establishments and employment was most dramatic among small bars with 5-9 employees (17.9% drop in establishments; 16.4% drop in employment).

4. Lilley & DeFranco, 1996

Another study by Lilley and DeFranco examined trends in the number of restaurant jobs in New York City before and after its restaurant smoking ban (Lilley W, DeFranco L. Restaurant Jobs in New York City, 1993 Through First Quarter 1996, and the Restaurant Smoking Ban. Washington, DC: InContext Inc., 1996).

Compared to January 1, 1993 (prior to the smoking ban), the number of restaurant jobs in New York City declined by 2,779 by the first quarter of 1996 (about one year after the smoking ban went into effect). This represented 4.0% of New York City's restaurant job base. In contrast, the political jurisdictions surrounding New York City gained 1,937 restaurant jobs during the same period, representing 5.0% of their restaurant job base. The paper attributed these changes to the smoking ban.

Conclusion

The rest of the story is that the same methodology relied upon by anti-smoking groups to tout a dramatic effect of smoking bans on heart attack rates has been used to demonstrate a dramatic adverse effect of smoking bans on restaurant sales. But while the latter studies have been condemned by anti-smoking groups and called junk science, the former are being used to support widespread public claims.

It appears that the anti-smoking movement judges the quality of science by the nature of its findings. Here, we have a number of studies that used essentially the same methods. The studies that produced results hurting the anti-smoking agenda were attacked and trashed by the tobacco control movement; yet when studies with essentially the same methodology produced results that support our agenda, we tout those studies as proof that the science is on our side.

In other words, when the science produces favorable results, it is science; when it produces unfavorable results, it is junk.

I'm afraid that there is a double standard here. Our scientific judgment appears to be obscured by a mesh of hypocrisy.

One anti-smoking group's website is devoted to debunking a series of what it claims are invalid studies reporting an adverse economic impact of smoking bans. Interestingly, one of the criteria that the TobaccoScam website sets forth for determining what makes a "bad" study is the failure to use at least one full year's worth of data. If anything, a full year's worth of data is even more important for a study of changes in heart attacks than for a study of restaurant sales: one would expect that drastic changes in restaurant sales could occur very rapidly, while changes in heart attack rates would take considerable time to be realized.

Based on the criteria set forth on the TobaccoScam website, the Piedmont study would be classified as a "bad" study, in the category of "Cooking the Books."

In other words, the methods used in the Piedmont study were characterized by anti-smoking groups as being an attempt to "cook the books" and artificially find an adverse economic impact of smoking bans. These methods, however, are apparently rock solid as long as the resulting finding is a favorable one. The criteria set forth on the TobaccoScam website don't seem to apply to tobacco control studies, only to those conducted by our "opponents."

Also of interest is the fact that many of the above economic impact studies were paid for or commissioned by the tobacco industry. What this means is that we are now using essentially the same techniques the tobacco companies used to try to demonstrate an adverse economic impact of smoking bans in order to demonstrate an immediate, positive health benefit of these policies.

Are we so biased that our desire to produce results supporting our agenda clouds our scientific judgment to the point that we are willing to condemn a study with unfavorable findings, yet tout a study with essentially the same methodology when its findings are favorable?

It appears that the answer is yes. At least I will be forced to assume that the answer is yes until I see someone, or some group, within tobacco control who is willing to criticize the Piedmont study and others like it, as they criticized the tobacco industry-sponsored studies that used similar methodology to find an adverse economic impact of smoking bans.
