Thursday, September 08, 2005

IN MY VIEW: Why American Legacy Foundation Claims About the Impact of Its "Truth" Campaign Are Wildly Exaggerated

Last week, I posted about the American Legacy Foundation's claim that its "truth" campaign will save "hundreds of thousands of lives" (or, less modestly, "millions of young lives"). I opined that this claim is unsubstantiated and unfounded: it rests on a single paper that Legacy itself authored, and it was reported in a public statement by Legacy's President and CEO that failed to disclose that the paper being cited in support of the claim was actually authored by her and paid for by Legacy.

Since then, a number of readers have asked me to explain why I find Legacy's claim to be unsubstantiated, in light of the conclusions made in the AJPH paper.

There are five basic points that I think readers should be aware of to understand my reasoning:

1. The Paper Found No Linear Relationship Between Exposure to the "truth" Campaign and Smoking Prevalence

The paper's first test of whether higher exposure to the "truth" campaign was associated with lower youth smoking prevalence was to ask whether youth smoking decreased linearly with campaign exposure over the range of exposures observed in the sample. The paper found no significant linear relationship between the predictor and the outcome variable.

This itself is not the outcome one would expect if the media campaign were as effective as Legacy claims it has been. Why would one not find an increasing effect of the campaign on smoking prevalence as exposure to the campaign increased?

The paper suggests that perhaps an explanation for this is that the relationship between exposure and outcome was not linear, but L-shaped. Perhaps the campaign's effects increased as exposure increased, but only up to a point - higher levels of exposure did not have any increased value after that point.

The problem is that this is not what the paper found. It actually reported a U-shaped relationship, whereby the campaign's purported effect on youth smoking increased up to moderate levels of exposure, but then actually decreased as exposure became very high. In fact, at the very highest levels of exposure, there was no effect of the campaign on smoking behavior.

While the paper concludes that this is evidence for a diminishing returns model (whereby increasing exposure affects youth smoking, but only up to a certain point, after which further increases in exposure have no further effect on smoking), I believe that the data best support the conclusion that the campaign simply did not work as intended.

Had the campaign really worked, one would have expected to see the greatest effect at the highest levels of exposure. Since the highest levels of exposure were actually associated with no effect, I find it difficult to conclude from this single study that the "truth" campaign was effective. But even if I concluded that it was effective, I certainly would not go so far as to quantify the effects in precise terms (e.g., attributing "22%" of the observed decline in smoking prevalence to the campaign), given some of the inconsistencies in the observed findings.

It's kind of like finding that smoking 1-2 packs of cigarettes per day caused lung cancer but smoking 3 packs per day did not. Yes - it is possible that there is some sort of threshold beyond which additional smoking would not increase lung cancer risk, but there is no plausible reason why it would eliminate the risk. From results such as these, one would certainly pause before concluding that a causal connection existed, and more research would certainly be warranted to clarify what is going on.

2. The Paper Did Not Measure Individual-Level Exposure to the "truth" Campaign

The paper was essentially a cross-sectional, ecological study. It did not measure exposure to the "truth" campaign among individuals. Instead, it compared rates of youth smoking in regions of the country that had varying levels of intensity of exposure to the "truth" ads. But the study cannot document that the youth subjects were or were not in fact exposed to the campaign, and to what extent.

This does not invalidate the study. However, it does suggest that extreme caution should be used in drawing causal conclusions. It means that the research is very susceptible to the potential effects of confounding variables, since there are a large number of differences between youths in different regions of the country - not just differences in their exposure to the "truth" campaign. It would really require multiple studies to ensure that the observed results of the study are not due to a spurious relationship between exposure to the campaign and youth smoking (i.e., that they are not due to some confounding variable that is related both to campaign exposure and to youth smoking rates in different areas of the country). At the very least, I would be hesitant to take the results of a single cross-sectional, ecological study and use them to draw a conclusion of causality, and then to go so far as to quantify such an effect and apply it to future generations of youths.
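The confounding worry can be made concrete with a toy simulation (all variable names and numbers below are invented for illustration, not taken from the study): suppose some regional factor, say overall tobacco-control activity, independently drives both how many "truth" ads a market received and how fast its smoking rate fell. Exposure and smoking then correlate across markets even though, by construction, exposure itself does nothing:

```python
import random

random.seed(1)

# Toy ecological data for 200 hypothetical media markets. A regional
# confounder (think: overall tobacco-control activity) raises ad exposure
# AND lowers smoking; exposure has no causal effect on smoking at all.
markets = []
for _ in range(200):
    confounder = random.random()                           # regional activity level
    exposure = 5000 * confounder + random.gauss(0, 500)    # ad GRPs, driven by confounder
    smoking = 25 - 8 * confounder + random.gauss(0, 1)     # smoking %, confounder only
    markets.append((exposure, smoking))

# Naive cross-market (ecological) correlation between exposure and smoking:
n = len(markets)
mx = sum(e for e, s in markets) / n
my = sum(s for e, s in markets) / n
cov = sum((e - mx) * (s - my) for e, s in markets) / n
sx = (sum((e - mx) ** 2 for e, s in markets) / n) ** 0.5
sy = (sum((s - my) ** 2 for e, s in markets) / n) ** 0.5
r = cov / (sx * sy)
print(f"correlation(exposure, smoking) = {r:.2f}")  # strongly negative
```

A strongly negative correlation emerges here with no causal pathway at all, which is why a single ecological comparison across regions cannot, by itself, support a causal claim.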

3. This is a Single, Cross-Sectional Study

Even if the results of the study were completely consistent with an effect of the "truth" campaign on youth smoking behavior, I would almost never take the results of such a cross-sectional study and: (1) draw a conclusion of a causal relationship; (2) quantify such a relationship in precise terms; and (3) extrapolate those results to what effects one could expect from the campaign in future years and express those results in terms of the number of lives that will be saved by the campaign.

There are methods available to assess the effects of anti-smoking media campaigns using a longitudinal design, and such methods have been used to document the effectiveness of anti-smoking media campaigns in Massachusetts and Florida.

In my own research, I was unwilling to draw a conclusion that the Massachusetts anti-smoking media campaign resulted in a decrease in youth smoking until I found, in a longitudinal study, that youths exposed to the ads were less likely to start smoking (over a four-year follow-up period) than youths who were not exposed to the ads. I would never have drawn such a conclusion from a single cross-sectional study of regional exposure differences and regional differences in youth smoking.

4. The Study was Paid for and Authored by the American Legacy Foundation

The simple fact that the study was funded by and authored by Legacy does not in any way invalidate the study. It needs to be judged on its scientific validity. However, in drawing a causal conclusion about whether the "truth" campaign reduces youth smoking (and certainly in quantifying the number of lives saved by such a campaign), one would prefer to see at least one independent study confirm the results reported by the Foundation. Actually, in this case, one would not want a confirmation of these particular results; one would want an independent study to find, instead, that the very highest levels of exposure to the campaign were associated with the strongest effects on youth smoking.

5. The Paper's Findings Rely Upon a Conclusion of Diminishing Campaign Effects

Even if it were sound to conclude from the paper that the campaign did reduce youth smoking, such a conclusion would rely upon the conclusion that the campaign's impact had a diminishing effects dose-response relationship. In other words, campaign exposure was effective up to a point, but at some point, increased exposure to the ads had no further effect on smoking behavior, and may have even been counter-productive. If true, that is hardly what I would call an effective advertising campaign.

But more importantly, even if these results supported a conclusion that the campaign was effective in its early years, it would actually suggest that the campaign's effectiveness may have worn out and that future exposure might not be expected to have similar effects on smoking. Thus, even if a conclusion that the campaign was effective in its first two years were reasonable, it would be inconsistent with the paper's findings to claim that the campaign will have similar effects in the future. In fact, the research would actually suggest that the campaign would not be effective, or would have limited effectiveness, in the future. One would certainly not want to go around claiming that the campaign was going to save "millions of young lives."

The Rest of the Story

In summary, I think there are a number of important reasons why Legacy's claims about the impact of its own "truth" campaign are wildly exaggerated. Perhaps most importantly, I don't find that the results, as reported in the original AJPH article, actually support a conclusion that the campaign was effective in reducing youth smoking. But even if they did, I think it is not particularly responsible from a scientific standpoint to draw a conclusion of causality from a single cross-sectional, ecological-style study. And it certainly doesn't seem reasonable to use a single cross-sectional study to quantify a precise campaign effect, and further, to extrapolate that finding to infer what the campaign's effects will be in the future (especially since the AJPH paper itself concludes that there were diminishing effects of increased campaign exposure).

Note that I am in no way concluding that the "truth" campaign did not reduce youth smoking; I am simply opining that the evidence does not support a conclusion that it did. It may or may not have a significant effect on youth smoking. More research is necessary to make this determination. But until that evidence is available, I do not think it is reasonable or particularly responsible for Legacy to be making these wildly exaggerated claims of its campaign's effectiveness, especially in the setting of trying to convince a federal court to give it billions of dollars.


Matthew Farrelly said...


Once again I feel obliged to reiterate what evidence the original AJPH paper provides and respond to some of your comments and conclusions. Because I do not represent Legacy, I will not speak for their actions.

1.) Lack of a linear relationship.

The best model fit was a quadratic (not linear) relationship between campaign exposure and youth smoking. The model suggests that over the range of gross rating points (GRPs) from 0 to about 10,000, the effect of the truth campaign is increasing. After that point, the effect decreases. For 2 out of the 210 markets (representing 0.3% of the target population) with the highest levels of exposure, there is no effect.

On average across all markets, there is a negative association between exposure to the campaign and youth smoking.
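The shape described above - an effect that strengthens with exposure up to a turning point near 10,000 GRPs and weakens beyond it - is what a quadratic exposure term produces. A minimal sketch with made-up coefficients (chosen only so the turning point falls at 10,000 GRPs; these are not the paper's estimates):

```python
# Illustrative quadratic dose-response of the kind described above:
# effect = b1*GRP + b2*GRP^2. Coefficients are invented for illustration
# only, NOT taken from the AJPH paper.
def campaign_effect(grp, b1=-2e-4, b2=1e-8):
    """Modeled change in the log-odds of youth smoking at a given GRP level
    (negative values mean less smoking)."""
    return b1 * grp + b2 * grp ** 2

# The turning point of b1*x + b2*x^2 falls at x = -b1 / (2*b2) = 10,000 GRPs.
print(campaign_effect(5000))    # moderate exposure: effect present
print(campaign_effect(10000))   # strongest modeled effect, at the turning point
print(campaign_effect(20000))   # very high exposure: modeled effect vanishes
```

With these invented numbers, the modeled effect at 20,000 GRPs is back to zero, which is precisely the pattern the post's author objects to: the heaviest exposure shows no benefit at all.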

Mike chooses to conclude from this that after 2002, as exposure to the campaign increases, the effects of the campaign will diminish. While that might be a reasonable interpretation, it is speculation - an out-of-sample prediction that can only be verified with additional analyses. Thus far, we have found no evidence to support Mike's assertion that the campaign loses effectiveness after 2002.

Because it is difficult to interpret odds ratios for continuous variables, we thought it would help readers if we predicted what smoking rates in 2002 would have been in the absence of the truth campaign. Doing that, we estimated that the truth campaign explained 22% of the decline in youth smoking. While that number is not written in stone, it was the most concrete and useful way to illustrate the size of the estimated association.
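The arithmetic behind that kind of counterfactual attribution is straightforward. The figures below are invented purely to make the calculation concrete (they are not the paper's data):

```python
# Hypothetical figures, invented for illustration - not the paper's data.
# Suppose youth smoking fell from 25.3% to 18.0%, and the fitted model
# predicts it would have been 19.6% in 2002 with the campaign "turned off".
baseline_prevalence = 25.3       # % smoking before the campaign
observed_2002 = 18.0             # % smoking actually observed in 2002
counterfactual_2002 = 19.6       # model's prediction absent the campaign

total_decline = baseline_prevalence - observed_2002      # 7.3 percentage points
campaign_points = counterfactual_2002 - observed_2002    # 1.6 points attributed to ads
campaign_share = campaign_points / total_decline
print(f"Share of decline attributed to campaign: {campaign_share:.0%}")
```

With these made-up inputs, the campaign accounts for about 22% of the total decline; the real estimate depends entirely on how well the counterfactual prediction is identified, which is the point in dispute.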

2.) Individual exposure

Mike is correct that we measured market-level exposure and not individual exposure. However, individual recall of campaign exposure has its own problems. Smokers may tune out antismoking ads while non-smokers may attend to them more, or vice versa. That can bias analyses that rely on individual recall. The advantage of the aggregate measure of exposure is that it is not subject to recall bias and can also indirectly pick up diffusion of the campaign (peer-to-peer discussion) that results from exposure. That said, one has to be careful to rule out alternative explanations that might confound an aggregate measure.

In our analysis, we took care to rule out alternative explanations given the cross-sectional time-series approach. Specifically, the study provides evidence against several possible alternative explanations:

a. As anticipated, no truth effect was found in the early months of the campaign in 2000.

Had we found an effect, this might suggest a spurious correlation between smoking and exposure to the truth campaign because it is extremely unlikely that a campaign could influence youth smoking in just a few months.

b. There was no association found between the truth campaign and binge drinking, another risk behavior that the campaign did not address. Had we found an association with an outcome not explicitly targeted by the campaign, it could cast doubt on our findings, because it is unlikely that a campaign aimed at reducing smoking could have a large effect on related behaviors such as drinking.

c. We found evidence of increasing effects of the truth campaign over time. This is consistent with theories of behavior change, which hold that it takes time to influence knowledge, attitudes, and intentions, and ultimately behavior.

3. Single cross-sectional study.

I agree, more research (including by others) needs to be done to verify the results.

I would classify this as a cross-sectional time-series analysis. Because of the repeated measures at the media market level, one could classify it as longitudinal at the media market level. As a result, it shares some of the statistical advantages that individual longitudinal studies have.

4. Funded by Legacy.

I'm all in favor of more studies.

5. Results rely on diminishing effects

I think it's inappropriate to conclude that just because the campaign's effect diminishes at higher levels of exposure, it is ineffective. It suggests that the campaign would have been more effective if there had been a way to limit extremely high levels of exposure. Legacy didn't know that until they saw these results.

Additional (preliminary) analyses suggest that the relationship between exposure and youth smoking has changed as the campaign has changed. We no longer find the "U"-shaped curve that Mike is concerned about. However, those results will need to go through the peer review process before they are shared.

In summary, I believe that the results of the March 2005 AJPH paper suggest that there was a statistically significant association between the truth campaign and youth smoking through 2002. Sure, the study's design has its limitations, but I disagree with the interpretation that the campaign was ineffective.

Matthew Farrelly, PhD
Center for Health Promotion Research
RTI International

Anonymous said...

I'm new at this...who or what is RTI International...

Michael Siegel said...

Thanks for adding your comments and perspective on this issue. I should emphasize that most, if not all, of my comments were directed not at the presentation of the results in the study itself, but at the extrapolations and claims made by Legacy (not by you).

While I disagree about the ultimate conclusion of the study, I also do not think it is unreasonable to believe that this study provides some preliminary evidence to suggest that the campaign may have been effective. But as you state, more research is clearly needed. And that is where the statements made by Legacy depart from the very reasoned and cautious statements that you have made here.

While you have drawn a reasonable conclusion: "the results of the March 2005 AJPH paper suggest that there was a statistically significant association between the truth campaign and youth smoking through 2002," Legacy has gone far beyond this conclusion and has concluded, instead, that the campaign is going to save "millions of young lives." Elsewhere, Legacy claims that the campaign will save "hundreds of thousands of lives."

I simply think that the leap from "the results of the March 2005 AJPH paper suggest that there was a statistically significant association between the truth campaign and youth smoking through 2002" (your conclusion) to "the campaign is going to save millions of lives" (Legacy's public statements) is an extreme one. And it is that leap which I think is inappropriate, unsubstantiated, and perhaps irresponsible, not anything that you have stated or concluded.

So I hope that you will not interpret my comments as a criticism of anything you have done or stated. I think we are actually not too far apart in terms of the level of evidence that this study provides. Where the big difference lies is the leap from your reasoned conclusions to the wild and unsubstantiated exaggeration in Legacy's public statements that the campaign will save millions of lives.

There needs to be some scientific integrity in public health and in tobacco control, and I just don't see any in the way Legacy is wildly overstating and misrepresenting science in the name of promoting itself.

Anonymous said...

Yes, please. Let's all be as understated as possible about how we interpret studies so that we can put any potential reporter to sleep with long-winded caveats about the scientific evidence. In fact let's not try to interpret academic studies at all but just send them directly to health reporters. Then we can guarantee that the studies will either not be reported or that the media will only get the industry side of things.

Michael -- you used to be an advocate. What happened? Why are you expending so much energy "debunking" everyone and everything in tobacco control? Is this really the most effective way of using your considerable talents?

Michael Siegel said...

I'm not talking about a "long-winded caveat" here. I'm talking about a wild exaggeration from a finding that I don't think provides substantial evidence that the "truth" campaign was effective to a public claim that not only is the "truth" campaign highly effective, but that it is going to save millions of lives.

I agree that one must take some liberties with reporters because getting the message out is the critical concern, not the scientific details. However, this is not the case in this situation. It's not a scientific subtlety I'm addressing - it's an exaggeration so wild that it brings into question the very scientific integrity of the organization making the claim.

Jonathan said...

The Centers for Disease Control's report shows that from 2002 to 2004 there was no reduction in the number of school-age children who reported smoking.

The "truth" campaign chose to ignore more recent numbers in favor of older, outdated numbers to support its efforts to receive further funding. Based on the report, the campaign has been a miserable failure from 2002 to 2004.