Tuesday, April 05, 2005

AJPH Paper on "truth" Campaign Suggests No Effect on Youth Smoking

A paper published in the March issue of the American Journal of Public Health, which examined the relationship between exposure to the American Legacy Foundation's "truth" campaign and the prevalence of youth smoking, concluded that the campaign "was associated with significant declines in youth smoking prevalence" (Farrelly MC, Davis KC, Haviland L, Messeri P, Healton CG. Evidence of a dose-response relationship between "truth" antismoking ads and youth smoking prevalence. Am J Public Health 2005; 95:425-431). Now the results of that study have been called into question by Dr. Joel Moskowitz of the UC Berkeley School of Public Health, who published electronic letters in AJPH and the BMJ (British Medical Journal) pointing out that the dose-response relationship found in the study actually does not support the hypothesis that the campaign was effective in reducing smoking prevalence.

Moskowitz writes: "The theoretical rationale for inclusion of the GRP-squared term was to test whether the campaign had diminishing returns (p. 428). This would suggest an L-shaped relationship between campaign advertising and smoking prevalence, not the U-shaped relationship found. The results suggest that the campaign had no detectable effect on smoking prevalence among those who resided in media markets that received higher levels of exposure, which included students in most major metropolitan areas. Yet the paper obscured this finding and failed to address its policy implications. Did an overdose of truth render the campaign ineffective? Or were the models improperly specified to estimate campaign effects?

"When examined by grade level, the effect of truth advertising on smoking prevalence was significant only for students in grade 8 in media markets with moderate exposure (Table 2). That the campaign's impact did not sustain through high school suggests that truth advertising was no more effective than school-based smoking prevention programs (e.g., Wiehe et al., 2005)."

The Rest of the Story

I evaluated the study in light of the alternative interpretation of the results presented by Dr. Moskowitz. My interpretation of these results is that they fail to support the conclusion of a significant effect of the "truth" campaign on youth smoking prevalence.

First, it is important for the reader to understand that the study failed to find any significant relationship between the intensity of exposure to the "truth" campaign and youth smoking prevalence under the assumption that campaign effects would increase linearly with campaign exposure. This is probably the most sound initial assumption, since the study measures not actual, individual-level exposure but the penetration of various media outlets in a geographic region (which is used as a proxy for actual exposure).

This basic result is hidden in the second half of the final paragraph on page 428 of the paper, and the quantitative result is not provided in the paper: "We also estimated a set of regressions excluding the quadratic GRP term (GRP squared) (results available on request). In this set, the effect was marginally statistically significant for 12th-grade students (OR=0.879; P<.07) but statistically nonsignificant overall and for 8th- and 10th-grade students."

This means that youth smoking did not decrease as campaign exposure increased linearly over the entire range of exposure (from 647 to 22,389 GRPs). Gross rating points (GRPs) represent the percentage of the target audience that is reached by the campaign multiplied by the number of times they are reached. If the campaign had a significant effect on youth smoking, then one would expect smoking prevalence to fall as exposure increased. This was not the case.
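To make the GRP arithmetic concrete, here is a minimal sketch. The reach and frequency figures are invented for illustration and are not taken from the paper:

```python
def grps(reach_pct: float, avg_frequency: float) -> float:
    """Gross rating points: the percentage of the target audience
    reached, multiplied by the average number of exposures per
    person reached."""
    return reach_pct * avg_frequency

# Hypothetical quarter: the ads reach 80% of the target audience in a
# media market, an average of 5 times each -> 400 GRPs that quarter.
print(grps(80, 5))  # 400
```

Accumulated over many quarters of advertising, totals on the scale of the paper's range (647 to 22,389 GRPs) build up from figures like these.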

The paper attempts to find a relationship between campaign exposure and youth smoking by using an alternative assumption about media effects - that there are "diminishing returns" as exposure gets very high (i.e., once you reach a certain level of exposure, additional exposure is unlikely to have much of an additional effect on smoking prevalence).

While this is not an unreasonable assumption to test, the study results failed to detect a relationship that was consistent with this pattern of diminishing returns. Instead, the paper found support for a U-shaped relationship, in which the campaign was found to be associated with lower smoking prevalence at moderate exposure levels, but to have minimal, if any, effect at both very low and very high exposure levels.

This type of relationship is really not consistent with the conclusion that the "truth" campaign was effective in reducing youth smoking prevalence. While results that showed diminishing returns would be consistent with such a conclusion, the fact that there was not an observed effect among youths living in the areas with the absolute highest exposure to the campaign is not.

I will try to demonstrate this problem with some examples:

1. Under the basic analysis of the study, in which smoking rates in 2000-2002 were compared to those in the baseline period (1997 to 1999), the results show that at an exposure level of 22,000 GRPs (the highest level), the odds ratio for youth smoking is 0.96, indicating essentially no effect of campaign exposure.

2. In the comparison of smoking in 2002 versus the baseline period, the results show that at an exposure level of 20,000 GRPs (about average for the major metropolitan areas with highest exposure), the odds ratio for youth smoking is 0.97, again indicating essentially no effect of campaign exposure.

3. In the analysis restricted to 8th-grade students, which was the only one to find a campaign effect (there was no significant effect of the campaign on 10th-grade or 12th-grade students), the results show that at an exposure level of 19,000 GRPs, there is no effect of campaign exposure (odds ratio is 0.99), and at an exposure level of 22,000 GRPs, exposed youths are actually more likely to smoke (i.e., there is actually a reverse effect of the campaign at very high exposure levels).
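The pattern behind these three examples can be sketched with a quadratic log-odds model of the general kind the paper estimates. The coefficients below are invented purely to illustrate how a U shape arises; they are not the paper's actual estimates:

```python
import math

# Hypothetical coefficients for log(OR) = b1*GRP + b2*GRP^2, chosen
# only so that the curve dips below 1 and then returns, producing the
# U shape discussed above. Not the published estimates.
B1 = -6e-5
B2 = 3e-9

def odds_ratio(grp: float) -> float:
    """Odds of smoking at a given exposure, relative to zero exposure."""
    return math.exp(B1 * grp + B2 * grp ** 2)

for g in (1_000, 10_000, 20_000, 22_000):
    print(g, round(odds_ratio(g), 2))
```

With these illustrative coefficients, the odds ratio falls well below 1 at moderate exposure, climbs back to about 1 near 20,000 GRPs, and rises above 1 at the very highest exposures: exactly the pattern in which moderate-exposure markets show an apparent effect while the highest-exposure markets show none, or a reversal.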

The reason why these findings are not apparent in the paper is that, as Dr. Moskowitz points out, Figure 2 cuts off at an exposure level of 15,000 GRPs, which leaves out a large part of the sample. Had the figure been extended to show the results for the full range of exposure, it would have become apparent that there was no campaign effect detected for youths with very high campaign exposure. It would have become clear that the pattern was not consistent with a hypothesis of diminishing campaign effects, but rather, consistent with the absence of an effect of the campaign on youth smoking prevalence.

The important point, I think, is that the results just don't appear to support a conclusion that the "truth" campaign resulted in a significant decrease in youth smoking prevalence. They certainly do not, I believe, support a causal conclusion.

While there are plenty of reasons to support and continue funding anti-smoking media campaigns such as the "truth" campaign, I do not see the results of this particular study as being one of them.

8 comments:

Anonymous said...

The reason why these findings are not apparent in the paper is that, as Dr. Moskowitz points out, Figure 2 cuts off at an exposure level of 15,000 GRPs, which leaves out a large part of the sample. Had the figure been extended to show the results for the full range of exposure, it would have become apparent that there was no campaign effect detected for youths with very high campaign exposure.

Does this suggest that the researchers deliberately misrepresented their findings?

Matthew Farrelly, PhD said...

First, it is important to remind the readers of the main study findings:

1.) Overall, there was a statistically significant relationship between truth exposure and youth smoking (across all grades combined).

2.) In the absence of the truth campaign (i.e., if GRPs had been 0), smoking rates would have been 22% higher in 2002 than was observed. This translates to 300,000 fewer youth smokers.

3.) The effects grow over time, consistent with expectations.

4.) Campaign exposure is not associated with other risk behaviors like binge drinking, helping to rule out alternative explanations.

5.) These results are consistent with other studies (and Dr. Siegel's own reviews and papers) that show that mass media campaigns can be effective.

The underlying issues in the commentaries by Drs. Siegel and Moskowitz are important--that it appears that the campaign was most effective with younger students and that at the highest levels of exposure, the campaign may have no effect.

Unfortunately, some of these issues are being distorted because the results of the study are being misinterpreted.

1.) Drs. Siegel and Moskowitz assert that Figure 2 inappropriately truncates the range of GRPs. I want to explain the rationale for the range of GRPs presented in Figure 2 of the paper (up to 16,000 GRPs). This range represents the AVERAGE GRPs from 2000-2002, not the maximum or 2002 values presented in Figure 1. Average GRPs for 2000-2002 are the appropriate values of GRPs that correspond to the odds ratios presented in Column 1 of Table 2 in the paper. Unlike Columns 2-4, which show the year-specific GRP effects, Column 1 represents the average effects over the 2000-2002 time period. The appropriate range of GRPs for this set of results is the 2000-2002 average GRPs, as is displayed in the figure. We had this main finding in mind when creating this figure.

Therefore, it is incorrect to state that a large part of the sample is left out of Figure 2. For the curve that represents the overall results (Column 1, Table 2) the full relevant range is presented. This range also captures the relevant range for the year-specific results for 2000 and 2001. What I now see is that the full range of GRPs for the 2002-specific curve is not captured. This was not intentional. Had we created a figure for the 2002 results with the full range of GRPs for 2002, it is true that a few media markets would have little to no campaign effect (exactly 1 media market out of 210 would have an odds ratio slightly above 1). However, this figure would also illustrate that most youth would be in media markets with a campaign effect.

It also does not follow that there was no overall effect of the truth campaign on youth smoking in the U.S. if some media markets (and not most large metropolitan areas, as Dr. Moskowitz asserts) show no effect while many more do.

What it implies is that it may have been more efficient to slow the media buy in certain high-exposure markets. But I think this topic needs more thought and investigation.

To recap, the underlying issues in the commentaries by Drs. Siegel and Moskowitz are important--that it appears that the campaign was most effective with younger students and that at the highest levels of exposure, the campaign may have no effect. It should be noted, however, that the results of the study imply that the campaign did have an effect on youth smoking in the majority of media markets in the U.S., and this translated to a 22% decline in youth smoking from 1999-2002 and 300,000 fewer smokers by 2002.

Matthew C. Farrelly, PhD
Center for Health Promotion Research, RTI International.

Michael Siegel said...

Since Dr. Farrelly agrees with Dr. Moskowitz's commentary and my own that "it appears that the campaign was most effective with younger students and that at the highest levels of exposure, the campaign may have no effect," it is now important to consider the most reasonable interpretation of these findings.

I certainly do not think that the first explanation that should come to mind is that the campaign was effective, but that for some strange reason that we can't explain, it was only effective in media markets where there was moderate exposure and that it did not work at the highest exposure levels. Such an explanation is not consistent with the underlying hypothesis about the relationship between exposure and effect and is also not consistent with the conclusion of a dose-response relationship between exposure and effect, as suggested in the paper's title.

Another explanation, that I think is more reasonable, is that what the paper is really picking up is not campaign effects, but simply differences in smoking rates between media markets of different socioeconomic characteristics. Why could it be that smoking rates dropped more in suburban-type areas than in very rural and very urban areas? The most logical explanation, I believe, is that this reflects differences in major socioeconomic factors that are highly related to smoking prevalence, and that these factors were not adequately controlled for by the variables measured in the study.

At any rate, the most important thing for readers to judge is whether the finding that the association between exposure to the "truth" campaign and youth smoking prevalence did not hold for very high exposures is most consistent with the interpretation that the campaign was effective. I do not think that is the case.

But even if it is a plausible explanation, the fact that the paper offers no discussion of this point and only considers this one possible explanation is troubling.

Michael Siegel said...

An example to illustrate why the interpretation that the "truth" campaign was effective is not appropriate may be helpful. Suppose a study found that smoking was related to lung cancer, but only for those who smoked up to 2 packs per day. For those who smoked 3-4 packs per day, there was no association between smoking and lung cancer.

True, one could offer the explanation that for some reason, very high doses of cigarette smoke are not harmful, but is that really consistent with the overall conclusion that smoking causes lung cancer?

In such a situation, it is more likely that some other factor that is related to the amount of smoking is what is causing the association between smoking and lung cancer, and that this factor explains why heavy smokers did not have increased lung cancer rates.

Matthew Farrelly, PhD said...

I would like to respond to Dr. Siegel's alternative explanation, "that what the paper is really picking up is not campaign effects, but simply differences in smoking rates between media markets of different socioeconomic characteristics…[that] were not adequately controlled for by the variables measured in the study." I strongly disagree with this statement. We employed two alternative approaches to control for media market characteristics.

The first is the approach described in Heckman JJ, Hotz VJ. Choosing among alternative nonexperimental methods for estimating the impact of social programs: the case of manpower training. J Am Stat Assoc. 1989;84:862-880. In our case, this involved including an indicator variable (0/1) for every media market in the data set (minus one for a reference group). In essence, this makes each media market a control for itself, controlling for pre-existing differences in smoking rates across media markets. This is done to account for potential bias that may exist because campaign exposure was not randomly assigned. In fact, failure to do so does bias the truth effects upward modestly. The alternative approach, which leads to comparable results, involved controlling for media market sociodemographic characteristics.

And if Dr. Siegel's concern were valid, why do we not see a spurious correlation between truth exposure and youth drinking and binge drinking? We tested this to help rule out such alternative explanations.

Back to the point about the non-linear relationship between truth exposure and youth smoking, I believe this relationship is being overinterpreted. First of all, when the model is correctly applied, there are only 14 (out of 210) media markets that have an odds ratio in 2002 of 0.9 or higher. So while there are some markets where the level of exposure is so high (based on the quadratic function of GRPs) that it implies there is little to no effect from the campaign, this does not constitute a large population.

That said, is that implausible? Campaign planners fret over "wearout" and backlash from audiences seeing the same message too many times. It is possible that alternative non-linear relationships may fit the data better (cubic or log-linear). These can and should be further investigated, but it is premature to conclude that the campaign was ineffective at the highest levels of exposure.

Michael Siegel said...

After discussing the paper with Dr. Farrelly, I am reassured about the methods used in the study to account for possible media market-specific effects. One approach used was to allow for fixed effects of each individual media market, an approach that would account for differences in underlying characteristics across markets that would relate strongly to youth smoking prevalence.

So I think that the main issue is really the nature of the dose-response relationship between campaign exposure and youth smoking. It is possible that examining a logarithmic or log-linear model may help elucidate the relationship between campaign exposure and effects, particularly at high exposure levels.

Anonymous said...

Prevalence is slippery. Consumption is more solid. Is there a way to measure consumption by customer age?

-- Jon

Anonymous said...

The use of GRPs for predictive purposes is not supported by the Advertising Research Foundation, since GRPs represent what media is bought, not what is actually achieved (in this case, change in attitude, prevalence, or behavior). The dose-response model applied to advertising is not appropriate for the simple reason that administering a 'dose' of advertising is not the same as administering a dose of medicine - there is no control, unless exposure to advertising is linked to some effect. A tricky problem. More work needs to be done.