Caveat Emptor: Careful with that Data

As the second decade of the 21st century reaches its midpoint, using data and evaluating outcomes have become increasingly important across a wide range of fields. While data has long been the lifeblood of science, technology, and commerce, it is now becoming essential for organizations and agencies in the social sector as well. However, those organizations and agencies must be careful not to fall into the trap of valuing useless data and drawing false conclusions.
 
Stigma reduction is one arena in which collecting data and measuring outcomes are vitally important. A stigma reduction program could be effective, but without outcome evaluation, how would we ever know? Beyond using evidence-based methods, like contact strategies and education strategies, the most important aspect of any stigma-reduction program is measuring impact. And, crucially, you should use valid and reliable instruments to capture that data. If you create your own metrics, you had better have a good reason for doing so instead of using validated tools.
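
To make “reliable” concrete, here is a minimal sketch in Python (the responses and the cronbach_alpha helper are hypothetical, invented purely for illustration) of an internal-consistency check you would want any homemade scale to pass before trusting its scores:

# Cronbach's alpha, a standard internal-consistency (reliability) estimate.
# Each row is one respondent; each column is one survey item scored 1-5.
import numpy as np

def cronbach_alpha(items):
    # items: 2-D array of shape (n_respondents, n_items)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up responses from six participants on a five-item scale.
responses = np.array([
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 2, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
])
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

A low alpha (the usual rule of thumb is anything below 0.7) suggests the items are not measuring a single underlying construct, which is exactly the kind of flaw that validated instruments have already been screened for.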
 
I just mentioned measuring “impact.” Impact is frequently deployed as a buzzword, carrying definitions that range from quantitatively measurable changes to speculative musings supported by little or no evidence, like much of Malcolm Gladwell’s work.
 
In terms of stigma reduction, impact means the effect your program has on attitudes about people with mental health conditions. If you’re trying to reduce public stigma, you should be measuring the effect your program has on stereotype endorsement, prejudicial attitudes, and behavioral intentions. If you’re trying to reduce self-stigma, or mitigate its effects, you should be measuring changes in stereotype endorsement, self-application of stereotypes, self-esteem, and self-efficacy.
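
As a rough sketch of what that analysis can look like in practice (the scores below are invented, and a real evaluation would use totals from a validated instrument), a paired pretest/posttest comparison on the same participants is a common starting point:

# Did participants' own attitudes change after the program?
# Hypothetical pretest/posttest totals on a stereotype-endorsement
# measure, where higher scores mean more stigmatizing attitudes.
from scipy import stats

pre  = [24, 31, 28, 35, 22, 30, 27, 33]   # before the contact program
post = [20, 27, 26, 30, 21, 24, 25, 29]   # the same participants afterward

t_stat, p_value = stats.ttest_rel(pre, post)  # paired-samples t-test
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"Mean change: {mean_change:+.1f} points (t = {t_stat:.2f}, p = {p_value:.3f})")

The key point is what is being compared: the same people, on a measure of their own attitudes, before and after the program.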
 
What you absolutely shouldn’t do is measure “impact” with irrelevant data that will never lead to any meaningful analysis. That’s worse than not measuring anything at all. For example, let’s say an organization conducts a contact strategy in the community. This program is supposed to reduce stigma among members of the public: the negative attitudes and beliefs community members may hold about people with mental illness. The organization measures “impact” by giving community participants a pretest and a posttest. Okay, that’s outstanding; that’s exactly how it should be done.
 
So, what’s the problem? The survey measures participants’ levels of perceived stigma. In other words, it measures their beliefs about how stigmatizing other people are, not their own beliefs about people with mental illness. An item asking how “most people” would treat someone with a mental illness tells you what respondents believe others think, not what they themselves think. That reveals absolutely nothing about the effect the program had on participants’ own potentially stigmatizing attitudes. They may have been the most stigmatizing individuals in the history of mankind, and the intervention could have radically changed their beliefs. Or not. But there’s no way anyone will ever know, because participants’ attitudes about people with mental illness were never measured.
 
The reason the above example—which may or may not be real—is worse than measuring nothing at all is that someone could look at this data and be deceived into thinking the program is indeed reducing stigma when in fact it might not be—“Hey, these people scored high on the Perceived Stigmatization by Others for Seeking Help Scale before our program, but scored much lower afterwards! WE REDUCED STIGMA!!!” 
 
Maybe you did and maybe you didn’t, but there is no way that scale could tell you. 
 
This is the challenge faced by organizations that want to move into the era of data. Collecting data and measuring impact can be unproductive, or even counterproductive, if you aren’t capturing the right information or measuring the outcomes that actually matter. Absolutely measure your outcomes, but make sure you are doing it right. And remember, if you’re not sure, don’t be afraid to ask.