The Power of Numbers: Making Sense of Media Statistics
Those sensational statistics about kids and media may be only half of the story
"TV violence continues to rise, study finds," proclaimed the headline in a major newspaper after the release of the third annual National Television Violence Study this past April. It made for a compelling headline, but was this actually what researchers found?
In fact, what the researchers concluded is that the overall percentage of television programs containing some violence was virtually unchanged from the two previous years of the study. What the writer of the article (and the editor who wrote the headline) chose to emphasize, however, was another finding reported in the study's news release: that the percentage of prime-time programs containing violent content rose slightly. The headline that may have seduced us into reading the rest of the article — or may have only caused us to wince — was ultimately misleading.
"When we hear grim-sounding statistics about television violence or how many hours kids watch TV, we collectively shudder and vow not to let our own children become "one of the numbers." But what do these numbers, so glibly bandied about, really mean?"
As parents, when we hear grim-sounding statistics about television violence or how many hours kids watch TV, we collectively shudder and vow not to let our own children become "one of the numbers." But what do these numbers, so glibly bandied about, really mean? To examine statistics more carefully and critically, we need to evaluate how and why a study is being reported. We also need to learn more about how a study was conducted and who conducted it. Understanding these elements will enable us to evaluate what we can — and can't — learn from the numbers we so often hear cited.
Presentation Is Everything
A colleague and I were recently interviewed on a local cable news program about a study we'd published on race, gender, and language in children's animated programming. Though we'd sent the study to the producer in advance, it was clear to me when we were on the set that the on-camera interviewers had skimmed it only moments before. Their questions focused on a point that was relatively minor but probably made for better television — the ways in which Fred and Wilma Flintstone display stereotypical patterns of language. Fred and Wilma evidently made for better banter between the interviewers than our most important findings, which had to do with how heroes and villains are portrayed in the world of animated programming. The conventions of television clearly took precedence and no doubt left viewers with a slant on our research that the study itself did not support.
"Just as headlines and interviewer questions tend to focus audience attention on certain aspects of a study, so do graphics associated with a story."
Just as headlines and interviewer questions tend to focus audience attention on certain aspects of a study, so do the graphics associated with a story. Graphics grab our attention and may influence how (and whether) we attend to the rest of a story. And like headlines or questions, graphics can be misleading. Sometimes the graphs that contain the most information are too difficult to interpret, so the highest bar on a bar graph or the apex of a curve catches our attention, and that's what we'll remember about a study. Pie charts are simpler and more colorful, but they, too, can be misleading, because they sometimes lump categories together to make for more dramatic statistics.
It's important to remember that journalists not only want to convey information to us; they also need to assure themselves of an audience in an increasingly competitive marketplace. That sometimes means leading with the most sensational statistics. It's not that journalists want to mislead us; rather, they just want to be sure we're there. Though we like to think of journalists as totally objective, they, too, see stories through the lenses of their own backgrounds. Tight deadlines and a lack of training in quantitative analysis may also contribute to incomplete reporting of studies.
The People Behind the Statistics
Just as important as understanding the presentation of research is knowing who conducted it. Studies done under the auspices of universities or nonprofit centers generally produce information that is entirely public, while those conducted by polling organizations or companies produce proprietary, or private, information. It's not that proprietary information doesn't reach the public domain; it's just that not all of it does, and it's harder to learn how the information was gathered.
Listen or look for who is cited (an academic? a company spokesperson?) and how the study was funded — especially when studies cited in the media reach contradictory conclusions. For example, think about tobacco studies conducted by researchers employed by cigarette companies versus those conducted by government researchers. At the same time, some studies are funded by groups with political, economic, religious, or other partisan agendas, yet are conducted independently by academics or research firms. The National Television Violence Study, while funded by the cable industry, was conducted by four teams of university-based academics.
"The more accessible — and the more sensational — [scholars] can make their work seem, the more likely it is that the media will want to cover it."
Of course, even academic researchers release their work to the press with an agenda or two in mind. It is difficult to raise funds for research, and hard to get known beyond one's own narrow area of academia. Scholars who can get their work known in the mainstream media often do better in both areas. Increasingly, academics work with public-relations departments to get their work picked up by the press. And of course, the more accessible — and the more sensational — they can make their work seem, the more likely it is that the media will want to cover it.
Truth Is in the Details
Given all the debate about the amount of television that children purportedly watch, it's astounding to realize how poorly — and variably — defined "children's television viewing" really is. As parents, we know that kids do lots of other things while a TV is on, but does this mean that they're really "watching television"? A 1997 study from the Annenberg Public Policy Center addressed this question head-on, pointing out that children are often "counted" in assessments of television viewing hours, such as Nielsen ratings, if they were simply present in the room while the set was on.
There are also many different definitions of "television violence." Is an anvil dropping on Wile E. Coyote's head, after which he bounces right back up (albeit somewhat wrinkled), equivalent to showing someone shot at point-blank range? Some studies have taken into account the degree of physical and emotional distress that results from violence; others have not. Some studies explore the context of violent acts, such as which characters perpetrated the violence and which were its victims; others simply count instances of physical aggression.
"Children are often 'counted' in assessments of television viewing hours, such as Nielsen ratings, if they were simply present in the room when the set was on."
Another important consideration in interpreting studies is understanding how large a sample was used. Think about the ads that say, "Four out of five dentists recommend Brand X gum for their patients who chew gum." But how many dentists did they poll: 5? 20? 100? The size of the sample is important to know because if the chewing gum company's researchers asked only five dentists, the results they report may have occurred by chance. If they polled several hundred dentists and still found that four out of five preferred their brand, that is a far more significant result, as the sketch below illustrates.
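To get a feel for what "occurred by chance" means here, consider a minimal back-of-the-envelope sketch in Python. The numbers are made up for illustration — the 50/50 "no real preference" assumption, the survey sizes, and the helper prob_at_least are ours, not from any actual survey. It simply asks: if dentists had no preference at all, how often would a survey of a given size still produce a lopsided result by luck alone?

```python
# A back-of-the-envelope check (hypothetical numbers, not from any real survey):
# if dentists had no real preference -- each answer a 50/50 coin flip --
# how often would "four out of five recommend it" still happen by chance?
from math import comb

def prob_at_least(k, n, p=0.5):
    """Chance of k or more 'yes' answers out of n respondents, if each
    respondent independently says 'yes' with probability p."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

print(prob_at_least(4, 5))     # ~0.19: nearly one five-dentist survey in five
print(prob_at_least(80, 100))  # ~6e-10: essentially never with 100 dentists
```

Under these made-up numbers, a five-dentist poll turns up "four out of five" almost 20 percent of the time even when dentists have no preference at all, while 80 out of 100 is so unlikely by chance that it almost certainly reflects a real one.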
As another example, the National Television Violence Study examined both broadcast and cable programming, including shows geared toward both children and adults. But the Violence Index, part of the Cultural Indicators Project conducted by researchers at the University of Pennsylvania, considered only programs aired in prime time, which for the most part excludes cartoons and other programs geared toward children. Both may be accurate, but they are measuring somewhat different things.
Even the phrasing of questions can influence a study's outcome. If researchers asked dentists in the chewing gum survey, "Out of all the chewing gums in the world, which one would you recommend?" they might get a different response than if they asked, "Out of brands X and Y, which do you prefer?"
Reading Between the Numbers
There's a slim book that my graduate school statistics professor assigned when I took his course: How to Lie with Statistics by Darrell Huff. At first I thought it an odd choice, but as the course progressed, I learned that, although we as a culture place a great deal of trust in "facts" when they are presented quantitatively, there's often more to the story than the compelling statistics might suggest. Statistics can be used in ways that sensationalize, oversimplify, and present work out of context.
"Even the phrasing of questions can influence a study's outcome."
But numbers are important. We hear statistics cited all the time; polling is big business in everything from marketing to politics. We may form opinions or make useful decisions based on numbers, and we often glean relevant information from statistics.
Numbers can be a legitimate and sound basis for decision making. In deciding where to set limits on our children's TV viewing, for example, it may be reassuring to know that children who watch moderate amounts of television also tend to read a lot. This finding comes from averages across many studies that have looked at the relationship between television viewing and scholastic achievement.
"It may be reassuring to know that children who watch moderate amounts of television also tend to read a lot."
To become media-savvy readers of statistics, we need to ask these kinds of questions:
- How is the research presented in the press?
- How was the research conducted?
- Who constituted the sample, and how were they selected?
- Who conducted the research?
- Who funded the research?
- Is there anyone who stands to gain from the way numbers are presented?
The answers to these questions won't necessarily render the study's conclusions invalid — but they will help us decide how to use that information.
From Better Viewing Magazine, Nov/Dec 1998 issue. Copyright 1998, CCI/Crosby Publishing, Inc. Reprinted with permission.