‘Indignant Minnesotans’ remind journalists that data can hurt

The Washington Post’s Wonkblog map ranking every county in the US by “scenery and climate”

Minnesotans got a bitter lesson on the limits of data this week when the Washington Post’s Wonkblog published a map ranking every county in the US by “scenery and climate.” Reporter Christopher Ingraham built the graphic with data from the 1999 “natural amenities index,” a list based on climate, topography, and water. The North Star State didn’t fare well, with all but two of its 87 counties rating low on the list. (Red Lake County nabbed last place.) Residents took to social media to protest, even banding together on Twitter as the “Indignant Minnesotans.”

There’s a lesson here for journalists. The hype surrounding websites like Vox, FiveThirtyEight, and The New York Times blog The Upshot shows that numbers-driven rankings, charts, and maps can be compelling storytelling tools. But data is subjective.

If reporters don’t take time to understand the quirks and pitfalls of the data sets they’re referencing, and then plainly share that information with readers, they risk more than publishing inaccurate stories. When used carelessly or out of context, data can also harm communities. The swarms of stories on the country’s best and worst places to live—whether they’re based on nature or a city’s number of libraries—affect where people move or vacation, which subsequently touches local economies.

The good news? A growing number of journalists are working to better understand the intricacies of data, says Alberto Cairo, a University of Miami visual journalism professor whose upcoming book, The Truthful Art, focuses on data reporting.* “Just learning a little bit of logic, a little bit of scientific reasoning, and a little bit of stats at the conceptual level can help journalists avoid 90 percent of problems,” he says. It’s important advice for traditional beat reporters and number crunchers alike.

Subjectivity is inherent to data. Researchers choose which topics to examine and how to measure them. Biases don’t enter only through cracks; they live in the foundations of research. Cairo points to studies on the prevalence of domestic violence—some include instances of verbal aggression in their findings, while others only tally physical abuse. Depending on which report you read, you get a different version of “reality.”

The Post’s county ranking, for instance, used information intended to describe natural attributes that “enhance the location as a place to live.” The data ignored the fact that some people like snowy winters and don’t mind the lack of mountains. Never mind outside factors such as population density, property values, and schools.

Nobody wants to read a 2,000-word academic rant, but the subjectivity of data demands thorough yet engaging descriptions in news stories. Free from the constraints of print, digital outlets have no excuse to be mum here. Ingraham says he “shortchanged” the methodology a bit, although several portions of the article did call attention to the data’s bias toward low humidity and mild winters. He also opted to change a description of the ranking from “natural beauty” to “climate and scenery” after the story went live. His takeaway from the punches he took over the rankings, he says, is to figure out how to better portray the shortcomings of studies in the future.

It’s important to take time to pore over the figures and interview experts who really know the subject and data at hand. It’s too tempting for journalists—and scientists—to draw conclusions that don’t tell the whole story or are downright false. In FiveThirtyEight’s sharp take on research, Science Isn’t Broken, author Christie Aschwanden writes:

… as anyone who’s ever tried to correct a falsehood on the Internet knows, the truth doesn’t always win, at least not initially, because we process new evidence through the lens of what we already believe. Confirmation bias can blind us to the facts; we are quick to make up our minds and slow to change them in the face of new evidence.

Just because the statistics won’t become the spine of a years-long, society-shaking investigation doesn’t mean reporters can cut corners. Rankings grab eyeballs and rile readers. When done properly, they lend clarity. “It’s really kind of the sweet spot when it comes to … helping them understand their world a little bit better,” Ingraham says. Think about the college rankings that inform the choices of many Americans. Those lists, of course, are subjective, but clearly labeled criteria can help prospective students choose schools based on what they value most.

The problem with statistical rankings is that they’re often taken at face value; they’re considered scientific because they are based indirectly on science. But as a number of indignant Minnesotans learned this week, numbers aren’t immune from bias. And that means all rankings call for scrutiny, from journalists and readers alike.


*This sentence has been updated to include the correct name of Alberto Cairo’s upcoming book, The Truthful Art, which is due in March 2016. His previous book, The Functional Art, was published in 2012.


ARTICLE SOURCE:  Columbia Journalism Review



MASTHEAD IMAGE SOURCE:  “Colton’s Atlas of the World Illustrating Physical and Political Geography,” 1855 (Wikimedia Commons).