A version of this article appeared in the Tow Center’s weekly newsletter.
While the term “news desert” effectively conveys the extent of the local news crisis, there is no widespread agreement among journalists and researchers—let alone funders, policymakers, and the public—about how to define and measure one. When dealing with a crisis as urgent as the one currently facing local news, methodological questions might seem beside the point. But any solutions to the problem of news deserts will be predicated on how we understand them.
This question was at the forefront of a recent webinar I moderated for the Tow Center with some of the top researchers in the growing field of news ecosystem studies. During the webinar—the first in a three-part virtual series on local news—the four panelists discussed the theoretical frameworks and methodologies that guide their work, as well as how their field has already evolved in recent years despite its relatively short history.
Penny Abernathy, whose foundational Expanding News Desert project at the University of North Carolina helped popularize the term, said that her team’s initial research on counting and mapping “communities without newspapers” has expanded to address issues like technological and financial “access [to information], as well as the quantity and quality of the news outlets.” Other researchers have since built on this work to include digital and broadcast outlets and to provide more demographic context about the places where news organizations are based.
Sarah Stonbely, research director for the Center for Cooperative Media at Montclair State University, noted that in any study assessing the health of local news, “it’s very difficult to achieve a level of granularity while also providing scale, which provides opportunities for comparative analysis and seeing patterns.” Stonbely’s News Ecosystem Mapping Project focuses on creating a highly detailed portrait of a state’s media landscape—with information on a local publication’s coverage area, frequency, ownership, and medium, as well as municipal-level details on education and median income, among other things. While Stonbely’s project currently focuses on the small state of New Jersey, she hopes its methodology can be applied widely.
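To make that concrete, here is a minimal sketch, in Python, of how a single record in such a mapping dataset might be structured. The field names simply mirror the categories Stonbely describes; the schema and the example values are illustrative assumptions, not the project’s actual data.

```python
# A minimal sketch of one record in a news-ecosystem mapping dataset.
# Field names mirror the categories described above; they are illustrative
# assumptions, not the News Ecosystem Mapping Project's actual schema.
from dataclasses import dataclass, field


@dataclass
class Outlet:
    name: str
    coverage_area: list[str]     # municipalities the outlet covers
    frequency: str               # e.g., "daily" or "weekly"
    ownership: str               # e.g., "independent", "chain", "nonprofit"
    medium: str                  # e.g., "print", "digital", "broadcast"


@dataclass
class Municipality:
    name: str
    median_income: int           # demographic context, in dollars
    pct_college_educated: float  # share of adults with a college degree
    outlets: list[Outlet] = field(default_factory=list)


# Hypothetical example; the names and numbers are placeholders, not real data.
springfield = Municipality("Springfield", 58_000, 0.31)
springfield.outlets.append(
    Outlet("Springfield Gazette", ["Springfield"],
           "weekly", "independent", "print")
)
```

Joining outlet records to municipal demographics in this way is what enables the comparative, pattern-level analysis Stonbely describes.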
Matthew Weber, an associate professor at the University of Minnesota, is hoping to achieve both scale and depth. The News Measures Research Project, an initiative he works on with Duke’s Philip Napoli and others, has mapped outlets across all mediums in one hundred randomly sampled communities and then analyzed sixteen thousand stories those outlets produced in order to assess their quality. The project, Weber said, ran up against one of the most significant methodological challenges for studies like this: the sheer labor involved in data collection and analysis, which frequently requires teams of several people to spend months on trade-database searches, Google research, scraping, and manual coding.
Weber and his team realized it was no longer feasible to manually analyze the fifty-five thousand articles they’ve collected, and they have since turned to automation, using named-entity recognition and natural language processing to perform content analysis. While still in its early stages, their automated process aims to assess the quality of coverage using criteria such as whether an article is original; whether it is truly local in subject matter; and whether it addresses any of eight critical information needs, on topics ranging from political information to education and health, as defined in a study for the FCC.
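As a rough illustration of how named-entity recognition and keyword matching might feed this kind of analysis, here is a minimal Python sketch using the spaCy library. The community name, keyword lists, and scoring logic are illustrative assumptions, not the News Measures Research Project’s actual pipeline.

```python
# A rough sketch of automated content analysis with named-entity recognition.
# The keyword lists and logic are illustrative assumptions, not the project's
# actual pipeline.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline with built-in NER

# Hypothetical keyword lists for two of the eight critical information needs
CIN_KEYWORDS = {
    "politics": {"city council", "mayor", "election", "ordinance"},
    "education": {"school board", "superintendent", "curriculum"},
}


def analyze_story(text: str, community: str) -> dict:
    """Flag whether a story mentions the target community and any CIN topics."""
    doc = nlp(text)
    # Named-entity recognition: collect place names (GPE = geopolitical entity)
    places = {ent.text.lower() for ent in doc.ents if ent.label_ == "GPE"}
    lowered = text.lower()
    topics = [topic for topic, words in CIN_KEYWORDS.items()
              if any(word in lowered for word in words)]
    return {"mentions_community": community.lower() in places,
            "cin_topics": topics}


story = ("The city council in Springfield voted Tuesday to expand the school "
         "board's curriculum review.")
# Model permitting, this flags the story as local and as touching the
# hypothetical "politics" and "education" information needs.
print(analyze_story(story, "Springfield"))
```

Even a sketch this simple hints at the limits Abernathy describes below: a keyword match can flag a topic without saying anything about the depth or quality of the reporting.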
Abernathy identified “a real tension in our profession” between using contemporary technology, such as algorithmic tools, to track trends and the demands of analyzing coverage for quality. As an example, she pointed to a data set of local news stories shared on Facebook, to which both her team and Weber’s were granted access. In some cases, she said, the automated process Weber described found that a story contained valuable public safety information, but when Abernathy’s team analyzed the story manually, they concluded it was a sensationalist crime story that didn’t meet the critical-information-needs criteria. An algorithm can’t tell you the difference between a “five-inch story saying there will be a city council meeting, and one where someone actually attended [the meeting] and talks in depth about what happened,” Abernathy said.
Aaron Foley, director of the newly created Black Media Initiative at the Center for Community Media at CUNY, warned that “we need to be cautious about how we define quality” in the first place. The Black Media Initiative is in the process of conducting its own mapping study of Black media in the US, to be published later this fall. Because Black media has been especially hard hit financially amid the local news crisis, many outlets are so understaffed that they don’t have the capacity to produce much original content, Foley said. Instead, they often opt to directly publish press releases, which “may bother some people within our industry. But this is still information delivery for a certain audience that might be ignored by a bigger paper, so where do we put this in our scale?”
Foley also said that even defining local news for Black communities is surprisingly complicated. “Does a hair magazine count? Hair can be political; hair can be local news. Does that count within our ranks?” he asked. “It challenges our traditional methodologies…but if we are dealing with how news outlets serve certain communities, this is what we will have to address.”
The tensions between scale and depth, manual and automated research methods, and competing definitions of terms like “news desert” will likely continue to resonate as projects like these proceed at a time when local news is more imperiled than ever. And while they may differ in approach, “it’s exciting that a lot of our concepts are overlapping,” Stonbely said. “[We’re] at the beginning of a research genre.… It’s better to be iterating in the same direction.”