Illustration by James Yang

Building a More Honest Internet

What would social media look like if it served the public interest?

November 26, 2019

Over the course of a few short years, a technological revolution shook the world. New businesses rose and fell, fortunes were made and lost, the practice of reporting the news was reinvented, and the relationship between leaders and the public was thoroughly transformed, for better and for worse. The years were 1912 to 1927 and the technological revolution was radio.

Radio began as bursts of static used to convey dots and dashes of information over long distances. As early as 1900, sound was experimentally broadcast over the airwaves, but radio as we know it—through which anyone with an AM receiver can tune in to hear music and voices—wasn’t practical until 1912, when teams around the world independently figured out how to use the triode vacuum tube as an amplifier. In the years that followed, three countries—the United States, the United Kingdom, and the Soviet Union—developed three distinct models for using the technology. 

In the US, radio began as a free-market free-for-all. More than five hundred radio stations sprang up in less than a decade to explore the possibilities of the new medium. Some were owned by radio manufacturers, who created broadcasts so they could sell radio receivers; others were owned by newspapers, hotels, or other businesses, which saw radio as a way to promote their core product. But 40 percent were noncommercial: owned by churches, local governments, universities, and radio clubs. These stations explored the technical, civic, and proselytizing possibilities of radio. Then came 1926, and the launch of the National Broadcasting Company by the Radio Corporation of America, followed in 1927 by the Columbia Broadcasting System. These entities, each of which comprised a network of interlinked stations playing local and national content supported by local and national advertising, became dominant players. Noncommercial broadcasters were effectively squeezed out.

In the Soviet Union, meanwhile, ideology prevented the development of commercial broadcasting, and state-controlled radio quickly became widespread. Leaders of the new socialist republics recognized the power of broadcasting as a way to align the political thinking of a vast land populated primarily by illiterate farmers. Radio paralleled industrialization: in the 1920s, as workers moved to factories and collective farms, they were met with broadcasts from loudspeakers mounted to factory walls and tall poles in town squares. And once private radios became available, “wired radio”—a hardwired speaker offering a single channel of audio—connected virtually every building in the country.


The United Kingdom went a different route, eschewing the extremes of unfettered commercialism and central government control. In the UK’s model, a single public entity, the British Broadcasting Company, was licensed to broadcast content for the nation. In addition to its monopoly, the BBC had a built-in revenue stream. Each radio set sold in the UK required the purchase of an annual license, a share of which went to fund the BBC. The BBC’s first director, John Reith, was the son of a Calvinist minister and saw in his leadership a near-religious calling. The BBC’s mission, he thought, was to be the British citizen’s “guide, philosopher, and friend,” as Charlotte Higgins writes in This New Noise (2015), her book on the BBC. Under Reith, the BBC was the mouthpiece of an empire that claimed dominion over vast swaths of the world, and it was socially conservative and high-minded in ways that could be moralistic and boring. But it also invented public service media. In 1926, when a national strike shut down the UK’s newspapers, the BBC, anxious to be seen as independent, earned credibility by giving airtime to both government and opposition leaders. Over the subsequent decades, the BBC—rechristened the British Broadcasting Corporation in 1927—has built a massive international news-gathering and distribution operation, becoming one of the most reliable sources of information in the world. 

Those models, and the ways they shaped the societies from which they emerged, offer a helpful road map as we consider another technological revolution: the rise of the commercial internet. Thirty years after the invention of the World Wide Web, it’s increasingly clear that there are significant flaws in the model that now dominates it globally. Shoshana Zuboff, a scholar and activist, calls this model “surveillance capitalism”: a system in which users’ online movements and actions are tracked and that information is sold to advertisers. The more time people spend online, the more money companies can make, so our attention is incessantly pulled to digital screens to be monitored and monetized. Facebook and other companies have pioneered sophisticated methods of data collection that allow ads to be precisely targeted to individual people’s consumer habits and preferences. And this model has had an unintended side effect: it has turned social-media networks into incredibly popular—some say addictive—sources of unregulated information that are easily weaponized. Bad-faith actors, from politically motivated individuals to for-profit propaganda mills to the Russian government, can easily harness social-media platforms to spread information that is dangerous and false. Disinformation is now widespread across every major social-media platform.


In response to the vulnerabilities and ill effects associated with large-scale social media, movements like Time Well Spent seek to realign tech industry executives and investors in support of what they call “humane tech.” Yes, technology should act in the service of humanity, not as an existential threat to it. But in the face of such a large problem, don’t we need something more creative, more ambitious? That is, something like radio? Radio gave us the first public service media, a model that still thrives today. A new movement toward public service digital media may be what we need to counter the excesses and failures of today’s internet.

 

The dominant narrative for the growth of the World Wide Web, the graphical, user-friendly version of the internet created by Tim Berners-Lee in 1989, is that its success has been propelled by Silicon Valley venture capitalism at its most rapacious. The idea that currently prevails is that the internet is best built by venture-backed startups competing to offer services globally through category monopolies: Amazon for shopping, Google for search, Facebook for social media. These companies have generated enormous profits for their creators and early investors, but their “surveillance capitalism” business model has brought unanticipated harms. Our national discussions about whether YouTube is radicalizing viewers, whether Facebook is spreading disinformation, and whether Twitter is trivializing political dialogue need to also consider whether we’re using the right business model to build the contemporary internet.

As in radio, the current model of the internet is not the inevitable one. Globally, we’ve seen at least two other possibilities emerge. One is in China, where the unfettered capitalism of the US internet is blended with tight state oversight and control. The result is utterly unlike sterile Soviet radio—conversations on WeChat or Weibo are political, lively, and passionate—but those platforms have state-backed censorship and surveillance baked in. (Russia’s internet is a state-controlled capitalist system as well; platforms like LiveJournal and VKontakte are now owned by Putin-aligned oligarchs.)

The second alternative model is public service media. Wikipedia, the remarkable participatory encyclopedia, is one of the ten most-visited websites in the world. Wikipedia’s parent organization, the Wikimedia Foundation, had an annual budget of about $80 million in 2018, roughly a quarter of 1 percent of what Facebook spent that year. Virtually all of Wikimedia’s money comes from donations, the bulk of it in millions of small contributions rather than large grants. Additionally, Wikimedia’s model is made possible by millions of hours of donated labor provided by contributors, editors, and administrators.

Wikipedia’s success has been difficult to extend beyond encyclopedias, though. Wikinews, an editable, contributor-driven daily newspaper, often finds itself competing with its far larger sibling; breaking news is often reported in Wikipedia articles even before it enters Wikinews’s newsroom. Wikibooks, which creates open-source textbooks, and Wikidata, which hosts open databases, have had more success, but they don’t dominate a category the way Wikipedia does. Of the world’s top hundred websites, Wikipedia is the sole noncommercial site. If the contemporary internet is a city, Wikipedia is the lone public park; all the rest of our public spaces are shopping malls—open to the general public, but subject to the rules and logic of commerce.

For many years, teachers warned their students not to cite Wikipedia—the information found there didn’t come from institutional authorities, but could be written by anyone. In other words, it might be misinformation. But something odd has happened in the past decade: Wikipedia’s method of debating its way to consensus, allowing those with different perspectives to add and delete each other’s text until a “neutral point of view” is achieved, has proved surprisingly durable. In 2018, when YouTube sought unbiased information about conspiracy theories to provide context for controversial videos, it added text sourced from Wikipedia articles. In the past decade, we’ve moved from Wikipedia being the butt of online jokes about unreliability to Wikipedia being one of the best definitions we currently have of consensus reality.


While it’s true that public service media like Wikipedia have had to share the landscape with increasingly sophisticated commercial companies, it’s also true that they fill a void in the marketplace. In 1961, Newton Minow, who had just been appointed chairman of the Federal Communications Commission, challenged the National Association of Broadcasters to watch a full day’s worth of their insipid programming. “I can assure you that what you will observe is a vast wasteland,” he declared. Rather than simply limiting entertainment, Minow and his successors focused on filling the holes in educational, news, and civic programming—those areas left underserved by the market—and by the early 1970s, public service television and radio broadcasters like PBS and NPR were bringing Sesame Street and All Things Considered to the American public.

A public service Web invites us to imagine services that don’t exist now because they are not commercially viable, but that perhaps should exist for the benefit of citizens in a democracy. We’ve seen a wave of innovation around tools that entertain us and capture our attention for resale to advertisers, but much less innovation around tools that educate us and challenge us to broaden our sphere of exposure, or that amplify marginalized voices. Digital public service media would counter the spread of misinformation with educational material and legitimate news.

Recently, President Trump referenced a widely discredited study to make the absurd claim that Google manipulated search results in order to swing the 2016 presidential election toward Hillary Clinton. Though Trump’s claim is false (and was widely shared with his massive Twitter following, itself a demonstration of how untrustworthy social media can be), it rests atop some uncomfortable facts. Research conducted by Facebook during the 2010 midterm elections and published in 2012 demonstrated that it may indeed be possible for the platform to affect election turnout. When Facebook users were shown that up to six of their friends had voted, they were 0.39 percent more likely to vote than users who had seen no one vote. While the effect is small, Harvard Law professor Jonathan Zittrain observed that even this slight push could influence an election—Facebook could selectively mobilize some voters and not others. Election results could also be influenced by both Facebook and Google if they suppressed information that was damaging to one candidate or disproportionately promoted positive news about another.
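To see why Zittrain takes such a small effect seriously, it helps to do the arithmetic. The sketch below applies the 0.39 percent lift reported in the study to an audience size and a margin of victory that are entirely hypothetical; it is back-of-the-envelope math, not a claim about any actual election.

```python
# Back-of-the-envelope arithmetic on selective mobilization. The 0.39%
# turnout lift comes from the Facebook study cited above; the audience
# size and margin of victory below are hypothetical.

turnout_lift = 0.0039         # 0.39 percentage-point increase in turnout

targeted_users = 5_000_000    # hypothetical: users in one state shown the voting prompt
extra_votes = targeted_users * turnout_lift
print(f"Additional votes from the nudge: {extra_votes:,.0f}")           # ~19,500

hypothetical_margin = 15_000  # hypothetical statewide margin of victory
print(f"Larger than the margin? {extra_votes > hypothetical_margin}")   # True
```

If a platform showed that prompt only to users it expected to favor one candidate, all of those additional votes would land on one side, and no individual user would notice anything unusual.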

This sort of manipulation would be even harder to detect than Russia’s disinformation campaign during the 2016 US election, because evidence would consist not of inaccurate posts, but subtle differences in the ranking of posts across millions of users. Furthermore, it may be illegal to audit systems to determine if manipulation is taking place. Digital media and information professor Christian Sandvig and a team of academics are currently suing the Department of Justice for the right to investigate racial discrimination on online platforms. That work could fall afoul of the Computer Fraud and Abuse Act, which imposes severe penalties on anyone found guilty of accessing a system like Facebook or Google in a way that “exceeds authorized access.”

One way to avoid a world in which Google throws our presidential election would be to allow academics or government bureaucrats to regularly audit the search engine. Another way would be to create a public-interest search engine with audits built in. The idea is not quite as crazy as it sounds. From 2005 to 2013, the French government spearheaded a collaborative project called Quaero, a multimedia search engine designed to index European cultural heritage. The project faltered before it could become a challenger to platforms like Google, but had it continued, EU law would have mandated that it have a high degree of transparency. In 2015, Wikimedia began planning a new search engine, the Wikimedia Knowledge Engine, to compete with systems like Wolfram Alpha and Siri, both of which provide data-driven, factual analysis in response to queries. Auditability was a key part of the project’s design. (The project was abandoned when it created dissension within the Wikimedia community.)

We can imagine a search engine with more transparency about, for example, why it ranks certain sites above others in search results, with a process in place to challenge disputed rankings. But it might be more exciting to imagine services and tools that don’t yet exist, ones that will never be created by for-profit companies.
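As a thought experiment, here is a minimal sketch of what ranking transparency could look like in code. It describes no real search engine’s internals; the scoring factors, weights, and field names are all assumptions chosen for illustration.

```python
# A minimal sketch of an auditable ranking function. The factors and
# weights are hypothetical; the point is that every ranking decision
# carries a human-readable explanation that an auditor, or someone
# challenging a disputed ranking, could inspect.

from dataclasses import dataclass

WEIGHTS = {"relevance": 0.6, "source_reliability": 0.3, "freshness": 0.1}

@dataclass
class RankedResult:
    url: str
    score: float
    explanation: dict  # factor -> contribution to the final score

def rank(results):
    ranked = []
    for r in results:
        contributions = {factor: weight * r[factor] for factor, weight in WEIGHTS.items()}
        ranked.append(RankedResult(r["url"], sum(contributions.values()), contributions))
    return sorted(ranked, key=lambda x: x.score, reverse=True)

# Example: a challenged result can be explained factor by factor.
top = rank([{"url": "https://example.org", "relevance": 0.9,
             "source_reliability": 0.8, "freshness": 0.4}])[0]
print(top.score, top.explanation)
```

Pair a function like this with public logs and an appeals process, and “audits built in” stops being an abstraction.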

 

Consider social media. Research suggests that social platforms may be increasing political polarization, straining social ties, and causing us anxiety and depression. Facebook is criticized for creating echo chambers and “filter bubbles” in which people only encounter content—sometimes inaccurate content—that reinforces their prejudices. The resulting disinformation is, in part, a product of Facebook’s business model. Because the platform optimizes for “engagement,” measured in time spent on the site and interactions with content, the company has a disincentive to challenge users with difficult or uncomfortable information. The key reason misinformation spreads so fast and far is that people like sharing it. The stories that offer the biggest opportunities for engagement—and thus the stories that Facebook is built to direct our attention to—are stories that reinforce existing prejudices and inspire emotional reactions, whether or not they are accurate.

Can we imagine a social network designed in a different way: to encourage the sharing of mutual understanding rather than misinformation? A social network that encourages you to interact with people with whom you might have a productive disagreement, or with people in your community whose lived experience is sharply different from your own? Imagine a social network designed to allow constituents in a city to discuss local bills and plans before voting on them, or to permit recent immigrants to connect with potential allies. Instead of optimizing for raw engagement, networks like these would measure success in terms of new connections, sustained discussions, or changed opinions. These networks would likely be more resilient in the face of disinformation, because the behaviors necessary for disinformation to spread—the uncritical sharing of low-quality information—aren’t rewarded on these networks the way they are on existing platforms.
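One rough way to picture the difference is in how a feed scores a post. The sketch below contrasts an engagement-style objective with one built around the goals named above (new connections, sustained discussion, exposure across communities); every signal name and weight is invented for illustration rather than drawn from any real platform.

```python
# Two hypothetical feed-scoring objectives. The signal names and weights
# are invented; what matters is the contrast in the behavior each rewards.

def engagement_score(post):
    # Rewards whatever keeps people on the site and reacting.
    return (1.0 * post["clicks"]
            + 2.0 * post["reshares"]
            + 0.5 * post["minutes_spent"])

def public_interest_score(post):
    # Rewards outcomes a public service network might optimize for:
    # conversations that cross communities, persist over time, and create
    # new connections, while penalizing content flagged as misleading.
    return (3.0 * post["replies_across_communities"]
            + 2.0 * post["new_connections_formed"]
            + 1.0 * post["days_of_sustained_discussion"]
            - 2.0 * post["flags_as_misleading"])
```

A post that provokes a flurry of outraged reshares scores well on the first objective and poorly on the second, which is the kind of resilience to disinformation described above.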


What’s preventing us from building such networks? The obvious criticisms are, one, that these networks wouldn’t be commercially viable, and, two, that they wouldn’t be widely used. The first is almost certainly true, but this is precisely why public service models exist: to counter market failures.

The second is more complicated. The two biggest obstacles to launching new social networks in 2019 are Facebook and… Facebook. It’s hard to tear users away from a platform they’re already accustomed to, and if a new social network does gain momentum, Facebook will likely purchase it. A mandate of interoperability could help. Right now, social networks compete for your attention, asking you to install specific software on your phone to interact with them. But just as Web browsers allow us to interact with any website through the same architecture, interoperability would mean we could build social media browsers that put existing social networks, and new ones, in the same place.
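In code, interoperability amounts to a shared interface that any network could implement, much as every website speaks HTTP to every browser. The sketch below is one hypothetical shape such an interface could take; it is not a description of any existing standard, though federated protocols such as ActivityPub work along broadly similar lines.

```python
# A hypothetical common interface that a "social media browser" could use
# to read from and post to any network implementing it. The method names
# and types are assumptions for illustration, not an existing standard.

from dataclasses import dataclass
from typing import List, Protocol

@dataclass
class Post:
    author: str
    body: str
    network: str

class SocialNetwork(Protocol):
    def fetch_timeline(self, user_id: str) -> List[Post]: ...
    def publish(self, user_id: str, body: str) -> Post: ...

def merged_timeline(user_id: str, networks: List[SocialNetwork]) -> List[Post]:
    # A browser-style client treats every compliant network identically,
    # so a new public service network shows up alongside the incumbents.
    posts: List[Post] = []
    for network in networks:
        posts.extend(network.fetch_timeline(user_id))
    return posts
```

Under a mandate like this, switching costs fall: joining a new, noncommercial network would not mean leaving your existing connections behind.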

The question isn’t whether public social media is viable. It is, if we want it to be. The question is what we’d want to do with it. To start, we need to imagine digital social interactions that are good for society, rather than corrosive. We’ve grown so used to the idea that social media is damaging our democracies that we’ve thought very little about how we might build new networks to strengthen societies. We need a wave of innovation around imagining and building tools whose goal is not to capture our attention as consumers, but to connect and inform us as citizens.

This article has been updated to reflect a correction in Christian Sandvig’s academic title.

Ethan Zuckerman is director of the Center for Civic Media at MIT and associate professor of the practice at the MIT Media Lab. He is the author of Digital Cosmopolitans (2013) and cocreator of the MediaCloud.org media analytics platform.