A decade ago, the consensus was that the digital revolution would give effective voice to millions of previously unheard citizens. Now, in the aftermath of the Trump presidency, the consensus has shifted to anxiety that online behemoths like Twitter, Google, YouTube, Instagram and Facebook have created a crisis of knowledge, confounding what is true and what is untrue and eroding the foundations of democracy.
These worries have intensified in response to the violence of Jan. 6, and the widespread acceptance among Republican voters of the conspicuously false claim that Democrats stole the election.
Nathaniel Persily, a law professor at Stanford, summarized the dilemma in his 2019 report, “The Internet’s Challenge to Democracy: Framing the Problem and Assessing Reforms,” pointing out that in a matter of just a few years
the widely shared utopian vision of the internet’s impact on governance has turned decidedly pessimistic. The original promise of digital technologies was unapologetically democratic: empowering the voiceless, breaking down borders to build cross-national communities, and eliminating elite referees who restricted political discourse.
Since then, Persily continued:
That promise has been replaced by concern that the most democratic features of the internet are, in fact, endangering democracy itself. Democracies pay a price for internet freedom, under this view, in the form of disinformation, hate speech, incitement, and foreign interference in elections.
Writing separately in an email, Persily argued that
Twitter and Facebook allowed Trump both to get around legacy intermediaries and to manipulate them by setting their agenda. They also provided environments (such as Facebook groups) that have proven conducive to radicalization and mobilization.
Margaret Roberts, a political scientist at the University of California, San Diego, puts it differently. “The difficult part about social media is that the freedom of information online can be weaponized to undermine democracy.”
Social media, Roberts wrote by email,
isn’t inherently pro or anti-democratic, but it gives voice and the power to organize to those who are typically excluded by more mainstream media. In some cases, these voices can be liberalizing, in others illiberal.
The debate over the political impact of the internet and social media raises the question: Do the putatively neutral instruments of social media function for both good and evil or are they inherently divisive?
Lisa Argyle, a political scientist at Brigham Young University, stressed additional aspects of the question in an email: “When talking about social media and politics,” she writes, “it is really important to think about who is engaged in the conversation and who is not.”
There are, she points out,
demonstrated race, class, age, and other demographic divides in who uses different platforms, so heavy reliance on social media for democratic ends has the potential to exacerbate existing inequalities.
In addition, Argyle notes,
Within each platform there is a set of people who are highly politically interested, who discuss politics often, and who are most likely to have extreme opinions. Therefore, when people use social media as a proxy for political opinions writ large, they are likely to overestimate the amount of conflict and polarization that exists in the offline world.
Yochai Benkler, a law professor at Harvard, contends in an email that “it’s a mistake to conceive of technology as an external force with a known definitive effect on social relations.”
“Radio,” Benkler argues,
was as available for F.D.R.’s fireside chats as it was for Hitler’s propaganda. Ten years ago the internet in general, and Facebook in particular, was widely perceived as a liberation. Now it’s blamed for the collapse of liberal democracy.
Digital media has distinctive characteristics that “can work both to improve participation and democratic governance and to undermine it,” Benkler adds.
“It was citizens’ video journalism capturing the evidence and broadcasting it on social media, coupled with the mass protests,” he notes, “that changed the public conversation about police shootings of Black Americans. And it was also social media that enabled the organization and mobilization of Unite the Right in Charlottesville.”
Ultimately, according to Benkler,
The epistemic crisis we experience in the United States today is elite-driven (Trump, other GOP leadership) and led by broadcast media — cable TV (Fox), radio (Limbaugh, Hannity), and major newspapers or large commercial websites (NY Post, Breitbart), coupled with some very bad reporting in the mainstream press.
Along parallel lines, Yannis Theocharis, a professor of digital governance at the Technical University of Munich, makes the point that
Social media need to be seen as an incredibly potent medium in the toolset of both those who wish to strengthen democratic governance and those who wish to undermine it. They are used just as effectively and extensively as mobilizing tools by organized hate groups and those wishing to marginalize and silence others or challenge core democratic values, as they are used by activists and social movements aiming to strengthen citizens’ political voice, increase the quality of democratic representation, or protest racial injustice.
There is an ongoing argument about whether the promotion of divisiveness and polarization is built into the marketing structure of social media.
Jack Balkin, a law professor at Yale, writes in an email:
Some of the most troubling features of social media come from business models based on surveillance and monetization of personal data. Social media will not improve as long as their current surveillance-based business models give them the wrong incentives.
Trump, in Balkin’s view, “showed how to use social media for demagogic ends to harm democracy.”
But, he added,
Trump’s success built on decades of polarization strategies that relied on predigital media — talk radio and cable. Without talk radio and Fox News, Trump would have been a far less effective demagogue.
Do social media drive polarization? Balkin’s answer:
The larger and more profound causes of polarization in the United States are not social media, which really become pervasive only around 2008 to 2010, but rather decades of deliberate attempts to polarize politics to gain political power. Once social media became pervasive in the last decade, however, they have amplified existing trends.
Robert Frank, professor emeritus of economics at Cornell, is a leading proponent of the argument that the current business model of Facebook and other social media is a significant contributor to political and social dysfunction.
Writing on these pages, Frank argued on Feb. 14 that the economic incentives of “companies in digital markets differ so sharply from those of other businesses.”
Digital aggregators like Facebook, he continued,
make money not by charging for access to content but by displaying it with finely targeted ads based on the specific types of things people have already chosen to view. If the conscious intent were to undermine social and political stability, this business model could hardly be a more effective weapon.
Frank notes that the algorithms digital companies use to
choose individual-specific content are crafted to maximize the time people spend on a platform. As the developers concede, Facebook’s algorithms are addictive by design and exploit negative emotional triggers. Platform addiction drives earnings, and hate speech, lies and conspiracy theories reliably boost addiction.
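The feedback loop Frank describes can be caricatured in a few lines of code. This is a deliberately toy model (the scoring weights, field names, and posts are invented for illustration, not drawn from any platform’s actual system): when a feed is ranked purely by predicted engagement, content built on emotional triggers rises to the top regardless of its accuracy or value.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    outrage_score: float    # strength of negative emotional trigger, 0..1
    informativeness: float  # how much the reader actually learns, 0..1

def predicted_engagement(post: Post) -> float:
    # Toy objective: emotional triggers dominate predicted time-on-platform;
    # informativeness barely registers. The 0.9/0.1 weights are illustrative.
    return 0.9 * post.outrage_score + 0.1 * post.informativeness

def rank_feed(posts: list[Post]) -> list[Post]:
    # Rank purely by predicted engagement; nothing about accuracy
    # or civic value ever enters the objective function.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Careful policy analysis", outrage_score=0.1, informativeness=0.9),
    Post("THEY are lying to you!", outrage_score=0.9, informativeness=0.1),
])
print(feed[0].text)  # the outrage post ranks first
```

Nothing in the ranking function rewards truth, so the model “prefers” hate speech, lies and conspiracy theories exactly insofar as they trigger engagement, which is Frank’s point.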
The profit motive in digital media, Frank contends, drives policies that result in “the spread of misinformation, hate speech and conspiracy theories.”
Eric B. Schnurer, president of Public Works LLC, a policy consulting firm, is similarly critical of the digital business model, writing in an email:
The social media companies discovered that there were limited means for making money off social media, settling on an advertising-based model that required increasing and retaining “eyeballs,” which quickly led to the realization that the best way to do so is to exploit nonrational behavior and create strong reactions rather than reasoned discourse.
Digital firms, in Schnurer’s analysis,
have now metastasized into this model where their customers are their raw material, which they mine, at no expense, and sell to others for further exploitation; it is a wholly extractive and exploitive business model, whatever high-minded rhetoric the companies want to spread over it about creating “sharing” and “community.”
There were early warnings of the dangers posed by new digital technologies.
Shoshana Zuboff, professor emerita at Harvard Business School, pursued a line of inquiry as far back as 1981 with “The Psychological and Organizational Implications of Computer Mediated Work” that led to the broad conclusions she drew in her 2016 paper, “Big other: surveillance capitalism and the prospects of an information civilization.”
“Big data” is above all the foundational component in a deeply intentional and highly consequential new logic of accumulation that I call surveillance capitalism. This new form of information capitalism aims to predict and modify human behavior as a means to produce revenue and market control. Surveillance capitalism has gradually constituted itself during the last decade, embodying new social relations and politics that have not yet been well delineated or theorized.
From a different vantage point, Christopher Bail, a professor of sociology at Duke and director of the university’s Polarization Lab, writes in his forthcoming book “Breaking the Social Media Prism” that a key constituency is made up of those who “feel marginalized, lonely, or disempowered in their off-line lives.”
Social media, Bail writes in his book,
offer such social outcasts another path. Even if the fame extremists generate has little significance beyond small groups of other outcasts, the research my colleagues and I conducted suggests that social media give extremists a sense of purpose, community, and — most importantly — self-worth.
The social media prism, Bail writes,
fuels status-seeking extremists, mutes moderates who think there is little to be gained by discussing politics on social media, and leaves most of us with profound misgivings about those on the other side, and even about the scope of polarization itself.
One of the striking findings of the research conducted at Bail’s Polarization Lab is that contrary to expectations, increased exposure to the views of your ideological opponents does not result in more open-mindedness.
Bail emailed me to point out that “we surveyed 1,220 Republicans and Democrats” and
offered half of them financial compensation to follow bots we created that exposed them to messages from opinion leaders from the opposing political party for one month. When we resurveyed them at the end of the study, neither Democrats nor Republicans became more moderate. To the contrary, Republicans became substantially more conservative and Democrats became slightly more liberal.
Bail also offered an analysis of this phenomenon:
The reason I think taking people out of their echo chambers made them more polarized — not less — is because it exposes them to extremists from the other side who threaten their sense of status.
In his book Bail put it this way, “People do not carefully review new information about politics when they are exposed to opposing views on social media and adapt their views accordingly.” Instead, he observes, “they experience stepping outside their echo chamber as an attack upon their identity.”
Nate Persily makes a parallel — and important — point:
No one doubts that the internet provides “safe spaces” for individuals to find common cause for antisocial activity otherwise deterred in the offline world. Of course, the ability of individuals to find communities of like-minded believers unconstrained by geography is one of the great benefits of the internet. Nevertheless, the darkest corners of the internet provide self-reinforcing havens for hate, terrorist recruitment, and propagation of conspiracy theories.
In his email, Persily listed some of those havens:
For sizable groups of people, the internet affords environments, such as Facebook groups, subreddits, Parler, or chat rooms on 4chan and 8kun, where they can make common cause with people they would not find in their neighborhood or in face-to-face forums. In other words, there are shadowy places on the internet where conspiracy communities, like QAnon, or hate groups can thrive.
Joshua Tucker, a political scientist at N.Y.U., pointed out by email that
prior to social media, if you were the only one in your county who might support extremist views regarding the overthrow of the United States government, organizing with other like-minded but geographically dispersed compatriots would be a costly activity.
The arrival of social media, he argues, “drastically reduces these costs and allows such individuals to more easily find each other to organize and collaborate.”
In addition, according to Tucker,
the tools developed by authoritarian regimes to influence their own online conversations — online trolls and bots — can also be used by small numbers of extremists in democratic societies to amplify their presence online, making their positions appear to be more popular than they might be, in what has the potential to become a self-fulfilling prophecy.
Tucker, like a number of other scholars of social media, stresses that
prior to the internet, news was in the domain of professional journalists and there were powerful gatekeepers in the form of editors and publishers. While this may have also prevented more progressive messages from entering mainstream media, it undoubtedly blocked extreme anti-democratic voices as well, in addition to enforcing a certain level of quality in news reporting.
The internet, according to Tucker,
lowered the barrier to publishing news dramatically, but social media accelerated this process by making it possible to consume news without even taking the step of seeking out the publisher of that news by going to their home page. In addition, social media exacerbated the premium placed on news that delivered “clicks,” highlighting the appeal of certain types of news — including blatantly false news.
Bryan Ford, a professor of computer and communication sciences at the Swiss Federal Institute of Technology in Lausanne, has become a technopessimist.
While I think technology has tremendous potential to strengthen democratic governance, on balance I think most of the major recent technological advances have unfortunately weakened it.
The factors include (a) social media contributing to social echo chambers that more readily become detached from objective reality or truth; (b) the related global infatuation with big data and deep learning leading us to concentrate ever more decision-making power into opaque and democratically-unaccountable algorithms run by profit-motivated and democratically-unaccountable technology companies; (c) society’s increasingly-ubiquitous use of manipulable and undemocratic online reputation metrics such as likes, follower counts, reviews, etc., as fundamentally-flawed proxies for democratic measures of reputation, public support for positions or opinions, truth or plausibility.
If the pessimists are right, what can be done to reverse the anti-democratic forces that find expression on the internet and its offspring, the social media?
There is no consensus on this question except that effective reform will be difficult in this country for a variety of reasons, including First Amendment restrictions on regulating speech and political and ideological opposition to government-mandated changes to private sector business models.
Persily points out that not only has election interference
become “professionalized,” it has also become, like other arenas of internet activity, vulnerable to gang-like actions. The statelessness and disorganization of online associational life enables international coalitions of hackers, troublemakers, anarchists, and criminals to find solidarity in wreaking havoc against the establishment.
Asked what the long-range prospects are, Persily said there was no definitive answer. He worries “that the lack of trust in the democratic process, which festered over the last four years and exploded on Jan. 6, will have a severe and long-lasting impact.”
For one thing, purveyors of misinformation and disinformation have become increasingly sophisticated.
Bryan Ford writes about advances in artificial intelligence:
The fashionable strategy in the tech sector — namely using more data, deeper deep learning, etc., to distinguish between real and fake news or real and fake accounts, is fundamentally misguided because it neglects to recognize the fact that all the bad guys have access to state-of-the-art machine learning too.
Given any machine learning classification algorithm intended to make an important distinction, it’s generally possible to train an ‘adversarial’ machine-learning algorithm that essentially figures out how to trick the first one systematically.
In other words, while designing systems to detect fraudulent postings “only gets harder and harder,” Ford writes, it gets
easier and easier for machines and botnet operators to train algorithms to create progressively more convincing fake news and fake user profiles that before long will appear “more believable” to both machines and humans than real news or real user profiles.
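The adversarial dynamic Ford describes can be made concrete with a toy sketch (a hypothetical illustration assuming a simple gradient-based attack, not code from Ford’s work): train a minimal “real vs. fake” classifier, then perturb a fake-looking input along the direction that lowers the classifier’s score until it passes as real.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "real vs. fake" detector: logistic regression on 2-D features.
# Class 0 ("real") clusters near (-1, -1); class 1 ("fake") near (1, 1).
X = np.vstack([rng.normal(-1, 0.5, (100, 2)), rng.normal(1, 0.5, (100, 2))])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Fit the detector by plain gradient descent on the logistic loss.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def predict(x):
    """Probability the detector assigns to class 1 ("fake")."""
    return 1 / (1 + np.exp(-(x @ w + b)))

# Adversarial step: nudge a clearly "fake" input along the direction that
# lowers the detector's score until the detector classifies it as "real".
x = np.array([1.0, 1.0])        # flagged as fake with high confidence
x_adv = x.copy()
for _ in range(200):
    if predict(x_adv) < 0.5:    # detector is now fooled
        break
    x_adv -= 0.1 * np.sign(w)   # gradient of the logit w.r.t. x is w
```

The attacker needs nothing more than access to the detector’s gradients (or, in practice, repeated queries against it), which is why Ford argues that strengthening the classifier alone cannot settle the arms race.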
Perhaps more significant, would-be reformers face an increasingly powerful array of digital firms that are certain to oppose any regulation that interferes with their exceptional profit margins.
The Bureau of Economic Analysis estimated that from 2006 to 2016, the digital economy grew at an average annual rate of 5.6 percent, more than three times the 1.5 percent average annual rate of growth for the overall U.S. economy. By 2017, the bureau estimated, the digital economy accounted for 6.9 percent of the U.S. gross domestic product, or $1.35 trillion.
And despite all the chatter, there is no significant public pressure to alter the practices of the digital industry. Insofar as these companies have transformed American politics, for a majority of the population it has been a slow, almost invisible process that has provoked little or no outcry. In a sense, this chain of events has resulted in the climate in which Donald Trump’s extraordinary false claims elicited no protest in half the country. Quite the opposite, in fact.
As long as truth can be disguised — and as citizens lose the ability to distinguish truth from falsehood — democracy will continue to weaken, ultimately becoming something altogether different from what we are accustomed to. And all of this is happening while most of us continue to be unaware of the transformation that has taken place during our lifetime, functionally oblivious to the “epistemic crisis,” both as a contributor to the problem and as an accelerant.