Glenn Reynolds, the famous “Instapundit” and a law professor at the University of Tennessee, offers a short book about social media and the problems it brings. He frames his analysis and argument as a parallel to James C. Scott’s Against the Grain, which valorizes Mesopotamian hunter-gatherers. Reynolds’s point is that just as when hunter-gatherers became city dwellers they also became more susceptible to disease, so when we submitted ourselves to living on social media, we also became more susceptible to disease: in this case, diseases of the mind, for which he offers some possible cures and vaccines.
The Social Media Upheaval is best read together with Tim Wu’s fantastic The Curse of Bigness, which covers in more detail how we should enhance, and increase enforcement of, antitrust law in order to break the pernicious political power of the Lords of Tech that results from economic concentration. Reynolds also recommends antitrust enforcement in preference to other ways of addressing the problems, but Wu offers a more expansive analysis of antitrust specifically, which is why the two books complement each other. Combining the two also reinforces that the problems with Big Tech are a non-partisan issue. Reynolds is a conservative, or at least a libertarian, and Wu a liberal, but they have a great deal in common. True, conservatives face particular attacks from social media companies, as I discuss below, but most of the problems Reynolds (and Wu) outline affect all of society.
For Reynolds, the root of all problems with social media is that, like diseases in a Mesopotamian city, ideas spread far more quickly than they ever could in the past. Unlike earlier increases in the speed of ideas, such as that resulting from the printing press, this does not benefit society. Speed of idea transmission is common to all social media, because it is the core of social media’s business model. Of course, the creators of social media platforms characterize it with the neutral-sounding term “engagement.” But what that really means is that tens of thousands of smart people are paid to find ways to make readers or viewers react emotionally to something they see, and to encourage them to spread it quickly to others, who will in turn react with the same emotions. Since what is served up by the social media platform is algorithmically chosen to amplify the user’s emotions, most often negative and easy emotions, the usual result is a cascade of facile yet destructive negativity, with profit being collected at every step by the manipulators.
Why doesn’t faster transmission of ideas through social media benefit society? After all, books and newspapers created a more educated and better informed society. Because what “engagement” creates is classic mob behavior, not reflection. Reynolds does not cite Gustave Le Bon’s classic The Crowd, but that book explains much behavior on social media, by showing how people who are part of a crowd act wholly differently than they do as individuals. The most dramatic demonstrations of this are the shame mobs that arise frequently, attacking someone for perceived bad behavior. This is made worse by the fact that social media tends to reduce empathy, by increasing herd behavior and nearly eliminating face-to-face interaction, even among close friends. The entire point of social media is to bypass the rational centers of the brain, that is, to deliberately create mobs, in order to make money for the Lords of Tech, who conceal their manipulations behind claims of “keeping the world connected.”
Aside from mob behavior, social media also leads to an overall degradation in the ability to think clearly and deeply, and in the ability to read and comprehend complex material. For a democracy, based on the idea of rational deliberation, this is a big problem (especially when the ruling classes are disproportionately represented in, and reliant for opinion formation on, social media, especially Twitter). An even bigger problem for society is that there is strong evidence that screen time is the cause of a global drop in IQ in the developed world, reversing the Flynn effect, whereby IQ had been increasing steadily for a hundred years. A dumb, unreflective mob is not the stuff of which high-functioning societies are made. And then, of course, there’s the privacy problem—that social media companies know far, far more about us than most people would prefer, and they use and sell that information with no limits at all. Reynolds does not talk about this in detail—it doesn’t really fit his disease frame—but more importantly, I suspect, he knows that everyone knows about this problem, and he probably thinks of this as a free market exchange of sorts, less worthy of social attention than negative externalities such as mob behavior and destruction of rational thought.
Of course, to some extent these problems are more general problems of modernity, exacerbated by social media. Television, for example, began the process of making people think in images rather than reasoning from the printed word after reflection. But social media has blown the problems up to enormous proportions, and added to them new problems, such as massive amounts of actual lies, which, because of the speed of transmission of information, and the algorithms of engagement, have much more impact on society than lies (or their cousins, conspiracy theories) used to. No doubt social media has positive sides, but those positive sides, such as the ability to find others with shared, but rare, interests, are not tied to or benefited by the pernicious mechanisms built into today’s social media.
Why do people jump with gusto into this poisonous stew, if it’s so bad for us? Nobody is forcing us to, after all. Because the internet, and social media in particular, is addictive. The jokes about dopamine are not jokes: it’s addictive because it’s designed to be addictive. There is a reason that Facebook, years ago, stopped simply showing you the posts of friends in the order they appeared, and instead showed you what they chose to show you. It’s not to make it better for you. It’s to make it addictive and allow them to hoover up money, while decreasing your actual satisfaction with the experience. After all, you’re the product, not the customer.
So, given that we are diseased, what to do? Reynolds goes through various possibilities. He points out that speed of transmission has traditionally been viewed as a potential reason to regulate speech, since, when speech encourages harm, speed reduces the time to reflect. Reynolds’s point isn’t that social media speech should therefore be regulated, necessarily, but that speech that encourages people to act without thinking is more problematic than other speech, and therefore we shouldn’t rule out regulation in the reflexive belief that we are thereby somehow limiting free speech.
But if we did directly regulate, there’s little reason, Reynolds says, to believe that would solve the problems. He points out what is often not understood—big companies love regulation. They have the money, resources, and contacts to shape and manipulate it to their advantage, so it does not crimp their activities, but does prevent competition by smaller and poorer companies. And limiting competition is an obsession with all of Big Tech—as Wu outlines, all these companies, from Google to Facebook, have successfully bought up their competition, without any governmental antitrust objection, and thereby achieved the status of unassailable oligopolies. So regulation as it is currently being discussed, where Facebook leads old men in Congress around by the nose and some toothless law is ultimately passed, is a waste of time. It is no coincidence that as criticism of social media companies increases, there has been a huge increase in the amount of political spending by Big Tech—and that’s just the visible and traceable spending.
After setting this groundwork, Reynolds briefly examines several possible regulatory fixes. Ending online anonymity through some kind of ability for authorities to easily find out who is behind any particular social media action, in essence a form of licensing, gives far too much power to governments (and, Reynolds does not say, to the Lords of Tech—witness how Facebook, a few weeks back, was happy to hand out to left-wing media outlets the name of a private citizen who had supposedly created an altered video of Nancy Pelosi, in order to assist in his being doxxed and attacked). A solution Reynolds sees as more promising, and endorses, is algorithmic transparency. For example, requiring social media to allow you simply to see posts of people you choose to see in chronological order, rather than as determined by the platform, would decrease the problems caused by curated engagement designed to compel emotional behavior. Related to this is something Reynolds does not really discuss, but would presumably endorse: data portability, in which users would have to be provided an easy way to transfer all their data to alternate social media platforms. Both these seem like excellent ideas.
Reynolds only very briefly considers removing Section 230 immunity, which has gotten a lot of attention lately. To provide more background, this is a section of a 1996 law, the Communications Decency Act, that relieves social media platforms, and only them (not newspapers or other traditional media), from any responsibility for what is published on their platforms. The idea of granting Section 230 immunity, back in 1996, was that since technology companies, then a totally new thing, were providing strictly neutral platforms, they should not be held responsible for speech on them, and the result would be a free and open internet—with the exception that such platforms should also be encouraged to suppress pornography. Thus, Section 230 treated internet platforms as a type of common carrier, giving them the standard privileges of common carriers. For example, UPS is not responsible for the contents of what it ships, but as a quid pro quo has to offer services on equal terms to everyone similarly situated, the core requirement for all common carriers. Immunity for internet platforms is the same principle.
But in trying to achieve both its goals, Congress then also allowed internet platforms to decline to offer service on equal terms in one narrow area, explicitly permitting censorship of obscene material, along with, in the statute’s fatal words, material that is “otherwise objectionable.” In retrospect, this was a huge mistake, because the exception erased the requirement of offering service to all. Internet platforms have used Section 230 to invert what Congress intended. They don’t prevent pornography, but they also don’t provide a neutral platform, yet they are clad from head to foot in legal armor plate.
Reynolds rejects removing Section 230 immunity, or conditioning its continuance on adopting neutrality with respect to political content, as politically infeasible. That seems like an odd reason to reject such a promising solution, since all regulatory solutions are, right now, politically infeasible. In fact, all of the Section 230 discussion in this book is pretty cursory and unsatisfactory, especially from a law professor. All social media platforms are undeniably common carriers as traditionally viewed, since companies like Google, Facebook, Instagram, and so forth have managed to occupy nearly the entire space of utilities that are, for better or worse, essential to many people’s daily lives. You only have to ask yourself one question. Would you permit the telephone company, or your mobile provider, to disconnect you and ban you from using the phone if they disliked what you said on your phone? Unless you say “yes,” no major social media platform should be allowed to engage in any kind of viewpoint discrimination. (In fact, Facebook made this exact argument when arguing for “net neutrality.” As usual, the company talks out of both sides of its mouth.) My guess is that Reynolds glosses over Section 230 immunity because removing it doesn’t really solve the problems he outlines, especially speed of idea transmission; most of the problems it would address are those of political bias.
Soon enough, though, Reynolds tightens his focus to antitrust. Not only is competition nonexistent, for the reasons and through the mechanisms Wu identifies, but the tech companies collude across supposedly unrelated lines of business. “[T]he huge tech companies constitute interlocking monopolies in various fields, and often support one another against competitors—as PayPal, for example, cut off money transfers to YouTube competitor BitChute, and Twitter competitor Gab.” This is only part of the problem—again, being nonpartisan, what Reynolds ignores is that PayPal did this not primarily in order to hamper competition, since neither BitChute nor Gab was a relevant competitor of YouTube or Twitter, and other small potential competitors were not cut off by PayPal. Rather, it was done as a political action, to hamper conservative political speech, since the point of those specific competitors is that they compete primarily by offering censorship-free platforms where conservatives can speak freely, something that the Lords of Tech are obsessed with preventing. Just a few weeks ago, for example, all the internet platforms (including Pinterest!) colluded to forbid dissemination of Project Veritas’s exposé of a top Google executive’s admission that they use the platform to actively aid Democrats. (Twitter used to say it was “the free speech wing of the free speech party.” That seems painfully laughable now.) Reynolds instead focuses on the traditional problems with monopoly and oligopoly—not the modern, Robert Bork vision of consumer harm being the only relevant concern, but the earlier, Brandeisian aversion to excessive political power resulting from massive economic power. Breaking up Big Tech, and forbidding collusion among the successor companies, thereby ensuring competition, would reduce all the problems with social media. It would hamper the wildfire spread of problematic content, and encourage competitors to compete on the axes of better privacy, more algorithmic transparency, less systemic bias, and so forth, perhaps obviating the need for more direct government action with respect to those matters.
Reynolds concludes that using antitrust to increase competition is the way to go, extensively quoting Wu. I completely agree with this. That’s the cure, and continuing to enforce antitrust is the vaccine. And being nonpartisan, this solution is the only one that might get political traction in the current environment. But since no vaccine is perfect, and continuing his disease metaphor, Reynolds notes that better education might help create the equivalent of immunity, reinforcing the beneficial effects of antitrust enforcement. I suppose, but what he specifically advocates is “training people in critical reading and critical thinking.” I don’t think he has children, so perhaps he doesn’t realize that’s already very popular in schools, and merely code for indoctrinating children into leftist thinking. I can assure him that at no public school in America is questioning Left pieties regarded as critical thinking. And schools aren’t going to go back to teaching old-fashioned civics and history, even though Reynolds is right that doing so would make discourse on the internet better.
More compelling is the idea that society may by itself develop immunities and workarounds that limit the damage. We forget that it’s only been ten years or so that we’ve really been in the social media world; how people behave often changes on its own, though it usually seems like current behaviors will go on forever. People may come to realize that social media isn’t real and isn’t important. Perhaps people will stop using social media or use it much less; perhaps other distractions (with their own problems) will arise. Such adaptations, or unforeseen other changes, could lead to the diseases wrought by social media becoming, as Reynolds says, endemic, rather than epidemic.
Along these lines, I am given to understand that private social networks have gained some ground. Facebook allows private groups, and one can imagine that if a platform could be created that allowed private social networks to operate easily and slickly, without any kind of interference and censorship of the type Facebook imposes, it could catch on. A recent Palladium magazine podcast, for example (I highly recommend Palladium), suggested that Urbit (with which Curtis Yarvin was until recently involved) might help with such a platform. Just because we can’t now see exactly how this would work, and because network effects are by definition lacking at the beginning, doesn’t mean it’s not going to happen. Similarly, Jordan Peterson keeps promising that his new platform, thinkspot, will be rolled out soon, though it’s opaque to me what it’s going to do. True, anything like this would be viciously attacked, just like BitChute and Gab, with the active cooperation of a vast ecosystem of third parties wholly committed to total Left dominance, including payment processors, funding platforms such as Patreon and GoFundMe, credit card companies, and corrupt state and local governments such as Andrew Cuomo’s New York. But that doesn’t mean we shouldn’t eagerly look into these platforms (I, for example, am about to start also publishing podcast narrations on LBRY, a blockchain-based alternative platform).
A piece of practical advice Reynolds offers is for targets to ignore social media firestorms, since they are usually a tempest in a teapot. “Once the anger is discharged online, it’s very unusual for people to follow it up with concrete actions in the real world.” “A less fevered response might be healthier.” If the individuals or entities subject to attack on social media simply ignored the attacks for a few days, they would find that the attacks almost always simply disappear. I think this is true, and for that reason, I don’t see why people feel the need to hasten to apologize on social media when they are being chased by a mob. You will never catch me apologizing for anything, and most especially never offering pre-emptive apologies, but then, I’m not on social media in any relevant way, so it would be nearly impossible to attack me on social media. I suppose if I did something perceived as awful, like shoot a baby elephant (not a desire of mine), and someone put a picture of it on social media, people could talk about it and whip up a mob. But I wouldn’t notice unless someone came to my house or work to threaten me as a result, and that’s what guns are for. And it seems to me that mobs don’t pick non-public targets who themselves aren’t on social media. I am no longer on Facebook, nor are my children on any form of social media.
But for those who have a profile on social media, again trying to be balanced, Reynolds seems to think that capitulating to the mob is something demanded by both Left and Right, when the reality is capitulation is only ever demanded, with any effect, by the Left, of the non-political or the Right. “And your capitulation, likely as not, will just set off an opposite-but-equal mob angry that you gave in.” This is a hollow and, frankly, silly attempt at appearing even-handed. There is only ever capitulation to the Left, and never any new mob of the Right arising as a result. I’m not aware of any right-wing social media campaign, or mob, that has had any impact on any person or entity. In fact, if it did, the Lords of Tech would swing into action, immediately and permanently deplatforming any person who was a node in such a campaign, and ensuring compliance with the critical dictate that social media can only be used to hound the Right and enhance the power of the Left. (Individuals on the Right, notably Donald Trump, can have an impact, but only if they are so larger-than-life, again like Trump, that it is difficult to directly censor or deplatform them. Still, Twitter announced last week that they would be censoring and hiding Trump tweets, battlespace preparation for far more aggressive actions that are certain to come in 2020. My guess is they will fully kick Trump off Twitter. If they don’t, it’s only because they fear what he might do to them in response.) So while Reynolds is correct that ignoring firestorms makes sense, his solution is of limited use in any firestorm with a political angle, since the political playing field on social media is not even.
I think we should have extremely aggressive antitrust enforcement, of the type that would break any entity like Twitter, Facebook, Google, or Apple into fifteen or twenty entities directly competing. They would be strictly forbidden, under criminal penalties, from any type of collusion, among themselves or with third parties such as payment processors. This should accomplish the twin goals of treating the mental diseases Reynolds identifies, and preventing Big Tech’s oppression and suppression of conservatives. Of course, this latter result is why there won’t be any increased antitrust enforcement, much less new antitrust legislation, because the Left is aware of this benefit to it of Big Tech’s monopolies. And as I say, they are rapidly getting far more aggressive—but actions today are only a pale shadow of what they will be in 2020, as a massive nationwide coordinated effort is unrolled to defeat Trump and ensure massive gains for the Left across the board. These actions should be, but will not be, criminally prosecuted under federal law as a conspiracy to violate civil rights. If Mark Zuckerberg and Jack Dorsey got to be cellmates, then we’d see some beneficial changes.
So what else could be done, in theory, if not in practice? Conservatives, or a certain subset of conservatives, basically consisting of the shrinking groups of libertarians and #NeverTrumpers, always respond that any defense, and even more any offense, against the attacks of Big Tech is inherently illegitimate, because they are private companies, and so should be allowed to do what they want. After all, if you don’t like it, you can always set up a competitor, right? Of course, that’s so dumb it’s not even worth spending time refuting. As Wu outlines, no, in fact, you can’t compete, and the idea that we can’t regulate vicious behavior by private companies is stupid. We regulate the speech and actions of private companies all the time. Unless you think it’s OK for Facebook to hire only men or white people, this position is totally incoherent.
My basic idea, which I have outlined before, is that all of Big Tech should be subject to laws forbidding viewpoint discrimination. This is a well-developed area of law, so it’s easy to apply. Basically, if the government can’t ban or crimp the speech, neither should any entity in Big Tech be allowed to. This would only apply to Big Tech; smaller companies would be exempt, in order to limit the challenge of enforcement. Most enforcement, though, would be through a private right of action, both because that’s cheaper and because it does not excessively increase government power. Each offense would have statutory damages of $250,000, with one-way fee-shifting in favor of successful plaintiffs (just like all the civil rights laws). Each time any type of speech was blocked or downgraded by the company, a detailed written reason, visible to all, would have to be provided without being asked, with failure to do so within forty-eight hours resulting in an automatic $10,000 fine for each instance. Like children, the Lords of Tech will respond to punishment and reward. In this case, though, they are only going to get punishments, and their reward for good behavior is not being punished.
All suppression or curation algorithms should have to be totally transparent, of course, and changeable by the user at his option. Allowing social media users much greater power of self-curation is desirable under any system. For example, YouTube has recently allowed users to identify videos they no longer wish to be served. True, this is responsive mostly to a Left complaint that they were being shown things that offended them, but it serves everyone. Enhanced muting would be helpful, too (though censoring obscenity is not viewpoint discrimination). I can think of other useful laws. For example, I also think that having a social media account of any type should be limited by law to people eighteen and older, with strict liability and heavy fines imposed on social media companies for failure to comply. That would partially address the problems Reynolds identifies, and should be something both Left and Right can agree on. No doubt clever people can come up with more solutions. Despite all these good ideas, I doubt if anything is going to be done legislatively, but hey, I might be surprised, and technology may create its own answers. If not, there are other, more direct, solutions, and tomorrow is another day.