Blight At The Museum

The Smithsonian is supposed to be the people’s museum, but it tells corporate America’s version of history…

The Smithsonian has long carried a special virtuous sheen in the American imagination. It feels like one of our country’s few genuine projects for the common good. It was established out of the bequest of James Smithson, a wealthy British scientist who gave his estate to the young American nation in order to create an institution “for the increase and diffusion of knowledge.” In 1846, it became a trust administered by a special Board of Regents to be approved by the United States Congress. No other museum in the country has such an arrangement. And because its buildings line the National Mall, and admission is free, it has been regarded as something like the American people’s own special repository for knowledge. The Smithsonian helps define how America sees itself, and carries a weighty sense of dignity and neutrality.

It’s strange, then, that in certain parts of the Smithsonian, you may feel rather as if you’ve walked into the middle of a corporate sales pitch. When I visited the Smithsonian’s National Museum of American History in December, for example, a “Mars Chocolate Demonstration” entitled “From Bean To Bar” was set up in a vestibule between exhibits. A half dozen people stood at a long table, showing how different stages in chocolate production worked. I had assumed they were docents until I noticed that most wore shirts embroidered with the Mars logo.

The lead presenter passed around a silicone model of a cacao pod, describing the process of growing the trees, explaining the role of hot chocolate in the American Revolution, and telling us that the Aztecs used to consume only the white pulp that grows around the beans in the cacao pods. He informed us that nobody knows how the Aztecs discovered that the beans themselves had value, but offered a theory that they left the discarded beans by the fire, where they burned fragrantly. Then he passed around a bowl of roasted cacao nibs.

Later, I asked him whether he was a historian.

“I make M&Ms for a living,” he told me.

The demonstration was sponsored, I learned, by American Heritage Chocolate, a sub-brand of Mars that is sold exclusively at museums and historical sites. It is hard to critique a candy-making exhibit without seeming like a killjoy. But I don’t think it’s unreasonable to suggest that the Mars promotional demonstration has somewhat limited relevance to the core mission of the Museum of American History, or that having chocolatiers speculate about Aztec history is possibly below the expected Smithsonian standard of rigor. Having a chocolate-making demonstration is certainly a crowd-pleaser, and we did get free hot chocolate samples. But one cannot escape the suspicion that Mars, Inc. is using the Smithsonian to advertise chocolate to kids.

The chocolate exhibit is far from an isolated phenomenon. From the moment one arrives at the American History museum, in fact, its corporate sponsorship is evident. Much of the first floor is dedicated to the theme of “American Enterprise,” including the elaborate “Mars Hall of American Business.” The Hall of Business, sponsored by Monsanto, Altria (a.k.a. Philip Morris), S.C. Johnson, Intel, 3M, the United Soybean Board, and of course, Mars (among others), is intended to “convey the drama, breadth and diversity of America’s business heritage along with its benefits, failures and unanticipated consequences.” This kind of euphemistic, understated apologia is typical of the entire exhibition. American business may have experienced occasional “failures” and “unanticipated consequences,” but it has certainly never been guilty of, you know, “crimes,” or “wrongdoing.”

The Hall builds an entire history of the U.S. economy around “themes of opportunity, innovation, competition and the search for common good in the American marketplace.” (Note: not the search for profit.) When Mars first announced its multi-million dollar donation to build the Hall, the company’s president declared its intention to “provide examples of how U.S. companies and individuals have fundamentally, and positively, changed the way the world works and be a source of inspiration for future generations of business leaders and entrepreneurs.” The Smithsonian, in turn, promised to “provide visitors with a hands-on understanding of innovation, markets and business practice,” with activities including “choosing marketing campaigns for target audiences [and] making or losing simulated money through ‘trades.’” The museum also promised “larger personal and family stories featuring biographies of innovators and entrepreneurs.” (Note: not day laborers and shoe shiners.)

Thus, there was no pretense whatsoever that the exhibit would be neutral on the question of whether American capitalism had been good for the world. This was to be a celebratory showcase of business’s positive achievements. Innovation, growth, and entrepreneurship were the watchwords; anyone expecting a Hall of American Labor Struggles, about the grinding exploitation and violence perpetrated on American workers (from slavery to the Ludlow Massacre to contemporary Florida orange groves) was in for disappointment. The sponsorship of Altria and Monsanto ensures that the history of American economic development is the history of cotton gins and Cadillacs, rather than of child laborers in Kazakhstan producing Philip Morris cigarettes, or Monsanto selling Agent Orange to the Department of Defense.

The pro-business perspective of the exhibition is present in every aspect of its carefully euphemistic language. Here is how the Hall’s text summarizes the “Merchant Era” that lasted from the 1770s to the 1850s:

“During the Merchant Era, many people profited from abundant land and resources—mining gold, acquiring territory, and establishing factories. A market revolution disrupted everyday life and ways of doing businesses. Indian nations struggled with loss of their land, while other Americans faced changes in work and managing debt.”

Of course, the period under discussion is the peak American era of massacre, indenture, conquest, and slavery. But in the Smithsonian’s description, these are incidental and unfortunate side-developments, rather than the entire foundation of the economic successes of the “Merchant Era.” And to the extent these unsavory details intrude at all, they are sanitized with calculated synonyms and painstakingly exculpatory grammatical constructions. Land is not “stolen”; it is “acquired.” Indian nations are not exiled, displaced, and killed; instead they “struggle with loss,” as if this were some sort of private bereavement, rather than a deliberate campaign of despoliation and extermination waged by enterprising settlers, speculators, and government agencies.

The subsequent “Corporate Era” from the 1860s to the 1930s—the age of Chinese railroad work, the Triangle Shirtwaist factory fire, robber barons, Pinkertons, the re-enslavement of blacks through Jim Crow prison labor, and the massacre of striking miners—is characterized in a similarly upbeat manner.

“During the Corporate Era, industrialization, national competition, and business expansion brought widespread economic growth and social change to the United States. This period also saw turbulence in the form of widespread immigration, financial panics, and confrontations between labor and management.”

The central characteristic of the era, then, is not the long hours worked by 12-year-olds in mines and textile mills. It is not the burning alive of 146 Jewish and Italian garment workers locked by their employers inside a high-rise sweatshop, nor is it the opulence of the Rockefellers, Vanderbilts, or Astors. Rather, it is a time of game-show values like “expansion” and “competition.” Corporations are bringers of Growth and Change, while immigrants and unions are lumped in with financial panics as bringers of Turbulence. It’s an era that “saw” confrontation, in some kind of relaxed, desultory, impersonal manner, not an era in which incredibly rich people did their absolute utmost (using legal and extra-legal methods) to keep incredibly poor people working long hours in unsafe conditions.

The story is expanded upon in the exhibition’s companion book, American Enterprise: A History of Business, which is largely edited and written by Smithsonian historians, but which also features chapters from such acclaimed historical scholars as Bill Ford of the Ford Motor Company, ex-Chevron VP Patricia Woertz, and S.C. Johnson CEO Fisk Johnson (as well as a single token labor organizer). These days, corporations don’t just sponsor the history; they even help write it.

None of this is to say that the Hall ignores the existence of slavery, labor struggles, and immigrant workers. The Smithsonian is keen to present the story of the “turbulence” that ran through American history, which includes a sympathetic contextualization of the lives of working people. But fundamentally, the “Hall of Business” treats American economic history as the story of businessmen rather than workers. It is a story of triumph, in which the railroads go west, the canals are dug, and exciting new kinds of marketing and advertising techniques are developed. The themes that predominate are precisely the themes emphasized by the Mars executive: America’s entrepreneurs and innovators are the heroes who built our country. In this telling, the bulk of our economic history becomes a story of triumph rather than one of colonization and immiseration.

Elsewhere in the Museum of American History, the tone is similar. You will find exhibitions on “The Value of Money,” “Stories on Money,” and “The Price of Freedom.” Krispy Kreme, Nordic Ware, and Williams Sonoma all co-sponsor “Food: Transforming the American Table 1950-2000.” (To the “Food” exhibit’s credit, it does mention the United Farm Workers and point out that “many [have] raised questions about the long-term effects of mass production and consumerism, especially on the environment, health and workers.” These questions evidently have not caused the curators of the Smithsonian to lose any sleep, but it is considerate of them to acknowledge their existence.)

In the museum’s east wing, the General Motors Hall of Transportation houses an exhibit titled “America on the Move,” made possible with generous support from General Motors Corporation (along with AAA, State Farm Companies Foundation, the U.S. Department of Transportation, ExxonMobil, and others). A placard documents the history of public transit:

“In the early 1900s, streetcars and electric interurban systems helped fill the nation’s transportation needs. But over the next few decades the limitations of streetcar systems, government and corporate policies and actions, consumer choice, and the development of alternatives—especially the bus and car—helped make trolleys obsolete… Most important [sic], Americans chose another alternative—the automobile. The car became the commuter option of choice for those who could afford it, and more people could do so. In Washington, D.C., the last streetcar ran in 1962. In 2000, a public-transit authority runs an expansive bus service and operates a subway system. But as in most cities, the majority of D.C.-area residents prefer to drive alone in their cars from their homes to their workplaces.”

The language repeatedly emphasizes consumer choice. Residents “prefer to drive alone” and the car became “the option of choice.” But consumers can only choose among the options that are provided to them. The implication here is that people don’t want good public transit, they want GM cars. We are thus led to infer that the people in Los Angeles, for example, prefer to sit in two hours of traffic to and from work, rather than be caught riding something as “obsolete” and déclassé as a trolley. But do they really have a choice? Driving may be preferable to the other existing transport options, but is it really preferable to a functional, far-reaching, and efficient public transit system? By emphasizing that the sovereign consumer has already made up her mind, the exhibit rationalizes the status quo. There is no indication in these descriptions that the world we live in, and the options that are available to us, could possibly have looked otherwise than they do. Though the Smithsonian exhibits pay lip-service to individual autonomy, the visitor nonetheless gets the sense that the historical development of American business was as inexorable and irreversible as the formation of our mountains and coastlines. We can explain the forces that brought them into being, but we don’t think of these forces as having any kind of agency, or moral responsibility. Economic “production” continues to be a black box of hidden suffering, while the entrepreneurial spirit is lionized as the highest form of civic virtue.

One commonly hears arguments for why funding doesn’t influence content. It was, after all, Hillary Clinton’s claim regarding her considerable Goldman Sachs speaking fees. It’s also the claim made by corporate-funded researchers. And in theory, in the absence of direct quid-pro-quo corruption, the funder could just hand over the cash and leave the institution/candidate/researcher with total freedom to say and do as they please.

But wandering through the Smithsonian Museum of American History, it’s hard to believe that this can be entirely true in practice. Throughout, one gets the vague sensation that the information being consumed has been subtly molded by its sponsors. The Smithsonian has certainly been plagued in the past by sponsorship-related controversies. In 2003, photographer Subhankar Banerjee debuted his exhibit “Seasons of Life and Land” at the Smithsonian’s National Museum of Natural History. His photos — of the Arctic National Wildlife Refuge in Alaska — were abruptly moved to the museum’s basement after Senator Barbara Boxer brought a photo of a polar bear from the exhibit to the Senate floor to bolster an argument against arctic drilling. The same museum paid for its Hall of Human Origins with a $15 million grant from David Koch. The exhibit strongly implies that climate change may not be man-made, and reminds visitors that the earth is cooler now than it was ten thousand years ago. According to the New Yorker’s Jane Mayer, a game at the exhibit suggested that humans could simply evolve to deal with climate change by building “underground cities” and developing “short, compact bodies” or “curved spines,” so that “moving around in tight spaces will be no problem.”

In some cases, corporate influence on informational presentation is direct and obvious, as in the S.C. Johnson CEO’s co-authorship of a Smithsonian book. But elsewhere the effect is more subtle; a phrase swapped in here, the use of passive voice there, and the strategic withholding of anything that might lead the public to demand a change in policy, or to abhor the actions of the ruling class. As I went through the museum, I felt confused and paranoid, not because I felt as if all of the facts were being manipulated to serve an agenda, but because I couldn’t tell which ones were being manipulated. 

That’s what should be concerning. Corporate sponsorship may only have a limited effect on museum content. Yet any effect at all erodes confidence in the museum’s status as a reliable guardian of fact. It’s understandable why chronically underfunded museums would turn to whatever revenue streams they can come by. But a museum sponsored by Mars and Monsanto cannot tell the full truth. To let American history be written by its corporations is to give preferential voice to the economy’s winners and profiteers, and to downplay or excuse the injustices inflicted upon its underclass.

Illustrations by Pranas Naujokaitis.

How The Economist Thinks

Is it fair to trash The Economist? You bet it is.

Current Affairs is well-known for its signature “Death to The Economist” bumper stickers, which have greatly improved the expressive capacities of the American motorist when it comes to demonstrating a discerning taste in periodicals. But, occasionally, members of the public send us adverse feedback on our vehicular adhesive strips. “What,” they ask, “is your problem with The Economist? Why be so rude? How can you wish death upon a perfectly innocuous and respectable British political magazine?” Current Affairs, it is said, is behaving badly. We are being unfair.

It’s true that death is an extreme consequence to wish on another magazine, even if the magazine in question is The Economist. And sometimes I do wonder whether the sentiment goes a bit too far, whether it would be more fair to wish something like “a minor drop in circulation” or “a financially burdensome libel suit” on our London competitor.

But then I remember what The Economist actually is, and what it stands for, and what it writes. And I realize that death is the only option. A just world would not have The Economist in it, and the death of The Economist is thus an indispensable precondition for the creation of a just world.  

In his deliciously biting 1991 examination of the role of The Economist in shaping American elite opinion, James Fallows tried to figure out exactly what was so repellent about the magazine’s approach to the seeking of truth. Fallows puzzled over the fact that American intellectuals hold a disproportionate amount of respect for The Economist’s judgment and reporting, even though the magazine is produced by imperious 20-something Oxbridge graduates who generally know little about the subjects on which they so confidently opine. Fallows suggested that The Economist’s outsized reputation in the U.S. was partially attributable to Americans’ lingering colonial insecurity, their ongoing belief that despite all evidence to the contrary, British people are inherently intelligent and trustworthy. Fallows even dug up an embarrassingly snooty quote from Robert Reich, boasting about his sensible preference for British news: “I, for one, don’t get my economics news from Newsweek. I rely on The Economist — published in London.”

But the most damning case put by Fallows is not that The Economist is snobbish and preys on the intellectual self-doubt of Americans through its tone of Oxonian omniscience. (Though it is, and it does.) Fallows also reveals the core flaw of the magazine’s actual reportage: thanks to its reflexive belief in the superiority of free markets, it is an unreliable guide to the subjects on which it reports. Because its writers will bend the truth in order to defend capitalism, you can’t actually trust what you read in The Economist. And since journalism you can’t trust is worthless, The Economist is worthless.

Fallows gives an example of how reality gets filtered as it passes through the magazine and reaches The Economist’s readers:

Last summer, a government man who helps make international economic policy told me (with a thoughtful expression) he was reading “quite an interesting new book” about the stunning economic rise of East Asia. “The intriguing thing is, it shows that market forces really were the explanation!” he exclaimed in delight. “Industrial policies and government tinkering didn’t matter that much.” By chance, I had just read the very book—Governing the Market by Robert Wade. This detailed study, citing heaps of evidence, had in fact concluded nearly the opposite: that East Asian governments had tinkered plenty, directly benefiting industry far beyond anything “market forces” could have done. I knew something else about the book: The Economist magazine had just reviewed it and mischaracterized its message almost exactly the way the government official had. Had he actually read the book? Maybe, but somehow I have my doubts… The crucial paragraph of The Economist review—the one that convinced my friend the official, and presumably tens of thousands of other readers, that Wade’s years of research supported the magazine’s preexisting world view—was this: “The [Asian] dragons differed from other developing countries in avoiding distortions to exchange rates and other key prices, as much as in their style of intervening. Intervention is part of the story—but perhaps the smaller part. That being so, Mr. Wade’s prescriptions seem unduly heavy on intervention, and unduly light on getting prices right.” These few lines are a marvel of Oxbridge glibness, and they deserve lapidary study. Notice the all-important word “perhaps.” Without the slightest hint of evidence, it serves to dismiss everything Wade has painstakingly argued in the book. It clears the way for: “That being so . . . ” What being so? That someone who has Taken a First [at Oxbridge] can wave off the book’s argument with “perhaps”?

Here, then, is the problem with the magazine: readers are consistently given the impression, regardless of whether it is true, that unrestricted free market capitalism is a Thoroughly Good Thing, and that sensible and pragmatic British intellectuals have vouched for this position. The nuances are erased, reality is fudged, and The Economist helps its American readers pretend to have read books by telling them things that the books don’t actually say.

Now, you may think that Fallows’ example tells us very little. It was, after all, one small incident. He spoke to one man, who had gotten one wrong impression from one faulty Economist review. Perhaps it was an exceptional case. Fallows presumably encountered this kind of thinking regularly, but perhaps he was singling out a rare lapse in the magazine’s otherwise-stellar reportage and reviews.

Let me, then, add a data point of my own. Until last week, I had not read The Economist since high school, where debate nerds subscribed to it in order to quote it to each other and prove themselves informed and worldly. But a few days ago, I was trying to compile a list of news outlets that Current Affairs staff should regularly glance at, in order to make sure we are considering a broad and ecumenical set of perspectives on contemporary geopolitics. I remembered Current Affairs’ ostensible rivalry with The Economist, and thought it might be a good idea to at least read the damn thing if we’re going to be selling bumper stickers calling for its execution. I am nothing if not open-minded and fair.

What, then, did I find upon navigating over to The Economist’s website? The very first article on the page was a piece called “A selective scourge: Inside the opioid epidemic,” subtitled “Deaths from the drugs say more about markets than about white despair.” Its theme is classic Economist: the American opioid epidemic is not occurring because global capitalism is ruining lives, but is the tragic outcome of the operation of people’s individual preferences. A quote:

It has even been argued that the opioid epidemic and the Trump vote in 2016 are branches of the same tree. Anne Case and Angus Deaton, both economists at Princeton University, roll opioid deaths together with alcohol poisonings and suicides into a measure they call “deaths of despair”. White working-class folk feel particular anguish, they explain, having suffered wrenching economic and social change. As an explanation for the broad trend, that might be right. Looked at more closely, though, the terrifying rise in opioid deaths in the past few years seems to have less to do with white working-class despair and more to do with changing drug markets. Distinct criminal networks and local drug cultures largely explain why some parts of America are suffering more than others.

A quarter-century after Fallows wrote his Economist takedown, not a single thing has changed. The 1991 Economist used the meaningless phrase “that being so” to dismiss an author’s entire argument and conclude that markets should be left alone. The 2017 Economist concedes that “as an explanation for the broad trend,” economic despair “might be right,” but insists that “looked at more closely,” drug deaths are not about despair. “Looked at more closely” functions here the same way that “that being so” did: it concedes the point, but then pretends it hasn’t. After all, if despair might be the correct “explanation for the broad trend,” what does it mean to say that “looked at more closely” the trend isn’t the result of despair at all? It’s either an explanation or it isn’t, and if it doesn’t hold when “looked at more closely,” then it wouldn’t be “right” as an explanation for the broad trend.

What happens when The Economist looks at opioid deaths “more closely” is simple obfuscation. The magazine shows that opioid use looks different in different parts of the United States, because the drugs themselves differ. For example, when it comes to heroin, “Addicts west of the Mississippi mostly use Mexican brown-powder or black-tar heroin, which is sticky and viscous, whereas eastern users favour Colombian white-powder heroin.” Note the subtle invocation of “free choice” language: heroin users in the Eastern United States “favour” Colombian heroin. It’s not just that this happens to be the available form of the drug; it’s also that they have a kind of rational preference for a particular form of heroin. Every subtle rhetorical step is toward exonerating capitalism for people’s suffering, and blaming the people and their own foolish choices within a free and fair marketplace.

The Economist’s article on the opioid epidemic offers some legitimately interesting observations about regional variation in types of drug use. Increases in deaths have been concentrated more heavily in places where drugs are available in easier-to-ingest forms. The trouble is that The Economist treats these observations as proof of the claim in the article’s subtitle, that deaths from the drugs “say more about markets than about white despair.” That conclusion simply doesn’t follow from the evidence provided. The magazine’s own charts show that drug use of all kinds has been rising, meaning that the differences between usage types can’t account for the broad trend. The drug type differences can tell us why different places may experience differing levels of rises in opiate deaths, but they can’t tell us why so many people are now drugging themselves who weren’t before. And we can’t answer that question without considering economic class; opiate addiction has disproportionately risen among poor white people, meaning we have to find a way to understand what specific race- and poverty-correlated factors are causing the change.

The Economist is not, therefore, an honest examiner of the facts. It is constantly at pains not to risk conclusions that may hurt the case for unregulated markets. This tendency reached its absurd apotheosis in the magazine’s infamous 2014 review of Edward Baptist’s The Half Has Never Been Told: Slavery and the Making of American Capitalism. The magazine objected to Baptist’s brutal depiction of the slave trade, saying the book did not qualify as “an objective history of slavery” because “almost all the blacks in his book are victims, almost all the whites villains.” When outraged readers pointed out that this is because, well, the victims of slavery tended to be black, The Economist retracted the review. But as Baptist observed in response, there was a reason why the magazine felt the need to mitigate the evils of slavery. Baptist’s book portrayed slavery as an integral part of the history of capitalism. As he wrote: “If slavery was profitable—and it was—then it creates an unforgiving paradox for the moral authority of markets—and market fundamentalists. What else, today, might be immoral and yet profitable?” Baptist’s work thus had unsettling implications for The Economist: it would damn the foundations of the very Western free enterprise system that the magazine is devoted to championing. Thus The Economist needed to find a way to soften its verdict on slavery. (It was not the first time they had done so, either. In a tepid review of Greg Grandin’s The Empire of Necessity with the hilariously offensive title of “Slavery: Not Black or White,” the magazine lamented that “the horrors in Mr Grandin’s history are unrelenting.” And the magazine’s long tradition of defending misery stretches back to the 19th century, when it blamed the Irish potato famine on irresponsible decisions made by destitute peasants.)

Why, then, have a “Death to The Economist” bumper sticker? Because The Economist would justify any horror perpetrated in the name of the market and Western Enlightenment values, even to the extent of rationalizing the original great and brutal crime on which our prosperity was founded. Its tone, as Fallows observed, is one “so cocksure of its rightness and superiority that it would be a shame to freight it with mere fact.” And the problem with that is not that The Economist is cocksure (I of all people should have no objection to cocksureness in periodicals), but that it doesn’t wish to be freighted with inconvenient truths. The fact that The Economist has a clear set of ideological commitments means that it will pull the wool over its readers’ eyes in the service of those commitments, which saps it of intellectual worth. It will lie to you about the contents of a book by waving them away with a “that being so.” Or it will reassure you that capitalism has nothing to do with opiate deaths, by asserting without evidence that when “looked at more closely,” drug addiction is “less” about despair. It will fudge, fumble, and fool you in any way it can, if it means keeping markets respectable. And it will play on your insecurity as a resident of a former British colony to convince you that all intelligent people believe that the human misery created in “economically free” societies is necessary and just. It will give intellectual cover to barbarous crimes, and its authors won’t even have the guts to sign their names to their work. Instead, they will pretend to be the disembodied voice of God, whispering in your ear that you’ll never impress England until you fully deregulate capitalism.

So, then: Death to slavery. Death to injustice. Death to The Economist.

In Defense of Liking Things

The fidget spinner probably doesn’t tell us much about civilization’s decline, and it’s okay to enjoy it…

Nobody can reasonably accuse me of liking too many things. I am a veteran practitioner of the “Actually, This Thing You Thought Was Good Is Not Very Good At All” school of writing. I am promiscuous in my hatreds, grievances, and peeves, and I know well the pleasures of announcing that whatever my least favorite recent cultural development is spells ruin for the civilized order.

But it’s possible to take one’s disagreeableness to excess. There is always the risk of becoming that most unwelcome of characters, the curmudgeon. At its best, written criticism can usefully point out social problems in ways that help clarify people’s thinking. At its worst, it can be stuffy and joyless, a philosophy of “miserablism.” If you’re not careful, you can turn into P.J. O’Rourke, Joe Queenan, or even, *shudder*, Andy Rooney. The infamous “Whig View Of History” is the idea that things follow an inevitable trajectory towards progress, enlightenment, and decency. The curmudgeon’s view of history is that everything is just getting worse all the time, that all of the things people like suck, and that they suck harder than anything has ever sucked in the history of things sucking.

The “fidget spinner” is a little plastic toy that has become popular recently among large numbers of children and modest numbers of adults. You twiddle it in your fingers and it goes round and round. You can do nifty tricks with it. It seems like fun. It’s even alleged to be good for kids with ADHD, because it gives them something to do with their hands.

Critics from The Atlantic and The New Yorker, however, have declared that the fidget spinner captures everything that is wrong with our century. Far from being an innocuous and amusing cheap little rotating thingamajig, the fidget spinner is, according to The New Yorker’s Rebecca Mead, an embodiment of Trump-era values. It is a sign of a narcissistic and distracted culture, captivated by trifles, ignorant of its own decline, and oblivious to all that is sacred, intelligent, and morally serious. We are fidgeting while Rome burns.

Mead’s indictment of the fidget spinner is worth quoting at some length, in order that we may appreciate it in its full fustiness:

Fidget spinners… are masquerading as a helpful contribution to the common weal, while actually they are leading to whole new levels of stupid. Will it be dismissed as an overreaction—as “pearl-clutching,” as the kids on the Internet like to say—to discern, in the contemporary popularity of the fidget spinner, evidence of cultural decline? …. Perhaps, and yet the rise of the fidget spinner at this political moment cries out for interpretation. The fidget spinner, it could be argued, is the perfect toy for the age of Trump. Unlike the Tamagotchi, it does not encourage its owner to take anyone else’s feelings or needs into account. Rather, it enables and even encourages the setting of one’s own interests above everyone else’s. It induces solipsism, selfishness, and outright rudeness. It does not, as the Rubik’s Cube does, reward higher-level intellection. Rather, it encourages the abdication of thought, and promotes a proliferation of mindlessness, and it does so at a historical moment when the President has proved himself to be pathologically prone to distraction and incapable of formulating a coherent idea… Is it any surprise that, given the topsy-turvy world in which we now live, spinning one’s wheels… has been recast as a diverting recreation, and embraced by a mass audience? Last week, as the House voted to overturn the Affordable Care Act, millions of parents of children with special needs… began to worry, once again, about their children becoming uninsured, or uninsurable, an outcome the President had promised on the campaign trail would not occur. This week, after summarily firing James Comey…. [Donald Trump] issued a baffling series of contradictory explanations for what looks increasingly like the unapologetic gesture of a would-be despot. Each day, it becomes more apparent that Trump is toying with our democracy, shamelessly betting that the public will be too distracted and too stupefied to register that what he is spinning are lies.

There are at least eight things that I love about this passage. First, it adopts the full New Yorker hierarchy of values: from elevating that thing called the “common weal” as the highest good, to “outright rudeness” being the basest of transgressions. Second, the idea that the fidget spinner is somehow “masquerading” as contributing to the common weal, as if fidget spinners come in packaging that promises a morally edifying and intellectually nourishing experience. Third, the idea that the fidget spinner’s rise “cries out for interpretation.” (Does it really?) Fourth, I love rhetorical questions where the obvious answer is the opposite of the one the author wishes us to offer. (“Will it be dismissed as an overreaction…?” Yes.) Fifth, the idea that unlike the selfish fidget-spinner, the noble and pro-social Tamagotchi encourages us to care about the feelings of others. Sixth, the hilariously overstated and totally unsubstantiated claims (spinning a fidget spinner is an act of “solipsism” that causes us to “abdicate thought” and put our interests above everybody else’s). Seventh, the tribute to the great and deep “intellection” of the Rubik’s Cube. Eighth, the tortuous and contrived Trump parallel, in which the fidget spinner now tells us something about James Comey and the Affordable Care Act.

But Mead is not alone in denouncing the spinner’s effect on human values. Ian Bogost of The Atlantic analyzes the economic dimensions of the toy, seeing it as the logical conclusion of a capitalistic logic that wishes to pacify us with doodads and trinkets to keep us blind to our own exploitation and ennui:

[Fidget spinners] are a perfect material metaphor for everyday life in early 2017… [They are] a rich, dense fossil of the immediate present… In an uncertain global environment biting its nails over new threats of economic precarity, global autocracy, nuclear war, planetary death, and all the rest, the fidget spinner offers the relief of a non-serious, content-free topic… At a time when so many feel so threatened, aren’t handheld, low-friction tops the very thing we fight for?… Then commerce validates the spinner’s cultural status. For no cultural or social trend is valid without someone becoming wealthy, and someone else losing out. And soon enough, the fidget spinner will stand aside, its moment having been strip-mined for all its spoils at once. The only dream dreamed more often than the dream of individual knowledge and power is the dream of easy, immediate wealth, which now amounts to the same thing.

Now, I haven’t played with a fidget spinner. I’ve never even seen one. My understanding is that their prime audience is the 12-and-under set, and I am friends with very few middle schoolers these days. But I will admit that from my limited experience watching videos of the things on YouTube, I did not begin to suspect that the fidget spinners displayed “the dream of individual knowledge and power.” Nor did I notice the parallels between the twirling of the spinner and the chaos of Donald Trump’s presidency. Perhaps this shows the limits of my analytical capacities, or perhaps I am blinded by the pervasiveness of the American ideology of individualism. I’ll confess, though, my basic reaction so far is that the toys look nifty and the tricks you can do with them are pretty cool.

And I’d like to think that it’s okay to feel this way. Not everything that exists in the time of Donald Trump has to be a metaphor for Donald Trump, and not every silly trinket produced by capitalism is evidence of our decline in intellectual vigor. Sometimes a cigar is just a cigar. (Although in Freud’s case, the cigar was a penis.) Cultural critics often display an unfortunate tendency toward “Zeitgeistism,” the borderline-paranoid belief that there are Zeitgeists everywhere, massive social and historical essences to be found in all kinds of everyday practices and objects.

One problem is that the kind of theorizing done by Bogost and Mead amounts to the telling of “just so stories,” unfalsifiable narratives that merely confirm the theorist’s already-existing worldview. That means that anyone can tell whatever story they like about the fidget spinner. You could call it evidence of solipsism, because it causes humans to interact with the spinners rather than one another. But then I could offer a different story: the fidget spinner is evidence of social dynamism and of an increasingly tactile, physical, and body-conscious world. Which one of us is right? Neither. It’s all B.S.

Any critic who wishes to offer the fidget spinner as evidence of some wider destructive social tendency faces another problem: it’s not really any more pointless or individualistic than the yo-yo, and we’ve gotten along with those for about 2500 years. If you want to see fidget spinner as uniquely representative of Trumpism and so-called “late capitalism,” you have to find a way to argue that it is fundamentally different from a yo-yo in some philosophically significant way. And since it isn’t, and since if the fidget spinner shows civilizational decline then every dumb toy in history would necessarily have to prove the same thing, every cultural critic who tries to posit a Fidget Spinner Theory of Everything ends up somewhat stuck.

You can see this amusing dead-end whenever Bogost and Mead attempt to explain why the spinner is nothing like the generations of faddish knickknacks that came before it. (I’d imagine there were similar pieces in 2009 about how Silly Bandz explained the Obama era, or 1990s thinkpieces on what Beanie Babies could tell us about the Clinton economy.) Mead is at pains to come up with reasons why Rubik’s Cubes and Tamagotchis are serious and worthwhile, while fidget spinners are decadent and stupid. Bogost, meanwhile, makes a hilariously convoluted attempt to meaningfully distinguish the fidget spinner from an ordinary spinning top:

A top is a toy requiring collaboration with the material world. It requires a substrate on which to spin, be it the hard earth of ancient Iraq or the molded-plastic IKEA table in a modern flat. As a toy, the top grounds physics, like a lightning rod grounds electricity. And in this collaboration, the material world always wins. Eventually, the top falls, succumbing to gravity, laying prone on the dirt… Not so, the fidget spinner. It is a toy for the hand alone—for the individual. Ours is not an era characterized by collaboration between humans and earth—or Earth, for that matter. Whether through libertarian self-reliance or autarchic writ, human effort is first seen as individual effort—especially in the West. Bootstraps-thinking pervades the upper echelons of contemporary American life, from Silicon Valley to the White House. … The fidget spinner quietly attests that the solitary, individual body who spins it is sufficient to hold a universe. That’s not a counterpoint to the ideology of the smartphone, but an affirmation of that device’s worldview. What is real, and good, and interesting is what can be contained and manipulated in the hand, directly.

Since spinning tops have existed since the 35th century B.C., Bogost knows that, to confirm his belief that fidget spinners must embody “bootstraps thinking” and “the ideology of the smartphone,” he has to find some important difference between the two. “Ah, well, you see, the top touches the ground but the fidget spinner goes in the hand, and individuals have hands, therefore the fidget spinner is individualistic and libertarian while the spinning top is humble, worldly, and environmentalist.” (Of course, Bogost is still powerless to deal with the yo-yo question. Yo-yos go in the hand and don’t touch the ground. What about the yo-yo, eh, Bogost?)

I’m particularly irritated by this kind of cultural criticism because it embodies one of the most unfortunate tendencies in left-ish political thinking: the need to spoil everybody’s fun by finding some kind of problem with everything. There is enough serious human misery in the world for the left to point out; there’s no need to problematize the fidget spinner as well. Whenever I see something like, say, Jacobin’s critique of Pokémon Go as being the “bourgeois” embodiment of an obedience-worshiping “technology of biopolitics,” I can’t help but think: “Do we really have to be these people? Because this isn’t the side I want to be on.” We’re allowed to like things. Even stupid things. And you don’t have to rain on every single parade that passes by. Rule #1 for creating a left that people will want to join: don’t be a humorless killjoy who tells people that their stress toy makes them Donald Trump.

I might feel more sympathetic if criticisms like Bogost and Mead’s were intellectually rigorous or substantially true. But they aren’t. They don’t hold up to the most minimal logical scrutiny, because they fail to carefully answer the question of why we should consider the fidget spinner unique next to every other dumb little thing in history. They make ridiculous overstatements, and then don’t explain why we should accept their just-so stories rather than another, equally contrived but opposing, set of just-so stories.

The reason that the fidget spinner is popular is not that it embodies our society’s most depraved and fatuous tendencies, or that it signifies the erosion of our attention spans in the era of Trump. It’s popular because it’s a legitimately impressive little novelty device. Like all novelties, it will wear off. And there will be as much political significance to its disappearance as there was to its appearance: hardly any.

Fun is important, and sometimes people have fun by playing tiddlywinks or spinning a top or finding one of the myriad other trivial diversions that keep us from having to face the full horror of our mortal existence. And people on the left shouldn’t spend their time coming up with implausible theories for why everyone is delusional and stupid for enjoying playing with spinny-things. They should be trying to understand the roots of human suffering, and proposing ways to alleviate it.

Anything else is just a distraction.

Speaking of Despair

How much can suicide hotlines do?

I started volunteering at a suicide hotline around three years ago. Whenever I happen to mention to someone that this is a thing I do, they usually seem a bit shocked. I think they imagine that I regularly talk callers off ledges, like a Hollywood-film hostage negotiator. “How many people have you saved?” an acquaintance asked me once. I have no idea, but the answer is probably none, or very few, in the immediate sort of sense the questioner was likely envisioning, where somebody calls the hotline intending to kill themselves and I masterfully persuade them not to. In reality, the vast majority of your time at a hotline is spent simply listening to strangers talk about their day, making little noises of affirmation, and asking open-ended questions.

The conversations you end up having on a suicide hotline are inherently somewhat peculiar. They’re more intimate than you would have in daily life, where an arbitrary set of social niceties constrains us from talking about the things that are close to our hearts. But they are also strangely impersonal. Operators at most call centers are forbidden from revealing personal details about themselves, offering opinions on specific subjects, or giving advice on problems: all of which tend to be central features of ordinary human conversation.

With practice, and a sufficiently lucid and responsive caller, you can sometimes make this bizarre lopsidedness feel a bit less awkward. At the same time, however, you also have to find a way to squeeze in a suicide risk assessment—hopefully, not with a bald non-sequitur like “Sorry to interrupt, but are you feeling suicidal right now?” but in some more fluid and natural manner. The purpose of the risk assessment is to enable the person to talk about their suicidal thoughts, in case they’re unwilling to broach the topic themselves, and also to allow you, the operator, to figure out how close the caller might be to taking some kind of action. From “are you feeling suicidal?” you work your way up to greater levels of specificity: “have you thought about how you might take your life?” “Do you have access to the thing you were planning to use?” “Is it in the room with you right now?” “Have you picked a time?” And so on.

I can’t speak for every operator at every call center, but in my own experience, I would estimate that fewer than 10% of the people I’ve ever spoken to have expressed any immediate desire or intention to end their lives. Well over half of callers, I would estimate, answer “no” to the first risk assessment question. This might, on its face, seem surprising. So who’s calling suicide hotlines, then, if not people who are thinking about killing themselves?

Well, for starters—let’s just get this one out of the way—a fair number of people call suicide hotlines to masturbate.

“Wait, but why?” you, in all your naïve simplicity, may be thinking. “Why would someone call a suicide hotline, a phone service intended for people in the throes of life-ending despair, to masturbate?” Friends, that question is beyond my ken: as theologians are fond of saying, we are living in a Fallen World. If I had to make a guess, I’d say a) suicide hotlines are toll-free, b) a lot of the operators are women, and c) there is a certain kind of person who gets off on the idea of an unwilling and/or unwitting person being tricked into listening in on their autoerotic exploits. The phenomenon would be significantly less annoying if some of the callers didn’t pretend to be kind-of-sort-of suicidal in order to keep you on the line longer: it’s rather frustrating, when one is trying one’s best to enter empathetically into the emotional trials of a succession of faceless voices, to then simultaneously have to conduct a quasi-Turing test to sort out the bona fide callers from the compulsive chicken-chokers.

All right, aside from that, who else is calling?

The other callers are the inmates of our society’s great warehouses of human unhappiness: nursing homes, mental institutions, prisons, homeless shelters, graduate programs. They are people with psychiatric issues that make it difficult for them to form or maintain relationships in their daily lives, or cognitive issues that have rendered them obsessively focused on some singular topic. They are people who are deeply miserable and afraid, who are repelled by the idea of ending their own life, but who still say that they wish they were dead, that they wish they could cease to exist by some other means. Among the most common topics of discussion are heartbreak, chronic illness, unemployment, addiction, and childhood sexual abuse.

Some people are deeply depressed or continually anxious, experiencing recurring crises for which the suicide hotline is one of their chief comforts or coping strategies; while others present as fairly cheerful on the phone, and are annoyed by your attempts to risk-assess them or steer the conversation towards the reason for their call. The great common denominator is loneliness. People call suicide hotlines because they have no one else, because they are friendless in the world, because the people in their lives are unkind to them; or because the people they love have said they need a break, have said don’t call me anymore, don’t call me for a while, I’ll come by later, we’ll talk later, and they are struggling to understand why, why they can’t call their sister or their friend or their doctor or their ex ten, twelve, fifteen times a day, when that’s the only thing that briefly alleviates the terrible upswelling of sadness inside them.

One thing you learn quickly, from taking these kinds of calls, is that misery has no respect for wealth or class. Rich and poor terrorize their children alike. Misery is everywhere: it hides in gaps and secret spaces, but it also walks abroad in daylight, unnoticed. The realm of misery is a bit like the Otherworld of Irish myth, or perhaps the Upside Down on the popular Netflix series Stranger Things. It inhabits the same geographic space as the world that happy people live in. You might pride yourself on your sense of direction, but if you were to wander unaware into the invisible eddy, if you were to catch the wrong thing out of the corner of your eye, you too could find yourself there all of a sudden, someplace where everything familiar wears a cruel and unforgiving face. Somebody you know might be in that place now, perhaps, and you simply can’t see it.

If misery could make a sound like a siren, you would hear it wailing in the apartment next door; you would hear it shrieking at the end of your street; a catastrophic klaxon-blast would shatter the windows of every single hospital and high school in the country, all an endless cacophony of “help me help me it hurts it hurts.” And even if most of the people who call hotlines never come close to taking their own lives, their situation still feels like an emergency.

We might ask, though, what the rationale is behind a hotline whose protocols are set up for assessing suicidality, when the vast majority of people who call do not, by their own account, have any concrete thoughts of suicide. The prevailing theory is that suicide hotlines are catching people “upstream,” so to speak, before they find themselves in a crisis state where suicide might start to feel like a real option for them. These, in theory, are people who are at risk of becoming suicidal down the line if they aren’t given the right kind of support now. But is this actually true?

The fact is, we have no idea. If we take “suicide prevention” as the chief purpose of suicide hotlines, we soon find that the effectiveness of hotlines is very tricky to assess empirically. Of the approximately 44,000 people in the United States who complete suicide every year, we have no way of knowing how many may have tried calling a hotline in the past. Of the people who do call a suicide hotline presenting as high-risk, we don’t know how many ultimately go on to attempt or complete suicide. Small-scale studies have tracked caller satisfaction through follow-up calls, or have tried to measure the efficacy of hotline operators by monitoring a sample of their conversations. But these studies are, by their very nature, of dubious evidentiary value. There’s no control group of “distressed/suicidal people who haven’t called hotlines” to compare to, and the pool of callers is an inherently self-selecting population, which may or may not reflect the population of people who are at greatest risk. There are also obvious ethical concerns about confidentiality when it comes to actively monitoring phone calls by “listening in” without permission from the caller, or placing follow-up calls with people who have phoned the service. A substantial number of people who call suicide hotlines express anxiety about the privacy of their calls. Given the social and religious stigma that continues to be associated with thoughts of suicide, we might posit that the higher-risk a caller is, the more anxious they are likely to be. They may perhaps be reluctant to agree to a follow-up call when asked, and nervous to call the hotline again if they suspect they might be part of some study.

All of this is not to say that we need Hard Numbers to justify the existence of a service that provides a listening ear to people in distress. The value of human connection is self-evident, and when it comes to intangibles like happiness, spiritual purpose, and a sense of closeness to others, so-called scientific studies are mostly bunk anyway. Nonetheless, we can still use our imaginations and our common sense to hypothesize about the limitations of the current system and possible alternatives. I think there are two questions worth considering: first, are suicide hotlines generally accessible or useful to people who are actively suicidal? Second, for the “low-risk” callers who appear to be the most frequent users of suicide hotlines, is the service giving them what they need, or is there some better way to provide comfort and relief to these people?

As to whether high-risk individuals are actually being reached by suicide hotlines, as outlined above, it’s hard to tell. Anecdotally, the perception of suicide hotlines seems to differ pretty markedly when you peek in on suicide-themed message boards, as opposed to message boards centered around support for depression or other psychological issues. For example, posters on the mental health support forum Seven Cups describe suicide hotline operators as “supportive,” “non-judgmental,” “patient and understanding,” “some of the most loving people you’ll ever talk to,” and “varied from unhelpful-but-kind to helpful.” By contrast, on the Suicide Project, a site specifically devoted to sharing stories about attempting or losing someone to suicide, posters wrote that their calls were “awkward and forced,” “left me thinking I should just get on with killing myself [and] not speak to anyone before hand,” and “totally useless,” and commented negatively on long hold times or call time limits.

We can’t really draw conclusions from this tiny sample, not least because the kinds of people who frequent message boards and comments sections on the internet are not necessarily representative of broader populations who share some of the same self-identified characteristics. But—again anecdotally—I have noted that high-risk or more despairing callers on the hotline I volunteer for, when questioned about the extent of their suicidal intention, often express sentiments like, “If I were really suicidal, I wouldn’t be calling” or “If I wanted to commit suicide, I would just do it.” It’s hard to say exactly what this means, but it seems as if a general perception among borderline-suicidal callers is that an actively suicidal person wouldn’t bother to call a hotline. Given that suicide is sometimes a split-second decision, and that people who complete suicide tend to use highly lethal means, such as firearms, this perhaps isn’t surprising. (Calls where someone claims to be holding a gun are always the most alarming.)

For lower-risk callers, meanwhile, is a fifteen-minute conversation all we can do for them? People who call hotlines sometimes express frustration at the impersonality of the service. They want a give-and-take conversation, more like a normal interaction with a friend, but many suicide hotlines (including the one I volunteer for) forbid volunteers from giving out personal information about themselves. You never share your own opinion on a topic, even if the caller asks you directly: you merely express empathy, and give short reflective summaries of the caller’s responses to your questions, in order to demonstrate engagement and help the caller navigate through their own feelings.

This isn’t necessarily a bad approach, broadly speaking, since it keeps operators out of the thorny territory of giving possibly-useless, possibly-harmful advice to a person whose full life circumstances they know very little about, or of overwhelming or inadvertently shaming the caller with some inapposite emotional response of their own. For some callers, this non-reciprocal outpouring of feeling may be exactly what they need. But for other callers, who often become wise to a call center’s protocols over many repeated calls, this one-sided engagement is not at all what they say they want. What they want is a real human connection, even with its messiness and impracticality, not a disembodied voice that might as well be a pre-programmed conversation bot. Reconciling these conflicting goals is a tricky thing. There are certainly people who use hotlines in what seems to be a compulsive kind of way: they’ll call every half-hour, and if you don’t impose some kind of limit, they’ll tie up the line for less persistent (but perhaps, by some metrics, more vulnerable) callers. But it nevertheless feels cruel to tell desperately lonely people that their insatiable need for the warmth of a human presence is Against The Rules.


I often wonder if a suicide hotline’s unique ability to reach a population of acutely unhappy people could be harnessed for more personal, community-based interventions. Currently, there are both national and local call centers, but even on local lines, the caller is still miles away from you, and operators aren’t allowed to set up meetings with the people they speak to. Many people call because of a serious crisis in their lives, but the most you can do is give them a referral to a mental health organization that might be able to help them. I’ve frequently wished it were possible to send an actual human to check up on the person, ask how they’re doing, and see what they might need help with. It would be nice if neighborhoods or cities had corps of volunteers who were willing to be on-call for that kind of thing.

This, it seems to me, might be especially important for callers who seem more desperate and perhaps at higher risk of suicide. When you’re a hotline operator, there’s no middle ground between giving somebody verbal comfort and perhaps a referral, and dispatching emergency services directly to their location. (Some hotlines will only do this if the caller gives permission, while others, if the situation seems imminently dangerous, will send any information associated with the caller’s phone number to local police.) People who have previously had ambulances called on them often express deep shame and embarrassment about the experience. It attracts the attention of all their neighbors; depending on the circumstances, the caller might even have been taken out of their home on a stretcher and rushed to an emergency room. Callers who have had this happen, or know someone it’s happened to, will often be especially cagey about sharing their suicidal thoughts, or paranoid about the information that might be gathered about them. This is a serious problem, because it means that potentially high-risk callers might deliberately understate the extent of their emotional distress if they ever call again in the future. Moreover, if they’ve been to hospitals before under these circumstances and found the experience traumatizing, they may be unwilling to accept medical interventions in the future. Wouldn’t it be better if the caller could instead consent to having someone come discreetly check up on them at their house, have a friendly chat, maybe make them a cup of tea? For lower-risk callers, especially people in hospitals or nursing homes who don’t have any company, shouldn’t we be able to find someone living nearby who can pay them a visit during the week?

Of course, suicide hotlines are already understaffed, and so expanding them into an even more labor-intensive grassroots organization wouldn’t be easy. The kinds of callers who call suicide hotlines repeatedly and obsessively would likely be pleading for visits on a constant basis: you would probably need some kind of rationing system to make sure they weren’t overwhelming the entire volunteer network. In a small number of cases, there might be safety concerns about going in person to a caller’s house. (No house-calls for the masturbators, obviously.) The bigger problem, however, is figuring out how to mobilize communities and get people to feel invested in the emotional wellbeing of their neighbors. Personal entanglement is inherently a hard sell. Part of the reason people volunteer with charitable organizations rather than simply knocking on their neighbors’ doors is that they want to keep their regular lives and their volunteer obligations strictly separate. They want to perform a service for someone without becoming closely enmeshed in the day-to-day reality of that person’s problems. This kind of distance is preferred by most part-time volunteers—I certainly find it more convenient to compartmentalize my life in this way, though I’m not at all sure that’s a good thing—and it may be preferable for some callers, too, especially those who are dealing with issues they intensely desire to keep private, for whom a visit from the wrong neighbor might be mortifying.

But I think we must attempt to surmount these obstacles. When people lament the demise of communities or multi-generation family units in the United States, this is the kind of mutual support they’re thinking of. The extent to which America ever really consisted of warm, child-raising villages is, of course, greatly exaggerated, and we certainly shouldn’t romanticize local communities per se: they always have the capacity to be meddling, oppressive, and exclusionary. But communities don’t have to be like that, and instead of dismissing community ideals as outdated, we could be working to realize them better in the particular places we live. As American lifestyles become increasingly mobile and rootless, close involvement in a community may not be foremost on people’s minds; to the extent that people these days talk about “settling down” somewhere, they usually seem to be thinking in terms of sending their kids to a local school, patronizing nearby restaurants, and attending summer concerts in the park, not trundling around to people’s homes and asking what they can do for them.

But even if we aren’t planning to live in the same town for the entire rest of our lives, we mustn’t allow ourselves to use this as a convenient excuse to distance ourselves from local problems we may have the power to ameliorate. People who come to the U.S. from other parts of the world often find our way of living perverse, in ways we simply take for granted as facts of human nature rather than peculiar societal failings. I was recently talking to a Haitian-born U.S. citizen who works long hours as a nurse’s aide, and then comes home each night to care for her mentally disabled teenage son. She told me that if it were possible, she would go back to Haiti in a heartbeat. She was desperately poor in Haiti, but there, she said, her neighbors would have helped her: they would have invited her over for dinner, they would have offered to look after the children. “Here,” she said, “nobody helps you.” That’s one of the worst condemnations of American civil society I’ve heard in a while.

As Current Affairs has written in the past, many of the problems that underlie or exacerbate people’s suicidal crises—homelessness, unemployment, lack of access to healthcare—are the result of an economic and political system that is fundamentally profit-driven, and fails to prioritize the well-being of its most vulnerable citizens. Large-scale political changes are needed to free up the resources necessary to truly tackle these problems in a lasting and meaningful way, and to foster a society that’s better geared towards the health and happiness of all its members. But we must also recognize that government programs—even if well-funded—will never be enough if they’re administered by an impersonal bureaucracy. What people want, what they need, are real fellow humans who will come talk to them, and look them in the eye, and genuinely care about what happens to them. At the moment, given the system we currently have to work with, to place all that responsibility on a few poorly-paid, exhausted social workers and health sector employees just isn’t fair—nor is it effective. This is a responsibility that should belong to all of society: to anybody who has even an hour to spare.

Giving people a number to call is a start. It would make sense to use existing hotlines as a tool to find and reach people who need help, both those who are at high risk of harming themselves and those who are simply unhappy. As for how local volunteer forces could be coordinated, this is something municipalities should trade ideas about: perhaps some communities have already implemented programs like this successfully. Organizations that work narrowly on certain types of social problems might have ideas about how to structure a multi-purpose, community-wide organization that could intervene more generally in a variety of contexts. When it comes down to it, actually caring about—and taking care of—your neighbors, even when it’s difficult, is always the most radical form of political activism.

How TV Became Respectable Without Getting Better

On the rise of Prestige TV…

For a very long time, television was bad. A “vast wasteland,” as FCC chairman Newton Minow called it in his now-quaint 1961 speech “Television and the Public Interest.” Minow went on to ask his audience to sit through an entire day’s worth of television programs. He promised: “You will see a procession of game shows, formula comedies about totally unbelievable families, blood and thunder, mayhem, violence, sadism, murder, western bad men, western good men, private eyes, gangsters, more violence, and cartoons. And endlessly commercials — many screaming, cajoling, and offending. And most of all, boredom.”

Anyone who took Minow up on his challenge would have been hard-pressed to disagree. For every Twilight Zone there was a My Three Sons and two Lassies. Television had barely come on the scene before people started calling it the boob tube and the idiot box.

Time did nothing to improve television’s quality, variety, or impact on culture. The only real innovation in the medium over its first fifty years was the reality show, which, by the end of the 20th century, was threatening to consume Western civilization entirely with increasingly dystopian nightmare offerings. By the time of George W. Bush’s election it seemed like only a matter of time before primetime network television would consist entirely of live executions and Regis Philbin. But then, when all hope seemed lost, a shaft of light burst through the covering darkness. A shaft of light in the bulbous, gabagoolian shape of James Gandolfini.

The Sopranos changed what television could accomplish artistically. It combined serialized storytelling with the depth of characterization and theme of a novel and the visual sensibility of film. Episodes ended without the pat resolution that defined traditional TV drama. Stories stretched out over episodes and seasons. Characters underwent the sort of transformations that would have confused and alienated the audiences of previous generations of shows, which thrived on archetypes. Inspired by show creator David Chase’s accomplishment, a whole generation of creative heavyweights set about putting their own mark on the medium. The Davids, Milch and Simon, created a pair of HBO shows, Deadwood and The Wire, respectively, that failed to match The Sopranos in viewership but achieved posthumous critical canonization. Then the network AMC, which had originally shown old Hollywood movies to a small audience of nostalgic geriatrics, successfully copied the Sopranos formula with Matthew Weiner’s Mad Men and Vince Gilligan’s Breaking Bad. These shows achieved levels of popularity and critical acclaim that few dramas had seen before, and certainly none on basic cable. TV got so good, in fact, that it wasn’t long before the dominant opinion among cultural tastemakers, from Vanity Fair to Newsweek, was that television had surpassed film as the most vital popular narrative art form.

As movie theaters were choked with sequels and reboots and soulless, obscenely-expensive comic book spectacles, the choice to stay home and absorb yourself in a rich, complex extended narrative just made sense. David Remnick, editor-in-chief of that ur-cultural tastemaker the New Yorker, said as much in a letter he wrote to the Pulitzer Prize committee recommending Emily Nussbaum, the magazine’s TV critic, for this year’s criticism award. According to Remnick, television is “the dominant cultural product of our age—it reaches us everywhere and has replaced movies and books as the thing we talk about with our friends, families, and colleagues.” (Nussbaum won that Pulitzer, by the way.) 

This new artistic consensus only holds up if you put a rather fat thumb on the scale. Critics who make the case for the superiority of television to film invariably compare their preferred boutique cable or streaming experience to the latest blockbuster hackwork, but this is an absurd and unfair comparison. It ignores the vast majority of television shows, from NCIS: Pacoima to Toddlers and Tiaras to the latest Kevin James fart-fest. You know, the shows people actually watch. The Big Bang Theory, a show that somehow never makes it into articles about the Golden Age of TV, averages over twenty million viewers, most of whom are the same people filling theaters for Transformers: Knight of the Day. A direct, apples-to-apples comparison would be between the best TV shows the medium has to offer and the best films cinema has to offer.

It would be pointless to argue that a given film is objectively better than a given television series. Tastes are relative. The formats are wildly different. The most revealing contrast is between what kind of critically-acclaimed movies are being made and what kind of critically-acclaimed TV shows are on offer. Just in the past few years, we’ve seen a film about the painful coming-of-age of a gay black youth in Miami (Moonlight), a period horror film about colonial America (The Witch), a stop-motion animated film about loneliness and loss (Anomalisa), and a film about an alternative reality where people who fail to find a romantic partner get turned into an animal of their choice (The Lobster).

While there are many kinds of television shows being made at the moment, it’s worth pointing out that a significant majority of critically-acclaimed, so-called “prestige television” shows are about angsty white criminals (The Sopranos, Breaking Bad), angsty white cops (The Wire), and angsty white ad execs (Mad Men). The current generation of prestige shows, by all accounts inferior to that first wave, relies on an assortment of genre tropes and the template laid down by those pioneering programs. Mostly crime. Mostly male. Mostly extravagantly unlikeable anti-heroes whose sheer awfulness makes us feel better about our own, more mundane foibles.


It’s also worth keeping in mind that television shows are, even more than films, advertisements for themselves. Issues of character, theme, story, and setting are, in practice, very often subsidiary to the primary objective of keeping people watching. All the cliffhangers and suspense sequences have less to do with artistic expression than with keeping the audience hooked. Even shows on streaming services like Netflix and Hulu, where binge-watching is the norm, are angling for that second-season renewal. A movie can do its own thing for two hours, leave the audience confused or alienated or angry, and everyone involved moves on to the next project. A show that did that wouldn’t get to come back, and therefore wouldn’t be able to complete whatever grand design its creators insist is animating the entire thing. Staying on the air in a fractured media landscape, where the difference between a hit and a quietly-canceled flop is a few hundred thousand viewers, is essential if one wishes to be Part of the Conversation.

As a result, the subgenre of “Prestige TV” has become a tautological concept, with show after show earning the label simply by aping the aesthetic sensibility and glossy production value of the shows that first defined the genre. Everything is brooding, tortured anti-heroes, stillness punctuated by sudden acts of violence, montage and ironically counterposed musical choices. Plus bad writing—really, howlingly bad writing.  Kevin Spacey, in his Golden Globe-winning performance as House of Cards’s Frank Underwood, regularly looks into the camera and fake-Southern-drawls some fortune-cookie nonsense like “There’s no better way to overpower a trickle of doubt than with a flood of naked truth.” Jon Hamm’s Don Draper, meanwhile, routinely gifted Mad Men viewers with such high-level insights into the human condition as “People tell you who they are, but we ignore it because we want them to be who we want them to be.” Were epigrams such as these accompanied by, say, a tender swell of orchestral music, it would be immediately obvious how banal and lazily-written they are. But when uttered over the rim of a scotch-glass in a moodily-lit room by an exquisitely-dressed actor, they are, somehow, imbued with profundity.

The dirty little secret of the Golden Age of Television is that the main reason we all know we’re living in the Golden Age of Television is that we’re told so by an emergent class of TV writers who have risen to prominence in tandem with it. The rise of the internet has at least as much to do with the rise in perceived TV quality as any show-runner revolution. The Sopranos debuted at almost the same moment that the World Wide Web started reaching into the majority of homes, creating an explosion of websites that demanded content directed at a class of office workers who needed something to read to distract them from their white-collar drudgery.

And so an army of recappers and critics was called forth from the digital ether to whisper a constant consolation for the future that never came. After a century of intense economic productivity, you still don’t have space colonies or even shorter work-weeks, but hey, you do have your couch and your Seamless and hundreds of hours of streamable, premium television at your fingertips.

And these new TV shows are not only to be watched, but to be endlessly obsessed over and speculated about: plot puzzles and opaque character motivations offer endless opportunities for fans to take to the web and start theorizing. This creates strong incentives to manufacture contrived mysteries and intentional plot holes in order to fuel speculation and drive clicks.

Such is certainly the case with the most recent prestige TV show to dominate the cultural conversation: Westworld. When HBO, the Zeus from whose head the Goddess of Quality Television sprang, debuted Westworld, a show with lavish production detail, acclaimed actors, and a Nolan brother behind the camera, there was no real doubt as to how the recappers and critics would respond. But is there anything truly interesting, fresh, and groundbreaking about Westworld? The pilot seeps through 80 lugubrious minutes of recycled meditations on man’s inhumanity to robot, spiked with gratuitous nudity and violence, and climaxing with one of the cheapest bits of dramaturgy in the prestige TV toolkit. I won’t spoil it for new viewers, but it relies on the same tired stylistic tricks the genre routinely uses to make a show’s violent, titillating aspects (i.e. the main reason everyone was watching) seem artistic and rewarding. A character monologues optimistically over a montage of her fellow cast members looking stricken or sadistic, all set to ironically foreboding ambient music, punctuated by a small act of violence and an abrupt fade-to-black. There’s an ominous low tone that signals you’re watching Something Very Serious and Important. Behind it, you can almost hear another voice, the voice of the internet opinions to come, assuring you that all of this is as it should be in the best of all possible worlds.

Ed Harris in HBO’s Westworld.

After a few episodes, the fundamental insufficiency of Westworld as a piece of art became impossible for even the most fervent television evangelist to ignore, but the flagship prestige show on the flagship prestige network was simply too big to fail. So the Westworld articles spit out by content-mills focused mostly on decoding the show’s central plot mysteries (Who is the Man in Black? What is the Labyrinth? Who killed Arnold?), rather than on analysis of banalities like “character” or “theme” or “emotional resonance.” The glossier outlets insisted that Westworld’s wooly-headed pretentiousness and compulsive mystery-mongering were actually a satire of prestige TV tropes. (“An exploitation series about exploitation, full of naked bodies that are meant to make us think about nudity and violence that comments on violence”—Emily Nussbaum.) Anything to avoid the obvious fact that everyone watched the show because it had boobs and blood and because everyone else was watching it and it’s so lonely out here.

One of the bitterest of the many bitter ironies of the digital age is that the explosion of television options and web-based platforms featuring cultural writing has led not to a flowering of creativity and a golden age of critical insight, but to an all-consuming monoculture: a cargo cult in which the trappings of a few groundbreaking cable shows from early in the millennium have hardened into tropes that power a legion of inferior imitators. Even more disturbingly, dozens of writers at dozens of outlets that depend on clicks and engagement have forged a hive-mind of positivity about the whole thing, assuring their audience that a diet of cultural junk food is just as healthy as balanced meals, because television is the medium best suited to the lives of internet-addicted office drones. Hopefully, as the Golden Age of the Golden Age of Television recedes farther into memory, and as viewers’ working conditions grow more and more intolerable, there will be a collective realization that we deserve better than Prestige TV.

Illustration by Mike Freiheit.

Imagining The End

The left should embrace both pragmatism and utopianism…

There’s a quote frequently used by leftists to illustrate how deeply ingrained society’s prevailing economic ideology is: “today, it’s easier to imagine the end of the world than the end of capitalism.” First offered by Fredric Jameson, and now almost starting to lose meaning from overuse, the quote points out something that honestly is quite astonishing: it does seem far easier to conceive of the possibility of being boiled alive or sinking into the sea than the possibility of living under a substantially different economic system. World-ending disaster seems not just closer than utopia, but closer than even a modest set of changes to the way human resources are distributed.

Jameson’s quote is often used to show how capitalism has limited the horizons of our imagination. We don’t think of civilization as indestructible, but we do seem to think of the free market as indestructible. This, it is sometimes said, is the result of neoliberalism: as both traditionally left-wing and traditionally right-wing parties in Western countries developed a consensus that markets were the only way forward (“there is no alternative”), more and more people came to hold narrower and narrower views of the possibilities for human society. Being on the right meant “believing in free markets and some kind of nationalism or social conservatism” while being liberal meant “believing in free markets but being progressive on issues of race, gender, and sexual orientation.” Questions like “how do we develop a feasible alternative to capitalism?” were off the table; the only reasonable question about political intervention in the economy became: “should we regulate markets a little bit, or not at all?”

There’s definitely something to this critique. It’s true that, where once people dreamed of replacing capitalism with something better, today human societies seem to face a choice between apocalypse, capitalism, and capitalism followed shortly by apocalypse. Every attempt to speak of a different kind of economy, however appealing it may be emotionally, seems vague and distant, and impossible to know how to actually bring about. Plenty of young people today are socialists, but socialism seems a lot more like a word than an actual thing that could happen.

Some of this is the result of a very successful multi-decade campaign by the right to present free-market orthodoxy as some kind of objective truth rather than a heavily value-laden and political set of contestable ideas. And the Jameson quote also partly succeeds through a kind of misleading pseudo-profundity: it’s always going to be easier to imagine visceral physical things like explosions than changes in economic structures, and so the relative ease of imagining the former versus the latter may not be the especially deep comment on 21st century ideological frameworks that the quotation assumes.

But if socialism seems more remote than ever, it’s also surely partly the fault of socialists themselves. If we ask the question “Why is it difficult to imagine the end of capitalism?”, some of the answer must be “Because socialists haven’t offered a realistic alternative or any kind of plausible path toward such an alternative.” It’s very easy to blame “neoliberal” ideology for convincing people that free-market dogmas are cosmic truths. Yet while Margaret Thatcher may have propagandized and evangelized for the principle that less government is always better government, she didn’t actually prevent people on the left from using their imaginations. If our imaginations have been stunted, it may also be because we have failed to use them to their full capacity, falling back on abstractions and rhetoric rather than developing clear and pragmatic pictures of what a functional left-wing world might look like.


I blame Karl Marx for that, somewhat. Marx helped kill “utopian socialism” (my favorite kind of socialism). The utopian socialists used to actually dream of the kind of worlds they would create, conjuring elaborate and delightfully vivid visions of how a better and more humane world might actually operate. Some of these veered into the absurd (Charles Fourier believed the seas would turn to lemonade), but all of them encouraged people to actually think in serious detail about how human beings live now, and what it would be like if they lived differently. Marx, on the other hand, felt that this was a kind of foolishly romantic, anti-scientific waste of time. The task of the socialist was to discern the inexorable historical laws governing human social development, and then to hasten the advance of a revolution. According to Marx, it was pointless trying to spend time drawing up “recipes for the cook-shops of the future”; instead, left-wing thinkers should do as Marx believed he was doing, and confine themselves “to the mere critical analysis of actual facts.”

But analysis doesn’t actually create proposals, and it was because Marx believed that things could sort themselves out “dialectically” that he didn’t think it was necessary to explain how communism might actually function day-to-day. Ironically, given Marx’s dictum that philosophers should attempt to change the world rather than merely interpret it, Marx and his followers spent an awful lot of time trying to figure out social theories that would properly interpret the world, and precious little time trying to figure out what changes might actually improve people’s lives versus which changes might lead to disaster. (Call me crazy, but I believe this tendency to shun the actual development of policy might have been one reason why nearly every single government that has ever called itself Marxist has very quickly turned into a horror show.)

The left-wing tendency to avoid offering clear proposals for how left ideas might be successfully implemented (without gulags) is not confined to revolutionary communism. The same affliction plagued the Occupy Wall Street movement: a belief in democracy and a hatred of inequality, but a stalwart refusal to try to come up with a feasible route from A to B, where A is our present state of viciously unequal neofeudalism and B is something that might be slightly more bearable and fair. By refusing to issue demands, or to consider what sorts of political, economic, and social adjustments would actually be necessary to actualize Occupy’s set of values, the movement doomed itself. The direct precipitating cause of its fizzling was Occupy’s eviction from Zuccotti Park by the NYPD. But it’s hard to see how a movement that isn’t actually proposing or fighting for anything clear and specific could ever actually get that thing. (Occupy’s “no demands” proponents would have done well to listen to Frederick Douglass, who declared that “Power concedes nothing without a demand.”)

There’s a bit of the same lack of programmatic strategy in the popular leftist disdain for “wonks” and “technocrats.” Nobody finds D.C. data nerds more irritating than I do, but these two terms have become casual pejoratives that can seemingly be applied to anyone who has an interest in policy details. Certainly, it’s important to heap scorn upon the set of “technocratic” Beltway-types who value policy for its own sake, and allow political process to become an end in itself, drained of any substantive moral values or concern with making people’s lives better. But in our perfectly justified hatred for a certain species of wonk, it’s important not to end up dismissing the value of caring about pragmatism and detail.

In fact, I almost feel as if the term “pragmatism” has been unfairly monopolized by centrists, with the unfortunate complicity of many people on the left. “Pragmatism” has come to mean “being a moderate.” But that’s not what the term should mean. Being pragmatic should simply mean “caring about the practical realities of how to implement things.” People like Bill Clinton and Tony Blair helped redefine “liberal pragmatism” to mean “adopting conservative policies as a shortcut to winning power easily.” But being pragmatic doesn’t mean having to sacrifice your idealism. It doesn’t mean tinkering at the margins rather than proposing grand changes. It just means having a plan for how to get things done.

Thus leftism should simultaneously become more pragmatic and more utopian. At its best, utopianism is pragmatic, because it is producing blueprints, and without blueprints, you’ll have trouble building anything. Yes, these days it’s hard to imagine a plausible socialist world. But that’s only partly because so many people insist socialism is impossible. It’s also because socialists aren’t actually doing much imagining. William Morris and the 19th century utopians painted vivid portraits of what a world that embodied their values might look like. Today’s socialists tell us what they deplore (inequality and exploitation), but they’re short on clear plans. But plans are what we need. Serious ones. Detailed ones. Not “technocratic,” necessarily, but certainly technical. It’s time to actually start imagining what something new might really look like.

The Dangerous Academic is an Extinct Species

If these ever existed at all, they are now deader than dodos…

It was curiosity, not stupidity, that killed the dodo. For too long, we have held to the unfair myth that the flightless Mauritian bird became extinct because it was too dumb to understand that it was being killed. But as Stefan Pociask points out in “What Happened to the Last Dodo Bird?”, the dodo was driven into extinction partly because of its desire to learn more about a new, taller, two-legged creature that had disembarked onto the shores of its native habitat: “Fearless curiosity, rather than stupidity, is a more fitting description of their behavior.”

Curiosity does have a tendency to get you killed. The truly fearless don’t last long, and the birds who go out in search of new knowledge are inevitably the first ones to get plucked. It’s always safer to stay close to the nest.

Contrary to what capitalism’s mythologizers would have you believe, the contemporary world does not heap its rewards on those with the most creativity and courage. In fact, at every stage of life, those who venture beyond the safe boundaries of expectation are ruthlessly culled. If you’re a black kid who tends to talk back and call bullshit on your teachers, you will be sent to a special school. If you’re a transgender teenager like Leelah Alcorn in Ohio, and you unapologetically defy gender norms, they’ll make you so miserable that you kill yourself. If you’re Eric Garner, and you tell the police where they can stick their B.S. “loose cigarette” tax, they will promptly choke you to death. Conformists, on the other hand, usually do pretty well for themselves. Follow the rules, tell people what they want to hear, and you’ll come out just fine.

Becoming a successful academic requires one hell of a lot of ass-kissing and up-sucking. You have to flatter and impress. The very act of applying to graduate school to begin with is an exercise in servility: please deem me worthy of your favor. In order to rise through the ranks, you have to convince people of your intelligence and acceptability, which means basing everything you do on a concern for what other people think. If ever you find that your conclusions would make your superiors despise you (say, for example, if you realized that much of what they wrote was utter irredeemable manure), you face a choice: conceal your true self or be permanently consigned to the margins.

The idea of a “dangerous” academic is therefore somewhat self-contradictory to begin with. The academy could, potentially, be a place for unfettered intellectual daring. But the most daring and curious people don’t end up in the academy at all. These days, they’ve probably gone off and done something more interesting, something that involves a little bit less deference to convention and detachment from the material world. We can even see this in the cultural archetype of the Professor. The Professor is always a slightly harrumphy—and always white and male—individual, with scuffed shoes and jackets with leather elbows, hidden behind a mass of seemingly disorganized books. He is brilliant but inaccessible, and if not effeminate, certainly effete. But bouncing with ideas, so many ideas. There is nothing particularly menacing about such a figure, certainly nothing that might seriously threaten the existing arrangements of society. Of ideas he has plenty. Of truly dangerous ones, none at all.

If anything, the university has only gotten less dangerous in recent years. Campuses like Berkeley were once centers of political dissent. There was open confrontation between students and the state. In May of 1970, the Ohio National Guard killed four students at Kent State. Ten days later, police at the historically black Jackson State University fired into a crowd of students, killing two. At Cornell in 1969, armed black students took over the student union building in a demand for recognition and reform, part of a pattern of serious upheaval.

But over the years the university became corporatized. It became a job training center rather than an educational institution. Academic research became progressively more specialized, narrow, technical, and obscure. (The most successful scholarship is that which seems to be engaged with serious social questions, but does not actually reach any conclusions that would force the Professor to leave his office.)


The ideas that do get produced have also become more inaccessible, with research inevitably cloaked behind the paywalls of journals that cost astronomical sums of money. At the cheaper end, the journal Cultural Studies charges individuals $201 for just the print edition, and charges institutions $1,078 for just the online edition. The science journal Biochimica et Biophysica Acta costs $20,000, which makes Cultural Studies look like a bargain. (What makes the pricing especially egregious is that these journals are created mostly with free labor, as academics who produce articles are almost never paid for them.) Ideas in the modern university are not free and available to all. They are in fact tethered to a vast academic industrial complex, where giant publishing houses like Elsevier make massive profits off the backs of researchers.

Furthermore, the academics who produce those ideas aren’t exactly at liberty to think and do as they please. The overwhelming “adjunctification” of the university has meant that approximately 76% of professors… aren’t professors at all, but underpaid and overworked adjuncts, lecturers, and assistants. And while conditions for adjuncts are slowly improving, especially through more widespread unionization, their place in the university is permanently unstable. This means that no adjunct can afford to seriously offend. To make matters worse, adjuncts rely heavily on student evaluations to keep their positions, meaning that their classrooms cannot be places to heavily contest or challenge students’ politics. Instructors could literally lose their jobs over even the appearance of impropriety. One false step—a video seen as too salacious, or a political opinion held as oppressive—could be the end of a career. An adjunct must always be docile and polite.

All of this means that university faculty are less and less likely to threaten any aspect of the existing social or political system. Their jobs are constantly on the line, so there’s a professional risk in upsetting the status quo. But even if their jobs were safe, the corporatized university would still produce mostly banal ideas, thanks to the sycophancy-generating structure of the academic meritocracy. But even if truly novel and consequential ideas were being produced, they would be locked away behind extortionate paywalls.

The corporatized university also ends up producing the corporatized student. Students worry about doing anything that may threaten their job prospects. Consequently, acts of dissent have become steadily de-radicalized. On campuses these days, outrage and anger is reserved for questions like, “Is this sushi an act of cultural appropriation?” When student activists do propose ways to “radically” reform the university, it tends to involve adding new administrative offices and bureaucratic procedures, i.e. strengthening the existing structure of the university rather than democratizing it. Instead of demanding an increase in the power of students, campus workers, and the untenured, activists tend to push for symbolic measures that universities happily embrace, since they do not compromise the existing arrangement of administrative and faculty power.

It’s amusing, then, that conservatives have long been so paranoid about the threat posed by U.S. college campuses. The American right has an ongoing fear of supposedly arch-leftist professors brainwashing nubile and impressionable young minds into following sinister leftist dictates. Since the publication of massively popular books like Roger Kimball’s 1990 Tenured Radicals and Dinesh D’Souza’s 1991 Illiberal Education: The Politics of Race and Sex on Campus, colleges have been seen as hotbeds of Marxist indoctrination that threaten the civilized order. This is a laughable idea, for the simple reason that academics are the very opposite of revolutionaries: they intentionally speak to minuscule audiences rather than the masses (on campus, to speak of a “popular” book is to deploy a term of faint disdain), and they are fundamentally concerned with preserving the security and stability of their own position. This makes them deeply conservative in their day-to-day acts, regardless of what may come out of their mouths. (See the truly pitiful lack of support among Harvard faculty when the university’s dining hall workers went on strike for slightly higher wages. Most of the “tenured radicals” couldn’t even be bothered to sign a petition supporting the workers, let alone march in the streets.)

But left-wing academics are all too happy to embrace the conservatives’ ludicrous idea of professors as subversives. This is because it reassures them that they are, in fact, consequential, that they are effectively opposing right-wing ideas, and that they need not question their own role. The “professor-as-revolutionary” caricature serves both the caricaturist and the professor. Conservatives can remain convinced that students abandon conservative ideas because they are being manipulated, rather than because reading books and learning things makes it more difficult to maintain right-wing prejudices. And liberal professors get to delude themselves into believing they are affecting something.


Today, in what many call “Trump’s America,” the idea of universities as sites of “resistance” has been renewed on both the left and right. At the end of 2016, Turning Point USA, a conservative youth group, created a website called Professor Watchlist, which set about listing academics it considered dangerously leftist. The goal, stated on the Turning Point site, is “to expose and document college professors who discriminate against conservative students and advance leftist propaganda in the classroom.”

Some on the left are delusional enough to think that professors as a class can and should be presenting a united front against conservatism. At a recent University of Chicago event, a document was passed around from Refusefascism.org titled, “A Call to Professors, Students and All in Academia,” calling on people to “Make the University a Zone of Resistance to the Fascist Trump Regime and the Coming Assault on the Academy.”

Many among the professorial class seem to want to do exactly this, seeing themselves as part of the intellectual vanguard that will serve as a bulwark against Trumpism. George Yancy, a professor of philosophy and race studies at Emory University, wrote an op-ed in the New York Times, titled “I Am A Dangerous Professor.” Yancy discussed his own inclusion on the Professor Watchlist, before arguing that he is, in fact, dangerous:

“In my courses, which the watchlist would like to flag as ‘un-American’ and as ‘leftist propaganda,’ I refuse to entertain my students with mummified ideas and abstract forms of philosophical self-stimulation. What leaves their hands is always philosophically alive, vibrant and filled with urgency. I want them to engage in the process of freeing ideas, freeing their philosophical imaginations. I want them to lose sleep over the pain and suffering of so many lives that many of us deem disposable. I want them to become conceptually unhinged, to leave my classes discontented and maladjusted…Bear in mind that it was in 1963 that the Rev. Dr. Martin Luther King, Jr. raised his voice and said: ‘I say very honestly that I never intend to become adjusted to segregation and discrimination.’… I refuse to remain silent in the face of racism, its subtle and systemic structure. I refuse to remain silent in the face of patriarchal and sexist hegemony and the denigration of women’s bodies.”

He ends with the words:

“Well, if it is dangerous to teach my students to love their neighbors, to think and rethink constructively and ethically about who their neighbors are, and how they have been taught to see themselves as disconnected and neoliberal subjects, then, yes, I am dangerous, and what I teach is dangerous.”

Of course, it’s not dangerous at all to teach students to “love their neighbors,” and Yancy knows this. He wants to simultaneously possess and devour his cake: he is doing nothing that anyone could possibly object to, yet he is also attempting to rouse his students to overthrow the patriarchy. He suggests that his work is so uncontroversial that conservatives are silly to fear it (he’s just teaching students to think!), but also places himself in the tradition of Martin Luther King, Jr., who was trying to radically alter the existing social order. His teaching can be revolutionary enough to justify Yancy spending time as a philosophy professor during the age of Trump, but benign enough for the Professor Watchlist to be an act of baseless paranoia.

Much of the revolutionary academic resistance to Trump seems to consist of spending a greater amount of time on Twitter. Consider the case of George Ciccariello-Maher, a political scientist at Drexel University who specializes in Venezuela. In December of 2016, Ciccariello-Maher became a minor cause célèbre on the left after getting embroiled in a flap over a tweet. On Christmas Eve, for who only knows what reason, Ciccariello-Maher tweeted “All I Want for Christmas is White Genocide.” Conservatives became enraged, and began calling upon Drexel to fire him. Ciccariello-Maher insisted he had been engaged in satire, although nobody could understand what the joke was intended to be, or what the tweet even meant in the first place. After Drexel disowned Ciccariello-Maher’s words, a petition was launched in his defense. Soon, Ciccariello-Maher had lawyered up, Drexel confirmed that his job was safe, and the whole kerfuffle was over before the nation’s half-eaten leftover Christmas turkeys had been served up into sandwiches and casseroles.

Ciccariello-Maher continues to spend a great deal of time on Twitter, where he frequently issues macho tributes to violent political struggle and postures as a revolutionary. But despite his temporary status as a martyr for the cause of academic freedom, one who terrifies the reactionaries, there was nothing dangerous about his act. He hadn’t really stirred up a hornet’s nest; after all, people who poke actual hornets’ nests occasionally get stung. A more apt analogy is that he had gone to the zoo to tap on the glass in the reptile house, or to throw twigs at some tired crocodiles in a concrete pool. (When they turned their rheumy eyes upon him, he ran from the fence, screaming that dangerous predators were after him.) U.S. academics who fancy themselves involved in revolutionary political struggles are trivializing the risks faced by actual political dissidents around the world, including the hundreds of environmental activists who have been murdered globally for their efforts to protect indigenous land.


Of course, it’s true that there are still some subversive ideas on university campuses, and some true existing threats to academic and student freedom. Many of them have to do with Israel or labor organizing. In 2014, Steven Salaita was fired from a tenured position at the University of Illinois for tweets he had made about Israel. (After a protracted lawsuit, Salaita eventually reached a settlement with the university.) Fordham University tried to ban a Students for Justice in Palestine group, and the University of California Board of Regents attempted to introduce a speech code that would have punished much criticism of Israel as “hate speech.” The test of whether your ideas are actually dangerous is whether you are rewarded or punished for expressing them.

In fact, in terms of danger posed to the world, the corporatized university may itself be more dangerous than any of the ideas that come out of it.

In Hyde Park, where I live, the University of Chicago seems ancient and venerable at first glance. Its Ye Olde Kinda Sorta Englande architecture, built in 1890 to resemble Oxbridge, could almost pass for medieval if one walked through it at dusk. But the institution is in fact deeply modern, and like Columbia University in New York, it has slowly absorbed the surrounding neighborhood, slicing into older residential areas and displacing residents in landgrab operations. Despite being home to one of the world’s most prestigious medical and research schools, the university refused for many years to open a trauma center to serve the city’s South Side, which had been without access to trauma care. (The school only relented in 2015, after a long history of protests.) The university ferociously guards its myriad assets with armed guards on the street corners, and enacts massive surveillance on local residents (the university-owned cinema insists on examining bags for weapons and food, a practice I have personally experienced being selectively conducted in a racially discriminatory manner). In the university’s rapacious takeover of the surrounding neighborhood, and its treatment of local residents—most of whom are of color—we can see what happens when a university becomes a corporation rather than a community institution. Devouring everything in the pursuit of limitless expansion, it swallows up whole towns.

The corporatized university, like corporations generally, is an uncontrollable behemoth, absorbing greater and greater quantities of capital and human lives, and churning out little of long-term social value. Thus Yale University needlessly opened a new campus in Singapore despite the country’s human rights record and restrictions on political speech, and New York University needlessly expanded to Abu Dhabi, its new UAE campus built by low-wage workers under brutally repressive conditions. The corporatized university serves nobody and nothing except its own infinite growth. Students are indebted, professors lose job security, surrounding communities are surveilled and displaced. That is something dangerous.

Left professors almost certainly sense this. They see themselves disappearing, the campus becoming a steadily more stifling environment. Posturing as a macho revolutionary is, like all displays of machismo, driven partially by a desperate fear of one’s impotence. They know they are not dangerous, but they are happy to play into the conservative stereotype. But the “dangerous academic” is like the dodo in 1659, a decade before its final sighting and extinction: almost nonexistent. And the more universities become like corporations, the fewer of these unique birds will be left. Curiosity kills, and those who truly threaten the inexorable logic of the neoliberal university are likely to end up extinct.

Illustrations by Chris Matthews.

How Liberals Fell In Love With The West Wing

Aaron Sorkin’s political drama shows everything wrong with the Democratic worldview…

In the history of prestige TV, few dramas have had quite the cultural staying power of Aaron Sorkin’s The West Wing.

Set during the two terms of fictional Democratic President and Nobel Laureate in Economics Josiah “Jed” Bartlet (Martin Sheen), the show depicts the inner workings of a sympathetic liberal administration grappling with the daily exigencies of governing. Every procedure and protocol, every piece of political brokerage—from State of the Union addresses to legislative tugs of war to Supreme Court appointments—is recreated with an aesthetic authenticity enabled by ample production values (a single episode reportedly cost almost $3 million to produce) and rendered with a dramatic flair that stylizes all the bureaucratic banality of modern governance.

Nearly the same, of course, might be said for other glossy political dramas such as Netflix’s House of Cards or Scandal. But The West Wing aspires to more than simply visual verisimilitude. Breaking with the cynicism or amoralism characteristic of many dramas about politics, it offers a vision of political institutions which is ultimately affirmative and approving. What we see throughout its seven seasons are Democrats governing as Democrats imagine they govern, with the Bartlet Administration standing in for liberalism as liberalism understands itself.

More than simply a fictional account of an idealized liberal presidency, then, The West Wing is an elaborate fantasia founded upon the shibboleths that sustain Beltway liberalism and the milieu that produced them.

“Ginger, get the popcorn
The filibuster is in
I’m Toby Ziegler with The Drop In
What Kind of Day Has It Been?
It’s Lin, speaking the truth”

—Lin-Manuel Miranda, “What’s Next?”

During its run from 1999 to 2006, The West Wing garnered immense popularity and attention, capturing three Golden Globe Awards and 26 Emmys and building a devout fanbase among Democratic partisans, Beltway acolytes, and people of the liberal-ish persuasion the world over. Since its finale more than a decade ago, it has become an essential part of the liberal cultural ecosystem, its importance arguably on par with The Daily Show, Last Week Tonight, and the rap musical about the founding fathers people like for some reason.

If anything, its fandom has only continued to grow with age: In the summer of 2016, a weekly podcast hosted by seasons 4-7 star Joshua Malina, launched with the intent of running through all 154 episodes (at a rate of one per week), almost immediately garnered millions of downloads; an elaborate fan wiki with almost 2000 distinct entries is maintained and regularly updated, magisterially documenting every mundane detail of the West Wing cosmos save the characters’ bowel movements; and, in definitive proof of the silence of God, superfan Lin-Manuel Miranda has recently recorded a rap named for one of the show’s most popular catchphrases (“What’s next?”).

While certainly appealing to a general audience thanks to its expensive sheen and distinctive writing, The West Wing’s greatest zealots have proven to be those who professionally inhabit the very milieu it depicts: Washington political staffers, media types, centrist cognoscenti, and various others drawn from the ranks of people who tweet “Big, if true” in earnest and think a lanyard is a talisman that grants wishes and wards off evil.  

The West Wing “took something that was for the most part considered dry and nerdy—especially to people in high school and college—and sexed it up,” former David Axelrod advisor Eric Lesser told Vanity Fair in a longform 2012 feature about the “Sorkinization of politics” (Axelrod himself having at one point advised West Wing writer Eli Attie). It “very much served as inspiration,” said Micah Lasher, a staffer who then worked for Michael Bloomberg.

Thanks to its endless depiction of procedure and policy, the show naturally jibed with the wonkish libidos of future Voxsplainers Matt Yglesias and Ezra Klein. “There’s a cultural meme or cultural suggestion that Washington is boring, that policy is boring, but it’s important stuff,” said Klein, adding that the show dramatized “the immediacy and urgency and concern that people in this town feel about the issues they’re working on.” “I was interested in politics before the show started,” added Yglesias. “But a friend of mine from college moved to D.C. at the same time as me, after graduation, and we definitely plotted our proposed domination of the capital in explicitly West Wing terms: Who was more like Toby? Who was more like Josh?”

Far from the Kafkaesque banality that so often characterizes the real-life equivalent, the mundane business of technocratic governance is made to look exciting, intellectually stimulating, and, above all, honorable. The bureaucratic drudgery of both White House management and governance, from speechwriting, to press conference logistics, to policy creation, is front and center across all seven seasons. A typical episode script is chock-full of dweebish phraseology — “farm subsidies”, “recess appointments”, “census bureau”, “congressional consultation” — usually uttered by swift-tongued, Ivy League-educated staffers darting purposefully through labyrinthine corridors during the infamous “walk-and-talk” sequences. By recreating the look and feel of political processes to a tee, while garnishing them with a romantic veneer, the show gifts the Beltway’s most spiritually-devoted adherents with a vision of how many of them would probably like to see themselves.

In serving up this optimistic simulacrum of modern US politics, Sorkin’s universe has repeatedly intersected with real-life US politics. Following the first season, and in the midst of the 2000 presidential election contest, Salon’s Joyce Millman wrote: “Al Gore could clinch the election right now by staging as many photo-ops with the cast of The West Wing as possible.” A poll published during the same election found that most voters preferred Martin Sheen’s President Bartlet to Bush or Gore. A 2008 New York Times article predicted an Obama victory on the basis of the show’s season 6-7 plot arc. The same election year, the paper published a fictionalized exchange between Bartlet and Barack Obama penned by Sorkin himself. 2016 proved no exception, with the New Statesman’s Helen Lewis reacting to Donald Trump’s victory by saying: “I’m going to hug my West Wing boxset a little closer tonight, that’s for sure.”

Appropriately, many of the show’s cast members, leveraging their on-screen personas, have participated or intervened in real Democratic Party politics. During the 2016 campaign, star Bradley Whitford—who portrays frenetically wily strategist Josh Lyman—was invited to “reveal” who his [fictional] boss would endorse:

“There’s no doubt in my mind that Hillary would be President Bartlet’s choice. She’s—nobody is more prepared to take that position on day one. I know this may be controversial. But yes, on behalf of Jed Bartlet, I want to endorse Hillary Clinton.”

Six leading members of the cast, including Whitford, were even dispatched to Ohio to stump for Clinton (inexplicably failing to swing the crucial state in her favor).


During the Democratic primary season, Rob Lowe (who appeared from 1999 to 2003 before leaving in protest at the ostensible stinginess of his $75,000-per-episode salary) even deployed a clip from the show and paraphrased his own character’s lines during an attack on Bernie Sanders’ tax plan: “Watching Bernie Sanders. He’s hectoring and yelling at me WHILE he’s saying he’s going to raise our taxes. Interesting way to communicate.” In the Season 2 episode “The Fall’s Gonna Kill You”, Lowe’s character Sam Seaborn angrily lectures a team of speechwriters:

“Every time your boss got on the stump and said, ‘It’s time for the rich to pay their fair share,’ I hid under a couch and changed my name…The top one percent of wage earners in this country pay for twenty-two percent of this country. Let’s not call them names while they’re doing it, is all I’m saying.”

What is the actual ideology of The West Wing? Just like the real American liberalism it represents, the show proved to be something of a political weather vane throughout its seven seasons on the air.

Debuting during the twilight of the Clinton presidency and spanning much of Bush II’s, it predictably vacillated somewhat in response to events while remaining grounded in a general liberal ethos. Sorkin, who held writing credits on all but one episode of the show’s first four seasons, left in 2003, with Executive Producer John Wells characterizing the subsequent direction as more balanced and bipartisan. The Bartlet administration’s actual politics—just like those of the real Democratic Party and its base—therefore run the gamut from the stuff of Elizabeth Warren-esque populism to the neoliberal bilge you might expect to come from a Beltway think tank having its white papers greased by dollars from Goldman Sachs.

But promoting or endorsing any specific policy orientation is not the show’s true raison d’être. At the conclusion of its seven seasons it remains unclear if the Bartlet administration has succeeded at all in fundamentally altering the contours of American life. In fact, after two terms in the White House, Bartlet’s gang of hyper-educated, hyper-competent politicos do not seem to have any transformational policy achievements whatsoever. Even in their most unconstrained and idealized political fantasies, liberals manage to accomplish nothing.

The lack of any serious attempts to change anything reflects a certain apolitical tendency in this type of politics, one that defines itself by its manner and attitude rather than by a vision of the change it wishes to see in the world. Insofar as there is an identifiable ideology, it isn’t one definitively wedded to a particular program of reform, but instead to a particular aesthetic of political institutions. The business of leveraging democracy for any specific purpose comes second to how its institutional liturgy and processes look and, more importantly, how they make us feel—virtue being attached more to posture and affect than to any particular goal. Echoing Sorkin’s 1995 film The American President (in many ways the progenitor of The West Wing), it delights in invoking “seriousness” and the supposedly hard-headed pragmatism of grownups.


Consider a scene from Season 2’s “The War at Home”, in which Toby Ziegler confronts a rogue Democratic senator over his objections to Social Security cuts the administration proposes to make in collaboration with a Republican Congress. The episode’s sympathies certainly do not lie with the senator, who tries to draw a line in the sand over the “compromising of basic Democratic values” and threatens to run a third-party presidential campaign, only to be admonished acerbically by Ziegler:

“If you think demonizing people who are trying to govern responsibly is the way to protect our liberal base, then speaking as a liberal…go to bed, would you please?…Come at us from the left, and I’m gonna own your ass.”

The administration and its staff are invariably depicted as tribunes of the serious and the mature, their ideological malleability taken to signify their virtue more than any fealty to specific liberal principles.

Even when the show ventures to criticize the institutions of American democracy, it never retreats from a foundational reverence for their supposed enlightenment and the essential nobility of most of the people who administer them. As such, the presidency’s basic function is to appear presidential and, more than anything, Jed Bartlet’s patrician aura and respectable disposition make him the perfect avatar for the West Wing universe’s often maudlin deference to the liturgy of “the office.” “Seriousness,” then—the superlative quality in the Sorkin taxonomy of virtues—implies presiding over the political consensus, tinkering here and there, and looking stylish in the process by way of soaring oratory and white-collar chic.

“Make this election about smart, and not. Make it about engaged, and not. Qualified, and not. Make it about a heavyweight. You’re a heavyweight. And you’ve been holding me up for too many rounds.”

—Toby Ziegler, “Hartsfield’s Landing” (Season 3, Episode 14)

Despite its relatively thin ideological commitments, there is a general tenor to the West Wing universe that cannot be called anything other than smug.

It’s a smugness born of the view that politics is less a terrain of clashing values and interests than a perpetual pitting of the clever against the ignorant and obtuse. The clever wield facts and reason, while the foolish cling to effortlessly exposed fictions and the braying prejudices of provincial rubes. In emphasizing intelligence over ideology, the show fetishizes “elevated discourse” regardless of its actual outcomes or conclusions. The greatest political victories involve semantically dismantling an opponent’s argument or exposing its hypocrisy, usually by way of some grand rhetorical gesture. Categories like left and right become less significant, provided that the competing interlocutors are deemed respectably smart and practice the designated etiquette. The Discourse becomes a category of its own, to be protected and nourished by Serious People conversing respectfully while shutting down the stupid with heavy-handed moral sanctimony.

In Toby Ziegler’s “smart and not,” “qualified and not” formulation, we can see a preview of the (disastrous) rhetorical strategy that Hillary Clinton would ultimately adopt against Donald Trump. Don’t make it about vision, make it about qualification. Don’t make it about your plans for how to make people’s lives better, make it about your superior moral character. Fundamentally, make it about how smart and good and serious you are, and how bad and dumb and unserious they are.


In this respect, The West Wing’s foundational serious/unserious binary falls squarely within the tradition that has since evolved into the “epic own/evisceration” genre characteristic of social media and late night TV, in which the aim is to ruthlessly use one’s intellect to expose the idiocy and hypocrisy of the other side. In a famous scene from Season 4’s “Game On”, Bartlet debates his Republican rival Governor Robert Ritchie (James Brolin). Their exchange, prompted by a question about the role of the federal government, is the stuff of a John Oliver wet dream:  

Ritchie: My view of this is simple. We don’t need a federal Department of Education telling us our children have to learn Esperanto, they have to learn Eskimo poetry. Let the states decide, let the communities decide on health care and education, on lower taxes, not higher taxes. Now he’s going to throw a big word at you — ‘unfunded mandate’, he’s going to say if Washington lets the states do it, it’s an unfunded mandate. But what he doesn’t like is the federal government losing power. I call it the ingenuity of the American people.

Bartlet: Well, first of all, let’s clear up a couple of things: ‘unfunded mandate’ is two words, not one big word. There are times when we are 50 states and there are times when we’re one country and have national needs. And the way I know this is that Florida didn’t fight Germany in World War Two or establish civil rights. You think states should do the governing wall-to-wall. Now, that’s a perfectly valid opinion. But your state of Florida got 12.6 billion dollars in federal money last year from Nebraskans and Virginians and New Yorkers and Alaskans, with their Eskimo poetry — 12.6 out of the state budget of 50 billion. I’m supposed to be using this time for a question, so here it is: Can we have it back, please?

In an even more famous scene, from the Season 2 episode “The Midterms”, Bartlet humiliates homophobic talk radio host Jenna Jacobs by quoting scripture from memory, destroying her by her very own logic.


If Ritchie and Jacobs are the obtuse yokels to be epically taken down with facts and reason, the show also elevates several conservative characters to reinforce its postpartisan celebration of The Discourse. Republicans come in two types: slack-jawed caricatures, and people whose high-mindedness and mutual enthusiasm for Putting Differences Aside make them the Bartlet Administration’s natural allies or friends regardless of whatever conflicts of values they may ostensibly have. Foremost among the latter is Arnold Vinick (Alan Alda): a moderate, pro-choice Republican senator who resembles John McCain (at least the imaginary “maverick” John McCain that liberals continue to pretend exists) and is appointed by Bartlet’s Democratic successor Matthew Santos to be Secretary of State. (In reality, there is no such thing as a “moderate” Republican, only a polite one. The upright and genial Paul Ryan, whom President Bartlet would have loved, is on a lifelong quest to dismantle every part of America’s feeble social safety net.)

Thus Bartlet Democrats do not see Republicans as the “enemy,” except to the extent that they are rude or insufficiently respectful of the rules of political decorum. In one Season 5 plot, the administration opts to install a Ruth Bader Ginsburg clone (Glenn Close) as Chief Justice of the Supreme Court. The price it pays—willingly, as it turns out—is giving the other vacancy to an ultra-conservative justice, for the sole reason that Bartlet’s staff find their amiable squabbling stimulating. Anyone with substantively progressive political values would be horrified by a liberal president’s appointment of an Antonin Scalia-style textualist to the Supreme Court. But if your values are procedural, based more on the manner in which people conduct themselves rather than the consequences they actually bring about, it’s easy to chuckle along with a hard-right conservative, so long as they are personally charming (Ziegler: “I hate him, but he’s brilliant. And the two of them together are fighting like cats and dogs … but it works.”)

“What’s next?”

Through its idealized rendering of American politics and its institutions, The West Wing offers a comforting avenue of escape from the grim and often dystopian reality of the present. If the show, despite its age, has continued to find favor and relevance among liberals, Democrats, and assorted Beltway acolytes alike, it is because it reflects and affirms their worldview with greater fidelity and catharsis than any of its contemporaries.

But if anything should give that worldview’s adherents pause, it is the events of the past eight years. Liberals got a real-life Josiah Bartlet in the figure of Barack Obama, a charismatic and stylish politician elected on a populist wave. But Obama’s soaring speeches, quintessentially presidential affect, and deference to procedure did little to fundamentally improve the country or prevent his Republican rivals from storming the Congressional barricades at their first opportunity. Confronted by a mercurial TV personality bent on transgressing every norm and truism of Beltway thinking, Democrats responded by exhaustively informing voters of his indecency and hypocrisy, attempting to destroy him countless times with his own logic, but ultimately leaving him completely intact. They smugly taxonomized as “smart” and “dumb” the very electorate they needed to win over, and retreated into an ideological fever dream in which political success doesn’t come from organizing and building power, but from having the most polished arguments and the most detailed policy statements. If you can just crush Trump in the debates, as Bartlet did to Ritchie, then you’ve won. (That’s not an exaggeration of the worldview. Ezra Klein published an article entitled “Hillary Clinton’s 3 debate performances left the Trump campaign in ruins,” which entirely eliminated the distinction between what happens in debates and what happens in campaigns. The belief that politics is about argument rather than power is likely a symptom of a Democratic politics increasingly incubated in the Ivy League rather than the labor movement.)

Now, facing defeat and political crisis, the overwhelming liberal instinct has not been self-reflection but a further retreat into fantasy and orthodoxy. Like viewers at the climax of The West Wing’s original run, they sit waiting for the decisive gestures and gratifying crescendos of a series finale, only to find their favorite plotlines and characters meandering without resolution. Shockingly, life is not a television program, and Aaron Sorkin doesn’t get to write the ending.

The West Wing is many things: a uniquely popular and lavish effort in prestige TV; an often crisply-written drama; a fictionalized paean to Beltway liberalism’s foundational precepts; a wonkish celebration of institutions and processes; an exquisitely-tailored piece of political fanfiction.

But, in 2017, it is foremost a series of glittering illusions to be abandoned.

Illustrations by Meg T. Callahan.

I’m Not Sure It’s An Attack On Democracy

James Comey should have been fired. But Trump just committed a serious blunder.

The first point to make here is that firing FBI director James Comey was completely justified. Trump-appointed Deputy Attorney General Rod Rosenstein laid out an extremely persuasive case in his memorandum on “Restoring Public Confidence in the FBI.” Rosenstein said that in the FBI’s investigation of Hillary Clinton’s emails, James Comey seriously overstepped the boundaries of his role. Comey’s role was far too public, and in both his decision to issue his own public recommendation on whether Clinton should be prosecuted and his gratuitous commentary on the investigation (a judicious silence is the preferred stance), Comey turned the email investigation into a spectacle. Rosenstein is witheringly critical of Comey’s infamous press conference, in which Comey chastised Clinton for her irresponsibility:

The Director ignored [a] longstanding principle: we do not hold press conferences to release derogatory information about the subject of a declined criminal investigation. Derogatory information sometimes is disclosed in the course of criminal investigations and prosecutions, but we never release it gratuitously. The Director laid out his version of the facts for the news media as if it were a closing argument, but without a trial. It is a textbook example of what federal prosecutors and agents are taught not to do.

Rosenstein’s memo should please Hillary Clinton’s supporters. He quotes from bipartisan legal authorities, and confirms what many Democrats have been insisting loudly since October: that James Comey’s actions were improper. Thus Democrats, many of whom believe Comey’s transgressions cost them the election, should agree that firing Comey was completely warranted and necessary, and that the Trump administration’s stated grounds for doing so were correct. If Hillary Clinton had done it, most of them would have cheered, and strongly defended the decision.

But Democrats aren’t cheering the firing of James Comey, because nobody actually believes that Donald Trump fired Comey for mishandling the Clinton email investigation. After all, since many people believe that Comey’s actions gave Trump the election, Trump should adore Comey. For Trump to have fired Comey for the reasons he said he did, Trump would have to have a high-minded and principled devotion to fairness and propriety. And since nobody can think of a single time when Donald Trump has taken an action for reasons of high-mindedness and principle, there is a near universal belief that Trump’s citation of the Clinton investigation as his reason for firing Comey is a flimsy pretext.

Instead, Trump is more likely to have fired Comey over the FBI’s ongoing investigation into possible connections between the Trump campaign and the Russian government. Politico reports that in recent days, Trump had been furiously yelling at his television whenever the Russia story came up, and may have been exasperated with Comey over Comey’s public confirmation of the existence of an investigation.

Because of that, the firing is being viewed as outrageous, possibly even a “constitutional crisis.” Trump is being compared to strongman rulers who try to eradicate all checks on their power; he is possibly even a “ruthless despot.” The Guardian reports a consensus among observers that Trump’s decision has “taken US democracy into dark and dangerous new territory.” The New Yorker’s John Cassidy calls Comey’s firing “an attack on American democracy,” and says that the incident confirms worries about Trump’s attitudes towards “democratic norms, the Constitution, and the rule of law.”

A lot of highly-charged criticisms are being leveled at Trump, and it might be good to sort them out. Is this an attack on the Constitution, democracy, and the rule of law? Is Trump becoming a dictator? Is this a usurpation or abuse of power? Well, first, we should consider what the terms involved actually imply. Terms like “democracy” and “rule of law” are often bandied carelessly about without regard to the distinctions between them. (People even use the words “democracy” and “Constitution” interchangeably sometimes, which obscures how obscenely undemocratic the Constitution actually is.) I hate to sound like a civics teacher, but let’s just be clear: the Constitution is a particular set of rules and procedures for the government, democracy is popular control of government institutions, and the rule of law is the principle that laws should be applied according to certain defined standards (consistency being foremost). So: (1) Trump violates the Constitution if he defies the rules and procedures that are specified in it; (2) Trump erodes or destroys democracy to the extent that he removes government from popular control; and (3) Trump contravenes the “rule of law” when he tries to keep laws that apply to other people from applying to himself, his family, and his cronies.

I swear this is not just semantics or pedantry (though I know the people who swear this the most insistently are the ones most likely to be semanticists or pedants). It has some important implications. In firing Comey, Trump has not attacked “democracy” or “the Constitution.” Under the Constitution, the president oversees the executive branch, and it is his prerogative to decide whether the FBI director is doing his job well. An argument that this decision shouldn’t be up to Trump is an argument that this decision should rest with someone other than the president. But constitutionally speaking, it’s up to the president, and Trump hasn’t ripped up the Constitution.

The suggestion that Trump has undermined “democracy” requires an even greater distortion. In fact, it’s far more undemocratic to believe that the FBI director should be independent. As it stands, the (unelected) FBI director is held accountable by the (elected) president. If the people don’t like what the president does with his FBI director, well, that’s why there’s a 2020 election. But insulating parts of the executive branch to operate on their own is not “democracy.” It may be a reduction of presidential power, which we might want, but it’s also an increase in a less accountable bureaucratic power, one made even more terrifying when it’s handed to law enforcement. The ultimate model of the “independent” FBI head is J. Edgar Hoover, who operated for decades as the controller of his own “government within a government.” This is important, because Democrats who loathe Trump may be increasingly tempted to want “independent” parts of the executive branch to put checks on his power. But that can amount to empowering the “deep state,” those parts of the government that aren’t subject to popular control at elections. And wishing that the FBI and CIA had more control over Trump may amount to wishing that a secretive unelected part of our government can wield power over the democratic part. Like or loathe Trump, at least the American people got to decide whether he would be president.


But what about the “rule of law”? Here’s where Trump’s firing of Comey actually is alarming. Trump may not have violated the Constitution, and he may have exercised the power democratically handed to him by the voters. However, in trying to squelch an investigation into his own possible lawbreaking, Trump has undermined the idea that laws should apply equally to all. Having a two-tiered justice system, in which the powerful can simply wish away any attempts to hold them to the same standards as everybody else, creates a dangerously unequal society and can slowly lead to tyranny. Trump may not have broken any laws in firing Comey. But the “rule of law” is different than just “the application of all the laws that exist.” It is a principle for how laws ought to be. A state can have laws without having the “rule of law.” (The law might be “all justice shall be arbitrary,” for example. And while in that case we’d have law, we would have a grossly inconsistent law that doesn’t approach “rule of law” standards.) When Trump tries to limit the enforcement of laws against himself, he undermines that standard.

At the same time, this doesn’t make Trump a “dictator.” And if people are convinced that Trump has become a “dictator” or an “autocrat,” they may actually fail to see that firing Comey was a foolish blunder on Trump’s part, rather than a successful seizure of power. In trying to get rid of the Russia investigation, Trump has drawn far more attention to it and made himself look guilty, and now he might face bipartisan calls for an independent special prosecutor.

It’s actually kind of funny that nobody in the administration was able to convince Trump not to do it. Any sensible adviser would have pleaded with him: “Mr. Trump, I know you’re angry with Comey, but you can not fire him. I know you want the Russia story to go away, but this will instantly make it ten times worse, because it will look as if you are trying to hide something.”

This is exactly what happened. Hardly anybody believes the “Clinton email” justification, which would require us to think Trump felt Comey was unfair to Clinton. Instead, they think he is trying to cover something up. As John Cassidy writes:

Until the White House comes up with a less ludicrous rationalization for its actions, we can only assume that Trump fired Comey because the Russia investigation is closing in on him and his associates, and he knew that he didn’t have much sway over the F.B.I. director. That is the simplest theory that fits the facts.

Importantly, this is not the “simplest theory that fits the facts.” It is one of two competing simple theories. The other theory is that Trump fired Comey because he feels about the Russia investigation the same way that Clinton felt about her email investigation: that it’s a bunch of overblown nonsense, and that Comey is helping keep a B.S. non-scandal afloat through his irresponsible overzealousness. In fact, from Politico’s account, this is what Trump himself seems to have conveyed within the White House.

So it could be, as Cassidy says, that Trump knows Comey was closing in on some devastating truth. But it could also be that there isn’t any devastating truth, and that Trump simply became frustrated that Comey seemed to be getting too big for his britches, assuming an outsized amount of power relative to the president. In fact, one can easily imagine Trump eating nachos while watching Fox News, bellowing “Why is Comey on television again? I’m the one who’s president!”

Cassidy is right that “Trump is scared because the investigation is getting closer to the truth” is the most logical explanation if we assume that Trump is a rational actor instead of a petty, tantrum-throwing child. But it could be that Trump wasn’t so much “scared” of the Russia investigation as infuriated by its persistence. Now, though, because he couldn’t calm his temper enough to let Comey do his job and conclude the investigation, Trump is going to face an investigation that drags on even longer. And if it does turn out that there’s nothing to the story, this will be a hilarious display of incompetence. In firing Comey to get rid of the Russia investigation because he thinks it’s a non-story, Trump may have made it a far more important story and caused Republicans to think of it as something legitimately suspicious rather than just sour grapes from Democrats.


As people have pointed out, the closest parallel to the Comey incident is Richard Nixon’s infamous Saturday Night Massacre, in which Nixon ordered the firing of the special prosecutor assigned to investigate Watergate. People have invoked the comparison to show that Trump, like Nixon, tried to escape the scrutiny of ordinary law enforcement. Trump, they say, is showing Nixonian autocratic tendencies.

But people should also remember what happened after the Saturday Night Massacre. Needless to say, things did not actually end very well for Richard Nixon. Nixon’s firing of the special prosecutor led to a massive increase in public support for Nixon’s impeachment; a week after the “massacre,” for the first time, a plurality of Americans believed Nixon should be removed from office. The firings were a blunder, born of the president’s delusion that he could do anything.

That may well be what we have here. It’s not an attack on democracy, it’s not the shredding of the Constitution. It’s a legal, but stupid and disastrous, attempt at self-aggrandizement. Trump hasn’t managed to make himself a dictator. Instead, he’s just made people think he’d like to be one, and has made the same mistake that ultimately brought down Richard Nixon. Nixon believed that because he was president, he could act as he pleased without regard to political or legal consequences. This was not the case. Donald Trump may well learn a similar lesson.

Fines and Fees Are Inherently Unjust

Fining people equally hurts some people far more than others, undermining the justifications of punishment…

Being poor in the United States generally involves having a portion of your limited funds slowly siphoned away through a multitude of surcharges and processing fees. It’s expensive to be without money; it means you’ve got to pay for every medical visit, pay to cash your checks, and frankly, pay to pay your overwhelming debts. It means that a good chunk of your wages will end up in the hands of the payday lender and the landlord. (It’s a perverse fact of economic life that for the same property, it often costs less to pay a mortgage and get a house at the end than to pay rent and end up with nothing. If I am wealthy, I get to pay $750 a month to own my home while my poorer neighbor pays $1,500 a month to own nothing.) It’s almost a law of being poor: the moment you get a bit of money, some kind of unexpected charge or expense will come up to take it away from you. Being poor often feels like being covered in tiny leeches, each draining a dollar here and a dollar there until you are left weak, exhausted, and broke.

One of the most insidious fine regimes comes from the government itself in the form of fines in criminal court, where monetary penalties are frequently used as punishment for common misdemeanors and ordinance violations. Courts have been criticized for increasingly imposing fines indiscriminately, in ways that turn judges into debt collectors and jails into debtors’ prisons. The Department of Justice found that fines and fees in certain courts were exacted in such a way as to force “individuals to confront escalating debt; face repeated, unnecessary incarceration for nonpayment despite posing no danger to the community; lose their jobs; and become trapped in cycles of poverty that can be nearly impossible to escape.” A new report from PolicyLink confirms that “Wide swaths of low-income communities’ resources are being stripped away due to their inability to overcome the daunting financial burdens placed on them by state and local governments.” There are countless stories of people being threatened with jail time for failing to pay fines for “offenses” like un-mowed lawns or cracked driveways.

Critics have targeted these fines because of the consequences they are having on poor communities. But it’s also important to note something further. The imposition of flat-rate fines and fees does not just have deleterious social consequences, but also fundamentally undermines the legitimacy of the criminal legal system. It cannot be justified – even in theory.

I work as a criminal defense attorney, and I have defended both rich and poor clients (mostly poor ones). Many of my clients have been given sentences involving the imposition of fines. For everyone, regardless of wealth, if a fine means less (or no) jail time, it’s almost always a better penalty. But, and this should be obvious, fines don’t mean the same thing to different people. For my poor clients, a fine means actual hardship. In extreme cases, it can mean a kind of indenture, as the reports have pointed out. If you make $1,000 a month, and are trying to pay rent and support yourself, a $500 fine means a lot. It means many months of indebtedness as you slowly work off your debt to the court. It might mean not buying clothes for your child, or forgoing necessary medical treatment.

Of course, the situation changes if you’re wealthy, or even middle-class. You write the check, you leave the court, the case is over. For my wealthy clients, a fine isn’t just the best outcome, it’s a fantastic outcome, because it means the crime you are alleged to have committed has led to no actual consequences that affect you in a substantive way. You haven’t had to make any sacrifices – your life will look precisely the same in the months after the fine was imposed as it did in the months before. Wealthy defendants want to know: “What can I pay to make this go away?” And sometimes paying to make it go away is exactly what they can do, as courts will often accept pre-trial fines in exchange for dismissal.

As I said, it’s not news that it’s harder to pay a fine if you’re poor. But the implications of this are rarely worked all the way through. For if it’s true that the punishment prescribed by law hurts one class of defendants far more than it hurts another class of defendants, then the underlying justification for having the punishment in the first place is not actually being served, and the basic principle of equality under the law is being undermined.

anatomyad2

If fines are imposed at flat rates, poor people are being punished while rich people are not. If it’s true that wealthy defendants couldn’t care less about fines (and a millionaire with a $500 fine really couldn’t care less), then they’re not actually being deprived of anything in consequence of their violation of law. Punishment is supposed to serve the goals of retribution, deterrence, or rehabilitation. Leaving aside for the moment whether these are actually worthy goals, or whether criminal courts actually care about these goals, flat-rate fines don’t serve any of them when it comes to wealthy defendants. There’s no deterrence or rehabilitation, because if you can pay an insignificant fee to commit a crime, there’s no reason not to do it again. It’s wildly unclear how a negligibly consequential fine would deter a wealthy frat boy from continuing to urinate in public, whereas a person trying to escape homelessness might become very careful not to rack up any more fines.

Nor does the retribution imposed have a rational relationship to the significance of the crime. If the point of retribution is to make someone suffer a harm in proportion to the suffering they themselves have imposed (a dubious idea to begin with), flat-rate fines make no sense, because some people are being sentenced to far greater suffering than others. This means that it is unclear what we believe the actual correct retributive amount is supposed to be. It’s as if we punish in accordance with the philosophy of “an eye for an eye,” but we live in a society where some people start with one eye and some people start with twenty eyes. Taking “an eye for an eye” means something quite different when imposed on a one-eyed man than when imposed on a twenty-eyed man. The one-eyed man has been punished with blindness while the twenty-eyed man can shrug and simply have one of the lenses removed from his spectacles.

This is important for how we view the law. If courts aren’t calibrating fees based on people’s actual wealth, then massively differential punishments are being imposed. Some people receive indenture while others receive no punishment at all, even given the same offense at the same level of culpability. If fines are supposed to have anything to do with making a person experience consequences for their crime, whether retributive consequences or rehabilitative consequences, then punishments are failing their stated purpose and being applied grossly unequally.

It may be objected that fines do not constitute an unequal application of the law, because they are applied equally to all. But the point here is that application of a law equally in each case does not mean “equal application of law to all” in any meaningful sense. In other contexts, this is perfectly clear. A law forbidding anyone from wearing a yarmulke and reading the Torah does not constitute the “equal application of law to all.” It clearly discriminates against Jews, even though Christians, Muslims, Hindus, and the non-religious are equally prohibited from wearing yarmulkes. (The absurdity of “equal application” meaning “legal equality” was well captured by Anatole France, who wrote that “The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges.”)

It is inevitable that laws will always affect people differently, because people will always be different. But if some people are given something that constitutes far more of a burdensome punishment for them than it is for others, the actual purposes of the law aren’t being served. Separate from the equality arguments, for a large class of people punishment simply isn’t even serving its intended function.

Of course, you could easily take a step toward fixing this by fining people in accordance with a percentage of their income rather than at a flat rate (or by redistributing all wealth). If a fine is, say, 2% of one’s annual income, then a person with a $20,000 income would face a $400 fine whereas a person with a $200,000 income would face a $4,000 fine. That’s still grossly unfair, of course, because $400 means far more to the poorer person than $4,000 does to the richer person. You wouldn’t have a fair system of fines until you figured out how to make the rich experience the same kinds of effects that fines impose on the poor. The fact that even massively increasing fines on the rich wouldn’t bring anything close to equal consequences should show how totally irrational our present system is.

But rather than having courts appropriate larger quantities of rich people’s wealth (though their wealth obviously does need appropriating), we could also simply reduce the harm being inflicted on the poor, through reforming local fines-and-fees regimes. It’s clear that in many cases, fines don’t have anything to do with actual punishment; they’re revenue-raising mechanisms, a legalized shakedown operation, as the Justice Department’s report on Ferguson made clear. Courts aren’t interested in actually calculating the deterrence effects of certain financial penalties. They want to fund their operations, and poor people’s paychecks are a convenient piggy bank.

We know that fines and fees have, in many jurisdictions, created pernicious debt traps for the poor, arising from trivial offenses. But it’s when we examine the comparative impact on wealthy defendants that this system is exposed as being irrational as well as cruel. It doesn’t just ensnare the less fortunate in a never-ending Kafkaesque bureaucratic nightmare. It also fundamentally delegitimizes the entire legal system, by severing the relationship between punishments and their purpose. It makes a joke out of the ideas of both the punishment fitting the crime and equality under the law, two bedrock principles necessary for  “law” to command any respect at all. So long as flat-rate fines are disproportionately impacting the poor, there is no reason to believe that criminal courts can ever be places of justice.