What Gun Advocates Don’t Understand About Economics

Gun control measures are about raising costs, not creating impenetrable barriers…

One of the most irritating tendencies of people on the right is to accuse people on the left of “not understanding basic economics.” This usually comes when lefties advocate that the rich give up a bit of their wealth to aid the poor, or when we point out that capitalism has an unfortunate habit of making everybody’s lives revolve around the desperate quest for financial security. The “you just don’t understand economics” attack is particularly unfair because it fails to consider the possibility that people on the left understand economics perfectly well but have good reasons for rejecting certain economic premises.

Thus I have mixed feelings about making a “you don’t understand economics” argument myself, because I recognize it’s inherently supercilious and involves casting others as idiots rather than people with reasonable disagreements. Still, I can’t resist the opportunity to give the right a heaping spoonful of its own toxic medicine. Therefore: certain arguments favored by gun-rights advocates are woefully ignorant of basic economic insights.

The arguments in question are those along the following lines: efforts at gun control are senseless, because people will always find ways to kill each other. Or, in its bumper-sticker form: when you outlaw guns, only outlaws will have guns. And the basic economic insight that these arguments ignore is: people are rational actors who respond to changes in incentives.

The “outlaws” argument is among the oldest and most incessantly repeated clichés in the discourse around guns. In the past couple of days, I’ve seen it flare up again on social media in conversations surrounding the recent London terror attacks. First, gun control proponents began citing the attacks as evidence of why strict gun control measures are useful: because large-capacity firearms are difficult to obtain in the U.K., unless an attacker has pretty sophisticated knowledge of explosives manufacturing, they will generally have to resort to using knives and vans rather than AR-15s. The same point is made about would-be mass-murderers in China: when, for instance, a madman in Zhongyuan decides to go on a violent spree at a primary school, a lot of kids might get stabbed, but as often as not they will all survive. If the same thing happens in Connecticut, there will be dozens of deaths. Madness is madness, but guns make killing people as simple as pushing a button.

In response to this, I’ve seen gun rights supporters making the “futility” point: if the British attackers had wanted to use firearms, they still could have. There are black markets, and just as Prohibition didn’t get rid of alcohol but simply drove its use underground, guns will always be available to those who want them enough or have enough ready cash. France’s gun control laws, for instance, did not stop the Bataclan terrorists from massacring nearly 100 Eagles of Death Metal fans.

The “outlaws” slogan also offers a variation on this theme. However, the slogan has two components, which should be distinguished: first, that if you outlaw guns, only outlaws will have guns, i.e. that law-abiding citizens will be deprived of guns for protection; and second, that outlaws will have guns, i.e. that “bad guys” don’t care about laws anyway. At the moment it’s the second claim that I’m interested in: that laws don’t stop the outlaws.

I don’t think I’m misstating or exaggerating the thrust of the argument put forth by gun rights advocates. While there may be more sophisticated versions, I certainly do see a lot of arguments, both in person and online, that come down to the idea that you simply can’t stop people from having guns, because if someone is determined enough to shoot someone, they will find a way to do so.

And here’s where the economic principle comes back in. A core insight of economics is that it makes sense to conceive of people as acting, in some sense, rationally, and calculating the relative weight of various preferences. And people tend to respond rationally to changes in prices; as things get cheaper, people will buy more of them, and as things get more expensive, people will buy less of them (all other factors being equal, which they rarely are, but never mind that for now). Raise the cost of acquiring something and fewer people will acquire it. Raise the cost enough, and only the most determined people or the most wealthy people will acquire it. For each individual, there is a certain point at which the cost of something becomes too high. If I see a paisley-patterned dressing gown in a shop window, it will probably have to cost a lot for me not to buy it, because I am a sucker for both paisley-patterned goods and dressing gowns. However, if my friend Sparky sees one, such an item would probably have to be near-free for him to even consider buying it. Someone might even have to pay him to take it. This is because Sparky has taste.


I am sorry for dwelling on such an elementary principle, but it’s important for understanding why the gun argument is so mistaken. That’s because gun-rights proponents are misunderstanding how economic reality works. When a person evaluates whether or not to shoot someone, the question is not whether they can access a gun. The question is how high the cost of accessing a gun is, and whether that cost is worth it for that individual. If the cost of acquiring and shooting a gun is very low, then a lot of people are likely to do it who would not have done it if the cost was extremely high. Of course, many people won’t shoot anyone no matter how low the cost of doing so is, just as incredibly vicious and determined people would shoot someone even if doing so required exorbitant expense. (There is still a limit, of course: if shooting someone required more wealth than any human being possessed, then no human being would shoot any other human being.)
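The logic is simple enough to put in a toy simulation. What follows is my own illustration, not anything drawn from real crime data: every number in it is invented. Suppose each person has a maximum cost they are willing to bear to acquire a gun, drawn from a wide distribution with a long tail of highly determined people. Raising the price never deters everyone, but each increase prunes away everyone whose threshold now falls below it:

```python
import random

random.seed(0)

# Invented "determination" thresholds: the maximum cost each of
# 10,000 hypothetical people would bear to acquire a gun. The
# exponential shape is an assumption chosen only to give a wide
# spread with a long tail of highly determined outliers.
population = [random.expovariate(1 / 1000) for _ in range(10_000)]

for cost in (50, 500, 5_000, 50_000):
    # A person acquires a gun only if their threshold meets the cost.
    acquirers = sum(1 for threshold in population if threshold >= cost)
    print(f"at a cost of ${cost:,}: {acquirers:,} of 10,000 still acquire one")
```

The exact counts mean nothing; the shape is the point. Demand never quite reaches zero, which is the grain of truth in the “outlaws” slogan, but it falls steeply as costs rise, which is everything the slogan misses.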

There is therefore actually a good deal of sense to Chris Rock’s amusing “$5,000 Bullet” proposal. Rock suggested that instead of gun control, there should be “bullet control,” because “if a bullet cost $5,000, there’d be no more innocent bystanders.” Rock’s routine is funny in part because it’s totally right. It seems absurd, but if you raised the cost of murdering people, fewer people would get murdered.

That runs counter to what we might think of as “common sense,” which would tell us that murderers want to kill people, and if they’re determined to do so, they’ll do it, and they aren’t sitting down and looking at spreadsheets trying to determine whether murder is a good long-term investment. The crucial economic insight is that actually, murderers are “determined” to differing degrees, and marginal increases in cost can actually act as deterrents. If you put few legal restrictions on gun purchases, for instance, but put the only gun store at the top of Mt. St. Helens, with a nine-hour queue, I bet you anything you’ll see fewer people buying guns.

It’s bizarre to think of all murderers, even crime-of-passion ones, as being in some way “rational,” and the economic argument for humans as “rational calculators” is generally heavily criticized on the left. But while I think many popular versions of the “humans as rational” argument are indeed ludicrous (the one that suggests we all maximize our financial self-interest, for instance, or the one that suggests human beings never make stupid decisions), there is a version of this model of human behavior that makes a lot of sense. Human decision-making is both rational and irrational: it is rational, in that human decision-making does tend to look like a kind of weighing of costs and benefits. (For instance: I want a croissant. But it is raining. Does the amount I want a croissant outweigh how much I hate getting wet?) But it is irrational, in that many of our preferences are crazy and inconsistent and we aren’t good guardians of our own self-interest. The relative ease of killing someone might well affect whether or not a murderer chooses to do it (which means murderers are deciding whether or not to murder based on how difficult it is to do so), but on another level it’s totally irrational and crazy for “whether a gun is 10 feet away” versus “whether a gun is 100 feet away” to affect whether or not I take somebody’s life.

In practice, though, this is exactly what happens, and it’s easy to see why it does happen. We can think, using common sense, that people who want to kill people will always find a way to do so. But what actually happens is that if you raise the costs of something sufficiently, fewer and fewer people will do it. An effective, though heinous, way to reduce crime is to have “instantaneous death” as the penalty for every single crime, no matter how minor. Sure, there will always be the determined, foolhardy, or irrational few who do things regardless of the risks. But most people respond somewhat rationally to incentives. (This is also an important reason why proponents of drug legalization should be careful about invoking Prohibition. Depending on how it’s done and what the society is like, it could actually be correct that legalizing drugs will massively increase their use, because a subset of people who were deterred by the risk and inconvenience of criminalized drugs will try them once there are fewer costs to doing so.)

In fact, to successfully deter people, you don’t even need “most” people to be rational. You just need a subset of people. The question about gun control, then, is not: “Will it still be possible for outlaws to get guns?” but “Will gun control raise the costs of obtaining a gun to the point where a significant subset of outlaws are deterred from acquiring guns?” It’s not about whether you’ve created some kind of “absolute” barrier to the possession of weapons; it’s about whether you’ve added enough layers that a lot of people will simply give up. The relevant question is not “Did the Bataclan attackers manage to obtain guns despite France’s gun control laws?” but “Do France’s gun control laws raise the costs of acquiring guns to the point of preventing certain deaths?” The easier you make guns to acquire, the more people will acquire them, as evidenced by the fact that in America there are twice as many gun stores as there are McDonald’s restaurants, and we spend large portions of our spare time using these guns to murder one another.

You’d think people would do the same thing, regardless of adjustments in the incentives. But the economic insight is that actually, depending on what the incentives are, they won’t. They may be determined, but there’s a limit to their determination, and the point is to find that limit. I’ve made the same point about suicide before. The common-sense view is that no matter what, suicidal people will always find a way to kill themselves. But actually, that’s not true. The easier suicide is, the more people will do it. If you put a net beneath a suicide bridge, some people will still just go and find another high place. Others won’t, though; there is actually a pretty low threshold of inconvenience at which a person will just give up on killing themselves. (It’s actually stunning how often minor inconveniences prevent us from doing things. Another example of how we are both rational and irrational: we’re rational because we’re clearly weighing alternatives, but irrational because why on earth would your decision to take your life vary based on whether you happened to think of a nearby bridge or have a gun in the house?)

I’d like to emphasize that I haven’t advocated any particular gun control measures here. In fact, I am generally a skeptic of gun control laws, because I am dubious about measures that increase the reach of the criminal justice system and will lead to more prosecutions. Prohibition can definitely work, but it might only be able to work if you institute an intolerably draconian system, and I am generally more interested in eliminating gun culture than toughening gun laws. But I am nevertheless committed to seeing a world without guns, and I am concerned with having (if such a thing is possible) a more sensible conversation around restrictions on their purchase and use. The question is not: can people find ways to kill each other despite restrictions, or can outlaws still find guns? The question is: will they, and what would make it so that they wouldn’t?

The Dangerous Academic is an Extinct Species

If these ever existed at all, they are now deader than dodos…

It was curiosity, not stupidity, that killed the dodo. For too long, we have held to the unfair myth that the flightless Mauritian bird became extinct because it was too dumb to understand that it was being killed. But as Stefan Pociask points out in “What Happened to the Last Dodo Bird?”, the dodo was driven into extinction partly because of its desire to learn more about a new, taller, two-legged creature who disembarked onto the shores of its native habitat: “Fearless curiosity, rather than stupidity, is a more fitting description of their behavior.”

Curiosity does have a tendency to get you killed. The truly fearless don’t last long, and the birds who go out in search of new knowledge are inevitably the first ones to get plucked. It’s always safer to stay close to the nest.

Contrary to what capitalism’s mythologizers would have you believe, the contemporary world does not heap its rewards on those with the most creativity and courage. In fact, at every stage of life, those who venture beyond the safe boundaries of expectation are ruthlessly culled. If you’re a black kid who tends to talk back and call bullshit on your teachers, you will be sent to a special school. If you’re a transgender teenager like Leelah Alcorn in Ohio, and you unapologetically defy gender norms, they’ll make you so miserable that you kill yourself. If you’re Eric Garner, and you tell the police where they can stick their B.S. “loose cigarette” tax, they will promptly choke you to death. Conformists, on the other hand, usually do pretty well for themselves. Follow the rules, tell people what they want to hear, and you’ll come out just fine.

Becoming a successful academic requires one hell of a lot of ass-kissing and up-sucking. You have to flatter and impress. The very act of applying to graduate school to begin with is an exercise in servility: please deem me worthy of your favor. In order to rise through the ranks, you have to convince people of your intelligence and acceptability, which means basing everything you do on a concern for what other people think. If ever you find that your conclusions would make your superiors despise you (say, for example, if you realized that much of what they wrote was utter irredeemable manure), you face a choice: conceal your true self or be permanently consigned to the margins.

The idea of a “dangerous” academic is therefore somewhat self-contradictory to begin with. The academy could, potentially, be a place for unfettered intellectual daring. But the most daring and curious people don’t end up in the academy at all. These days, they’ve probably gone off and done something more interesting, something that involves a little bit less deference to convention and detachment from the material world. We can even see this in the cultural archetype of the Professor. The Professor is always a slightly harrumphy—and always white and male—individual, with scuffed shoes and jackets with leather elbows, hidden behind a mass of seemingly disorganized books. He is brilliant but inaccessible, and if not effeminate, certainly effete. But bouncing with ideas, so many ideas. There is nothing particularly menacing about such a figure, certainly nothing that might seriously threaten the existing arrangements of society. Of ideas he has plenty. Of truly dangerous ones, none at all.

If anything, the university has only gotten less dangerous in recent years. Campuses like Berkeley were once centers of political dissent. There was open confrontation between students and the state. In May of 1970, the Ohio National Guard killed four students at Kent State. Ten days later, police at the historically black Jackson State University fired into a crowd of students, killing two. At Cornell in 1969, armed black students took over the student union building in a demand for recognition and reform, part of a pattern of serious upheaval.

But over the years the university became corporatized. It became a job training center rather than an educational institution. Academic research became progressively more specialized, narrow, technical, and obscure. (The most successful scholarship is that which seems to be engaged with serious social questions, but does not actually reach any conclusions that would force the Professor to leave his office.)


The ideas that do get produced have also become more inaccessible, with research inevitably cloaked behind the paywalls of journals that cost astronomical sums of money. At the cheaper end, the journal Cultural Studies charges individuals $201 for just the print edition, and charges institutions $1,078 for just the online edition. The science journal Biochimica et Biophysica Acta costs $20,000, which makes Cultural Studies look like a bargain. (What makes the pricing especially egregious is that these journals are created mostly with free labor, as academics who produce articles are almost never paid for them.) Ideas in the modern university are not free and available to all. They are in fact tethered to a vast academic industrial complex, where giant publishing houses like Elsevier make massive profits off the backs of researchers.

Furthermore, the academics who produce those ideas aren’t exactly at liberty to think and do as they please. The overwhelming “adjunctification” of the university has meant that approximately 76% of professors… aren’t professors at all, but underpaid and overworked adjuncts, lecturers, and assistants. And while conditions for adjuncts are slowly improving, especially through more widespread unionization, their place in the university is permanently unstable. This means that no adjunct can afford to seriously offend. To make matters worse, adjuncts rely heavily on student evaluations to keep their positions, meaning that their classrooms cannot be places to heavily contest or challenge students’ politics. Instructors could literally lose their jobs over even the appearance of impropriety. One false step—a video seen as too salacious, or a political opinion held as oppressive—could be the end of a career. An adjunct must always be docile and polite.

All of this means that university faculty are less and less likely to threaten any aspect of the existing social or political system. Their jobs are constantly on the line, so there’s a professional risk in upsetting the status quo. But even if their jobs were safe, the corporatized university would still produce mostly banal ideas, thanks to the sycophancy-generating structure of the academic meritocracy. But even if truly novel and consequential ideas were being produced, they would be locked away behind extortionate paywalls.

The corporatized university also ends up producing the corporatized student. Students worry about doing anything that may threaten their job prospects. Consequently, acts of dissent have become steadily de-radicalized. On campuses these days, outrage and anger are reserved for questions like, “Is this sushi an act of cultural appropriation?” When student activists do propose ways to “radically” reform the university, it tends to involve adding new administrative offices and bureaucratic procedures, i.e. strengthening the existing structure of the university rather than democratizing it. Instead of demanding an increase in the power of students, campus workers, and the untenured, activists tend to push for symbolic measures that universities happily embrace, since they do not compromise the existing arrangement of administrative and faculty power.

It’s amusing, then, that conservatives have long been so paranoid about the threat posed by U.S. college campuses. The American right has an ongoing fear of supposedly arch-leftist professors brainwashing nubile and impressionable young minds into following sinister leftist dictates. Since massively popular books like Roger Kimball’s 1990 Tenured Radicals and Dinesh D’Souza’s 1991 Illiberal Education: The Politics of Race and Sex on Campus, colleges have been seen as hotbeds of Marxist indoctrination that threaten the civilized order. This is a laughable idea, for the simple reason that academics are the very opposite of revolutionaries: they intentionally speak to minuscule audiences rather than the masses (on campus, to speak of a “popular” book is to deploy a term of faint disdain) and they are fundamentally concerned with preserving the security and stability of their own position. This makes them deeply conservative in their day-to-day acts, regardless of what may come out of their mouths. (See the truly pitiful lack of support among Harvard faculty when the university’s dining hall workers went on strike for slightly higher wages. Most of the “tenured radicals” couldn’t even be bothered to sign a petition supporting the workers, let alone march in the streets.)

But left-wing academics are all too happy to embrace the conservatives’ ludicrous idea of professors as subversives. This is because it reassures them that they are, in fact, consequential, that they are effectively opposing right-wing ideas, and that they need not question their own role. The “professor-as-revolutionary” caricature serves both the caricaturist and the professor. Conservatives can remain convinced that students abandon conservative ideas because they are being manipulated, rather than because reading books and learning things makes it more difficult to maintain right-wing prejudices. And liberal professors get to delude themselves into believing they are affecting something.


Today, in what many call “Trump’s America,” the idea of universities as sites of “resistance” has been renewed on both the left and right. At the end of 2016, Turning Point USA, a conservative youth group, created a website called Professor Watchlist, which set about listing academics it considered dangerously leftist. The goal, stated on the Turning Point site, is “to expose and document college professors who discriminate against conservative students and advance leftist propaganda in the classroom.”

Some on the left are delusional enough to think that professors as a class can and should be presenting a united front against conservatism. At a recent University of Chicago event, a document was passed around from Refusefascism.org titled, “A Call to Professors, Students and All in Academia,” calling on people to “Make the University a Zone of Resistance to the Fascist Trump Regime and the Coming Assault on the Academy.”

Many among the professorial class seem to want to do exactly this, seeing themselves as part of the intellectual vanguard that will serve as a bulwark against Trumpism. George Yancy, a professor of philosophy and race studies at Emory University, wrote an op-ed in the New York Times, titled “I Am A Dangerous Professor.” Yancy discussed his own inclusion on the Professor Watchlist, before arguing that he is, in fact, dangerous:

“In my courses, which the watchlist would like to flag as ‘un-American’ and as ‘leftist propaganda,’ I refuse to entertain my students with mummified ideas and abstract forms of philosophical self-stimulation. What leaves their hands is always philosophically alive, vibrant and filled with urgency. I want them to engage in the process of freeing ideas, freeing their philosophical imaginations. I want them to lose sleep over the pain and suffering of so many lives that many of us deem disposable. I want them to become conceptually unhinged, to leave my classes discontented and maladjusted…Bear in mind that it was in 1963 that the Rev. Dr. Martin Luther King, Jr. raised his voice and said: ‘I say very honestly that I never intend to become adjusted to segregation and discrimination.’… I refuse to remain silent in the face of racism, its subtle and systemic structure. I refuse to remain silent in the face of patriarchal and sexist hegemony and the denigration of women’s bodies.”

He ends with the words:

“Well, if it is dangerous to teach my students to love their neighbors, to think and rethink constructively and ethically about who their neighbors are, and how they have been taught to see themselves as disconnected and neoliberal subjects, then, yes, I am dangerous, and what I teach is dangerous.”

Of course, it’s not dangerous at all to teach students to “love their neighbors,” and Yancy knows this. He wants to simultaneously possess and devour his cake: he is doing nothing that anyone could possibly object to, yet he is also attempting to rouse his students to overthrow the patriarchy. He suggests that his work is so uncontroversial that conservatives are silly to fear it (he’s just teaching students to think!), but also places himself in the tradition of Martin Luther King, Jr., who was trying to radically alter the existing social order. His teaching can be revolutionary enough to justify Yancy spending time as a philosophy professor during the age of Trump, but benign enough for the Professor Watchlist to be an act of baseless paranoia.

Much of the revolutionary academic resistance to Trump seems to consist of spending a greater amount of time on Twitter. Consider the case of George Ciccariello-Maher, a political scientist at Drexel University who specializes in Venezuela. In December of 2016, Ciccariello-Maher became a minor cause célèbre on the left after getting embroiled in a flap over a tweet. On Christmas Eve, for who knows what reason, Ciccariello-Maher tweeted “All I Want for Christmas is White Genocide.” Conservatives became enraged, and began calling upon Drexel to fire him. Ciccariello-Maher insisted he had been engaged in satire, although nobody could understand what the joke was intended to be, or what the tweet even meant in the first place. After Drexel disowned Ciccariello-Maher’s words, a petition was launched in his defense. Soon, Ciccariello-Maher had lawyered up, Drexel confirmed that his job was safe, and the whole kerfuffle was over before the nation’s half-eaten leftover Christmas turkeys had been served up into sandwiches and casseroles.

Ciccariello-Maher continues to spend a great deal of time on Twitter, where he frequently issues macho tributes to violent political struggle, and postures as a revolutionary. But despite his temporary status as a martyr for the cause of academic freedom, one who terrifies the reactionaries, there was nothing dangerous about his act. He hadn’t really stirred up a hornets’ nest; after all, people who poke actual hornets’ nests occasionally get stung. A more apt analogy is that he had gone to the zoo to tap on the glass in the reptile house, or to throw twigs at some tired crocodiles in a concrete pool. (When they turned their rheumy eyes upon him, he ran from the fence, screaming that dangerous predators were after him.) U.S. academics who fancy themselves involved in revolutionary political struggles are trivializing the risks faced by actual political dissidents around the world, including the hundreds of environmental activists who have been murdered globally for their efforts to protect indigenous land.


Of course, it’s true that there are still some subversive ideas on university campuses, and some true existing threats to academic and student freedom. Many of them have to do with Israel or labor organizing. In 2014, the University of Illinois revoked Steven Salaita’s appointment to a tenured position over tweets he had made about Israel. (After a protracted lawsuit, Salaita eventually reached a settlement with the university.) Fordham University tried to ban a Students for Justice in Palestine group, and the University of California Board of Regents attempted to introduce a speech code that would have punished much criticism of Israel as “hate speech.” The test of whether your ideas are actually dangerous is whether you are rewarded or punished for expressing them.

In fact, in terms of danger posed to the world, the corporatized university may itself be more dangerous than any of the ideas that come out of it.

In Hyde Park, where I live, the University of Chicago seems ancient and venerable at first glance. Its Ye Olde Kinda Sorta Englande architecture, built in the 1890s to resemble Oxbridge, could almost pass for medieval if one walked through it at dusk. But the institution is in fact deeply modern, and like Columbia University in New York, it has slowly absorbed the surrounding neighborhood, slicing into older residential areas and displacing residents in land-grab operations. Despite being home to one of the world’s most prestigious medical and research schools, the university refused for many years to open a trauma center to serve the city’s South Side, which had been without access to trauma care. (The school only relented in 2015, after a long history of protests.) The university ferociously guards its myriad assets, with armed guards on the street corners, and subjects local residents to massive surveillance (the university-owned cinema insists on examining bags for weapons and food, a practice I have personally experienced being conducted in a selective and racially discriminatory manner). In the university’s rapacious takeover of the surrounding neighborhood, and its treatment of local residents—most of whom are people of color—we can see what happens when a university becomes a corporation rather than a community institution. Devouring everything in the pursuit of limitless expansion, it swallows up whole towns.

The corporatized university, like corporations generally, is an uncontrollable behemoth, absorbing greater and greater quantities of capital and human lives, and churning out little of long-term social value. Thus Yale University needlessly opened a new campus in Singapore despite the country’s human rights record and restrictions on political speech, and New York University needlessly expanded to Abu Dhabi, its new UAE campus built by low-wage workers under brutally repressive conditions. The corporatized university serves nobody and nothing except its own infinite growth. Students are indebted, professors lose job security, surrounding communities are surveilled and displaced. That is something dangerous.

Left professors almost certainly sense this. They see themselves disappearing, the campus becoming a steadily more stifling environment. Posturing as a macho revolutionary is, like all displays of machismo, driven partially by a desperate fear of one’s impotence. They know they are not dangerous, but they are happy to play into the conservative stereotype. But the “dangerous academic” is like the dodo in 1659, a decade before its final sighting and extinction: almost nonexistent. And the more universities become like corporations, the fewer of these unique birds will be left. Curiosity kills, and those who truly threaten the inexorable logic of the neoliberal university are likely to end up extinct.

Illustrations by Chris Matthews.

Pessimism is Suicide

On possibility and the limits of certainty…

I am neither an optimist nor a pessimist, because both positions seem unreasonable and foolish to me. The pessimist thinks the glass is half-empty, the optimist thinks the glass is half-full, but any reasonable person understands that both terms are equally applicable, and that arguments over which is more correct are futile and useless. The only sensible answer to the question of whether the glass is half-full or half-empty is “both,” or “it depends what those terms mean,” and if we want a precise understanding of what is going on with the glass, unclouded by normative values, we should simply say that water is taking up half of the glass. 

You might think I’m taking the glass cliché too pedantically and literally. But it provides a useful example of why both optimism and pessimism should be avoided. If you’re either one of these, then instead of assessing what the facts actually are, with as little bias as possible, you project your own prejudice onto any given situation in front of you. “Rose-colored glasses” are showing you a world that doesn’t exist, but the same is equally true for glasses that make the world look bleak and dreary. In each case, your personal filter is keeping you from perceiving nature’s true colors.

Actually, that’s not quite right, since it’s impossible to have some kind of objectively “true” perception in which the world is seen exactly as it is. You will always be a human being with prejudices, and all images will be bent by those prejudices. But if we want to have the best possible understanding of the world around us, our job is to try to figure out what those prejudices are and correct for them as much as possible. Thus nobody should embrace either optimism or pessimism, since doing so entails renouncing the quest to see things as clearly as possible.

Fortunately, I don’t meet many optimists these days. I do, however, meet an awful lot of pessimists. As young people become ensnared in a lifetime of low-wage work and indenture to their debt, politically powerless and disengaged in the face of the civilization-destroying forces of climate change and nuclear war, they understandably feel somewhat hopeless. It seems rational to believe that everything is heading straight for a miserably fiery hell in a rapidly-accelerating handbasket.

This position isn’t rational, however. Think about what it actually requires to believe in unavoidable doom: you have to think that you know every single possible path that humanity’s destiny could take. By taking a position that it is impossible to avoid calamity and extinction, a person asserts that they singlehandedly comprehend all possible futures. This is, to put it mildly, somewhat hubristic.

Thus another reason that pessimism is folly is that it is far too confident in the human capacity for prediction. Predictions are tricky things to make about anything, let alone the destiny of the species. Pessimism requires an unwarranted confidence in the inevitability of misfortune. But in order to understand the inevitable, you have to understand the universe, and if there is one thing human beings definitely do not understand, it is the universe.

It is wise to take a modest approach to anything that requires opining on the limits of the possible. A humble intelligence admits that it doesn’t know how things will go. It realizes that describing the world as having a “trajectory” that can be extrapolated into the future requires a phenomenal amount of confidence in one’s own predictive capacity, and that both optimists and pessimists presume a kind of understanding of the rules and tendencies of the world that is actually impossible to attain.

People are always too quick to declare certain things impossible. They hardly ever actually know what’s possible and impossible, but they will happily explain the limits on human action to anyone who dares to dream of something mildly beyond that which already exists. Of course, things once branded “impossible” happen every day, from the flying of airplanes to the election of Donald Trump as President of the United States. But as soon as the “impossible” happens and becomes the ordinary, instead of retiring the word “impossible,” people simply go and apply it to other things that they assume can’t happen.


That’s the key problem with use of the term “possible”: it assumes that “I can’t conceive of X” and “X can’t happen” mean the same thing. Actually, since human beings are tiny creatures made of flesh and bone, the limits of our imagination may be much stronger than the limits of reality itself. Perhaps people should stop assuming that simply because they can’t think of a way something could happen, there is no way something could happen.

I am always being told that my political beliefs are impossible. This is because I am a utopian: I believe in a world where all people are happy, free, and prosperous, and in which there is no war, destitution, or suffering. The arguments that are raised against this position amount to pessimism: human nature makes such a world impossible, and the things I dream of simply will never happen. But my answer to these criticisms is always the same: how the hell can you possibly know?

I worry about the consequences of all kinds of certainty, because of the “self-fulfilling prophecy” problem. The only way to know for sure that you will fail is to resign yourself to failure, and once you think you’re doomed, you’re not going to be able to muster the energy necessary to struggle against that doom. After all, why bother? Pessimism is therefore a kind of suicide, because it justifies dropping out and giving up. Personally, I am in a near-constant panic over nuclear weapons and environmental catastrophe, but pessimism itself almost seems a greater threat, since nothing better guarantees our doom than an embrace of the idea that we are doomed.

Shouldn’t we be optimists, then? If prophecies can be self-fulfilling, shouldn’t we assume that good things are going to happen? Isn’t the correct disposition something like The Secret or the prosperity gospel, where if you believe in something enough, it will transpire?

I don’t think it is. Optimism may bring comfort, but it’s no less irrational than pessimism. The real task is not to find the best self-fulfilling prophecy, but to stop relying on prophecies altogether, and simply try to bring about the outcome you desire. Instead of believing that this or that good or bad thing will happen, we should simply say what we would like to happen, and do everything we can to make that thing happen.

The sensible position, then, is neither optimistic nor pessimistic, but hopeful: “I do not know what the future will be like, but I hope it will be good and I will try to make it good.” Instead of trying to figure out what “is going” to happen, doom or utopia, as if we have no say in the matter, people should announce what they intend to make happen. Of course, you need some predictions in order to take any action. But generally, apocalyptic prophecies should be discarded, because they can’t do anyone any good. (Besides, even if there is no hope, struggle against our inevitable fate can generate meaning in itself, as Albert Camus so appealingly argued in The Myth of Sisyphus.) The best thing one can do is to find some kind of “pragmatic hopefulness” that lies at the medium point between optimism and pessimism.

Nobody knows what is possible or impossible, and anyone who says they do is failing to recognize just how limited the human capacity for understanding is. Perhaps humanity is doomed. Perhaps it isn’t. But the one thing we know is that it’s suicidal to resign ourselves.

How Liberals Fell In Love With The West Wing

Aaron Sorkin’s political drama shows everything wrong with the Democratic worldview…

In the history of prestige TV, few dramas have had quite the cultural staying power of Aaron Sorkin’s The West Wing.

Set during the two terms of fictional Democratic President and Nobel Laureate in Economics Josiah “Jed” Bartlet (Martin Sheen), the show depicts the inner workings of a sympathetic liberal administration grappling with the daily exigencies of governing. Every procedure and protocol, every piece of political brokerage—from State of the Union addresses to legislative tugs of war to Supreme Court appointments—is recreated with an aesthetic authenticity enabled by ample production values (a single episode reportedly cost almost $3 million to produce) and rendered with a dramatic flair that stylizes all the bureaucratic banality of modern governance.

Nearly the same, of course, might be said for other glossy political dramas such as Netflix’s House of Cards or Scandal. But The West Wing aspires to more than simply visual verisimilitude. Breaking with the cynicism or amoralism characteristic of many dramas about politics, it offers a vision of political institutions which is ultimately affirmative and approving. What we see throughout its seven seasons are Democrats governing as Democrats imagine they govern, with the Bartlet Administration standing in for liberalism as liberalism understands itself.

More than simply a fictional account of an idealized liberal presidency, then, The West Wing is an elaborate fantasia founded upon the shibboleths that sustain Beltway liberalism and the milieu that produced them.

“Ginger, get the popcorn

The filibuster is in

I’m Toby Ziegler with The Drop In

What Kind of Day Has It Been?

It’s Lin, speaking the truth”

—Lin-Manuel Miranda, “What’s Next?”

During its run from 1999 to 2006, The West Wing garnered immense popularity and attention, capturing three Golden Globe Awards and 26 Emmys and building a devout fanbase among Democratic partisans, Beltway acolytes, and people of the liberal-ish persuasion the world over. Since its finale more than a decade ago, it has become an essential part of the liberal cultural ecosystem, its importance arguably on par with The Daily Show, Last Week Tonight, and the rap musical about the founding fathers people like for some reason.

If anything, its fandom has only continued to grow with age: In the summer of 2016, a weekly podcast hosted by seasons 4-7 star Joshua Malina, launched with the intent of running through all 154 episodes (at a rate of one per week), almost immediately garnered millions of downloads; an elaborate fan wiki with almost 2000 distinct entries is maintained and regularly updated, magisterially documenting every mundane detail of the West Wing cosmos save the characters’ bowel movements; and, in definitive proof of the silence of God, superfan Lin-Manuel Miranda has recently recorded a rap named for one of the show’s most popular catchphrases (“What’s next?”).

While certainly appealing to a general audience thanks to its expensive sheen and distinctive writing, The West Wing’s greatest zealots have proven to be those who professionally inhabit the very milieu it depicts: Washington political staffers, media types, centrist cognoscenti, and various others drawn from the ranks of people who tweet “Big, if true” in earnest and think a lanyard is a talisman that grants wishes and wards off evil.  

The West Wing “took something that was for the most part considered dry and nerdy—especially to people in high school and college—and sexed it up,” former David Axelrod advisor Eric Lesser told Vanity Fair in a longform 2012 feature about the “Sorkinization of politics” (Axelrod himself having at one point advised West Wing writer Eli Attie). It “very much served as inspiration”, said Micah Lasher, a staffer who then worked for Michael Bloomberg.

Thanks to its endless depiction of procedure and policy, the show naturally jibed with the wonkish libidos of future Voxsplainers Matt Yglesias and Ezra Klein. “There’s a cultural meme or cultural suggestion that Washington is boring, that policy is boring, but it’s important stuff,” said Klein, adding that the show dramatized “the immediacy and urgency and concern that people in this town feel about the issues they’re working on.” “I was interested in politics before the show started,” added Yglesias. “But a friend of mine from college moved to D.C. at the same time as me, after graduation, and we definitely plotted our proposed domination of the capital in explicitly West Wing terms: Who was more like Toby? Who was more like Josh?”

Far from the Kafkaesque banality which so often characterizes the real-life equivalent, the mundane business of technocratic governance is made to look exciting, intellectually stimulating, and, above all, honorable. The bureaucratic drudgery of both White House management and governance, from speechwriting, to press conference logistics, to policy creation, is front and center across all seven seasons. A typical episode script is chock full of dweebish phraseology — “farm subsidies”, “recess appointments”, “census bureau”, “congressional consultation” — usually uttered by swift-tongued, Ivy League-educated staffers darting purposefully through labyrinthine corridors during the infamous “walk-and-talk” sequences. By recreating the look and feel of political processes to a tee, while garnishing them with a romantic veneer, the show gifts the Beltway’s most spiritually-devoted adherents with a vision of how many would probably like to see themselves.

In serving up this optimistic simulacrum of modern US politics, Sorkin’s universe has repeatedly intersected with real-life US politics. Following the first season, and in the midst of the 2000 presidential election contest, Salon’s Joyce Millman wrote: “Al Gore could clinch the election right now by staging as many photo-ops with the cast of The West Wing as possible.” A poll published during the same election found that most voters preferred Martin Sheen’s President Bartlet to Bush or Gore. A 2008 New York Times article predicted an Obama victory on the basis of the show’s season 6-7 plot arc. The same election year, the paper published a fictionalized exchange between Bartlet and Barack Obama penned by Sorkin himself. 2016 proved no exception, with the New Statesman’s Helen Lewis reacting to Donald Trump’s victory by saying: “I’m going to hug my West Wing boxset a little closer tonight, that’s for sure.”

Appropriately, many of the show’s cast members, leveraging their on-screen personas, have participated in or intervened in real Democratic Party politics. During the 2016 campaign, star Bradley Whitford—who portrays frenetically wily strategist Josh Lyman—was invited to “reveal” who his [fictional] boss would endorse:

“There’s no doubt in my mind that Hillary would be President Bartlet’s choice. She’s—nobody is more prepared to take that position on day one. I know this may be controversial. But yes, on behalf of Jed Bartlet, I want to endorse Hillary Clinton.”

Six leading members of the cast, including Whitford, were even dispatched to Ohio to stump for Clinton (inexplicably failing to swing the crucial state in her favor).


During the Democratic primary season, Rob Lowe (who appeared from 1999 to 2003, before leaving in protest at the ostensible stinginess of his $75,000-per-episode salary) even deployed a clip from the show and paraphrased his own character’s lines during an attack on Bernie Sanders’ tax plan: “Watching Bernie Sanders. He’s hectoring and yelling at me WHILE he’s saying he’s going to raise our taxes. Interesting way to communicate.” In the Season 2 episode “The Fall’s Gonna Kill You”, Lowe’s character Sam Seaborn angrily lectures a team of speechwriters:

“Every time your boss got on the stump and said, ‘It’s time for the rich to pay their fair share,’ I hid under a couch and changed my name…The top one percent of wage earners in this country pay for twenty-two percent of this country. Let’s not call them names while they’re doing it, is all I’m saying.”

What is the actual ideology of The West Wing? Just like the real American liberalism it represents, the show proved to be something of a political weather vane throughout its seven seasons on the air.

Debuting during the twilight of the Clinton presidency and spanning much of Bush II’s, it predictably vacillated somewhat in response to events while remaining grounded in a general liberal ethos. Sorkin, who had writing credits on all but one episode of The West Wing’s first four seasons, left the show in 2003, with Executive Producer John Wells characterizing the subsequent direction as more balanced and bipartisan. The Bartlet administration’s actual politics—just like those of the real Democratic Party and its base—therefore run the gamut from the stuff of Elizabeth Warren-esque populism to the neoliberal bilge you might expect to come from a Beltway think tank having its white papers greased by dollars from Goldman Sachs.

But promoting or endorsing any specific policy orientation is not the show’s true raison d’être. At the conclusion of its seven seasons it remains unclear if the Bartlet administration has succeeded at all in fundamentally altering the contours of American life. In fact, after two terms in the White House, Bartlet’s gang of hyper-educated, hyper-competent politicos do not seem to have any transformational policy achievements whatsoever. Even in their most unconstrained and idealized political fantasies, liberals manage to accomplish nothing.

The lack of any serious attempts to change anything reflects a certain apolitical tendency in this type of politics, one that defines itself by its manner and attitude rather than by a vision of the change it wishes to see in the world. Insofar as there is an identifiable ideology, it isn’t one definitively wedded to a particular program of reform, but instead to a particular aesthetic of political institutions. The business of leveraging democracy for any specific purpose comes second to how its institutional liturgy and processes look and, more importantly, how they make us feel—virtue being attached more to posture and affect than to any particular goal. Echoing Sorkin’s 1995 film The American President (in many ways the progenitor of The West Wing), it delights in invoking “seriousness” and the supposedly hard-headed pragmatism of grownups.


Consider a scene from Season 2’s “The War at Home”, in which Toby Ziegler confronts a rogue Democratic senator over his objections to prospective Social Security cuts to be made in collaboration with a Republican Congress. The episode’s hero certainly isn’t the senator, who tries to draw a line in the sand over the “compromising of basic Democratic values” and threatens to run a third-party presidential campaign, only to be admonished acerbically by Ziegler:

“If you think demonizing people who are trying to govern responsibly is the way to protect our liberal base, then speaking as a liberal…go to bed, would you please?…Come at us from the left, and I’m gonna own your ass.”

The administration and its staff are invariably depicted as tribunes of the serious and the mature, their ideological malleability taken to signify their virtue more than any fealty to specific liberal principles.

Even when the show ventures to criticize the institutions of American democracy, it never retreats from a foundational reverence for their supposed enlightenment and the essential nobility of most of the people who administer them. As such, the presidency’s basic function is to appear presidential and, more than anything, Jed Bartlet’s patrician aura and respectable disposition make him the perfect avatar for the West Wing universe’s often maudlin deference to the liturgy of “the office.” “Seriousness,” then—the superlative quality in the Sorkin taxonomy of virtues—implies presiding over the political consensus, tinkering here and there, and looking stylish in the process by way of soaring oratory and white-collar chic.

“Make this election about smart, and not. Make it about engaged, and not. Qualified, and not. Make it about a heavyweight. You’re a heavyweight. And you’ve been holding me up for too many rounds.”

—Toby Ziegler, Hartsfield’s Landing (Season 3, Episode 14)

Despite its relatively thin ideological commitments, there is a general tenor to the West Wing universe that cannot be called anything other than smug.

It’s a smugness born of the view that politics is less a terrain of clashing values and interests than a perpetual pitting of the clever against the ignorant and obtuse. The clever wield facts and reason, while the foolish cling to effortlessly-exposed fictions and the braying prejudices of provincial rubes. In emphasizing intelligence over ideology, what follows is a fetishization of “elevated discourse” regardless of its actual outcomes or conclusions. The greatest political victories involve semantically dismantling an opponent’s argument or exposing its hypocrisy, usually by way of some grand rhetorical gesture. Categories like left and right become less significant, provided that the competing interlocutors are deemed respectably smart and practice the designated etiquette. The Discourse becomes a category of its own, to be protected and nourished by Serious People conversing respectfully while shutting down the stupid with heavy-handed moral sanctimony.  

In Toby Ziegler’s “smart and not,” “qualified and not” formulation, we can see a preview of the (disastrous) rhetorical strategy that Hillary Clinton would ultimately adopt against Donald Trump. Don’t make it about vision, make it about qualification. Don’t make it about your plans for how to make people’s lives better, make it about your superior moral character. Fundamentally, make it about how smart and good and serious you are, and how bad and dumb and unserious they are.


In this respect, The West Wing’s foundational serious/unserious binary falls squarely within the tradition that has since evolved into the “epic own/evisceration” genre characteristic of social media and late night TV, in which the aim is to ruthlessly use one’s intellect to expose the idiocy and hypocrisy of the other side. In a famous scene from Season 4’s “Game On”, Bartlet debates his Republican rival Governor Robert Ritchie (James Brolin). Their exchange, prompted by a question about the role of the federal government, is the stuff of a John Oliver wet dream:  

Ritchie: “My view of this is simple. We don’t need a federal Department of Education telling us our children have to learn Esperanto, they have to learn Eskimo poetry. Let the states decide, let the communities decide on health care and education, on lower taxes, not higher taxes. Now he’s going to throw a big word at you — ‘unfunded mandate’, he’s going to say if Washington lets the states do it, it’s an unfunded mandate. But what he doesn’t like is the federal government losing power. I call it the ingenuity of the American people.”

Bartlet: “Well first of all, let’s clear up a couple of things: ‘unfunded mandate’ is two words, not one big word. There are times when we are 50 states and there are times when we’re one country and have national needs. And the way I know this is that Florida didn’t fight Germany in World War Two or establish civil rights. You think states should do the governing wall-to-wall, now that’s a perfectly valid opinion. But your state of Florida got 12.6 billion dollars in federal money last year from Nebraskans and Virginians and New Yorkers and Alaskans, with their Eskimo poetry — 12.6 out of the state budget of 50 billion. I’m supposed to be using this time for a question, so here it is: Can we have it back, please?”

In an even more famous scene from Season 2 episode “The Midterms”, Bartlet humiliates homophobic talk radio host Jenna Jacobs by quoting scripture from memory, destroying her by her very own logic.


If Ritchie and Jacobs are the obtuse yokels to be epically taken down with facts and reason, the show also elevates several conservative characters to reinforce its postpartisan celebration of The Discourse. Republicans come in two types: slack-jawed caricatures, and people whose high-mindedness and mutual enthusiasm for Putting Differences Aside make them the Bartlet Administration’s natural allies or friends regardless of whatever conflicts of values they may ostensibly have. Foremost among the latter is Senator Arnold Vinick (Alan Alda): a moderate, pro-choice Republican who resembles John McCain (at least the imaginary “maverick” John McCain that liberals continue to pretend exists) and is appointed by Bartlet’s Democratic successor, Matthew Santos, to be Secretary of State. (In reality, there is no such thing as a “moderate” Republican, only a polite one. The upright and genial Paul Ryan, whom President Bartlet would have loved, is on a lifelong quest to dismantle every part of America’s feeble social safety net.)

Thus Bartlet Democrats do not see Republicans as the “enemy,” except to the extent that they are rude or insufficiently respectful of the rules of political decorum. In one Season 5 plot, the administration opts to install a Ruth Bader Ginsburg clone (Glenn Close) as Chief Justice of the Supreme Court. The price it pays—willingly, as it turns out—is giving the other vacancy to an ultra-conservative justice, for the sole reason that Bartlet’s staff find their amiable squabbling stimulating. Anyone with substantively progressive political values would be horrified by a liberal president’s appointment of an Antonin Scalia-style textualist to the Supreme Court. But if your values are procedural, based more on the manner in which people conduct themselves rather than the consequences they actually bring about, it’s easy to chuckle along with a hard-right conservative, so long as they are personally charming (Ziegler: “I hate him, but he’s brilliant. And the two of them together are fighting like cats and dogs … but it works.”)

“What’s next?”

Through its idealized rendering of American politics and its institutions, The West Wing offers a comforting avenue of escape from the grim and often dystopian reality of the present. If the show, despite its age, has continued to find favor and relevance among liberals, Democrats, and assorted Beltway acolytes alike, it is because it reflects and affirms their worldview with greater fidelity and catharsis than any of its contemporaries.

But if anything gives that worldview pause, it should be the events of the past eight years. Liberals got a real-life Josiah Bartlet in the figure of Barack Obama, a charismatic and stylish politician elected on a populist wave. But Obama’s soaring speeches, quintessentially presidential affect, and deference to procedure did little to fundamentally improve the country or prevent his Republican rivals from storming the Congressional barricades at their first opportunity. Confronted by a mercurial TV personality bent on transgressing every norm and truism of Beltway thinking, Democrats responded by exhaustively informing voters of his indecency and hypocrisy, attempting to destroy him countless times with his own logic, but ultimately leaving him completely intact. They smugly taxonomized as “smart” and “dumb” the very electorate they needed to win over, and retreated into an ideological fever dream in which political success doesn’t come from organizing and building power, but from having the most polished arguments and the most detailed policy statements. If you can just crush Trump in the debates, as Bartlet did to Ritchie, then you’ve won. (That’s not an exaggeration of the worldview. Ezra Klein published an article entitled “Hillary Clinton’s 3 debate performances left the Trump campaign in ruins,” which entirely eliminated the distinction between what happens in debates and what happens in campaigns. The belief that politics is about argument rather than power is likely a symptom of a Democratic politics increasingly incubated in the Ivy League rather than the labor movement.)

Now, facing defeat and political crisis, the overwhelming liberal instinct has not been self-reflection but a further retreat into fantasy and orthodoxy. Like viewers at the climax of The West Wing’s original run, they sit waiting for the decisive gestures and gratifying crescendos of a series finale, only to find their favorite plotlines and characters meandering without resolution. Shockingly, life is not a television program, and Aaron Sorkin doesn’t get to write the ending.

The West Wing is many things: a uniquely popular and lavish effort in prestige TV; an often crisply-written drama; a fictionalized paean to Beltway liberalism’s foundational precepts; a wonkish celebration of institutions and processes; an exquisitely-tailored piece of political fanfiction.

But, in 2017, it is foremost a series of glittering illusions to be abandoned.

Illustrations by Meg T. Callahan.

Pretending It Isn’t There

How we think about the nuclear threat…

I have long had an objection to the prospect of being blown to smithereens. It is a peculiar fixation of mine. I prefer my life as a fully intact human being, my organs comfortably encased beneath my flesh. I don’t wish to be burned to a crisp, splattered onto a wall, or boiled alive. I do not want to be described as “charred beyond recognition.” I am strongly opposed to having my limbs, brains, and other components violently extracted from my person and scattered in all directions.

I am therefore somewhat horrified by the prospect of nuclear war. I find it disquieting to realize that the United States possesses about 6,800 warheads, ready to be deployed at any time via submarine, aircraft, and intercontinental ballistic missile (ICBM).

Yet others do not seem to share my horror. Certainly, if they do, they don’t talk about it much. The number of nuclear war-related conversations I have overheard or been invited into in the last six months stands at zero. It doesn’t seem to come up much.

I suppose it’s easy to forget that all the warheads are lying there, ready to vaporize every city on earth in an instant. After all, you rarely see them. Sometimes it’s hard to even believe they exist. They don’t sit in your front garden waiting to be exploded. They hide deep within secure military installations, often in remote deserts. You don’t see many pictures of them, they aren’t paraded down the streets. Living under the nuclear threat doesn’t feel like living with a person permanently pointing a loaded gun at your head.

And yet that’s precisely what it is. In fact, it’s much, much more terrifying than living with a gun to your head. Because the weapon in question doesn’t just threaten you, it threatens every single thing you love, every family member, every friend, every colleague, every beautiful and precious thing in your life and the lives of everybody you know.

My God, that makes me sound like some alarmist nutcase. I seem like I’m exaggerating. But I don’t think my premises are in any way controversial; it’s simply factually true that, in the course of a single day, the world’s great powers could end almost all life on earth. We all know this. It’s beyond argument. And yet it doesn’t really seem plausible. It’s hard for me to really believe, sitting at my desk in a fuzzy blanket looking out the window at sunshine and trees, that everything could truly be obliterated instantaneously.

But it absolutely could. And by everything, I do truly mean everything. The bombings of Hiroshima and Nagasaki (in which the United States decided to demonstrate its newfound capabilities to the Japanese by detonating atomic weapons in the middle of two cities rather than, as some in the Truman Administration thought would be more reasonable, in an uninhabited area) look like holiday firecrackers next to the explosions we are now capable of producing. A nuclear device 12 feet long could turn every single person in Manhattan into a smudge, and give everyone else within a 100-mile radius both hideous burns and cancer.


I know everybody knows this. I know it’s a cliché. But I can’t think that everybody really does know it, because nobody seems to act as if it’s true. Perhaps that’s because after a certain amount of repetition, the language and imagery of nuclear war becomes empty of feeling, a set of symbols and signs that don’t actually convey much appreciable content. Differing megaton counts just seem like numbers; they don’t seem real in any substantive way. The word “warhead” becomes innocuous; for decades now it’s been a candy with a mushroom cloud logo. The mushroom cloud itself is almost adorable or comical. It’s still vaguely morbid, but if it made us think of Japanese babies without any skin, you wouldn’t be able to brand sour candy with it.

Perhaps we’ve been in a state of relative peace for so long that we’ve forgotten what war really is. It hasn’t been that long, of course; there are still World War II veterans and Hiroshima victims alive. And plenty of people on earth do have an intimate acquaintance with the realities of large-scale violence. But especially in the United States, it’s perfectly possible to go through life with only the fuzziest and most cartoonish understanding of what it means to actually destroy places and people. I’ve never even seen a very large explosion, let alone had one near me, let alone watched someone I love be torn to bits. How can I possibly contemplate the scale of a nuclear weapon? I can think about it intellectually. But the realities are not just too horrible, but too remote from anything in my experience, for me to be able to seriously conceive of what we are even talking about. Yes, I can affirm that, rationally, I believe a 12-foot-long metal object can vaporize everything in the Greater Boston Area. Rationally, I know that there are thousands of hidden underground launching silos, filled with tubes that can fly thousands of miles and turn a million human bodies to ash. I know that the great cities we have spent a dozen generations building are so precarious that Donald Trump could eliminate one within an hour. Yet for all that these are the rational results of inescapable logic, they sound totally and profoundly irrational, because they feel just about as true as the existence of leprechauns or the Great Pumpkin. Really, there are warheads everywhere? I’ve never seen one. And I can’t accept that everything here in Boston, from the Old North Church to the Suffolk Law School to every stop on the Red Line, could cease to exist in a nanosecond.

The good thing about it not seeming like a real threat is that maybe it isn’t a real threat. Maybe nuclear deterrence really does make us very secure. It certainly seems to have worked for seventy years. Perhaps, despite the counterintuitiveness of the idea, the safest thing for countries to do really is to point the largest possible weapons at one another and depend on the mutual operation of rational self-interest.

I will confess that this does not bring me too much comfort. That’s mainly because it only has to fail one time. I actually do believe that rational self-interest is a pretty good predictor of much human behavior. Unfortunately, I also believe in the existence of madness. And it only takes one or two nations controlled by the mad or the ambitious in order to plunge humanity into eternal oblivion. To keep nuclear weapons around is to operate on the assumption that there will never again be another Hitler, bent on expansion at all costs and ideologically committed to mass murder. It assumes that a death cult, or a cruel and stupid religious sect like ISIS, will never control the governing apparatus of a major state. And while that may be true in the short term, is it possible that it can be true forever? Someday something irrational will happen, and it only needs to happen once.  

Maybe that’s not the case. Maybe the world really has entered a period totally different from every other historical era, in which large-scale war will never again occur. Maybe no government of a major nation will ever again be unhinged and irrational. Or maybe I am a uniquely naïve and pessimistic person, who simply fails to comprehend the way the world works. It’s hard not to believe that I am, since everyone else seems so untroubled.

But I just don’t know. And it doesn’t seem absurd to me to think that some crazed form of religious fundamentalism could have some theory for why the world needed to be destroyed in order to please their god. It doesn’t seem a stretch to believe that a chain of small human errors could add up to a very large mistake, one which can never be undone. (As Albert Einstein put it in his warning about the bomb, “So long as there are sovereign nations possessing great power, war is inevitable. [Yet] unless another war is prevented it is likely to bring destruction on a scale never before held possible and even now hardly conceived.” Einstein’s logic actually leads to the conclusion that the ultimate goal should be the elimination of “sovereign nations possessing great power” altogether.)


I often think of the “Oh, shit” moment that comes along with a catastrophe. This is the moment where someone realizes that everything they thought was true was totally wrong, that what seemed impossible was actually quite possible indeed, and that there is no way to go back and fix the problem. It’s the moment where we become fully cognizant of the fact that there was no real logical reason to assume the thing wouldn’t happen, that we had just kind of assumed it because contemplating it was so unbearable. The last big “Oh, shit” moment was the night of Donald Trump’s election. Over the course of the evening, those who were horrified by the prospect of a Trump presidency, but were dead certain that he would lose, realized that they had been conflating desire and reality. They realized that actually, the polls had shown a close race, and the experts’ confidence had been completely unwarranted. They realized that the fact that a Trump presidency was inconceivable didn’t actually affect whether it was likely. But by the time that realization came, it was over. There was no way to go back and adjust one’s actions accordingly.

My fear is that nuclear war could be similar. It won’t seem possible until it becomes inevitable. And once it becomes inevitable, we will have an “Oh, shit” moment. We’ll realize that everyone’s certainty had been totally groundless, that it had been based entirely on wishful thinking rather than fact. But having the moment of realization doesn’t actually let you go back and undo anything. It’s too late. All you get is those two words. Oh, shit.

I’m not alone in thinking this. William J. Perry, Secretary of Defense under Bill Clinton, has spent the last decade or so of his life trying to warn the world of the serious possibility of nuclear catastrophe. In his book My Journey at the Nuclear Brink, Perry recounts his experiences with nuclear weaponry from the Cuban Missile Crisis to the present, and issues an urgent call to humanity to wake up and recognize that there is literally no reason to believe that the unthinkable is impossible merely because it is unthinkable. Perry states it plainly: “Today, the danger of some sort of a nuclear catastrophe is greater than it was during the Cold War and most people are blissfully unaware of this danger.” Yet during the Cold War, people actually felt the danger. They were afraid. Talk of nuclear war was part of life. (It was even a recurrent theme in pop culture. The six-disc CD box set Atomic Platters: Cold War Music from the Golden Age of Homeland Security collects nuclear-themed music from the ’40s through the ’60s, including Muddy Waters playing the “Atomic Bomb Blues” and a gospel number called “Jesus Hits Like An Atom Bomb.”)

It’s strange, then, that as the destructive capabilities of atomic weapons have only increased, their presence in the public consciousness has diminished. And while during the postwar era, Einstein, Bertrand Russell, and countless other public intellectuals constantly discussed the implications of atomic weaponry for humanity’s long-term prospects, today’s physicists and philosophers are largely silent on the topic, even as our destructive potential has continued to multiply.

Examining William Perry’s work in the New York Review of Books, California Governor Jerry Brown pondered why nobody was listening:

“No one I have known, or have even heard of, has the management experience and the technical knowledge that William Perry brings to the subject of nuclear danger. Few have his wisdom and integrity. So why isn’t anyone paying attention to him? Why is fear of a nuclear catastrophe far from the minds of most Americans? And why does almost all of official Washington disagree with him and live in nuclear denial?”

Brown answers these questions by quoting Perry:

“Our chief peril is that the poised nuclear doom, much of it hidden beneath the seas and in remote badlands, is too far out of the global public consciousness. Passivity shows broadly. Perhaps this is a matter of defeatism and its cohort, distraction. Perhaps for some it is largely a most primal human fear of facing the “unthinkable.” For others, it might be a welcoming of the illusion that there is or might be an acceptable missile defense against a nuclear attack. And for many it would seem to be the keeping of faith that nuclear deterrence will hold indefinitely—that leaders will always have accurate enough instantaneous knowledge, know the true context of events, and enjoy the good luck to avoid the most tragic of military miscalculations.”

It’s reassuring, if that is the right word, to hear Perry confirm this. I keep thinking I must be missing something. But I’m not. Perry knows more about nuclear weapons than anybody, and he says I am right to be shitting myself. The refusal to deal seriously with the nuclear threat can only be based on myths and fallacies, born out of both a desire not to face the unthinkably horrific and a sense that even if one did think about it, it would be impossible to know what to do about it, and thus better to keep it out of mind.

That type of thinking is suicidal, though. And I am not suicidal. For a person who thinks about the apocalypse as much as I do, I actually believe I am more of an optimist than many other people. When I do talk to people about the future of humankind, especially people my age, they often seem to feel resigned to doom. Jokes are made about how the species will be lucky if it survives another fifty years. People do not have much confidence in our ability to solve our problems, to eliminate warfare and the threat we pose to ourselves. Human nature is too flawed, technology advancing too rapidly, militaries too sophisticated, social systems too uncontrollable, for a non-catastrophic future to be possible. We must enjoy what we can while we can, but there’s generally very little hope. I find this attitude woefully pessimistic. Yet it’s extremely common. I worry, though, that it’s a self-fulfilling prophecy and a license to justify inaction through resignation. If you’re doomed, why try to fix anything? The courageous and forward-looking thing is to treat human problems and civilizational threats not as our inevitable fate, but as quandaries needing solutions. I may scare people with my talk of nuclear war, with my constant exhortations to people to look at the photos of Hiroshima victims and the numbers on available megatons and ICBM capabilities. But I am more scared of those who refuse to look at these things, who avoid them and leave them to others, and whose first thoughts about them will come at the “Oh, shit” moment.


I know full well that it’s hard. I don’t want to think about what happened to the people in Hiroshima. The true horrors are so revolting that if I described or showed them to you fully, you would slam down the lid of your computer. You would be sick to your stomach. And to a certain degree, it is necessary to couch our discussions in morbid jokes, irony, cartoons, because we are ill-equipped to think about what really happens to people when a nuclear weapon is detonated. Actually contemplating it would require us to think of our friends as skeletons, to think of toddlers without skin. I want so desperately for it to be a word, not a physical occurrence in the lives of humans like myself. But it isn’t. The bombs are sleeping and waiting, and there’s no use thinking they’re not.

Let me be clear on what I am trying to argue: I have not advocated immediate nuclear disarmament. My sole contention here is that nuclear weapons need to be thought about and understood for what they are, because if their threat isn’t taken seriously, it will only be appreciated in hindsight, and in hindsight we will all be dead. I have not taken a position on how nuclear war is to be averted, only that it needs to be given the same sober attention that Einstein and Perry have given it.

There are, in fact, good arguments that certain attempts at disarmament could actually make the world less secure. Brad Roberts, in The Case for U.S. Nuclear Weapons in the 21st Century, counsels extreme caution in approaches toward reducing U.S. nuclear capability. (Despite a title that makes him sound like Dr. Strangelove, Roberts is sensible and even-handed in his approach.) After all, if the great powers are constantly engaged in a classic “Mexican standoff” situation (the one in films, where the cowboys and banditos are all pointing their guns at each other at once, waiting for one false move), there might be far more risk in trying to get everyone to lower their weapons than in holding things where they are. Roberts, who worked in the Obama administration on nuclear weapons policy, shares a belief that nuclear weapons pose a major threat to humankind, but believes that there are serious perils in trying to disarm quickly. As is often pointed out, if you eliminate nuclear weapons, but countries are still hostile to one another, then instead of a race to stockpile the most weapons, there will be a race to build the greatest capacity to produce nuclear weapons quickly if war were to occur. Thus it may be necessary to focus on reducing hostility rather than simply on weapons.

I can entertain the intellectual arguments that people like Roberts make, about how from a pragmatic and strategic perspective, campaigns like Global Zero (aiming for the total elimination of nuclear weapons) could increase global instability. However, when reading works on nuclear policy from think tank scholars, I am frequently disturbed by the lack of appreciation shown for the real-world implications of the underlying question. To Roberts, as to many who opine on military strategy, international relations is a policy area like any other, to be discussed in precise and technical language. But when we are talking about nuclear weapons, we are fundamentally talking about a set of incredibly violent acts that will be perpetrated upon human beings against their will. It is necessary to appreciate what Hiroshima actually meant to the people it happened to in order to have any kind of sensible discussion about control of nuclear weapons. There is something missing from books like The Case for U.S. Nuclear Weapons in the 21st Century, which is any sense of what nuclear weapons actually are: what they do to people, how they do it, and what the scenarios we are envisaging would really imply. (I feel the same way about the writings of those who defend the Hiroshima/Nagasaki bombings as necessary. I can entertain the argument that the bombings were the least worst option. But those making the argument are never willing to discuss what the bombings actually did to people. They always wave away these considerations, as Roberts does, with some cursory line about how we all know that nuclear weapons are terrible things that inflict a lot of damage. But do we know this? Do we really?) Thus even those who have given the most thoughtful consideration to the problems surrounding weapons control still have an insufficient sense of urgency and alarm, and an insufficient appreciation of the true stakes of the issue. When we do think about the stakes, we realize in our bones that global nuclear war cannot be allowed to happen under any circumstances. However many of these weapons we have, however many we build, we must never, ever fire one. (This makes them, even at their most useful, an incredibly expensive, useless, and inefficient side effect of an unfortunate intercontinental Prisoner’s Dilemma.)

Lyndon Johnson’s infamous “Daisy” ad is now mostly known as a successful piece of political propaganda, and a milestone in the history of scaremongering. (In it, a little girl picks petals off a daisy before being annihilated in a nuclear explosion; Johnson’s voice warns viewers that “These are the stakes… we must love each other, or we must die.”  The implication was that one shouldn’t vote for Barry Goldwater.) Johnson was criticized for trying to terrify Americans into voting for him.

But the scenario depicted in the ad was perfectly plausible. In fact, we’ve come close to it several times. Anyone who is insufficiently concerned about arms control should pick up Eric Schlosser’s Command and Control, which spends 600 pages documenting the United States’ long history of very near misses with nuclear bombs. The Johnson ad was absolutely right, and hardly propagandistic, in focusing Americans’ attention squarely where it should be: on the issue of which candidate is more likely to end human civilization. Next to this question, everything else is somewhat secondary.


I have a recurring nightmare about nuclear war. In it, I am in a vast cave, deep within a mountain in Colorado or New Mexico. I turn a corner and realize I am in a storage chamber for nuclear warheads. It is totally silent. I cannot believe how peaceful it is. I go up and touch the warheads. They are so still. They seem like they are sleeping. It is difficult to believe that they can even explode, let alone that they can destroy cities. Suddenly, an alert sounds. The usual flashing red lights and sirens. The missiles fire up and launch from the cave. Hundreds of them leave. I know they are heading for cities all over the world. Soon, I am left alone, back in the silence, with the knowledge that in only a few minutes, there will be nothing left of humanity, save for me and the empty cave. I call out, trying to get the missiles to come back. They are gone. I was sitting next to them. But I did not stop them, and now there will be nothing.

The funny thing about this nightmare is that it’s not really a nightmare at all. It’s the reality we inhabit every day, whether we’d prefer to think about it or not. The missiles are in the caves. They are on submarines, and at air force bases. And if we don’t do something while they slumber, there’s no calling them back once they’ve woken up. All you can do is stand at the mouth of the cave, and spend the last few moments thinking about what you did, and what you didn’t do.

Oh, shit. 

Illustration by Nick Sirotich.

Speaking of Despair

How much can suicide hotlines do?

I started volunteering at a suicide hotline around three years ago. Whenever I happen to mention to someone that this is a thing I do, they usually seem a bit shocked. I think they imagine that I regularly talk callers off ledges, like a Hollywood-film hostage negotiator. “How many people have you saved?” an acquaintance asked me once. I have no idea, but the answer is probably none, or very few, in the immediate sort of sense the questioner was likely envisioning, where somebody calls the hotline intending to kill themselves and I masterfully persuade them not to. In reality, the vast majority of your time at a hotline is spent simply listening to strangers talk about their day, making little noises of affirmation, and asking open-ended questions.

The conversations you end up having on a suicide hotline are inherently somewhat peculiar. They’re more intimate than you would have in daily life, where an arbitrary set of social niceties constrains us from talking about the things that are close to our hearts. But they are also strangely impersonal. Operators at most call centers are forbidden from revealing personal details about themselves, offering opinions on specific subjects, or giving advice on problems: all of which tend to be central features of ordinary human conversation.

With practice, and a sufficiently lucid and responsive caller, you can sometimes make this bizarre lopsidedness feel a bit less awkward. At the same time, however, you also have to find a way to squeeze in a suicide risk assessment—hopefully, not with a bald non-sequitur like “Sorry to interrupt, but are you feeling suicidal right now?” but in some more fluid and natural manner. The purpose of the risk assessment is to enable the person to talk about their suicidal thoughts, in case they’re unwilling to broach the topic themselves, and also to allow you, the operator, to figure out how close the caller might be to taking some kind of action. From “are you feeling suicidal?” you work your way up to greater levels of specificity: “have you thought about how you might take your life?” “Do you have access to the thing you were planning to use?” “Is it in the room with you right now?” “Have you picked a time?” And so on.

I can’t speak for every operator at every call center, but in my own experience, I would estimate that fewer than 10% of the people I’ve ever spoken to have expressed any immediate desire or intention to end their lives. Well over half of callers, I would estimate, answer “no” to the first risk assessment question. This might, on its face, seem surprising. So who’s calling suicide hotlines, then, if not people who are thinking about killing themselves?

Well, for starters—let’s just get this one out of the way—a fair number of people call suicide hotlines to masturbate.

“Wait, but why?” you, in all your naïve simplicity, may be thinking. “Why would someone call a suicide hotline, a phone service intended for people in the throes of life-ending despair, to masturbate?” Friends, that question is beyond my ken: as theologians are fond of saying, we are living in a Fallen World. If I had to make a guess, I’d say a) suicide hotlines are toll-free, b) a lot of the operators are women, and c) there is a certain kind of person who gets off on the idea of an unwilling and/or unwitting person being tricked into listening in on their autoerotic exploits. The phenomenon would be significantly less annoying if some of the callers didn’t pretend to be kind-of-sort-of suicidal in order to keep you on the line longer: it’s rather frustrating, when one is trying one’s best to enter empathetically into the emotional trials of a succession of faceless voices, to then simultaneously have to conduct a quasi-Turing test to sort out the bona fide callers from the compulsive chicken-chokers.

All right, aside from that, who else is calling?

The other callers are the inmates of our society’s great warehouses of human unhappiness: nursing homes, mental institutions, prisons, homeless shelters, graduate programs. They are people with psychiatric issues that make it difficult for them to form or maintain relationships in their daily lives, or cognitive issues that have rendered them obsessively focused on some singular topic. They are people who are deeply miserable and afraid, who are repelled by the idea of ending their own life, but who still say that they wish they were dead, that they wish they could cease to exist by some other means. Among the most common topics of discussion are heartbreak, chronic illness, unemployment, addiction, and childhood sexual abuse.

Some people are deeply depressed or continually anxious, experiencing recurring crises for which the suicide hotline is one of their chief comforts or coping strategies; while others present as fairly cheerful on the phone, and are annoyed by your attempts to risk-assess them or steer the conversation towards the reason for their call. The great common denominator is loneliness. People call suicide hotlines because they have no one else, because they are friendless in the world, because the people in their lives are unkind to them; or because the people they love have said they need a break, have said don’t call me anymore, don’t call me for a while, I’ll come by later, we’ll talk later, and they are struggling to understand why, why they can’t call their sister or their friend or their doctor or their ex ten, twelve, fifteen times a day, when that’s the only thing that briefly alleviates the terrible upswelling of sadness inside them.

One thing you learn quickly, from taking these kinds of calls, is that misery has no respect for wealth or class. Rich and poor terrorize their children alike. Misery is everywhere: it hides in gaps and secret spaces, but it also walks abroad in daylight, unnoticed. The realm of misery is a bit like the Otherworld of Irish myth, or perhaps the Upside Down on the popular Netflix series Stranger Things. It inhabits the same geographic space as the world that happy people live in. You might pride yourself on your sense of direction, but if you were to wander unaware into the invisible eddy, if you were to catch the wrong thing out of the corner of your eye, you too could find yourself there all of a sudden, someplace where everything familiar wears a cruel and unforgiving face. Somebody you know might be in that place now, perhaps, and you simply can’t see it.

If misery could make a sound like a siren, you would hear it wailing in the apartment next door; you would hear it shrieking at the end of your street; a catastrophic klaxon-blast would shatter the windows of every single hospital and high school in the country, all an endless cacophony of “help me help me it hurts it hurts.” And even if most of the people who call hotlines never come close to taking their own lives, their situation still feels like an emergency.


We might ask, though, what the rationale is behind a hotline whose protocols are set up for assessing suicidality, when the vast majority of people who call the hotline do not, by their own account, have any concrete thoughts of suicide. The prevailing theory is that suicide hotlines are catching people “upstream,” so to speak, before they find themselves in a crisis state where suicide might start to feel like a real option. These callers, in theory, are at risk of becoming suicidal down the line if they aren’t given the right kind of support now. But is this actually true?

The fact is, we have no idea. If we take “suicide prevention” as the chief purpose of suicide hotlines, we soon find that the effectiveness of hotlines is very tricky to assess empirically. Of the approximately 44,000 people in the United States who complete suicide every year, we have no way of knowing how many may have tried calling a hotline in the past. Of the people who do call a suicide hotline presenting as high-risk, we don’t know how many ultimately go on to attempt or complete suicide. Small-scale studies have tracked caller satisfaction through follow-up calls, or have tried to measure the efficacy of hotline operators by monitoring a sample of their conversations. But these studies are, by their very nature, of dubious evidentiary value. There’s no control group of “distressed/suicidal people who haven’t called hotlines” to compare to, and the pool of callers is an inherently self-selecting population, which may or may not reflect the population of people who are at greatest risk. There are also obvious ethical concerns about confidentiality when it comes to actively monitoring phone calls by “listening in” without permission from the caller, or placing follow-up calls with people who have phoned the service. A substantial number of people who call suicide hotlines express anxiety about the privacy of their calls. Given the social and religious stigma that continues to be associated with thoughts of suicide, we might posit that the higher-risk a caller is, the more anxious they are likely to be. They may perhaps be reluctant to agree to a follow-up call when asked, and nervous to call the hotline again if they suspect they might be part of some study.

All of this is not to say that we need Hard Numbers to justify the existence of a service that provides a listening ear to people in distress. The value of human connection is self-evident, and when it comes to intangibles like happiness, spiritual purpose, and a sense of closeness to others, so-called scientific studies are mostly bunk anyway. Nonetheless, we can still use our imaginations and our common sense to hypothesize about the limitations of the current system and possible alternatives. I think there are two questions worth considering: first, are suicide hotlines generally accessible or useful to people who are actively suicidal? Secondly, for the “low-risk” callers who appear to be the most frequent users of suicide hotlines, is the service giving them what they need, or is there some better way to provide comfort and relief to these people?

As to whether high-risk individuals are actually being reached by suicide hotlines, as outlined above, it’s hard to tell. Anecdotally, the perception of suicide hotlines seems to differ pretty markedly when you peek in on suicide-themed message boards, as opposed to message boards centered around support for depression or other psychological issues. For example, posters on the mental health support forum Seven Cups describe suicide hotline operators as “supportive,” “non-judgmental,” “patient and understanding,” “some of the most loving people you’ll ever talk to,” and “varied from unhelpful-but-kind to helpful.” By contrast, on the Suicide Project, a site specifically devoted to sharing stories about attempting or losing someone to suicide, posters wrote that their calls were “awkward and forced,” “left me thinking I should just get on with killing myself [and] not speak to anyone before hand,” and “totally useless,” and commented negatively on long hold times or call time limits.

We can’t really draw conclusions from this tiny sample, not least because the kinds of people who frequent message boards and comments sections on the internet are not necessarily representative of broader populations who share some of the same self-identified characteristics. But—again anecdotally—I have noted that high-risk or more despairing callers on the hotline I volunteer for, when questioned about the extent of their suicidal intention, often express sentiments like, “If I were really suicidal, I wouldn’t be calling” or “If I wanted to commit suicide, I would just do it.” It’s hard to say exactly what this means, but it seems as if a general perception among borderline-suicidal callers is that an actively suicidal person wouldn’t bother to call a hotline. Given that suicide is sometimes a split-second decision, and that people who complete suicide tend to use highly lethal means, such as firearms, this perhaps isn’t surprising. (Calls where someone claims to be holding a gun are always the most alarming.)

For lower-risk callers, meanwhile, is a fifteen-minute conversation all we can do for them? People who call hotlines sometimes express frustration at the impersonality of the service. They want a give-and-take conversation, more like a normal interaction with a friend, but many suicide hotlines (including the one I volunteer for) forbid volunteers from giving out personal information about themselves. You never share your own opinion on a topic, even if the caller asks you directly: you merely express empathy, and give short reflective summaries of the caller’s responses to your questions, in order to demonstrate engagement and help the caller navigate through their own feelings.

This isn’t necessarily a bad approach, broadly speaking, since it keeps operators out of the thorny territory of giving possibly-useless, possibly-harmful advice to a person whose full life circumstances they know very little about, or of overwhelming or inadvertently shaming the caller with some inapposite emotional response of their own. For some callers, this non-reciprocal outpouring of feeling may be exactly what they need. But for other callers, who often become wise to a call center’s protocols over many repeated calls, this one-sided engagement is not at all what they say they want. What they want is a real human connection, even with its messiness and impracticality, not a disembodied voice that might as well be a pre-programmed conversation bot. Reconciling these conflicting goals is a tricky thing. There are certainly people who use hotlines in what seems to be a compulsive kind of way: they’ll call every half-hour, and if you don’t impose some kind of limit, they’ll tie up the line for less persistent (but perhaps, by some metrics, more vulnerable) callers. But it nevertheless feels cruel to tell desperately lonely people that their insatiable need for the warmth of a human presence is Against The Rules.


I often wonder if a suicide hotline’s unique ability to reach a population of acutely unhappy people could be harnessed for more personal, community-based interventions. Currently, there are both national and local call centers, but even on local lines, the caller is still miles away from you, and operators aren’t allowed to set up meetings with the people they speak to. Many people call because of a serious crisis in their lives, but the most you can do is give them a referral to a mental health organization that might be able to help them. I’ve frequently wished it were possible to send an actual human to check up on the person, ask how they’re doing, and see what they might need help with. It would be nice if neighborhoods or cities had corps of volunteers who were willing to be on-call for that kind of thing.

This, it seems to me, might be especially important for callers who seem more desperate and perhaps at higher risk of suicide. When you’re a hotline operator, there’s no middle ground between giving somebody verbal comfort and perhaps a referral, and dispatching emergency services directly to their location. (Some hotlines will only do this if the caller gives permission, while others, if the situation seems imminently dangerous, will send any information associated with the caller’s phone number to local police.) People who have previously had ambulances called on them often express deep shame and embarrassment about the experience. It attracts the attention of all their neighbors; depending on the circumstances, the caller might even have been taken out of their home on a stretcher and rushed to an emergency room. Callers who have had this happen, or know someone it’s happened to, will often be especially cagey about sharing their suicidal thoughts, or paranoid about the information that might be being gathered about them. This is extremely problematic, because it means that potentially high-risk callers might deliberately understate the extent of their emotional distress if they ever call again in the future. Moreover, if they’ve been to hospitals before under these circumstances and found the experience traumatizing, they may be unwilling to accept medical interventions in the future. Wouldn’t it be better if instead the caller could consent for a nice person to come discreetly check up on them at their house, have a chat, maybe make them a cup of tea? For lower-risk callers, especially people in hospitals or nursing homes who don’t have any company, shouldn’t we be able to find someone living nearby who can pay them a visit during the week?

Of course, suicide hotlines are already understaffed, and so expanding them into an even more labor-intensive grassroots organization wouldn’t be easy. The kinds of callers who call suicide hotlines repeatedly and obsessively would likely be pleading for visits on a constant basis: you would probably need some kind of rationing system to make sure they weren’t overwhelming the entire volunteer network. In a small number of cases, there might be safety concerns about going in person to a caller’s house. (No house-calls for the masturbators, obviously.) The bigger problem, however, is figuring out how to mobilize communities and get people to feel invested in the emotional wellbeing of their neighbors. Personal entanglement is inherently a hard sell. Part of the reason why people volunteer with charitable organizations rather than simply knocking on their neighbors’ doors is that they want to keep their regular lives and their volunteer obligations strictly separate. They want to perform a service for someone without becoming closely enmeshed in the day-to-day reality of that person’s problems. This kind of distance is preferred by most part-time volunteers—I certainly find it more convenient to compartmentalize my life in this way, though I’m not at all sure that’s a good thing—and it may be preferable for some callers, too, especially those who are dealing with issues they intensely desire to keep private, for whom a visit from the wrong neighbor might be mortifying.

But I think we must attempt to surmount these obstacles. When people lament the demise of communities or multi-generation family units in the United States, this is the kind of mutual support they’re thinking of. The extent to which America was ever really composed of warm, child-raising villages is, of course, greatly exaggerated, and we certainly shouldn’t romanticize local communities per se: they always have the capacity to be meddling, oppressive, and exclusionary. But not all communities have to be like that, and instead of dismissing community ideals as outdated, we could be working to realize them better in the particular places we live. As American lifestyles become increasingly mobile and rootless, close involvement in a community may not be foremost on people’s minds; to the extent that people these days talk about “settling down” somewhere, they usually seem to be thinking in terms of sending their kids to a local school, patronizing nearby restaurants, and attending summer concerts in the park, not trundling around to people’s homes and asking what they can do for them.

But even if we aren’t planning to live in the same town for the entire rest of our lives, we mustn’t allow ourselves to use this as a convenient excuse to distance ourselves from local problems we may have the power to ameliorate. People who come to the U.S. from other parts of the world often find our way of living perverse, in ways we simply take for granted as facts of human nature, rather than peculiar societal failings. I was recently talking to a Haitian-born U.S. citizen who works long hours as a nurse’s aide, and then comes home each night to care for her mentally disabled teenage son. She told me that if it were possible, she would go back to Haiti in a heartbeat. She was desperately poor in Haiti, but there, she said, her neighbors would have helped her: they would have invited her over for dinner, they would have offered to look after the children. “Here,” she said, “nobody helps you.” That’s one of the worst condemnations of American civil society I’ve heard in a while.

As Current Affairs has written in the past, many of the problems that underlie or exacerbate people’s suicidal crises—homelessness, unemployment, lack of access to healthcare—are the result of an economic and political system that is fundamentally profit-driven, and fails to prioritize the well-being of its most vulnerable citizens. Large-scale political changes are needed to free up the resources necessary to truly tackle these problems in a lasting and meaningful sense, and to foster a society that’s better geared towards the health and happiness of all its members. But we must also recognize that government programs—even if well-funded—will never be enough, if they’re administered by an impersonal bureaucracy. What people want, what they need, are real fellow-humans who will come talk to them, and look them in the eye, and genuinely care about what happens to them. Given the system we currently have to work with, allocating all that responsibility to a few poorly-paid, exhausted social workers and health sector employees just isn’t fair—nor is it effective. This is a responsibility that should belong to all of society: to anybody who has even an hour to spare.

Giving people a number to call is a start. It would make sense to use existing hotlines as a tool to find and reach people who need help, both those who are at high risk of harming themselves, and those who are simply unhappy. As for how local volunteer forces could be coordinated, this is something municipalities should trade ideas about: some communities may already have implemented programs like this successfully. Organizations that work narrowly on certain types of social problems might have ideas about how to structure a multi-purpose community-wide organization that could intervene more generally in a variety of contexts. When it comes down to it, actually caring about—and taking care of—your neighbors, even when it’s difficult, is always the most radical form of political activism.

The Clintons Had Slaves

But the prison labor system is also rotten to the core…

Contrary to popular understanding, the Thirteenth Amendment to the United States Constitution did not prohibit slavery. The text makes it clear:

Neither slavery nor involuntary servitude, except as a punishment for crime whereof the party shall have been duly convicted, shall exist within the United States, or any place subject to their jurisdiction.

The nifty little loophole of that word “except” means that slavery isn’t actually banned outright; someone simply has to be convicted of a crime in order to be enslaved. This gave Southern states a welcome free hand in re-establishing forced servitude for African Americans in the years after Reconstruction collapsed; as Douglas Blackmon documents in Slavery By Another Name, the Jim Crow era was in many places characterized by a mass re-enslavement process, whereby criminal laws were devised that allowed states and municipalities to put black people in chains again. Today, forced labor among African Americans persists; in Louisiana, for example, felons are sentenced to “hard labor” as well as prison time, and inmates at the infamous Angola prison still pick cotton at gunpoint.

The prison labor system in the United States has long been an unacknowledged scandal. It’s quite plainly a form of slavery. The Thirteenth Amendment even admits as much: it doesn’t say that when you’re forced to work for being convicted of a crime, that isn’t slavery. It says that slavery is legal if it is imposed as part of a conviction for a crime. All manner of people benefit from the system; as Mother Jones has reported, Congress actually incentivized private companies to use inmate labor, and the incarcerated now produce everything from bedding to eyeglasses. They even staff call centers, with a company called UNICOR encouraging companies to “smart-source” their call-center work to prisoners rather than sending it overseas.

But two possibly unexpected beneficiaries of the contemporary prison slavery system were none other than Bill and Hillary Clinton, who during their time at the Arkansas governor’s mansion in the 1980s used inmates to perform various household tasks in order to “keep costs down.” Hillary Clinton wrote of the practice openly and without any apparent sense of moral conflict.

The Clintons’ practice has gotten some renewed attention over the last day, with the rediscovery of the relevant passage from It Takes a Village. Last year I wrote a bit about Hillary’s admission in my book Superpredator: Bill Clinton’s Use and Abuse of Black America:

Clinton was, however, generous enough to allow inmates from Arkansas prisons to work as unpaid servants in the Governor’s Mansion. In It Takes a Village, Hillary Clinton writes that the residence was staffed with “African-American men in their thirties,” since “using prison labor at the governor’s mansion was a longstanding tradition, which kept down costs.” It is unclear just how longstanding the tradition of having chained black laborers brought to work as maids and gardeners had been. But one has no doubt that as the white residents of a mansion staffed with unpaid blacks, the Clintons were continuing a certain historic Southern practice. (Hillary Clinton did note, however, that she and Bill were sure not to show undue lenience to the sla…servants, writing that “[w]e enforced rules strictly and sent back to prison any inmate who broke a rule.”)

Indeed it’s really difficult, given the facts, to conclude that this practice was anything other than slavery. The Clintons were perfectly content to be waited on by black people who received no compensation and would have been pursued and dragged back in chains if they had tried to leave. There is only one word for such an arrangement.

One could almost respect the honesty with which Hillary spoke of her use of convict labor. She acknowledges that these men were black, and that she had a strict policy of sending them back to prison if they violated any rules. But Hillary Clinton isn’t like the Atlantic writer who dwelled on his upbringing as part of a family who held a woman in a state of slavery. Her forthrightness in It Takes a Village is not because she is attempting to grapple with the atrocity in which she was complicit, but because she doesn’t see anything wrong with what happened. Whereas many of us would be appalled at the idea of having our meals served by unpaid black servants, Clinton found the whole situation quaintly traditional, and was favorably impressed by the financial benefits of not paying her staff. What others might call “a crime against human dignity,” Clinton referred to in It Takes a Village as simply “an unusual aspect of living at the governor’s mansion.”

The Clintons’ use of prison labor was only one small part of a long and horrifying record. Both Clintons, but especially Bill, have consistently manipulated black political interests while showing complete disregard for the humanity of African Americans. This stretches from Hillary’s perpetuation of a hideous racist myth about a wave of hyper-violent “superpredators” to Bill’s politically-motivated execution of a mentally disturbed black inmate. (I know it’s a crass plug, but there really is far more on this, with a lot of sources, in Superpredator.)

Predictably, when people started to mention how disturbing it was that the Clintons had kept slaves, a few especially committed online Hillary fans began to issue impossibly contorted defenses, including blaming “DudeBros” for bringing the matter up and explaining that Hillary had tried to empathize with the convicts. (To see why the defenses fail, simply imagine how laughable they would seem if applied to any other situation of unpaid black labor, e.g. if a 19th century Southerner offered them.)

But let’s also be clear: the issue of prison labor isn’t just about the Clintons. I believe the Clintons have an indefensible record of behavior toward black people. But the story of the Clintons in the Arkansas governor’s mansion also just illustrates how ubiquitous and taken-for-granted situations of slavery are. It’s very easy to think “But that couldn’t possibly be slavery, the state is just assigning inmates to an interesting work detail.” Yet if we examine the facts critically, it’s hard to see how it could be anything else. Prison labor doesn’t seem like slavery because it no longer displays some of the imagery associated with slavery in our minds, such as the whippings and the auction blocks. But we’re still dealing with a situation in which people are working by compulsion rather than choice, and are threatened with violence if they leave. They are leased to corporations, as if they were property. The auction block may be gone, but the core aspect of slavery is not that people are bought and sold. Rather, it’s about the kind of dominance that is asserted over them. (After all, slavery can exist even if one party has a monopoly on slaveowning. It’s the forced labor and the experience of the slave that counts, not the trading element.)

Of course, one could draw a distinction between “slavery” (in which a person asserts all rights over a human being, including the right to sell them and their children and to take their life) and “involuntary servitude” (in which a person is simply forced to work), a distinction such as the Thirteenth Amendment contemplates. But “involuntary servitude” immediately begins to sound like little more than a euphemism for slavery, and many of the situations that modern anti-slavery advocates would consider to be slavery—such as that perpetrated by Alex Tizon’s family—do not necessarily include people being murdered and having their children sold. (Though they sometimes do.) It is important never to minimize the distinct horrors of early American slavery, but the term also applies to situations in which the victims are treated comparatively “well,” and which are not characterized by all of the worst features of the pre-Civil War South. Thus I do think it’s fair to classify prison labor as a form of enslavement. Degrees of force obviously vary, but since the Angola prison plantation today looks exactly the same as it did in the 19th century, I believe the word helps us appreciate the evils of mass incarceration rather than diminishing the evils of the antebellum era.

The Clinton slavery controversy should not really be about the Clintons. It’s the prison labor system as a whole that is rotten, and they were only two especially amoral beneficiaries of it. Today, our attention should be focused on the cotton-pickers of Louisiana and the scores of other modern-day slaves. This is not a mere pathology of the Clintons, but a pathology of the country we all inhabit. And it is not just a single noxious political family that is complicit. We all are. 


Campus Politics and the Administrative Mind

Anyone who supports the goals of campus activists should be willing to criticize their focus on bureaucratic remedies…

Recently, I was asked by a friend what to make of recent controversies at American colleges regarding “no platforming” tactics, the efforts of student activists to shut speakers they disagree with out of campus speaking opportunities. It’s an issue I think about often – as one of those few remaining leftists who remembers that civil liberties are essential to left-wing practice, as a college employee, and as someone who grew up surrounded by campus activism. I told my friend, only halfway joking, that I would think more of these efforts once college students had “no platformed” Barack Obama. Obama, after all, has far more blood on his hands than Milo Yiannopoulos or Ann Coulter. But, I also told her, I didn’t see that coming anytime soon.

Why? In part, it’s likely that the idea of no platforming Barack Obama would be far less popular among campus protesters than the idea of no platforming Yiannopoulos or Coulter, even though there are plenty of radical critiques of Obama. However badly Obama failed left-wing ideals, with his complete failure to take on Wall Street, his expansion of our military entanglements, and his general moderation in a time demanding extremity, for many young left-leaning people Obama remains the kindly, progressive figurehead of political life. This reflects the “squashing” effect of college activism: the social and organizational dynamics of campus life can push your committed anticapitalist into the same groups and actions as your more conventional liberal Democrat. Furthermore, many college activists likely have not yet settled on a precise ideological position.

There’s nothing wrong with those things. Political organizing is about forming coalitions, and part of the point of activism for young people is to sort out what, exactly, they believe. But analytically, this ideological confusion makes it harder for outside observers to draw the right lessons about what exactly the socialist left believes and what its tactics should be. The 2016 election saw liberal vs. leftist fights break out for more than a year, thanks to the Clinton-Sanders primary. Leftists criticized liberal Democrats relentlessly, and righteously, for the latter’s inability to conceive of a real alternative to austerity and neoliberalism. Yet I’ve been surprised to see many of those same leftists defend campus protesters at every turn, not seeming to understand that many of those protesters will leave college life to become precisely the kind of upwardly-mobile Clintonite Democrats they despised during the election. That’s what a lifetime spent around college activists has shown me.

Besides, there’s another, more salient reason it’s hard to imagine a successful effort to shut down a speech by Obama or Hillary Clinton or a similarly prominent Democrat: there are few colleges or universities where such attempts would be tolerated, thanks to the culture and economics of the contemporary university. Though conservatives frequently attack higher education as a radical enclave, the institutional culture of the contemporary university is really far more aligned with institutional liberalism than radical leftism. The concept of the “deep state” has been debased lately, but its original form – the idea that there is a bureaucratic class that persists within elected governments regardless of the outcomes of elections, and that has its own interests which it asserts through subtle administrative power – is true of colleges, perhaps even more than of governments themselves. And the deep state of most universities is not radical but rather progressive. It’s not composed of Sanders-style insurgents but of Clinton-style establishmentarians. It’s this class of people that college students have been petitioning, and so the presumptions held by that class represent the boundaries of what much contemporary college activism can achieve.


In particular, to ban an Obama or a Clinton from campus would be to risk offending the donor class that is so essential to the fiscal functioning of the kinds of private colleges where campus activism tends to flourish. I am hardly the first to point out that Republican state legislators have made great hay by claiming that public universities are leftist indoctrination machines, and that no platforming tactics should therefore be used carefully, given the potential for backlash. The donors and alumni are the much less-discussed private college equivalent, and if anything, private colleges are even more in thrall to their interests than public schools are to the state.

Leftist defenses of campus activism have been almost entirely silent on the strange interplay between campus protesters and the administrators they petition, but that relationship is an absolutely essential facet of this discussion. In particular, we need to recognize that higher education has developed an entire set of administrators whose fundamental purpose is to head off controversy before it starts. I’ve come to call them the “Liability and Controversy Avoidance Class.” They are the diversity officers, the Title IX coordinators, the fixers of Greek life controversies, the public relations and marketing people who know just how much intersectionality language to pepper into their press releases.

I don’t mean to suggest that none of these jobs are worthwhile; in fact, some of them are essential. But anyone who cares about genuinely radical action on campus has to understand the way that universities have adapted to protests by treating them as a marketing issue to be managed. Sometimes university administrators are indeed the (potentially sympathetic) gatekeepers who hold the keys to students getting what they want. But as much as it may be in the short-term interest of those administrators to give in to student demands, in the macro sense they have interests that are at best orthogonal to those of activists. And a student movement that fails to understand that risks finding itself defeated not in a romantic violent clash in the streets, but by the numbing power of middle management, by being shunted into committee, by being “handled.”

Conflict avoidance has become the great growth industry of the American college. Conservatives have, in recent years, made much of the various missteps involved in Title IX enforcement on campus, claiming that the tendency of universities to trample on due process in adjudicating Title IX complaints tells us something about modern feminism. They’re wrong. Rather, Title IX enforcement tells us something about the nature of bureaucracy. In particular, it tells us that people employed by an institution will always serve the needs of that institution first. Title IX ostensibly empowers administrators to pursue sexual inequality claims on campus with the backing of the federal government. But what it actually produces in practice is a small army of college employees whose real job is shielding colleges from the worst consequences of failing to achieve sexual equality. That is, by virtue of being employed within these institutions, even the most ethical and passionate Title IX enforcement officer ends up playing a defensive role on behalf of the institution. This is not an indictment of anyone’s integrity; it’s a statement about the nature of institutions.

A friend of mine worked a Title IX job for several years. She’s one of the most committed and informed feminists I know. When she started, she described her position as a dream job. But she ended up leaving after only a few years, burnt out by the drudgery and frustration of a job that combined the bureaucratic morass of the university with that of the federal government. And when she left, she said that she had come to understand that the very nature of Title IX and similar regulation means that the purpose of positions like hers would inevitably be a matter of avoiding litigation for the institutions that paid her salary. That is the inevitable tradeoff: a law that creates real punishments for organizations will compel those organizations to create structures designed to avoid those punishments.

That’s not a reason to abolish Title IX; I remain a supporter of the law, in broad strokes, because the effort to achieve gender equity on campus needs teeth. But the fact remains that a Title IX enforcement officer paid by a university will by necessity place the university’s needs above those of students. The same can be said of the diversity officers now being employed by more and more universities. In response to the student uprisings at schools like Yale, Amherst, and Oberlin several years ago, many institutions set about hiring administrators to ensure that minority students on campus feel included and safe; some of them have built or are building new minority student centers or similar structures. (The tendency to respond to student demands by cutting checks is another hallmark of the college administration playbook.) Those goals are laudable. But the same constraints that bind Title IX officers will surely afflict these diversity officers, again regardless of their personal integrity.

That’s important for everyone to understand, because increasingly the act of being a campus protester involves petitioning administrators for what you want. The archetypal behavior of protest groups during the brief campus uprising, after all, was to submit a list of demands to the college board or president. I don’t think this is some sort of strategic mistake, but I do find it remarkable just how many college activists I meet treat asking administrators for things as the end-all, be-all of protest. And as that belief spreads, so too do the conflict avoidance strategies. Crucially, at most schools these strategies will never involve just telling students “no.” Rather, they will delay rather than deny, give students some of what they want rather than all, and always affirm the righteousness of what the students are doing and the legitimacy of their complaints. It turns out that the discourse of social justice is compatible with administrative-ese, if only a conflict avoidance officer really puts their mind to it.


Besides, the problem with appealing to authority is that sometimes authority says “no.” And while the courageous protesters at the University of Missouri – and their successful campaign to depose the school’s president – show that you can eventually raise the stakes for administrators dramatically, there will also always be times when the authority has the wherewithal just to turn you down. At that point, the strategy of petitioning authority collapses. Look at Oberlin, which is often taken by conservatives as the nadir of loony campus politics and by liberals as an example of principled campus resistance. Oberlin student protesters presented the school’s president with a controversial list of demands, which included things like dictating aspects of curriculum and firing specific campus faculty and staff. They also insisted that their list of demands was non-negotiable. So Oberlin’s president didn’t negotiate – he just said “no.” By coupling the extremity of their demands to a preemptive rejection of negotiation, the students had given him all the cover he needed. I haven’t heard much of that effort since; I assume many of the framers of the document have graduated and gone on to live their post-collegiate lives. (That’s another structural issue with campus organizing: the ability of establishment power to run out the clock.)

None of this is intended as some scathing indictment of campus activists. It is, instead, an attempt to analyze conditions in campus politics without romance. As I have said before, and will say again, the best way to understand current campus political controversies is as a negotiation between competing interests under neoliberalism. That’s true no matter how much integrity, passion, and savvy the student organizers possess. It’s just an observation of the endless layers of control that we’re living under in neoliberal capitalism – that we’re all living under.

Sadly, I find this conversation almost impossible to have in left spaces. Many leftists I know – smart, committed people who are ordinarily capable of thinking critically and with nuance about people with whom they broadly agree – have adopted a stance of blind support for campus activists, no matter what their goals or tactics. I understand this impulse, emotionally and socially. It’s a dark time and we’re looking for solidarity wherever we can. The campus attracts so much left-wing attention because it feels like one of the only places where we can still win. But the conditions there are very specific and very idiosyncratic, and the tactics and strategies that work in the collegiate space are unlikely to work in the workplace or society writ large. If we insist on seeing college activism as an integral part of left practice, then I also insist on seeing it clearly, on looking at it with sympathetic but critical eyes. To do so, we must be willing to ask uncomfortable questions about the nature of that work.

Theresa May’s Refusal To Debate Jeremy Corbyn Really Is Shameful

The Conservatives have shown contempt for Britain and for democracy, and should be punished at the polls.

Let’s be clear: there is only one reason why British Prime Minister Theresa May has spent the entire U.K. election campaign refusing to publicly debate Labour leader Jeremy Corbyn. The reason is not complicated: the possible risks simply outweighed the possible rewards. May made a cost-benefit calculation and determined that the amount of public support she would lose by abstaining was not large enough to justify the potential costs of a debate. These costs could have been very high indeed for the Conservative Party. At best, May would have been legitimizing the Labour Party as an equal competitor worthy of engagement. At worst, she could have actually lost to Jeremy Corbyn, which would have been a disaster and would have totally undermined her attempt to use the election to quickly vanquish and discredit Labour.

It’s obvious that this was simply the result of a strategic calculus, because all of the excuses May offered for not debating were excruciatingly feeble. She has claimed to be too busy working on Brexit negotiations to do a televised debate—a claim she has been making during televised interviews. (And sure enough, one of her other excuses has been that she is already doing plenty of media: “I’ve not been off the television screens, I’ve been doing things in the television.”) May also called the election herself, making it odd for her to claim that it’s a bad and busy time and that she is preoccupied with far more important matters than the election. May also dismissed the prospect of a debate by saying that nobody wanted to see politicians “swapping soundbites,” implicitly conceding that if she did attend a debate, she would bring nothing but soundbites.

It has been suggested that May’s avoidance of a public debate shows that she is “frightened” of Jeremy Corbyn. I don’t think it’s necessarily that, though. Certainly, May must be partly worried that she could lose a debate. But May might still have declined even if she were absolutely certain that she would prevail in the encounter. All she needed to realize was that debates can be politically effective even for those who lose them. The reason U.S. third-party candidates like Ralph Nader are so desperate to get into presidential debates, for example, has little to do with winning or losing and everything to do with the tremendous publicity value of being seen as an equal on the stage. And Donald Trump, despite performing catastrophically in the actual “debate” portions of the presidential debates, was able to use his television match-ups with other candidates to make himself appear to be a viable, if volatile, option.

So May realized that, with Labour over two dozen points down in the polls at the beginning of the campaign and Jeremy Corbyn being widely seen as a fringe nobody who was shepherding the party toward its doom, avoiding the debates would send the message that Labour wasn’t even serious enough to be worth confronting. With Corbyn already being branded as a hopeless loser, May could further undermine his attempt to form a legitimate opposition and could imply that there is simply no alternative to continued Conservative rule.

All of this has been exposed as hubris, now that Labour has drastically improved its standing in the polls and the race has become competitive. But I want to pause and note something else: just how amoral and contemptuous of democracy May’s stance on the debates has been. May deprived the British public of their chance to hear a real discussion of the differences between the two parties, purely because she believed doing so would not help her own electoral prospects. She treated voters as stupid by offering them transparent lies about her reasons for declining to debate. And she demonstrated that she doesn’t actually care about having a fair political fight, but solely about retaining her hold on power.


The presence or absence of a debate may seem like a comparatively minor issue. After all, the public has heard a lot from both Corbyn and May separately. But the logic of May’s decision-making here should repulse people. May called the election cynically at the time when she believed it would be easiest for her to win, and then refused to actually fight fairly in the debate arena. Her entire handling of the election has been purely cynical and calculated. (Witness her immediate reversal of a party policy when she found out it was unpopular.) The Conservatives are utterly uninterested in a process of presenting serious ideas to the public and having a discussion about them; they simply want to take advantage of a perceived opportunity to crush the other side.

Of course, you might say that May was behaving rationally, and that there’s no crime in a politician making a political calculation. Showing contempt for the political process and telling lies to voters was, on this calculus, the optimal political move. But there’s no reason why voters should allow themselves to be manipulated this way. The only way amoral behavior ceases to be strategically correct is when people are punished for engaging in it. Theresa May believed that pulling out of the debates would not cost her much politically. Next week, voters have a chance to prove that this isn’t so. In fact, unless voters heavily punish the Conservative Party for ducking the debates and trying to win an election without allowing the Labour Party to debate them, this kind of cynical behavior will occur more often in the future. If voters think the Prime Minister should be willing to debate the opposition, then a Prime Minister who simply ignores all requests to debate the opposition should no longer get to be Prime Minister.

Now, just to be clear, I thought Jeremy Corbyn’s initial refusal to attend the debate if May wouldn’t was wrong as well. Corbyn’s reasoning was far more justified; Britain is functionally a two-party system, and if the Prime Minister won’t show up to the debate, there isn’t really a debate to be had. But I am glad Corbyn ultimately reversed himself and attended a debate alongside the third parties. The whole idea of skipping out on the televised debates, however, originated with May.

When I first heard that Theresa May wouldn’t consent to debate Jeremy Corbyn, despite having called a snap election herself, I was furious. It felt like cheating. Not only did she want to forestall Labour’s opportunity to build support by having the election as quickly as possible, but she insisted on denying them the one rightful chance they had to meaningfully compete with the Conservatives. To not even be willing to defend your ideas against the opposing party is cowardly and dishonorable, and suggests that Theresa May is far less concerned with British democracy than with her personal power. Procedural issues like debates are obviously less important than substantive policy issues, but unless politicians who shun accountability are given a stiff political penalty by voters, they will be increasingly inclined to avoid fair fights whenever those fights might undermine them, steadily withdrawing from public accountability.

Theresa May’s television appearances have gone disastrously. It’s not hard to see why she doesn’t want to debate Jeremy Corbyn, who is warm, intelligent, and likable. But debate is integral to democracy, and “because the other side might win” is not a legitimate reason to shut it down. Theresa May’s avoidance of a public confrontation should be treated as a very serious indictment of her character and of her party’s respect for the British public.

The Real Obama

What the president does in retirement will reveal his true self…

The best thing about being an ex-president is that you can do whatever you want. Do you want to retire to the countryside to build henhouses and tootle around in your amphibious car? You can do that. Do you want to teach Sunday school and build houses for poor people, and maybe broker an occasional international peace agreement? You can do that also. Do you want to spend your days painting pictures of your dogs, your feet, and the soldiers you caused to be maimed? It’s an option! The retirement activities of presidents offer useful insights into their natures, because they are finally freed of all political constraints on their actions. When they are at liberty to pursue activities of their own choosing, we get a sense of what they actually enjoy, and who they actually are.

During his two terms in office, Barack Obama’s most zealous devotees tended to explain away apparent failures or complacencies by referring to the constraints high office places on anyone who ascends to it. Even some critics on the left may have suspected that the deeds of Obama’s administration were out of sync with his natural instincts, that Obama was a man of high conscience weighed down or blunted by Washington’s leviathan bureaucracy, or frustrated by the exigencies of an unstable world.

Obama’s retirement should therefore finally give us meaningful insight into who he really is or, to put it another way, who he has been all along. The albatross of office finally lifted from his neck, America’s 44th president is now free to do anything and everything he desires without impediment. He can be the person he has always wanted to be, the person whom he has had to keep hidden away. Who, then, is the real Obama?

Well, it turns out the real Obama is quite like the one we knew already. And what he most wants to do is nestle himself cozily within the bosom of the global elite, and earn millions from behind a thinly-veiled philanthropic facade.

In January, Obama launched his post-presidential foundation with a board that consists of private equity executives, lobbyists, and an Uber advisor, tasking it with implementing the world’s most meaningless mandate (“to inspire people globally to show up for the most important office in any democracy, that of citizen”). Able to choose his friends from among anyone in the world, Obama has been seen kitesurfing with Virgin Group founder Richard Branson (worth more than $5 billion) and brunching with Bono. (You can usually judge a person pretty well by their friends, and nobody who voluntarily spends his free time with Bono should be trusted.)

Obama’s recent forays into politics have also confirmed him as a friend to the elite. He used his last weeks in office to personally help derail the candidacy of left-wing congressman Keith Ellison for DNC chair. After Ellison became an early favorite in the race, Obama used his influence to recruit and boost the more centrist and less controversial Tom Perez, who won after a series of vile smears were launched against Ellison by influential party donors.

Obama also extended his influence overseas. Ahead of the first round of voting, he effectively endorsed French presidential candidate Emmanuel Macron, a former investment banker who “wants to roll back state intervention in the economy, cut public-sector jobs, and reduce taxes on business and the ultra-rich.” (Macron also once responded to a union worker who needled him over his fancy suits by declaring that fancy suits accrue to those who work the hardest, an assertion that is manifestly false.)

Then there were the speeches. In December, conservative commentator Andrew Sullivan, asked what Obama should do with his post-presidency, had jokingly pleaded: “No speeches at Goldman Sachs, please.” After all, Hillary Clinton’s Wall Street speeches had become the ultimate symbol of Democratic hypocrisy, a clear demonstration of how those who profess to oppose inequality will happily reap financial benefits from it. For Sullivan, it was laughable to think that a man like Obama, who maintained a public image characterized by modesty and personal integrity, would instantly lapse into the tawdry and unscrupulous Clinton practice of cashing in.

But then Obama cashed in. Mere weeks after leaving 1600 Pennsylvania Avenue, he signed on with the Harry Walker Agency (the very same outfit through which the Clintons have jointly pocketed a virtually incomprehensible $158 million on the speaker’s circuit). It was then revealed that he had been paid a whopping $400,000 fee by Cantor Fitzgerald, a bond firm that deals in credit default swaps, the inscrutable instruments of financial alchemy that helped cause the 2008 financial meltdown. (After that came news of another $400,000 speaking fee.)

At the first sign of backlash against Obama’s pursuit of riches, media and political elites unleashed a torrent of toadyism in his defense. After expressing faint concern about Obama’s speaking fees, Amanda Marcotte chastised “people who’ve never had money worries” for casting judgment on “those who have,” elsewhere complaining: “The obsession with speaking fees is politics version of begrudging athlete salaries while ignoring owner profits” (an analogy that only holds up if Obama literally works for Wall Street). The Boston Globe’s Michael Cohen added: “If someone wants to pay Barack Obama $400,000 to give a speech I can’t think of a single reason why he shouldn’t take it…Obama is not doing anything wrong. He’s giving a speech. Nothing to apologize for.” It seemed that American liberalism’s eight-year journey from “Change We Can Believe In” to “Everybody Grifts…” was finally complete. (There is a fun game one can play with ideologically-committed Democrats that we might call “Rationalize That Injustice.” See if there are any right-wing policies that they won’t justify if told that Obama did them.)

Certain defenses of Obama opted for an explicitly racial framework. The Daily Show’s Trevor Noah exclaimed “So the first black president must also be the first one to not take money afterwards? Fuck that, and fuck you!” April Reign, creator of the viral hashtag #OscarsSoWhite, equated Obama’s critics with defenders of the slave trade. Attorney Imani Gandy, who litigated foreclosure cases on behalf of J.P. Morgan before becoming a prominent social justice activist on Twitter, seized upon the controversy to call antipathy towards Wall Street “the whitest shit I’ve ever heard.” This particular line of argumentation almost defied credulity, especially since critics of Obama’s speaking fees were simply extending a criticism originally applied to Bill and Hillary Clinton.

Obama and Branson enjoy the ocean together.

But while certain rationalizations of Obama’s conduct have ventured into burlesque satire, it is worth taking Michael Cohen’s question seriously: what’s so wrong with Obama doing a speech for money? He speaks, they pay, nobody gets hurt. What’s the actual harm? Since Obama isn’t actually in a position to give Wall Street any political favors, and since he’s a private citizen, why should it matter? Indeed, Debbie Wasserman Schultz told those who might be upset by the speech to “mind their own business.”

Well, first, there are some basic issues of personal ethics involved in post-presidential buckraking. There is something tawdry about immediately leaving office to go and make piles of money in any way you can, and it’s a short hop from doing your inspirational speaking schtick for corporate events to doing it in television commercials or at birthday parties for investment bankers’ teenage children. That’s why Harry Truman famously refused to serve on corporate boards, declaring that doing so would be undignified. (“I could never lend myself to any transaction, however respectable, that would commercialize on the prestige and dignity of the office of the presidency.”) And those who think Obama is being held to an impossible standard (that impossible “do good things rather than simply lucrative things” standard) should remember that Jimmy Carter has spent a productive and comparatively modest retirement writing, campaigning for the basic dignity of Palestinians, and quite regularly intervening to criticize American policy at home and abroad.

Some have said that as a “private citizen,” Obama’s choices of how to make money should be beyond moral scrutiny. But it’s private citizens who could use a lot more moral scrutiny. Obama’s choosing to become a mansion-dwelling millionaire is not wrong because he used to be the president, but because being exorbitantly rich in a time of great global poverty is heinously immoral. Moreover it defies credulity to suggest, as some have in earnest, that Obama needs to take money from this particular source. He is already guaranteed a lavish annual pension of more than $200,000 in addition to expenses and almost $400,000 in further pension money accrued from his time as an Illinois State Senator. He and the former First Lady have just signed the most sumptuous post-presidential book deal in history (worth $65 million, or almost 1500 times the median personal income) and will assuredly spend the next several decades enjoying a standard of material comfort few Americans have ever known, Wall Street speaking fees notwithstanding.

Finally, there’s the political hypocrisy. On the very same day as the infamous speech, Obama was elsewhere decrying the pernicious political influence of wealth, somberly declaring that “because of money and politics, special interests dominate the debates in Washington in ways that don’t match up with what the broad majority of Americans feel.” Obama’s public posture has always been that he resents the political influence of special interests and financial elites, yet those same interests have showered him with money, as both a political candidate and a private citizen, and he has been only too happy to accept it.


Yet Michael Cohen is also partially right: the speech itself is not actually terribly important. It’s a mistake to focus on the personal ethics of Obama’s actual decision, and if we frame the relevant question as “Should Obama have taken the money?” then it’s easy to lapse into something of a shrug. So the guy wants to get rich. Fine. He’s no worse than every other member of the 1%. They’re all indefensible, and as long as nobody maintains the illusion that Obama is any different from any other politician, there’s no reason to single him out as uniquely wicked. (One suspects, however, that some people do still maintain the illusion that Obama is different from other wealthy denizens of the political class.)

The most important aspect of the story is not that Obama accepted Cantor Fitzgerald’s offer, but that the offer was made in the first place. Indeed, it’s hard to escape the impression that certain powerful interests are now rewarding the former president with a gracious thanks for a job well done. Rather than asking whether Obama should have turned down the gig, we can ask: if his administration had taken aggressive legal and regulatory action against Wall Street firms following the financial crisis, would they be clamoring for him to speak and offering lucrative compensation mere weeks after his leaving office? It’s hard to think they would, and if a Democratic president has done their job properly, nobody on Wall Street should want to pay them a red cent in retirement. Obama’s decision to take Cantor Fitzgerald’s cash isn’t, therefore, some pivotal moment in which he betrayed his principles in the pursuit of lucre. It’s simply additional confirmation that he has never posed a serious challenge to Wall Street’s outsized economic power.

In fact, we’ve known that for as long as we’ve known Obama. He was popular on Wall Street back when he first ran for president. According to Politico, he “raised more money from Wall Street through the Democratic National Committee and his campaign account than any politician in American history,” and in just one year “raked in more cash from bank employees, hedge fund managers and financial services companies than all Republican candidates combined.”

Serious economic progressives did not become disillusioned with Obama when he accepted $400,000 for a speech, but when he arrived in office at the apex of the financial crisis and immediately stuffed his cabinet and advisory team with a coterie of alumni from Goldman Sachs (a top donor to his campaign in 2008). At the height of the worst financial catastrophe since the Great Depression, during a time of unique (and completely warranted) antipathy towards rapacious corporate interests, Obama had been elected with the single greatest mandate to implement sweeping change in recent political history. Handed the same kind of extraordinary political demand, FDR took the opportunity to proclaim that “The old enemies of peace: business and financial monopoly, speculation, reckless banking, class antagonism, sectionalism, war profiteering…they are unanimous in their hate for me — and I welcome their hatred.”

But when Obama was faced with a similar moment of calamity and possibility, he opted instead for the avenues of brokerage and appeasement. He chose not to push for criminal prosecutions of financial executives whose greed and negligence caused the 2008 economic crash. Back in 1999, Eric Holder, who would go on to become Obama’s Attorney General, had proposed the concept of “collateral consequences” (colloquially known as “too big to jail”), whereby “the state could pursue non-criminal alternatives for companies if they believed prosecuting them might result in too much ‘collateral’ damage” to the economy. Thus, when banking giant HSBC was revealed to be laundering billions of dollars for Mexican drug cartels and groups linked to al-Qaeda, Obama’s Justice Department allowed the bank to escape with a fine and no criminal charges, on the grounds that a prosecution might damage HSBC too much and have wider effects on the economy. Top prosecutors had evidence of serious wrongdoing by HSBC, but Holder prevented them from proceeding. A report prepared for the House Financial Services Committee concluded that Holder “overruled an internal recommendation by DOJ’s Asset Forfeiture and Money Laundering Section to prosecute HSBC because of DOJ leadership’s concern that prosecuting the bank would have serious adverse consequences on the financial system.” Yet Holder later falsely suggested that the decision was made by the prosecutors rather than by himself. (“Do you think that these very aggressive US attorneys I was proud to serve with would have not brought these cases if they had the ability?”) One should note just how unjust the “collateral consequences” idea is: it explicitly creates separate systems of justice for rich and poor, because there will always be more economic consequences to prosecuting major banking institutions than individual poor people. The same crime will therefore carry two different sets of consequences depending on how much you matter to the economy.

Holder also institutionalized the practice of extrajudicial settlements, under which “there was no longer any opportunity for judges or anyone else to check the power of the executive branch to hand out financial indulgences” to corporate offenders. Thus even as guilty pleas were extracted from banks and financiers for crimes ranging from fraud, manipulation, and bribery to money laundering and tax evasion, not a single malefactor from Wall Street ended up behind bars. (Meanwhile, America’s prisons remained full of less economically consequential people who had been convicted of the same crimes.)

Obama’s politics were the same when it came to policy-making. After several years of sustained corporate pushback, aided by both the White House and Congress, the much-touted Dodd-Frank law was whittled down to the status of a mild and extremely tenuous reform. A similar pattern inflected Obama’s signature legislative achievement, the now-precarious Affordable Care Act. While undoubtedly improving on the horrific status quo in American health care, Obamacare was notably soft on the insurance and pharmaceutical industries, both of which were extensively consulted during its composition. Far from being the Stalinist caricature of Tea Party fever dreams, Obamacare was based on a plan put in place by a Republican governor and sketched out by the Heritage Foundation in the early 1990s. No matter how much the American right may distort the record, Obamacare was essentially a massive corporate giveaway (after all, it mandated that millions of people become new insurance customers), and it manifestly failed to tackle the crux of the problem with US healthcare, which is that market actors are involved in the provision of health insurance to begin with. Obama arguably had the votes to create a public option that would have ameliorated matters somewhat, and that was without his ever making a serious attempt to exert political pressure in favor of one. But instead, he opted to needlessly compromise with the very corporate actors who stand between Americans and the guarantee of healthcare as a right.

This consistently pro-business approach has ensured that Obama isn’t the only member of his administration whom corporate America has showered with gratitude. For plenty of Obama’s top lieutenants, the revolving door between Wall Street and the corridors of the US government has kept spinning continuously. David Plouffe, Obama’s 2008 campaign manager and former senior advisor, now works for Uber. Press Secretary Robert Gibbs is executive vice-president at McDonald’s, lobbying hard against raising the minimum wage. Eric Holder, who had left the white-collar defense outfit Covington & Burling to become attorney general, returned in 2015 to once again represent many of the same banks and financial firms he had ostensibly been charged with regulating and prosecuting while in office. (Covington had literally been keeping Holder’s office waiting for him. “This is home for me,” Holder said of the corporate firm.) And having presided over massive bailouts during his tenure running the US Treasury, Timothy Geithner headed to Wall Street to take up a lucrative gig at private equity firm Warburg Pincus.

This is why Matthew Yglesias was wrong to characterize Barack Obama’s speaking fee as a betrayal of “everything [he] believes in.” In fact, it was the exact opposite: totally consistent with everything he has always stood for. The point isn’t that he’s “sold out.” It’s that, when the soaring cadences and luminous rhetoric are stripped away, Obama never offered any transformative change to begin with. Thus his $400,000 speech matters, not because it represents a deviation from the norm, or a venal lapse in personal ethics, but because it conveniently demonstrates a pattern that has been there all along.


In the Obama presidency, many liberals found the embodiment of their political ideal: an administration of capable, apparently well-intentioned people with impeccable Ivy League credentials, fronted by a person of undeniable charisma and charm, and with a beautiful and photogenic family to boot.

But examining Obama seriously requires acknowledging the fundamental limits of his brand of politics: a liberalism that continues to trade in the language of social concern while remaining invested in the very institutions undergirding the poverty and injustice it tells us it exists to fight; see, e.g., the upper-middle-class liberals who decry educational inequities while sending their own children to private schools. Like the Davos billionaires who “fret about inequality over vintage wine and canapés,” Obama denounces money in politics but can’t keep himself from taking it. And because he is so much a part of the very elite system whose effects he abhors, “Obamaism” was always destined to be a fundamentally empty and insincere philosophy.

Matt Taibbi issued a prescient assessment of Obama all the way back in 2007, when it was still unclear who would win the Democratic presidential primary:

“The Illinois Senator is the ultimate modern media creature—he’s a good-looking, youthful, smooth-talking, buttery-warm personality with an aw-shucks demeanor who exudes a seemingly impenetrable air of Harvard-crafted moral neutrality… His entire political persona is an ingeniously crafted human cipher, a man without race, ideology, geographic allegiances, or, indeed, sharp edges of any kind…[He appears] as a sort of ideological Universalist, one who spends a great deal of rhetorical energy showing that he recognizes the validity of all points of view…His political ideal is basically a rehash of the Blair-Clinton “third way” deal, an amalgam of Kennedy, Reagan, Clinton and the New Deal; he is aiming for the middle of the middle of the middle….In short, Obama is a creature perfectly in tune with the awesome corporate strivings of Hollywood, Madison avenue and the Beltway—he tries, and often succeeds, at selling a politics of seeking out the very center of where we already are, the very couch where we’ve been sitting all this time, as an exciting, revolutionary journey into the unknown.”

The real tragedy of the Obama story is that in 2008, millions of desperate Americans cast votes for a presidential candidate they believed would fight for meaningful change. He successfully marketed “hope” and “change” to a country that was reeling from a horrific financial collapse (his 2008 presidential run even won a “Marketing Campaign of the Year” award from the ad industry, beating out Apple and Zappos). But beneath it all was no serious vision of change; the grand speeches, paid and unpaid, turn out to contain little more than well-crafted platitudes. (Christopher Hitchens once pointed out that while everyone considered Obama a powerful and memorable speaker, nobody could ever seem to remember a single specific line from any of his orations, a good sign he’d in fact said nothing at all.) And as Obama biographer David Garrow concludes, “while the crucible of self-creation had produced an ironclad will, the vessel was hollow at its core.”

But Obama’s weaknesses are not the product of some unique personal pathology. He is simply the most charismatic and successful practitioner of an ideology shared by many contemporary Democrats: a kind of Beltway liberalism that sacrifices nearly all real political ambition, espousing a rhetoric of compassion and transformation while rationalizing every form of amorality and capitulation as a pragmatic necessity. In a moment when militancy and moral urgency are needed most, it seeks only innocuous, technocratic change and claims with the smuggest certitude that this represents the best grown adults can aspire to. In a world of spiraling inequality and ascendant corporate tyranny, it insists on weighting equally the interests of all sides and deems the result a respectable democratic consensus. Bearing witness to entrenched human misery, it wryly declares it was ever thus and delights in lazily dismissing critics with scornful refrains like “That will never get through Congress…” Confronted with risk or danger, it willingly retreats to ever more conservative ground and calls the sum total of these maneuvers “incrementalism.” In place of a coherent vision or a clear program of reform, the best it can offer is the hollow sensation of progress stripped of all its necessary conflicts and their corresponding discomforts.

One could see, in the defenses of Obama’s Wall Street speech, just how far this ideology narrows our sense of the possible: it tells us it is unrealistic and unfair to conceive of a president who does not shamelessly use the office to enrich himself. What passes for pragmatism is in fact the most dispiriting kind of capitalist pessimism: this is your world, you’re stuck with it, and it’s madness to dream of anything better. There Is No Alternative.

We can almost respect Hillary Clinton for embracing this idea openly, and barely even pretending to represent our most elevated selves rather than our most acquisitive ones. The cruelty Obama perpetrated was to encourage people to believe in something better, then give them nothing but a stylized status quo. At least now that he’s kitesurfing with billionaires and doing the Wall Street speaking tour, there’s no longer any reason to keep believing that underneath it all, he was a true idealist whose innermost desires were thwarted by crushing political realities. All along, his innermost desire was to meet Bono over eggs Benedict.

The Obama of 2008 was to be this century’s FDR, signifying a moment of lasting realignment and transcendent progress rather than one of growing alienation and despair culminating in the election of Donald Trump. But the liberalism of 21st century America, it turns out, is ill-equipped to achieve the transformative change it once so loftily promised: not because it made a noble attempt and failed but because it never really sought this change to begin with.

While Obama may not have been sincere, a great many of his voters were, and the millions who embraced his message revealed a genuine hunger for transformative change.

Now all we need is a political movement that actually seeks it out.