Blight At The Museum

The Smithsonian is supposed to be the people’s museum, but it tells corporate America’s version of history…

The Smithsonian has long carried a special virtuous sheen in the American imagination. It feels like one of our country’s few genuine projects for the common good. It was established out of the bequest of James Smithson, a wealthy British scientist who gave his estate to the young American nation in order to create an institution “for the increase and diffusion of knowledge.” In 1846, it became a trust administered by a special Board of Regents to be approved by the United States Congress. No other museum in the country has such an arrangement. And because its buildings line the National Mall, and admission is free, it has been regarded as something like the American people’s own special repository for knowledge. The Smithsonian helps define how America sees itself, and carries a weighty sense of dignity and neutrality.

It’s strange, then, that in certain parts of the Smithsonian, you may feel rather as if you’ve walked into the middle of a corporate sales pitch. When I visited the Smithsonian American History museum in December, for example, a “Mars Chocolate Demonstration” entitled “From Bean To Bar” was set up in a vestibule between exhibits. A half dozen people stood at a long table, showing how different stages in chocolate production worked. I had assumed they were docents until I noticed that most wore shirts embroidered with the Mars logo.

The lead presenter passed around a silicone model of a cacao pod, describing the process of growing the trees, explaining the role of hot chocolate in the American revolution, and telling us that the Aztecs used to consume only the white pulp that grows around the beans in the cacao pods. He informed us that nobody knows how the Aztecs discovered that the beans themselves had value, but offered a theory that they left the discarded beans by the fire, where they burned fragrantly. Then he passed around a bowl of roasted cacao nibs.

Later, I asked him whether he was a historian.

“I make M&Ms for a living,” he told me.

The demonstration was sponsored, I learned, by American Heritage Chocolate, a sub-brand of Mars that is sold exclusively at museums and historical sites. It is hard to critique a candy-making exhibit without seeming like a killjoy. But I don’t think it’s unreasonable to suggest that the Mars promotional demonstration has somewhat limited relevance to the core mission of the Museum of American History, or that having chocolatiers speculate about Aztec history is possibly below the expected Smithsonian standard of rigor. Having a chocolate-making demonstration is certainly a crowd-pleaser, and we did get free hot chocolate samples. But one cannot escape the suspicion that Mars, Inc. is using the Smithsonian to advertise chocolate to kids.


The chocolate exhibit is far from an isolated phenomenon. From the moment one arrives at the American History museum, in fact, its corporate sponsorship is evident. Much of the first floor is dedicated to the theme of “American Enterprise,” including the elaborate “Mars Hall of American Business.” The Hall of Business, sponsored by Monsanto, Altria (a.k.a. Philip Morris), S.C. Johnson, Intel, 3M, the United Soybean Board, and of course, Mars (among others) is intended to “convey the drama, breadth and diversity of America’s business heritage along with its benefits, failures and unanticipated consequences.” This kind of euphemistic, understated apologia is typical of the entire exhibition. American business may have experienced occasional “failures” and “unanticipated consequences,” but it has certainly never been guilty of, you know, “crimes,” or “wrongdoing.”

The Hall builds an entire history of the U.S. economy around “themes of opportunity, innovation, competition and the search for common good in the American marketplace.” (Note: not the search for profit.) When Mars first announced its multi-million dollar donation to build the Hall, the company’s president declared its intention to “provide examples of how U.S. companies and individuals have fundamentally, and positively, changed the way the world works and be a source of inspiration for future generations of business leaders and entrepreneurs.” The Smithsonian, in turn, promised to “provide visitors with a hands-on understanding of innovation, markets and business practice,” with activities including “choosing marketing campaigns for target audiences [and] making or losing simulated money through ‘trades.’” The museum also promised “larger personal and family stories featuring biographies of innovators and entrepreneurs.” (Note: not day laborers and shoe shiners.)


Thus, there was no pretense whatsoever that the exhibit would be neutral on the question of whether American capitalism had been good for the world. This was to be a celebratory showcase of business’s positive achievements. Innovation, growth, and entrepreneurship were the watchwords; anyone expecting a Hall of American Labor Struggles, about the grinding exploitation and violence perpetrated on American workers (from slavery to the Ludlow Massacre to contemporary Florida orange groves) was in for disappointment. The sponsorship of Altria and Monsanto ensures that the history of American economic development is the history of cotton gins and Cadillacs, rather than of child laborers in Kazakhstan producing Philip Morris cigarettes, or Monsanto selling Agent Orange to the Department of Defense.

The pro-business perspective of the exhibition is present in every aspect of its carefully euphemistic language. Here is how the Hall’s text summarizes the “Merchant Era” that lasted from the 1770s to the 1850s:

“During the Merchant Era, many people profited from abundant land and resources—mining gold, acquiring territory, and establishing factories. A market revolution disrupted everyday life and ways of doing businesses. Indian nations struggled with loss of their land, while other Americans faced changes in work and managing debt.”

Of course, the period under discussion is the peak American era of massacre, indenture, conquest, and slavery. But in the Smithsonian’s description, these are incidental and unfortunate side-developments, rather than the entire foundation of the economic successes of the “Merchant Era.” And to the extent these unsavory details intrude at all, they are sanitized with calculated synonyms and painstakingly exculpatory grammatical constructions. Land is not “stolen,” it is “acquired.”  Indian nations are not exiled, displaced, and killed; instead they “struggle with loss,” as if this were some sort of private bereavement, rather than a deliberate campaign of despoliation and extermination waged by enterprising settlers, speculators, and government agencies.

The subsequent “Corporate Era” from the 1860s to the 1930s—the age of Chinese railroad work, the Triangle Shirtwaist factory fire, robber barons, Pinkertons, the re-enslavement of blacks through Jim Crow prison labor, and the massacre of striking miners—is characterized in a similarly upbeat manner.

“During the Corporate Era, industrialization, national competition, and business expansion brought widespread economic growth and social change to the United States. This period also saw turbulence in the form of widespread immigration, financial panics, and confrontations between labor and management.”

The central characteristic of the era, then, is not the long hours worked by 12-year-olds in mines and textile mills. It is not the burning alive of 146 Jewish and Italian garment workers locked by their employers inside a high-rise sweatshop, nor is it the opulence of the Rockefellers, Vanderbilts, or Astors. Rather, it is a time of game-show values like “expansion” and “competition.” Corporations are bringers of Growth and Change, while immigrants and unions are lumped in with financial panics as bringers of Turbulence. It’s an era that “saw” confrontation, in some kind of relaxed, desultory, impersonal manner, not an era in which incredibly rich people did their absolute utmost (using legal and extra-legal methods) to keep incredibly poor people working long hours in unsafe conditions.


The story is expanded upon in the exhibition’s companion book, American Enterprise: A History of Business, which is largely edited and written by Smithsonian historians but also features chapters from such acclaimed historical scholars as Bill Ford of the Ford Motor Company, ex-Chevron VP Patricia Woertz, and S.C. Johnson CEO Fisk Johnson (as well as a single token labor organizer). These days, corporations don’t just sponsor the history; they even help write it.

None of this is to say that the Hall ignores the existence of slavery, labor struggles, and immigrant workers. The Smithsonian is keen to present the story of the “turbulence” that ran through American history, which includes a sympathetic contextualization of the lives of working people. But fundamentally, the “Hall of Business” treats American economic history as the story of businessmen rather than workers. It is a story of triumph, in which the railroads go west, the canals are dug, and exciting new kinds of marketing and advertising techniques are developed. The themes that predominate are precisely the themes emphasized by the Mars executive: America’s entrepreneurs and innovators are the heroes who built our country. In this telling, the bulk of our economic history is a story of triumph, rather than one of colonization and immiseration.

Elsewhere in the Museum of American History, the tone is similar. You will find exhibitions on “The Value of Money,” “Stories on Money,” and “The Price of Freedom.” Krispy Kreme, Nordic Ware, and Williams Sonoma all co-sponsor “Food: Transforming the American Table 1950-2000.” (To the “Food” exhibit’s credit, it does mention the United Farm Workers and point out that “many [have] raised questions about the long-term effects of mass production and consumerism, especially on the environment, health and workers.” These questions evidently have not caused the curators of the Smithsonian to lose any sleep, but it is considerate of them to acknowledge their existence.)

In the museum’s east wing, the General Motors Hall of Transportation houses an exhibit titled “America on the Move,” made possible with generous support from General Motors Corporation (along with AAA, State Farm Companies Foundation, the U.S. Department of Transportation, ExxonMobil, and others). A placard documents the history of public transit:

“In the early 1900s, streetcars and electric interurban systems helped fill the nation’s transportation needs. But over the next few decades the limitations of streetcar systems, government and corporate policies and actions, consumer choice, and the development of alternatives—especially the bus and car—helped make trolleys obsolete… Most important [sic], Americans chose another alternative—the automobile. The car became the commuter option of choice for those who could afford it, and more people could do so. In Washington, D.C., the last streetcar ran in 1962. In 2000, a public-transit authority runs an expansive bus service and operates a subway system. But as in most cities, the majority of D.C.-area residents prefer to drive alone in their cars from their homes to their workplaces.”

The language repeatedly emphasizes consumer choice as particularly important. Residents “prefer to drive alone” and the car became “the option of choice.” But consumers can only choose among the options that are provided to them. The implication here is that people don’t want good public transit; they want GM cars. We are thus led to infer that the people in Los Angeles, for example, prefer to sit in two hours of traffic to and from work, rather than be caught riding something as “obsolete” and déclassé as a trolley. But do they really have a choice? Driving may be preferable to the other existing transport options, but is it really preferable to a functional, far-reaching, and efficient public transit system? By emphasizing that the sovereign consumer has already made up her mind, the exhibit rationalizes the status quo. There is no indication in these descriptions that the world we live in, and the options that are available to us, could possibly have looked otherwise than they do. Though the Smithsonian exhibits pay lip-service to individual autonomy, the visitor nonetheless gets the sense that the historical development of American business was as inexorable and irreversible as the formation of our mountains and coastlines. We can explain the forces that brought them into being, but we don’t think of these forces as having any kind of agency, or moral responsibility. Economic “production” continues to be a black box of hidden suffering, while the entrepreneurial spirit is lionized as the highest form of civic virtue.


One commonly hears arguments for why funding doesn’t influence content. It was, after all, Hillary Clinton’s claim regarding her considerable Goldman Sachs speaking fees. It’s also the claim made by corporate-funded researchers. And in theory, in the absence of direct quid-pro-quo corruption, the funder could just hand over the cash and leave the institution/candidate/researcher with total freedom to say and do as they please.

But wandering through the Smithsonian Museum of American History, it’s hard to believe that this can be entirely true in practice. Throughout, one gets the vague sensation that the information being consumed has been subtly molded by its sponsors. The Smithsonian has certainly been plagued in the past by sponsorship-related controversies. In 2003, photographer Subhankar Banerjee debuted his exhibit “Seasons of Life and Land” at the Smithsonian’s National Museum of Natural History. His photos — of the Arctic National Wildlife Refuge in Alaska — were abruptly moved to the museum’s basement after Senator Barbara Boxer brought a photo of a polar bear from the exhibit to the Senate floor to bolster an argument against Arctic drilling. The same museum paid for its Hall of Human Origins with a $15 million grant from David Koch. The exhibit strongly implies that climate change may not be man-made, and reminds visitors that the earth is cooler now than it was ten thousand years ago. According to the New Yorker’s Jane Mayer, a game at the exhibit suggested that humans could simply evolve to deal with climate change by building “underground cities” and developing “short, compact bodies” or “curved spines,” so that “moving around in tight spaces will be no problem.”

In some cases, corporate influence on informational presentation is direct and obvious, as in the S.C. Johnson CEO’s co-authorship of a Smithsonian book. But elsewhere the effect is more subtle; a phrase swapped in here, the use of passive voice there, and the strategic withholding of anything that might lead the public to demand a change in policy, or to abhor the actions of the ruling class. As I went through the museum, I felt confused and paranoid, not because I felt as if all of the facts were being manipulated to serve an agenda, but because I couldn’t tell which ones were being manipulated. 

That’s what should be concerning. Corporate sponsorship may only have a limited effect on museum content. Yet any effect at all erodes confidence in the museum’s status as a reliable guardian of fact. It’s understandable why chronically underfunded museums would turn to whatever revenue streams they can come by. But a museum sponsored by Mars and Monsanto cannot tell the full truth. To let American history be written by its corporations is to give preferential voice to the economy’s winners and profiteers, and to downplay or excuse the injustices inflicted upon its underclass.

Illustrations by Pranas Naujokaitis.

How The Economist Thinks

Is it fair to trash The Economist? You bet it is.

Current Affairs is well-known for its signature “Death to The Economist” bumper stickers, which have greatly improved the expressive capacities of the American motorist when it comes to demonstrating a discerning taste in periodicals. But, occasionally, members of the public send us adverse feedback on our vehicular adhesive strips. “What,” they ask, “is your problem with The Economist? Why be so rude? How can you wish death upon a perfectly innocuous and respectable British political magazine?” Current Affairs, it is said, is behaving badly. We are being unfair.

It’s true that death is an extreme consequence to wish on another magazine, even if the magazine in question is The Economist. And sometimes I do wonder whether the sentiment goes a bit too far, whether it would be more fair to wish something like “a minor drop in circulation” or “a financially burdensome libel suit” on our London competitor.

But then I remember what The Economist actually is, and what it stands for, and what it writes. And I realize that death is the only option. A just world would not have The Economist in it, and the death of The Economist is thus an indispensable precondition for the creation of a just world.  

In his deliciously biting 1991 examination of the role of The Economist in shaping American elite opinion, James Fallows tried to figure out exactly what was so repellent about the magazine’s approach to the seeking of truth. Fallows puzzled over the fact that American intellectuals hold a disproportionate amount of respect for The Economist’s judgment and reporting, even though the magazine is produced by imperious 20-something Oxbridge graduates who generally know little about the subjects on which they so confidently opine. Fallows suggested that The Economist’s outsized reputation in the U.S. was partially attributable to Americans’ lingering colonial insecurity, their ongoing belief that despite all evidence to the contrary, British people are inherently intelligent and trustworthy. Fallows even dug up an embarrassingly snooty quote from Robert Reich, boasting about his sensible preference for British news: “I, for one, don’t get my economics news from Newsweek. I rely on The Economist — published in London.”


But the most damning case put by Fallows is not that The Economist is snobbish and preys on the intellectual self-doubt of Americans through its tone of Oxonian omniscience. (Though it is, and it does.) Fallows also reveals the core flaw of the magazine’s actual reportage: thanks to its reflexive belief in the superiority of free markets, it is an unreliable guide to the subjects on which it reports. Because its writers will bend the truth in order to defend capitalism, you can’t actually trust what you read in The Economist. And since journalism you can’t trust is worthless, The Economist is worthless.

Fallows gives an example of how reality gets filtered as it passes through the magazine and reaches The Economist’s readers:

Last summer, a government man who helps make international economic policy told me (with a thoughtful expression) he was reading “quite an interesting new book” about the stunning economic rise of East Asia. “The intriguing thing is, it shows that market forces really were the explanation!” he exclaimed in delight. “Industrial policies and government tinkering didn’t matter that much.” By chance, I had just read the very book—Governing the Market by Robert Wade. This detailed study, citing heaps of evidence, had in fact concluded nearly the opposite: that East Asian governments had tinkered plenty, directly benefiting industry far beyond anything “market forces” could have done. I knew something else about the book: The Economist magazine had just reviewed it and mischaracterized its message almost exactly the way the government official had. Had he actually read the book? Maybe, but somehow I have my doubts… The crucial paragraph of The Economist review—the one that convinced my friend the official, and presumably tens of thousands of other readers, that Wade’s years of research supported the magazine’s preexisting world view—was this: “The [Asian] dragons differed from other developing countries in avoiding distortions to exchange rates and other key prices, as much as in their style of intervening. Intervention is part of the story—but perhaps the smaller part. That being so, Mr. Wade’s prescriptions seem unduly heavy on intervention, and unduly light on getting prices right.” These few lines are a marvel of Oxbridge glibness, and they deserve lapidary study. Notice the all-important word “perhaps.” Without the slightest hint of evidence, it serves to dismiss everything Wade has painstakingly argued in the book. It clears the way for: “That being so . . . ” What being so? That someone who has Taken a First [at Oxbridge] can wave off the book’s argument with “perhaps”?

Here, then, is the problem with the magazine: readers are consistently given the impression, regardless of whether it is true, that unrestricted free market capitalism is a Thoroughly Good Thing, and that sensible and pragmatic British intellectuals have vouched for this position. The nuances are erased, reality is fudged, and The Economist helps its American readers pretend to have read books by telling them things that the books don’t actually say.

Now, you may think that Fallows’ example tells us very little. It was, after all, one small incident. He spoke to one man, who had gotten one wrong impression from one faulty Economist review. Perhaps we were dealing with an exceptional case. Perhaps Fallows was singling out a rare blemish on the magazine’s otherwise-stellar reportage and reviews.

Let me, then, add a data point of my own. Until last week, I had not read The Economist since high school, where debate nerds subscribed to it in order to quote it to each other and prove themselves informed and worldly. But a few days ago, I was trying to compile a list of news outlets that Current Affairs staff should regularly glance at, in order to make sure we are considering a broad and ecumenical set of perspectives on contemporary geopolitics. I remembered Current Affairs’ ostensible rivalry with The Economist, and thought it might be a good idea to at least read the damn thing if we’re going to be selling bumper stickers calling for its execution. I am nothing if not open-minded and fair.


What, then, did I find upon navigating over to The Economist’s website? The very first article on the page was a piece called “A selective scourge: Inside the opioid epidemic,” subtitled “Deaths from the drugs say more about markets than about white despair.” Its theme is classic Economist: the American opioid epidemic is not occurring because global capitalism is ruining lives, but is the tragic outcome of the operation of people’s individual preferences. A quote:

It has even been argued that the opioid epidemic and the Trump vote in 2016 are branches of the same tree. Anne Case and Angus Deaton, both economists at Princeton University, roll opioid deaths together with alcohol poisonings and suicides into a measure they call “deaths of despair”. White working-class folk feel particular anguish, they explain, having suffered wrenching economic and social change. As an explanation for the broad trend, that might be right. Looked at more closely, though, the terrifying rise in opioid deaths in the past few years seems to have less to do with white working-class despair and more to do with changing drug markets. Distinct criminal networks and local drug cultures largely explain why some parts of America are suffering more than others.

Twenty-five years after Fallows wrote his Economist takedown, not a single thing has changed. The 1991 Economist used the meaningless phrase “that being so” to dismiss an author’s entire argument and conclude that markets should be left alone. The 2017 Economist concedes that “as an explanation for the broad trend,” economic despair “might be right,” but that “looked at more closely,” drug deaths are not about despair. “Looked at more closely” functions here the same way that “that being so” did: it concedes the point, but then pretends it hasn’t. After all, if despair might be the correct “explanation for the broad trend,” what does it mean to say that “looked at more closely” the trend isn’t the result of despair at all? It’s either an explanation or it isn’t, and if it doesn’t hold when “looked at more closely,” then it wouldn’t be “right” as an explanation for the broad trend.

What happens when The Economist looks at opioid deaths “more closely” is simple obfuscation. The magazine shows that opioid use looks different in different parts of the United States, because the drugs themselves differ. For example, when it comes to heroin, “Addicts west of the Mississippi mostly use Mexican brown-powder or black-tar heroin, which is sticky and viscous, whereas eastern users favour Colombian white-powder heroin.” Note the subtle invocation of “free choice” language: heroin users in the Eastern United States “favour” Colombian heroin. It’s not just that this happens to be the available form of the drug; it’s also that they have a kind of rational preference for a particular form of heroin. Every subtle rhetorical step is toward exonerating capitalism for people’s suffering, and blaming the people and their own foolish choices within a free and fair marketplace.

The Economist’s article on the opioid epidemic offers some legitimately interesting observations about regional variation in types of drug use. Increases in deaths have been concentrated more heavily in places where drugs are available in easier-to-ingest forms. The trouble is that The Economist argues that this implies the idea in the article’s subtitle, that deaths from drugs “say more about markets than white despair.” That’s just a conclusion that doesn’t follow from the provided evidence. The magazine’s own charts show that drug use of all kinds has been rising, meaning that the differences between usage types can’t account for the broad trend. The drug type differences can tell us why different places may experience differing levels of rises in opiate deaths, but they can’t tell us why so many people are now drugging themselves who weren’t before. And we can’t answer that question without considering economic class; opiate addiction has disproportionately risen among poor white people, meaning we have to find a way to understand what specific race- and poverty-correlated factors are causing the change.

The Economist is not, therefore, an honest examiner of the facts. It is constantly at pains not to risk conclusions that may hurt the case for unregulated markets. This tendency reached its absurd apotheosis in the magazine’s infamous 2014 review of Edward Baptist’s The Half Has Never Been Told: Slavery and the Making of American Capitalism. The magazine objected to Baptist’s brutal depiction of the slave trade, saying the book did not qualify as “an objective history of slavery” because “almost all the blacks in his book are victims, almost all the whites villains.” When outraged readers pointed out that this is because, well, the victims of slavery tended to be black, The Economist retracted the review. But as Baptist observed in response, there was a reason why the magazine felt the need to mitigate the evils of slavery. Baptist’s book portrayed slavery as an integral part of the history of capitalism. As he wrote: “If slavery was profitable—and it was—then it creates an unforgiving paradox for the moral authority of markets—and market fundamentalists. What else, today, might be immoral and yet profitable?” Baptist’s work had unsettling implications for The Economist. It would damn the foundations of the very Western free enterprise system that the magazine is devoted to championing. Thus The Economist needed to find a way to soften its verdict on slavery. (It was not the first time they had done so, either. In a tepid review of Greg Grandin’s The Empire of Necessity with the hilariously offensive title of “Slavery: Not Black or White,” the magazine lamented that “the horrors in Mr Grandin’s history are unrelenting.” And the magazine’s long tradition of defending misery stretches back to the 19th century, when it blamed the Irish potato famine on irresponsible decisions made by destitute peasants.)

Why, then, have a “Death to The Economist” bumper sticker? Because The Economist would justify any horror perpetrated in the name of the market and Western Enlightenment values, even to the extent of rationalizing the original great and brutal crime on which our prosperity was founded. Its tone, as Fallows observed, is one “so cocksure of its rightness and superiority that it would be a shame to freight it with mere fact.” And the problem with that is not that The Economist is cocksure (I of all people should have no objection to cocksureness in periodicals), but that it doesn’t wish to be freighted with inconvenient truths. The fact that The Economist has a clear set of ideological commitments means that it will pull the wool over its readers’ eyes in the service of those commitments, which saps it of intellectual worth. It will lie to you about the contents of a book by waving them away with a “that being so.” Or it will reassure you that capitalism has nothing to do with opiate deaths, by asserting without evidence that when “looked at more closely,” drug addiction is “less” about despair. It will fudge, fumble, and fool you in any way it can, if it means keeping markets respectable. And it will play on your insecurity as a resident of a former British colony to convince you that all intelligent people believe that the human misery created in “economically free” societies is necessary and just. It will give intellectual cover to barbarous crimes, and its authors won’t even have the guts to sign their names to their work. Instead, they will pretend to be the disembodied voice of God, whispering in your ear that you’ll never impress England until you fully deregulate capitalism.

So, then: Death to slavery. Death to injustice. Death to The Economist.

The Dangerous Academic is an Extinct Species

If these ever existed at all, they are now deader than dodos…

It was curiosity, not stupidity, that killed the dodo. For too long, we have held to the unfair myth that the flightless Mauritian bird became extinct because it was too dumb to understand that it was being killed. But as Stefan Pociask points out in “What Happened to the Last Dodo Bird?”, the dodo was driven into extinction partly because of its desire to learn more about a new, taller, two-legged creature who disembarked onto the shores of its native habitat: “Fearless curiosity, rather than stupidity, is a more fitting description of their behavior.”

Curiosity does have a tendency to get you killed. The truly fearless don’t last long, and the birds who go out in search of new knowledge are inevitably the first ones to get plucked. It’s always safer to stay close to the nest.

Contrary to what capitalism’s mythologizers would have you believe, the contemporary world does not heap its rewards on those with the most creativity and courage. In fact, at every stage of life, those who venture beyond the safe boundaries of expectation are ruthlessly culled. If you’re a black kid who tends to talk back and call bullshit on your teachers, you will be sent to a special school. If you’re a transgender teenager like Leelah Alcorn in Ohio, and you unapologetically defy gender norms, they’ll make you so miserable that you kill yourself. If you’re Eric Garner, and you tell the police where they can stick their B.S. “loose cigarette” tax, they will promptly choke you to death. Conformists, on the other hand, usually do pretty well for themselves. Follow the rules, tell people what they want to hear, and you’ll come out just fine.

Becoming a successful academic requires one hell of a lot of ass-kissing and up-sucking. You have to flatter and impress. The very act of applying to graduate school to begin with is an exercise in servility: please deem me worthy of your favor. In order to rise through the ranks, you have to convince people of your intelligence and acceptability, which means basing everything you do on a concern for what other people think. If ever you find that your conclusions would make your superiors despise you (say, for example, if you realized that much of what they wrote was utter irredeemable manure), you face a choice: conceal your true self or be permanently consigned to the margins.

The idea of a “dangerous” academic is therefore somewhat self-contradictory to begin with. The academy could, potentially, be a place for unfettered intellectual daring. But the most daring and curious people don’t end up in the academy at all. These days, they’ve probably gone off and done something more interesting, something that involves a little bit less deference to convention and detachment from the material world. We can even see this in the cultural archetype of the Professor. The Professor is always a slightly harrumphy—and always white and male—individual, with scuffed shoes and jackets with leather elbows, hidden behind a mass of seemingly disorganized books. He is brilliant but inaccessible, and if not effeminate, certainly effete. But bouncing with ideas, so many ideas. There is nothing particularly menacing about such a figure, certainly nothing that might seriously threaten the existing arrangements of society. Of ideas he has plenty. Of truly dangerous ones, none at all.

If anything, the university has only gotten less dangerous in recent years. Campuses like Berkeley were once centers of political dissent. There was open confrontation between students and the state. In May of 1970, the Ohio National Guard killed four students at Kent State. Ten days later, police at the historically black Jackson State University fired into a crowd of students, killing two. At Cornell in 1969, armed black students took over the student union building in a demand for recognition and reform, part of a pattern of serious upheaval.

But over the years the university became corporatized. It became a job training center rather than an educational institution. Academic research became progressively more specialized, narrow, technical, and obscure. (The most successful scholarship is that which seems to be engaged with serious social questions, but does not actually reach any conclusions that would force the Professor to leave his office.)


The ideas that do get produced have also become more inaccessible, with research inevitably cloaked behind the paywalls of journals that cost astronomical sums of money. At the cheaper end, the journal Cultural Studies charges individuals $201 for just the print edition, and charges institutions $1,078 for just the online edition. The science journal Biochimica et Biophysica Acta costs $20,000, which makes Cultural Studies look like a bargain. (What makes the pricing especially egregious is that these journals are created mostly with free labor, as academics who produce articles are almost never paid for them.) Ideas in the modern university are not free and available to all. They are in fact tethered to a vast academic industrial complex, where giant publishing houses like Elsevier make massive profits off the backs of researchers.

Furthermore, the academics who produce those ideas aren’t exactly at liberty to think and do as they please. The overwhelming “adjunctification” of the university has meant that approximately 76% of professors… aren’t professors at all, but underpaid and overworked adjuncts, lecturers, and assistants. And while conditions for adjuncts are slowly improving, especially through more widespread unionization, their place in the university is permanently unstable. This means that no adjunct can afford to seriously offend. To make matters worse, adjuncts rely heavily on student evaluations to keep their positions, meaning that their classrooms cannot be places to heavily contest or challenge students’ politics. Instructors could literally lose their jobs over even the appearance of impropriety. One false step—a video seen as too salacious, or a political opinion held as oppressive—could be the end of a career. An adjunct must always be docile and polite.

All of this means that university faculty are less and less likely to threaten any aspect of the existing social or political system. Their jobs are constantly on the line, so there’s a professional risk in upsetting the status quo. But even if their jobs were safe, the corporatized university would still produce mostly banal ideas, thanks to the sycophancy-generating structure of the academic meritocracy. But even if truly novel and consequential ideas were being produced, they would be locked away behind extortionate paywalls.

The corporatized university also ends up producing the corporatized student. Students worry about doing anything that may threaten their job prospects. Consequently, acts of dissent have become steadily de-radicalized. On campuses these days, outrage and anger are reserved for questions like, “Is this sushi an act of cultural appropriation?” When student activists do propose ways to “radically” reform the university, it tends to involve adding new administrative offices and bureaucratic procedures, i.e. strengthening the existing structure of the university rather than democratizing it. Instead of demanding an increase in the power of students, campus workers, and the untenured, activists tend to push for symbolic measures that universities happily embrace, since they do not compromise the existing arrangement of administrative and faculty power.

It’s amusing, then, that conservatives have long been so paranoid about the threat posed by U.S. college campuses. The American right has an ongoing fear of supposedly arch-leftist professors brainwashing nubile and impressionable young minds into following sinister leftist dictates. Since massively popular books like Roger Kimball’s 1990 Tenured Radicals and Dinesh D’Souza’s 1991 Illiberal Education: The Politics of Race and Sex on Campus, colleges have been seen as hotbeds of Marxist indoctrination that threaten the civilized order. This is a laughable idea, for the simple reason that academics are the very opposite of revolutionaries: they intentionally speak to minuscule audiences rather than the masses (on campus, to speak of a “popular” book is to deploy a term of faint disdain) and they are fundamentally concerned with preserving the security and stability of their own position. This makes them deeply conservative in their day-to-day acts, regardless of what may come out of their mouths. (See the truly pitiful lack of support among Harvard faculty when the university’s dining hall workers went on strike for slightly higher wages. Most of the “tenured radicals” couldn’t even be bothered to sign a petition supporting the workers, let alone march in the streets.)

But left-wing academics are all too happy to embrace the conservatives’ ludicrous idea of professors as subversives. This is because it reassures them that they are, in fact, consequential, that they are effectively opposing right-wing ideas, and that they need not question their own role. The “professor-as-revolutionary” caricature serves both the caricaturist and the professor. Conservatives can remain convinced that students abandon conservative ideas because they are being manipulated, rather than because reading books and learning things makes it more difficult to maintain right-wing prejudices. And liberal professors get to delude themselves into believing they are affecting something.


Today, in what many call “Trump’s America,” the idea of universities as sites of “resistance” has been renewed on both the left and right. At the end of 2016, Turning Point USA, a conservative youth group, created a website called Professor Watchlist, which set about listing academics it considered dangerously leftist. The goal, stated on the Turning Point site, is “to expose and document college professors who discriminate against conservative students and advance leftist propaganda in the classroom.”

Some on the left are delusional enough to think that professors as a class can and should be presenting a united front against conservatism. At a recent University of Chicago event, a document was passed around titled “A Call to Professors, Students and All in Academia,” calling on people to “Make the University a Zone of Resistance to the Fascist Trump Regime and the Coming Assault on the Academy.”

Many among the professorial class seem to want to do exactly this, seeing themselves as part of the intellectual vanguard that will serve as a bulwark against Trumpism. George Yancy, a professor of philosophy and race studies at Emory University, wrote an op-ed in the New York Times, titled “I Am A Dangerous Professor.” Yancy discussed his own inclusion on the Professor Watchlist, before arguing that he is, in fact, dangerous:

“In my courses, which the watchlist would like to flag as ‘un-American’ and as ‘leftist propaganda,’ I refuse to entertain my students with mummified ideas and abstract forms of philosophical self-stimulation. What leaves their hands is always philosophically alive, vibrant and filled with urgency. I want them to engage in the process of freeing ideas, freeing their philosophical imaginations. I want them to lose sleep over the pain and suffering of so many lives that many of us deem disposable. I want them to become conceptually unhinged, to leave my classes discontented and maladjusted…Bear in mind that it was in 1963 that the Rev. Dr. Martin Luther King, Jr. raised his voice and said: ‘I say very honestly that I never intend to become adjusted to segregation and discrimination.’… I refuse to remain silent in the face of racism, its subtle and systemic structure. I refuse to remain silent in the face of patriarchal and sexist hegemony and the denigration of women’s bodies.”

He ends with the words:

“Well, if it is dangerous to teach my students to love their neighbors, to think and rethink constructively and ethically about who their neighbors are, and how they have been taught to see themselves as disconnected and neoliberal subjects, then, yes, I am dangerous, and what I teach is dangerous.”

Of course, it’s not dangerous at all to teach students to “love their neighbors,” and Yancy knows this. He wants to simultaneously possess and devour his cake: he is doing nothing that anyone could possibly object to, yet he is also attempting to rouse his students to overthrow the patriarchy. He suggests that his work is so uncontroversial that conservatives are silly to fear it (he’s just teaching students to think!), but also places himself in the tradition of Martin Luther King, Jr., who was trying to radically alter the existing social order. His teaching can be revolutionary enough to justify Yancy spending time as a philosophy professor during the age of Trump, but benign enough for the Professor Watchlist to be an act of baseless paranoia.

Much of the revolutionary academic resistance to Trump seems to consist of spending a greater amount of time on Twitter. Consider the case of George Ciccariello-Maher, a political scientist at Drexel University who specializes in Venezuela. In December of 2016, Ciccariello-Maher became a minor cause célèbre on the left after getting embroiled in a flap over a tweet. On Christmas Eve, for reasons known only to himself, Ciccariello-Maher tweeted “All I Want for Christmas is White Genocide.” Conservatives became enraged, and began calling upon Drexel to fire him. Ciccariello-Maher insisted he had been engaged in satire, although nobody could understand what the joke was intended to be, or what the tweet even meant in the first place. After Drexel disowned Ciccariello-Maher’s words, a petition was launched in his defense. Soon, Ciccariello-Maher had lawyered up, Drexel confirmed that his job was safe, and the whole kerfuffle was over before the nation’s half-eaten leftover Christmas turkeys had been served up into sandwiches and casseroles.

Ciccariello-Maher continues to spend a great deal of time on Twitter, where he frequently issues macho tributes to violent political struggle, and postures as a revolutionary. But despite his temporary status as a martyr for the cause of academic freedom, one who terrifies the reactionaries, there was nothing dangerous about his act. He hadn’t really stirred up a hornet’s nest; after all, people who poke actual hornets’ nests occasionally get stung. A more apt analogy is that he had gone to the zoo to tap on the glass in the reptile house, or to throw twigs at some tired crocodiles in a concrete pool. (When they turned their rheumy eyes upon him, he ran from the fence, screaming that dangerous predators were after him.) U.S. academics who fancy themselves involved in revolutionary political struggles are trivializing the risks faced by actual political dissidents around the world, including the hundreds of environmental activists who have been murdered globally for their efforts to protect indigenous land.


Of course, it’s true that there are still some subversive ideas on university campuses, and some true existing threats to academic and student freedom. Many of them have to do with Israel or labor organizing. In 2014, Steven Salaita was fired from a tenured position at the University of Illinois for tweets he had made about Israel. (After a protracted lawsuit, Salaita eventually reached a settlement with the university.) Fordham University tried to ban a Students for Justice in Palestine group, and the University of California Board of Regents attempted to introduce a speech code that would have punished much criticism of Israel as “hate speech.” The test of whether your ideas are actually dangerous is whether you are rewarded or punished for expressing them.

In fact, in terms of danger posed to the world, the corporatized university may itself be more dangerous than any of the ideas that come out of it.

In Hyde Park, where I live, the University of Chicago seems ancient and venerable at first glance. Its Ye Olde Kinda Sorta Englande architecture, built in 1890 to resemble Oxbridge, could almost pass for medieval if one walked through it at dusk. But the institution is in fact deeply modern, and like Columbia University in New York, it has slowly absorbed the surrounding neighborhood, slicing into older residential areas and displacing residents in landgrab operations. Despite being home to one of the world’s most prestigious medical and research schools, the university refused for many years to open a trauma center to serve the city’s South Side, which had been without access to trauma care. (The school only relented in 2015, after a long history of protests.) The university ferociously guards its myriad assets with armed guards on the street corners, and enacts massive surveillance on local residents (the university-owned cinema insists on examining bags for weapons and food, a practice I have personally experienced being selectively conducted in a racially discriminatory manner). In the university’s rapacious takeover of the surrounding neighborhood, and its treatment of local residents—most of whom are of color—we can see what happens when a university becomes a corporation rather than a community institution. Devouring everything in the pursuit of limitless expansion, it swallows up whole towns.

The corporatized university, like corporations generally, is an uncontrollable behemoth, absorbing greater and greater quantities of capital and human lives, and churning out little of long-term social value. Thus Yale University needlessly decided to open a new campus in Singapore despite the country’s human rights record and restrictions on political speech, and New York University needlessly expanded to Abu Dhabi, its new UAE campus built by low-wage workers under brutally repressive conditions. The corporatized university serves nobody and nothing except its own infinite growth. Students are indebted, professors lose job security, surrounding communities are surveilled and displaced. That is something dangerous.

Left professors almost certainly sense this. They see themselves disappearing, the campus becoming a steadily more stifling environment. Posturing as a macho revolutionary is, like all displays of machismo, driven partly by a desperate fear of one’s own impotence. They know they are not dangerous, but they are happy to play into the conservative stereotype. But the “dangerous academic” is like the dodo in 1659, a decade before its final sighting and extinction: almost nonexistent. And the more universities become like corporations, the fewer of these unique birds will be left. Curiosity kills, and those who truly threaten the inexorable logic of the neoliberal university are likely to end up extinct.

Illustrations by Chris Matthews.

How Liberals Fell In Love With The West Wing

Aaron Sorkin’s political drama shows everything wrong with the Democratic worldview…

In the history of prestige TV, few dramas have had quite the cultural staying power of Aaron Sorkin’s The West Wing.

Set during the two terms of fictional Democratic President and Nobel Laureate in Economics Josiah “Jed” Bartlet (Martin Sheen), the show depicts the inner workings of a sympathetic liberal administration grappling with the daily exigencies of governing. Every procedure and protocol, every piece of political brokerage—from State of the Union addresses to legislative tugs of war to Supreme Court appointments—is recreated with an aesthetic authenticity enabled by ample production values (a single episode reportedly cost almost $3 million to produce) and rendered with a dramatic flair that stylizes all the bureaucratic banality of modern governance.

Nearly the same, of course, might be said for other glossy political dramas such as Netflix’s House of Cards or Scandal. But The West Wing aspires to more than simply visual verisimilitude. Breaking with the cynicism or amoralism characteristic of many dramas about politics, it offers a vision of political institutions which is ultimately affirmative and approving. What we see throughout its seven seasons are Democrats governing as Democrats imagine they govern, with the Bartlet Administration standing in for liberalism as liberalism understands itself.

More than simply a fictional account of an idealized liberal presidency, then, The West Wing is an elaborate fantasia founded upon the shibboleths that sustain Beltway liberalism and the milieu that produced them.

“Ginger, get the popcorn

The filibuster is in

I’m Toby Ziegler with The Drop In

What Kind of Day Has It Been?

It’s Lin, speaking the truth”

—Lin-Manuel Miranda, “What’s Next?”

During its run from 1999 to 2006, The West Wing garnered immense popularity and attention, capturing three Golden Globe Awards and 26 Emmys and building a devout fanbase among Democratic partisans, Beltway acolytes, and people of the liberal-ish persuasion the world over. Since its finale more than a decade ago, it has become an essential part of the liberal cultural ecosystem, its importance arguably on par with The Daily Show, Last Week Tonight, and the rap musical about the founding fathers people like for some reason.

If anything, its fandom has only continued to grow with age: In the summer of 2016, a weekly podcast hosted by seasons 4-7 star Joshua Malina, launched with the intent of running through all 154 episodes (at a rate of one per week), almost immediately garnered millions of downloads; an elaborate fan wiki with almost 2000 distinct entries is maintained and regularly updated, magisterially documenting every mundane detail of the West Wing cosmos save the characters’ bowel movements; and, in definitive proof of the silence of God, superfan Lin-Manuel Miranda has recently recorded a rap named for one of the show’s most popular catchphrases (“What’s next?”).

While certainly appealing to a general audience thanks to its expensive sheen and distinctive writing, The West Wing’s greatest zealots have proven to be those who professionally inhabit the very milieu it depicts: Washington political staffers, media types, centrist cognoscenti, and various others drawn from the ranks of people who tweet “Big, if true” in earnest and think a lanyard is a talisman that grants wishes and wards off evil.  

The West Wing “took something that was for the most part considered dry and nerdy—especially to people in high school and college—and sexed it up,” former David Axelrod advisor Eric Lesser told Vanity Fair in a longform 2012 feature about the “Sorkinization of politics” (Axelrod himself having at one point advised West Wing writer Eli Attie). It “very much served as inspiration”, said Micah Lasher, a staffer who then worked for Michael Bloomberg.

Thanks to its endless depiction of procedure and policy, the show naturally gibed with the wonkish libidos of future Voxsplainers Matt Yglesias and Ezra Klein. “There’s a cultural meme or cultural suggestion that Washington is boring, that policy is boring, but it’s important stuff,” said Klein, adding that the show dramatized “the immediacy and urgency and concern that people in this town feel about the issues they’re working on.” “I was interested in politics before the show started,” added Yglesias. “But a friend of mine from college moved to D.C. at the same time as me, after graduation, and we definitely plotted our proposed domination of the capital in explicitly West Wing terms: Who was more like Toby? Who was more like Josh?”

Far from the Kafkaesque banality which so often characterizes the real life equivalent, the mundane business of technocratic governance is made to look exciting, intellectually stimulating, and, above all, honorable. The bureaucratic drudgery of both White House management and governance, from speechwriting, to press conference logistics, to policy creation, is front and center across all seven seasons. A typical episode script is chock full of dweebish phraseology — “farm subsidies”, “recess appointments”, “census bureau”, “congressional consultation” — usually uttered by swift-tongued, Ivy League-educated staffers darting purposefully through labyrinthine corridors during the infamous “walk-and-talk” sequences. By recreating the look and feel of political processes to a tee, while garnishing them with a romantic veneer, the show gifts the Beltway’s most spiritually-devoted adherents with a vision of how many would probably like to see themselves.

In serving up this optimistic simulacrum of modern US politics, Sorkin’s universe has repeatedly intersected with real-life US politics. Following the first season, and in the midst of the 2000 presidential election contest, Salon’s Joyce Millman wrote: “Al Gore could clinch the election right now by staging as many photo-ops with the cast of The West Wing as possible.” A poll published during the same election found that most voters preferred Martin Sheen’s President Bartlet to Bush or Gore. A 2008 New York Times article predicted an Obama victory on the basis of the show’s season 6-7 plot arc. The same election year, the paper published a fictionalized exchange between Bartlet and Barack Obama penned by Sorkin himself. 2016 proved no exception, with the New Statesman’s Helen Lewis reacting to Donald Trump’s victory by saying: “I’m going to hug my West Wing boxset a little closer tonight, that’s for sure.”

Appropriately, many of the show’s cast members, leveraging their on-screen personas, have participated or intervened in real Democratic Party politics. During the 2016 campaign, star Bradley Whitford—who portrays frenetically wily strategist Josh Lyman—was invited to “reveal” who his [fictional] boss would endorse:

“There’s no doubt in my mind that Hillary would be President Bartlet’s choice. She’s—nobody is more prepared to take that position on day one. I know this may be controversial. But yes, on behalf of Jed Bartlet, I want to endorse Hillary Clinton.”

Six leading members of the cast, including Whitford, were even dispatched to Ohio to stump for Clinton (inexplicably failing to swing the crucial state in her favor).


During the Democratic primary season Rob Lowe (who appeared from 1999-2003 before leaving in protest at the ostensible stinginess of his $75,000/episode salary) even deployed a clip from the show and paraphrased his own character’s lines during an attack on Bernie Sanders’ tax plan: “Watching Bernie Sanders. He’s hectoring and yelling at me WHILE he’s saying he’s going to raise our taxes. Interesting way to communicate.” In Season 2 episode “The Fall’s Gonna Kill You”, Lowe’s character Sam Seaborn angrily lectures a team of speechwriters:  

“Every time your boss got on the stump and said, ‘It’s time for the rich to pay their fair share,’ I hid under a couch and changed my name…The top one percent of wage earners in this country pay for twenty-two percent of this country. Let’s not call them names while they’re doing it, is all I’m saying.”

What is the actual ideology of The West Wing? Just like the real American liberalism it represents, the show proved to be something of a political weather vane throughout its seven seasons on the air.

Debuting during the twilight of the Clinton presidency and spanning much of Bush II’s, it predictably vacillated somewhat in response to events while remaining grounded in a general liberal ethos. Sorkin, who had writing credits for all but one episode of the show’s first four seasons, left in 2003, with Executive Producer John Wells characterizing the subsequent direction as more balanced and bipartisan. The Bartlet administration’s actual politics—just like those of the real Democratic Party and its base—therefore run the gamut from the stuff of Elizabeth Warren-esque populism to the neoliberal bilge you might expect to come from a Beltway think tank having its white papers greased by dollars from Goldman Sachs.

But promoting or endorsing any specific policy orientation is not the show’s true raison d’être. At the conclusion of its seven seasons it remains unclear if the Bartlet administration has succeeded at all in fundamentally altering the contours of American life. In fact, after two terms in the White House, Bartlet’s gang of hyper-educated, hyper-competent politicos do not seem to have any transformational policy achievements whatsoever. Even in their most unconstrained and idealized political fantasies, liberals manage to accomplish nothing.

The lack of any serious attempt to change anything reflects a certain apolitical tendency in this type of politics, one that defines itself by its manner and attitude rather than a vision of the change it wishes to see in the world. Insofar as there is an identifiable ideology, it isn’t one definitively wedded to a particular program of reform, but instead to a particular aesthetic of political institutions. The business of leveraging democracy for any specific purpose comes second to how its institutional liturgy and processes look and, more importantly, how they make us feel—virtue being attached more to posture and affect than to any particular goal. Echoing Sorkin’s 1995 film The American President (in many ways the progenitor of The West Wing) it delights in invoking “seriousness” and the supposedly hard-headed pragmatism of grownups.


Consider a scene from Season 2’s “The War at Home”, in which Toby Ziegler confronts a rogue Democratic senator over his objections to Social Security cuts prospectively to be made in collaboration with a Republican Congress. The episode’s sympathies certainly don’t lie with the senator, who tries to draw a line in the sand over the “compromising of basic Democratic values” and threatens to run a third-party presidential campaign, only to be admonished acerbically by Ziegler:

“If you think demonizing people who are trying to govern responsibly is the way to protect our liberal base, then speaking as a liberal…go to bed, would you please?…Come at us from the left, and I’m gonna own your ass.”

The administration and its staff are invariably depicted as tribunes of the serious and the mature, their ideological malleability taken to signify their virtue more than any fealty to specific liberal principles.

Even when the show ventures to criticize the institutions of American democracy, it never retreats from a foundational reverence for their supposed enlightenment and the essential nobility of most of the people who administer them. As such, the presidency’s basic function is to appear presidential and, more than anything, Jed Bartlet’s patrician aura and respectable disposition make him the perfect avatar for the West Wing universe’s often maudlin deference to the liturgy of “the office.” “Seriousness,” then—the superlative quality in the Sorkin taxonomy of virtues—implies presiding over the political consensus, tinkering here and there, and looking stylish in the process by way of soaring oratory and white-collar chic.

“Make this election about smart, and not. Make it about engaged, and not. Qualified, and not. Make it about a heavyweight. You’re a heavyweight. And you’ve been holding me up for too many rounds.”

—Toby Ziegler, Hartsfield’s Landing (Season 3, Episode 14)

Despite its relatively thin ideological commitments, there is a general tenor to the West Wing universe that cannot be called anything other than smug.

It’s a smugness born of the view that politics is less a terrain of clashing values and interests than a perpetual pitting of the clever against the ignorant and obtuse. The clever wield facts and reason, while the foolish cling to effortlessly-exposed fictions and the braying prejudices of provincial rubes. In emphasizing intelligence over ideology, what follows is a fetishization of “elevated discourse” regardless of its actual outcomes or conclusions. The greatest political victories involve semantically dismantling an opponent’s argument or exposing its hypocrisy, usually by way of some grand rhetorical gesture. Categories like left and right become less significant, provided that the competing interlocutors are deemed respectably smart and practice the designated etiquette. The Discourse becomes a category of its own, to be protected and nourished by Serious People conversing respectfully while shutting down the stupid with heavy-handed moral sanctimony.  

In Toby Ziegler’s “smart and not,” “qualified and not” formulation, we can see a preview of the (disastrous) rhetorical strategy that Hillary Clinton would ultimately adopt against Donald Trump. Don’t make it about vision, make it about qualification. Don’t make it about your plans for how to make people’s lives better, make it about your superior moral character. Fundamentally, make it about how smart and good and serious you are, and how bad and dumb and unserious they are.


In this respect, The West Wing’s foundational serious/unserious binary falls squarely within the tradition that has since evolved into the “epic own/evisceration” genre characteristic of social media and late night TV, in which the aim is to ruthlessly use one’s intellect to expose the idiocy and hypocrisy of the other side. In a famous scene from Season 4’s “Game On”, Bartlet debates his Republican rival Governor Robert Ritchie (James Brolin). Their exchange, prompted by a question about the role of the federal government, is the stuff of a John Oliver wet dream:  

Ritchie: “My view of this is simple. We don’t need a federal Department of Education telling us our children have to learn Esperanto, they have to learn Eskimo poetry. Let the states decide, let the communities decide on health care and education, on lower taxes, not higher taxes. Now he’s going to throw a big word at you — ‘unfunded mandate’ — he’s going to say if Washington lets the states do it, it’s an unfunded mandate. But what he doesn’t like is the federal government losing power. I call it the ingenuity of the American people.”

Bartlet: “Well, first of all, let’s clear up a couple of things: ‘unfunded mandate’ is two words, not one big word. There are times when we are 50 states and there are times when we’re one country and have national needs. And the way I know this is that Florida didn’t fight Germany in World War Two or establish civil rights. You think states should do the governing wall-to-wall, now that’s a perfectly valid opinion. But your state of Florida got 12.6 billion dollars in federal money last year from Nebraskans and Virginians and New Yorkers and Alaskans, with their Eskimo poetry — 12.6 out of the state budget of 50 billion. I’m supposed to be using this time for a question so here it is: Can we have it back please?”

In an even more famous scene from Season 2 episode “The Midterms”, Bartlet humiliates homophobic talk radio host Jenna Jacobs by quoting scripture from memory, destroying her by her very own logic.


If Ritchie and Jacobs are the obtuse yokels to be epically taken down with facts and reason, the show also elevates several conservative characters to reinforce its postpartisan celebration of The Discourse. Republicans come in two types: slack-jawed caricatures, and people whose high-mindedness and mutual enthusiasm for Putting Differences Aside make them the Bartlet Administration’s natural allies or friends regardless of whatever conflicts of values they may ostensibly have. Foremost among the latter is Vinick: a moderate, pro-choice Republican who resembles John McCain (at least the imaginary “maverick” John McCain that liberals continue to pretend exists) and is appointed by Bartlet’s Democratic successor Matthew Santos to be Secretary of State. (In reality, there is no such thing as a “moderate” Republican, only a polite one. The upright and genial Paul Ryan, whom President Bartlet would have loved, is on a lifelong quest to dismantle every part of America’s feeble social safety net.)

Thus Bartlet Democrats do not see Republicans as the “enemy,” except to the extent that they are rude or insufficiently respectful of the rules of political decorum. In one Season 5 plot, the administration opts to install a Ruth Bader Ginsburg clone (Glenn Close) as Chief Justice of the Supreme Court. The price it pays—willingly, as it turns out—is giving the other vacancy to an ultra-conservative justice, for the sole reason that Bartlet’s staff find their amiable squabbling stimulating. Anyone with substantively progressive political values would be horrified by a liberal president’s appointment of an Antonin Scalia-style textualist to the Supreme Court. But if your values are procedural, based more on the manner in which people conduct themselves rather than the consequences they actually bring about, it’s easy to chuckle along with a hard-right conservative, so long as they are personally charming (Ziegler: “I hate him, but he’s brilliant. And the two of them together are fighting like cats and dogs … but it works.”)

“What’s next?”

Through its idealized rendering of American politics and its institutions, The West Wing offers a comforting avenue of escape from the grim and often dystopian reality of the present. If the show, despite its age, has continued to find favor and relevance among liberals, Democrats, and assorted Beltway acolytes alike, it is because it reflects and affirms their worldview with greater fidelity and catharsis than any of its contemporaries.

But if anything gives that worldview pause, it should be the events of the past eight years. Liberals got a real life Josiah Bartlet in the figure of Barack Obama, a charismatic and stylish politician elected on a populist wave. But Obama’s soaring speeches, quintessentially presidential affect, and deference to procedure did little to fundamentally improve the country or prevent his Republican rivals from storming the Congressional barricades at their first opportunity. Confronted by a mercurial TV personality bent on transgressing every norm and truism of Beltway thinking, Democrats responded by exhaustively informing voters of his indecency and hypocrisy, attempting to destroy him countless times with his own logic, but ultimately leaving him completely intact. They smugly taxonomized as “smart” and “dumb” the very electorate they needed to win over, and retreated into an ideological fever dream in which political success doesn’t come from organizing and building power, but from having the most polished arguments and the most detailed policy statements. If you can just crush Trump in the debates, as Bartlet did to Ritchie, then you’ve won. (That’s not an exaggeration of the worldview. Ezra Klein published an article entitled “Hillary Clinton’s 3 debate performances left the Trump campaign in ruins,” which entirely eliminated the distinction between what happens in debates and what happens in campaigns. The belief that politics is about argument rather than power is likely a symptom of a Democratic politics increasingly incubated in the Ivy League rather than the labor movement.)

Now, facing defeat and political crisis, the overwhelming liberal instinct has not been self-reflection but a further retreat into fantasy and orthodoxy. Like viewers at the climax of The West Wing’s original run, they sit waiting for the decisive gestures and gratifying crescendos of a series finale, only to find their favorite plotlines and characters meandering without resolution. Shockingly, life is not a television program, and Aaron Sorkin doesn’t get to write the ending.

The West Wing is many things: a uniquely popular and lavish effort in prestige TV; an often crisply-written drama; a fictionalized paean to Beltway liberalism’s foundational precepts; a wonkish celebration of institutions and processes; an exquisitely-tailored piece of political fanfiction.

But, in 2017, it is foremost a series of glittering illusions to be abandoned.

Illustrations by Meg T. Callahan.

Fines and Fees Are Inherently Unjust

Fining people equally hurts some people far more than others, undermining the justifications of punishment…

Being poor in the United States generally involves having a portion of your limited funds slowly siphoned away through a multitude of surcharges and processing fees. It’s expensive to be without money; it means you’ve got to pay for every medical visit, pay to cash your checks, and frankly, pay to pay your overwhelming debts. It means that a good chunk of your wages will end up in the hands of the payday lender and the landlord. (It’s a perverse fact of economic life that for the same property, it often costs less to pay a mortgage and get a house at the end than to pay rent and end up with nothing. If I am wealthy, I get to pay $750 a month to own my home while my poorer neighbor pays $1,500 a month to own nothing.) It’s almost a law of being poor: the moment you get a bit of money, some kind of unexpected charge or expense will come up to take it away from you. Being poor often feels like being covered in tiny leeches, each draining a dollar here and a dollar there until you are left weak, exhausted, and broke.

One of the most insidious fine regimes comes from the government itself in the form of fines in criminal court, where monetary penalties are frequently used as punishment for common misdemeanors and ordinance violations. Courts have been criticized for increasingly imposing fines indiscriminately, in ways that turn judges into debt collectors and jails into debtors’ prisons. The Department of Justice found that fines and fees in certain courts were exacted in such a way as to force “individuals to confront escalating debt; face repeated, unnecessary incarceration for nonpayment despite posing no danger to the community; lose their jobs; and become trapped in cycles of poverty that can be nearly impossible to escape.” A new report from PolicyLink confirms that “wide swaths of low-income communities’ resources are being stripped away due to their inability to overcome the daunting financial burdens placed on them by state and local governments.” There are countless stories of people being threatened with jail time for failing to pay fines for “offenses” like un-mowed lawns or cracked driveways.

Critics have targeted these fines because of the consequences they are having on poor communities. But it’s also important to note something further. The imposition of flat-rate fines and fees does not just have deleterious social consequences, but also fundamentally undermines the legitimacy of the criminal legal system. It cannot be justified – even in theory.

I work as a criminal defense attorney, and I have defended both rich and poor clients (mostly poor ones). Many of my clients have been given sentences involving the imposition of fines. For everyone, regardless of wealth, if a fine means less (or no) jail time, it’s almost always a better penalty. But, and this should be obvious, fines don’t mean the same thing to different people. For my poor clients, a fine means actual hardship. In extreme cases, it can mean a kind of indenture, as the reports have pointed out. If you make $1,000 a month, and are trying to pay rent and support yourself, a $500 fine means a lot. It means many months of indebtedness as you slowly work off your debt to the court. It might mean not buying clothes for your child, or forgoing necessary medical treatment.

Of course, the situation changes if you’re wealthy, or even middle-class. You write the check, you leave the court, the case is over. For my wealthy clients, a fine isn’t just the best outcome, it’s a fantastic outcome, because it means the crime you are alleged to have committed has led to no actual consequences that affect you in any substantive way. You haven’t had to make any sacrifices – your life will look precisely the same in the months after the fine was imposed as it did in the months before. Wealthy defendants want to know: “What can I pay to make this go away?” And sometimes paying to make it go away is exactly what they can do, as courts will often accept pre-trial fines in exchange for dismissal.

As I said, it’s not news that it’s harder to pay a fine if you’re poor. But the implications of this are rarely worked all the way through. For if it’s true that the punishment prescribed by law hurts one class of defendants far more than it hurts another class of defendants, then the underlying justification for having the punishment in the first place is not actually being served, and the basic principle of equality under the law is being undermined.


If fines are imposed at flat rates, poor people are being punished while rich people are not. If it’s true that wealthy defendants couldn’t care less about fines (and a millionaire with a $500 fine really couldn’t care less), then they’re not actually being deprived of anything in consequence of their violation of law. Punishment is supposed to serve the goals of retribution, deterrence, or rehabilitation. Leaving aside for the moment whether these are actually worthy goals, or whether criminal courts actually care about these goals, flat-rate fines don’t serve any of them when it comes to wealthy defendants. There’s no deterrence or rehabilitation, because if you can pay an insignificant fee to commit a crime, there’s no reason not to do it again. It’s wildly unclear how a negligibly consequential fine would deter a wealthy frat boy from continuing to urinate in public, whereas a person trying to escape homelessness might become very careful not to rack up any more fines.

Nor does the retribution imposed have a rational relationship to the significance of the crime. If the point of retribution is to make someone suffer a harm in proportion to the suffering they themselves have imposed (a dubious idea to begin with), flat-rate fines make no sense, because some people are being sentenced to far greater suffering than others. This means that it is unclear what we believe the actual correct retributive amount is supposed to be. It’s as if we punish in accordance with the philosophy of “an eye for an eye,” but we live in a society where some people start with one eye and some people start with twenty. Taking “an eye for an eye” means something quite different when imposed on a one-eyed man than it does on a twenty-eyed man. The one-eyed man has been punished with blindness, while the twenty-eyed man can shrug and simply have one of the lenses removed from his spectacles.

This is important for how we view the law. If courts aren’t calibrating fees based on people’s actual wealth, then massively differential punishments are being imposed. Some people receive indenture while others receive no punishment at all, even given the same offense at the same level of culpability. If fines are supposed to have anything to do with making a person experience consequences for their crime, whether retributive consequences or rehabilitative consequences, then punishments are failing their stated purpose and being applied grossly unequally.

It may be objected that fines do not constitute an unequal application of the law, because they are applied equally to all. But the point here is that application of a law equally in each case does not mean “equal application of law to all” in any meaningful sense. In other contexts, this is perfectly clear. A law forbidding anyone from wearing a yarmulke and reading the Torah does not constitute the “equal application of law to all.” It clearly discriminates against Jews, even though Christians, Muslims, Hindus, and the non-religious are equally prohibited from wearing yarmulkes. (The absurdity of “equal application” meaning “legal equality” was well captured by Anatole France, who wrote that “The law, in its majestic equality, forbids the rich as well as the poor to sleep under bridges.”)

It is inevitable that laws will always affect people differently, because people will always be different. But if some people are given something that constitutes far more of a burdensome punishment for them than it is for others, the actual purposes of the law aren’t being served. Separate from the equality arguments, for a large class of people punishment simply isn’t even serving its intended function.

Of course, you could easily take a step toward remedying this, by fining people in accordance with a percentage of their income rather than at a flat rate (or by redistributing all wealth). If a fine is, say, 2% of one’s annual income, then a person with a $20,000 income would face a $400 fine whereas a person with a $200,000 income would face a $4,000 fine. That’s still grossly unfair, of course, because $400 means far more to the poorer person than $4,000 does to the richer person. You won’t have a fair system of fines until you figure out how to make the rich experience the same kinds of effects that fines impose on the poor. The fact that even massively increasing fines on the rich wouldn’t bring anything close to equal consequences should show how totally irrational our present system is.

But rather than having courts appropriate larger quantities of rich people’s wealth (though their wealth obviously does need appropriating), we could also simply reduce the harm being inflicted on the poor, through reforming local fines-and-fees regimes. It’s clear that in many cases, fines don’t have anything to do with actual punishment; they’re revenue-raising mechanisms, a legalized shakedown operation, as the Justice Department’s report on Ferguson made clear. Courts aren’t interested in actually calculating the deterrence effects of certain financial penalties. They want to fund their operations, and poor people’s paychecks are a convenient piggy bank.

We know that fines and fees have, in many jurisdictions, created pernicious debt traps for the poor, arising from trivial offenses. But it’s when we examine the comparative impact on wealthy defendants that this system is exposed as being irrational as well as cruel. It doesn’t just ensnare the less fortunate in a never-ending Kafkaesque bureaucratic nightmare. It also fundamentally delegitimizes the entire legal system, by severing the relationship between punishments and their purpose. It makes a joke out of the ideas of both the punishment fitting the crime and equality under the law, two bedrock principles necessary for “law” to command any respect at all. So long as flat-rate fines are disproportionately impacting the poor, there is no reason to believe that criminal courts can ever be places of justice.

The Real Obama

What the president does in retirement will reveal his true self…

The best thing about being an ex-president is that you can do whatever you want. Do you want to retire to the countryside to build henhouses and tootle around in your amphibious car? You can do that. Do you want to teach Sunday school and build houses for poor people, and maybe broker an occasional international peace agreement? You can do that also. Do you want to spend your days painting pictures of your dogs, your feet, and the soldiers you caused to be maimed? It’s an option! The retirement activities of presidents offer useful insights into their natures, because they are finally freed of all political constraints on their action. Once they are at liberty to pursue activities of their choosing, we get a sense for what they actually enjoy, and who they actually are.

During his two terms in office, Barack Obama’s most zealous devotees tended to explain away apparent failures or complacencies by referring to the constraints high office places on anyone who ascends to it. Even some critics on the left may have suspected that the deeds of Obama’s administration were out of sync with his natural instincts, that Obama was a man of high conscience weighed down or blunted by Washington’s leviathan bureaucracy, or frustrated by the exigencies of an unstable world.

Obama’s retirement should therefore finally give us meaningful insight into who he really is or, to put it another way, who he has been all along. The albatross of office finally lifted from his neck, America’s 44th president is now free to do anything and everything he desires without impediment. He can be the person he has always wanted to be, the person whom he has had to keep hidden away. Who, then, is the real Obama?

Well, it turns out the real Obama is quite like the one we knew already. And what he most wants to do is nestle himself cozily within the bosom of the global elite, and earn millions from behind a thinly-veiled philanthropic facade.

In January, Obama launched his post-presidential foundation with a board that consists of private equity executives, lobbyists, and an Uber advisor, tasking it to implement the world’s most meaningless mandate (“to inspire people globally to show up for the most important office in any democracy, that of citizen”). Able to choose his friends from out of anyone in the world, Obama has been seen kitesurfing with billionaire Virgin Group founder Richard Branson (worth more than $5 billion) and brunching with Bono. (You can usually judge a person pretty well by their friends, and nobody who voluntarily spends his free time with Bono should be trusted.)

Obama’s recent forays into politics have also confirmed him as a friend to the elite. He used his last weeks in office to personally help derail the candidacy of left-wing congressman Keith Ellison for DNC chair. After Ellison became an early favorite in the race, Obama used his influence to recruit and boost the more centrist and less controversial Tom Perez, who won after a series of vile smears were launched against Ellison by influential party donors.

Obama also extended his influence overseas. Ahead of the first round of voting, he effectively endorsed French presidential candidate Emmanuel Macron, a former investment banker who “wants to roll back state intervention in the economy, cut public-sector jobs, and reduce taxes on business and the ultra-rich.” (Macron also once responded to a union worker who needled him over his fancy suits by declaring that fancy suits accrue to those who work the hardest, an assertion that is manifestly false.)

Then there were the speeches. In December, conservative commentator Andrew Sullivan, asked what Obama should do with his post-presidency, had jokingly pleaded: “No speeches at Goldman Sachs, please.” After all, Hillary Clinton’s Wall Street speeches had become the ultimate symbol of Democratic hypocrisy, a clear demonstration of how those who profess to oppose inequality will happily reap financial benefits from it. For Sullivan, it was laughable to think that a man like Obama, who maintained a public image characterized by modesty and personal integrity, would instantly lapse into the tawdry and unscrupulous Clinton practice of cashing in.

But then Obama cashed in. Mere weeks after leaving 1600 Pennsylvania Avenue he signed on with the Harry Walker Agency (the very same outfit through which the Clintons have jointly pocketed a virtually incomprehensible $158 million on the speaker’s circuit). It was then revealed that he had been paid a whopping $400,000 fee by Cantor Fitzgerald, a bond firm that deals in credit default swaps, the inscrutable instruments of financial alchemy that helped cause the 2008 financial meltdown. (After that came news of another $400,000 speaking fee.)

At the first sign of backlash against Obama’s pursuit of riches, media and political elites unleashed a torrent of toadyism in his defense. After expressing faint concern about Obama’s speaking fees, Amanda Marcotte chastised “people who’ve never had money worries” for casting judgment on “those who have,” elsewhere complaining: “The obsession with speaking fees is politics version of begrudging athlete salaries while ignoring owner profits” (an analogy that only holds up if Obama literally works for Wall Street). The Boston Globe’s Michael Cohen added: “If someone wants to pay Barack Obama $400,000 to give a speech I can’t think of a single reason why he shouldn’t take it…Obama is not doing anything wrong. He’s giving a speech. Nothing to apologize for.” It seemed that American liberalism’s eight-year journey from “Change We Can Believe In” to “Everybody Grifts…” was finally complete. (There is a fun game one can play with ideologically-committed Democrats that we might call “Rationalize That Injustice.” See if there are any right-wing policies that they won’t justify if told that Obama did them.)

Certain defenses of Obama opted for an explicitly racial framework. The Daily Show’s Trevor Noah exclaimed “So the first black president must also be the first one to not take money afterwards? Fuck that, and fuck you!” April Reign, creator of the viral hashtag #OscarsSoWhite, equated Obama’s critics with defenders of the slave trade. Attorney Imani Gandy, who litigated foreclosure cases on behalf of J.P. Morgan before becoming a prominent social justice activist on Twitter, seized upon the controversy to call antipathy towards Wall Street “the whitest shit I’ve ever heard.” This particular line of argumentation almost defied credulity, especially since critics of Obama’s speaking fees were simply extending a criticism originally applied to Bill and Hillary Clinton.

Obama and Branson enjoy the ocean together.

But while certain rationalizations of Obama’s conduct have ventured into burlesque satire, it is worth taking Michael Cohen’s question seriously: what’s so wrong with Obama doing a speech for money? He speaks, they pay, nobody gets hurt. What’s the actual harm? Since Obama isn’t actually in a position to give Wall Street any political favors, and since he’s a private citizen, why should it matter? Indeed, Debbie Wasserman Schultz told those who might be upset by the speech to “mind their own business.”

Well, first, there are some basic issues of personal ethics involved in post-presidential buckraking. There is something tawdry about immediately leaving office to go and make piles of money in any way you can, and it’s a short hop from doing your inspirational speaking schtick for corporate events to doing it in television commercials or at birthday parties for investment bankers’ teenage children. That’s why Harry Truman famously refused to serve on corporate boards, declaring that doing so would be undignified. (“I could never lend myself to any transaction, however respectable, that would commercialize on the prestige and dignity of the office of the presidency.”) And those who think Obama is being held to an impossible standard (that impossible “do good things rather than simply lucrative things” standard) should remember that Jimmy Carter has spent a productive and comparatively modest retirement writing, campaigning for the basic dignity of Palestinians, and quite regularly intervening to criticize American policy at home and abroad.

Some have said that as a “private citizen,” Obama’s choices of how to make money should be beyond moral scrutiny. But it’s private citizens who could use a lot more moral scrutiny. Obama’s choosing to become a mansion-dwelling millionaire is not wrong because he used to be the president, but because being exorbitantly rich in a time of great global poverty is heinously immoral. Moreover, it defies credulity to suggest, as some have in earnest, that Obama needs to take money from this particular source. He is already guaranteed a lavish annual pension of more than $200,000 in addition to expenses, plus almost $400,000 in further pension money accrued from his time as an Illinois State Senator. He and the former First Lady have just signed the most sumptuous post-presidential book deal in history (worth $65 million, or almost 1500 times the median personal income) and will assuredly spend the next several decades enjoying a standard of material comfort few Americans have ever known, Wall Street speaking fees notwithstanding.

Finally, there’s the political hypocrisy. On the very same day as the infamous speech, Obama was elsewhere decrying the pernicious political influence of wealth, somberly declaring that “because of money and politics, special interests dominate the debates in Washington in ways that don’t match up with what the broad majority of Americans feel.” Obama’s public posture has always been that he resents the political influence of special interests and financial elites, yet both as a candidate and as a private citizen he has been showered with money by those very interests, money he has been only too happy to accept.


Yet Michael Cohen is also partially right: the speech itself is not actually terribly important. It’s a mistake to focus on the personal ethics of Obama’s actual decision, and if we frame the relevant question as “Should Obama have taken the money?” then it’s easy to lapse into something of a shrug. So the guy wants to get rich. Fine. He’s no worse than every other member of the 1%. They’re all indefensible, and as long as nobody continues to maintain the illusion that Obama is any different from any other politician, there’s no reason to single him out as uniquely wicked. (One suspects, however, that some people do still maintain the illusion that Obama is different from other wealthy denizens of the political class.)

The most important aspect of the story is not that Obama accepted Cantor Fitzgerald’s offer, but that the offer was made in the first place. Indeed, it’s hard to escape the impression that certain powerful interests are now rewarding the former president with a gracious thanks for a job well done. Rather than asking whether Obama should have turned down the gig, we can ask: if his administration had taken aggressive legal and regulatory action against Wall Street firms following the financial crisis, would they be clamoring for him to speak and offering lucrative compensation mere weeks after his leaving office? It’s hard to think they would, and if a Democratic president has done their job properly, nobody on Wall Street should want to pay them a red cent in retirement. Obama’s decision to take Cantor Fitzgerald’s cash isn’t, therefore, some pivotal moment in which he betrayed his principles in the pursuit of lucre. It’s simply additional confirmation that he has never posed a serious challenge to Wall Street’s outsized economic power.

In fact, we’ve known that for as long as we’ve known Obama. He was popular on Wall Street back when he first ran for president. According to Politico, he “raised more money from Wall Street through the Democratic National Committee and his campaign account than any politician in American history,” and in just one year “raked in more cash from bank employees, hedge fund managers and financial services companies than all Republican candidates combined.”

Serious economic progressives did not become disillusioned with Obama when he accepted $400,000 for a speech, but when he arrived in office at the apex of the financial crisis and immediately stuffed his cabinet and advisory team with a coterie of alumni from Goldman Sachs (a top donor to his campaign in 2008). At the height of the worst financial catastrophe since the Great Depression, during a time of unique (and completely warranted) antipathy towards rapacious corporate interests, Obama had been elected with the single greatest mandate to implement sweeping change in recent political history. Given the same extraordinary kind of political demand, FDR took the opportunity to proclaim that “The old enemies of peace: business and financial monopoly, speculation, reckless banking, class antagonism, sectionalism, war profiteering…they are unanimous in their hate for me — and I welcome their hatred.”

But when Obama was faced with a similar moment of calamity and possibility, he opted instead for the avenues of brokerage and appeasement. He chose not to push for criminal prosecutions of financial executives whose greed and negligence caused the 2008 economic crash. Back in 1999, Eric Holder, who would go on to become Obama’s Attorney General, had proposed the concept of “collateral consequences” (colloquially known as “too big to jail”), whereby “the state could pursue non-criminal alternatives for companies if they believed prosecuting them might result in too much ‘collateral’ damage” to the economy. Thus, when banking giant HSBC was revealed to be laundering billions of dollars for Mexican drug cartels and groups linked to al-Qaeda, Obama’s Justice Department allowed the bank to escape with a fine and no criminal charges, on the grounds that a prosecution might damage HSBC too much and have wider effects on the economy. Top prosecutors had evidence of serious wrongdoing by HSBC, but Holder prevented them from proceeding. A report prepared for the House Financial Services Committee concluded that Holder “overruled an internal recommendation by DOJ’s Asset Forfeiture and Money Laundering Section to prosecute HSBC because of DOJ leadership’s concern that prosecuting the bank would have serious adverse consequences on the financial system.” Yet Holder later falsely suggested that the decision was made by the prosecutors rather than himself. (“Do you think that these very aggressive US attorneys I was proud to serve with would have not brought these cases if they had the ability?”) One should note just how unjust the “collateral consequences” idea is: it explicitly creates separate systems of justice for rich and poor, because there will always be more economic consequences to prosecuting major banking institutions than individual poor people. The same crime will therefore carry two different sets of consequences depending on how much you matter to the economy.

Holder also institutionalized the practice of extrajudicial settlements, under which “there was no longer any opportunity for judges or anyone else to check the power of the executive branch to hand out financial indulgences” to corporate offenders. Thus even as guilty pleas were extracted from banks and financiers for crimes ranging from fraud, manipulation, and bribery to money laundering and tax evasion, not a single malefactor from Wall Street ended up behind bars. (Meanwhile, America’s prisons remained full of less economically consequential people who had been convicted of the same crimes.)

Obama’s politics were the same when it came to policy-making. After several years of sustained corporate pushback, aided by both the White House and Congress, the much-touted Dodd-Frank law was whittled down to the status of a mild and extremely tenuous reform. A similar pattern inflected Obama’s signature legislative achievement, the now-precarious Affordable Care Act. While undoubtedly improving on the horrific status quo in American health care, Obamacare was notably soft on the insurance and pharmaceutical industries, both of which were extensively consulted during its composition. Far from being the Stalinist caricature of Tea Party fever dreams, Obamacare was based on a plan put in place by a Republican governor and sketched out by the Heritage Foundation in the early 1990s. No matter how much the American right may distort the record, Obamacare was essentially a massive corporate giveaway (after all, it mandated that millions of people become new insurance customers), and it manifestly failed to tackle the crux of the problem with US healthcare, which is that market actors are involved in the provision of health insurance to begin with. Obama arguably had the votes to create a public option that would have ameliorated matters somewhat, yet he made no serious attempt to exert political pressure in favor of one. Instead, he opted to needlessly compromise with the very corporate actors who stand between Americans and the guarantee of healthcare as a right.

This consistently pro-business approach has ensured that Obama isn’t the only administration official that corporate America has showered with gratitude. For plenty of Obama’s top lieutenants, the revolving door between Wall Street and the corridors of the US government has kept spinning continuously. David Plouffe, Obama’s 2008 campaign manager and former senior advisor, now works for Uber. Press Secretary Robert Gibbs is executive vice-president at McDonald’s, lobbying hard against raising the minimum wage. Eric Holder, who had left the white-collar defense outfit Covington & Burling to become attorney general, returned in 2015 to once again represent many of the same banks and financial firms he had ostensibly been charged with regulating and prosecuting while in office. (Covington had literally been keeping Holder’s office waiting for him. “This is home for me,” Holder said of the corporate firm.) And having presided over massive bailouts during his tenure running the US Treasury, Timothy Geithner headed to Wall Street to take up a lucrative gig at private equity firm Warburg Pincus.

This is why Matthew Yglesias was wrong to characterize Barack Obama’s speaking fee as a betrayal of “everything [he] believes in.” In fact, it was the exact opposite: totally consistent with everything he has always stood for. The point isn’t that he’s “sold out.” It’s that, when the soaring cadences and luminous rhetoric are stripped away, Obama never offered any transformative change to begin with. Thus his $400,000 speech matters, not because it represents a deviation from the norm, or a venal lapse in personal ethics, but because it conveniently demonstrates a pattern that has been there all along.


In the Obama presidency, many liberals found the embodiment of their political ideal: an administration of capable, apparently well-intentioned people with impeccable Ivy League credentials, fronted by a person of undeniable charisma and charm, and with a beautiful and photogenic family to boot.

But examining Obama seriously requires acknowledging the fundamental limits of his brand of politics: a liberalism that continues to trade in the language of social concern while remaining invested in the very institutions undergirding the poverty and injustice it tells us it exists to fight; see, e.g., the upper-middle-class liberals who decry educational inequities while sending their own children to private schools. Like the Davos billionaires who “fret about inequality over vintage wine and canapés,” Obama denounces money in politics but can’t keep himself from taking it. And because he’s such a part of the very elite system whose effects he abhors, “Obamaism” was always destined to be a fundamentally empty and insincere philosophy.

Matt Taibbi issued a prescient assessment of Obama all the way back in 2007, when it was still unclear who would win the Democratic presidential primary:

“The Illinois Senator is the ultimate modern media creature—he’s a good-looking, youthful, smooth-talking, buttery-warm personality with an aw-shucks demeanor who exudes a seemingly impenetrable air of Harvard-crafted moral neutrality… His entire political persona is an ingeniously crafted human cipher, a man without race, ideology, geographic allegiances, or, indeed, sharp edges of any kind…[He appears] as a sort of ideological Universalist, one who spends a great deal of rhetorical energy showing that he recognizes the validity of all points of view…His political ideal is basically a rehash of the Blair-Clinton “third way” deal, an amalgam of Kennedy, Reagan, Clinton and the New Deal; he is aiming for the middle of the middle of the middle….In short, Obama is a creature perfectly in tune with the awesome corporate strivings of Hollywood, Madison avenue and the Beltway—he tries, and often succeeds, at selling a politics of seeking out the very center of where we already are, the very couch where we’ve been sitting all this time, as an exciting, revolutionary journey into the unknown.”

The real tragedy of the Obama story is that in 2008, millions of desperate Americans cast votes for a presidential candidate they believed would fight for meaningful change. He successfully marketed “hope” and “change” to a country that was reeling from a horrific financial collapse (his 2008 presidential run even won a “Marketing Campaign of the Year” award from the ad industry, beating out Apple and Zappos). But beneath it all was no serious vision of change; the grand speeches, paid and unpaid, turn out to contain little more than well-crafted platitudes. (Christopher Hitchens once pointed out that while everyone considered Obama a powerful and memorable speaker, nobody could ever seem to remember a single specific line from any of his orations, a good sign he’d in fact said nothing at all.) And as Obama biographer David Garrow concludes, “while the crucible of self-creation had produced an ironclad will, the vessel was hollow at its core.”

But Obama’s weaknesses are not the product of some unique personal pathology. He is simply the most charismatic and successful practitioner of an ideology shared by many contemporary Democrats: a kind of Beltway liberalism that sacrifices nearly all real political ambition, espousing a rhetoric of compassion and transformation while rationalizing every form of amorality and capitulation as a pragmatic necessity. In a moment when militancy and moral urgency are needed most, it seeks only innocuous, technocratic change and claims with the smuggest certitude that this represents the best grown adults can aspire to. In a world of spiraling inequality and ascendant corporate tyranny, it insists on weighting equally the interests of all sides and deems the result a respectable democratic consensus. Bearing witness to entrenched human misery, it wryly declares it was ever thus and delights in lazily dismissing critics with scornful refrains like “That will never get through Congress…” Confronted with risk or danger, it willingly retreats to ever more conservative ground and calls the sum total of these maneuvers “incrementalism.” In place of a coherent vision or a clear program of reform, the best it can offer is the hollow sensation of progress stripped of all its necessary conflicts and their corresponding discomforts.

One could see, in the defenses of Obama’s Wall Street speech, just how far this ideology narrows our sense of the possible: it tells us it is unrealistic and unfair to conceive of a president who does not shamelessly use the office to enrich himself. What passes for pragmatism is in fact the most dispiriting kind of capitalist pessimism: this is your world, you’re stuck with it, and it’s madness to dream of anything better. There Is No Alternative.

We can almost respect Hillary Clinton for embracing this idea openly, and barely even pretending to represent our most elevated selves rather than our most acquisitive ones. The cruelty Obama perpetrated was to encourage people to believe in something better, then give them nothing but a stylized status quo. At least now that he’s windsurfing with billionaires and doing the Wall Street speaking tour, there’s no longer any reason to keep believing that underneath it all, he was a true idealist whose innermost desires were thwarted by crushing political realities. All along, his innermost desire was to meet Bono over eggs benedict.

The Obama of 2008 was to be this century’s FDR, signifying a moment of lasting realignment and transcendent progress rather than one of growing alienation and despair culminating in the election of Donald Trump. But the liberalism of 21st century America, it turns out, is ill-equipped to achieve the transformative change it once so loftily promised: not because it made a noble attempt and failed but because it never really sought this change to begin with.

While Obama may not have been sincere, a great many of his voters were, and the millions who embraced his message revealed a genuine hunger for transformative change.

Now all we need is a political movement that actually seeks it out.

Who Profits from Poverty?

On the success of Matthew Desmond’s “Evicted”

Praise for Matthew Desmond’s Evicted has been nearly universal. It has won a PEN Award, the National Book Critics Circle Award, and, now, the 2017 Nonfiction Pulitzer Prize. To quote the New York Times, it is “a comet book — the sort of thing that swings around only every so often, and is, for those who’ve experienced it, pretty much impossible to forget.”

For many readers, it is a first foray into the housing question. Desmond draws our attention not only to the power of evictions to reproduce poverty, but also to their prevalence: evictions are common in communities that are black and white, poor and not so poor. For many more readers, it is also a first foray into Midwestern poverty. As American liberals scramble to understand what went wrong in swing states like Wisconsin, Desmond illustrates how the daily desperation of Milwaukee’s low-income communities translates into “lost confidence in… political capacity.”

But while the book makes valuable contributions to public understanding of eviction, the overwhelming critical enthusiasm for Desmond’s book should perhaps give us pause. After all, most journalists and academics live in municipalities where housing inequality runs high. We might, in general, expect Desmond’s call for fairly apportioned housing to engender some resistance — or at least some uncomfortable reflection — from those readers who enjoy the fruits of housing market exclusion from brunch on the corner to dinner in the brownstone.

Instead, Evicted has met only eager approval. The reason for this, I believe, is that most readers feel that Desmond’s evictions are distant from them — something they merely observe as sympathetic spectators, rather than something in which all of us actively participate. The deceptive simplicity of Desmond’s policy prescription—housing vouchers—implies that an inclusive housing system can be accomplished in one fell swoop, without any substantial sacrifices or lifestyle change on the part of the privileged.

As a moral portrait, Desmond’s book is Manichean, with clear delineations between good and evil. The protagonists confront injustice after injustice: Arleen and her children are evicted, and then they are robbed, and then evicted once more. Lamar, who lost both of his legs when he passed out in a freezing house, faces eviction with his two children, and is unable to collect his disability benefits. Villainous landlords, meanwhile, inflict injustice without much remorse. “You know, if you have money right now, you can profit from other people’s failures,” says Sherrena Tarver, a former schoolteacher and one of Desmond’s main landlord contacts. “I’m catching properties. I’m catching ’em.” Sherrena gambles, she travels, she evicts. At times, Desmond is careful to humanize Sherrena, who is making her way in a world that does not afford many opportunities to black women. Overall, however, Desmond portrays her as self-pitying (“If you ever thinking about become [sic] a landlord, don’t… Get the short end of the stick every time”), self-important (“these low-quality people,” she complains of her tenants), and self-satisfied (“The ’hood is good”). The other landlord who appears in Desmond’s account is Tobin Charney, who makes over $400,000 from 131 trailers. He is, according to his tenants, a “greedy Jew.” From chapter to chapter, we are led from heartbreak to righteous outrage.

But the moral universe in Evicted is small. This is the strength of Desmond’s ethnography. He digs deep into the lives of his subjects to give a portrait of poverty that is both honest and respectful. Yet Desmond begs us to consider the question of scope — both for his diagnosis of the eviction problem and his prescription for it. “It is ultimately up to future researchers to determine whether what I found in Milwaukee is true in other places,” he writes.

This sets forth two very different tasks. The first is basic and empirical: Are low-income renters in other cities around the world vulnerable to eviction and the suffering that flows from it? The second, however, requires us to go beyond eviction to assess the distributional justice of housing markets beyond the landlord-tenant relationship. Who profits from renters’ poverty? Who gains from tenure insecurity? And what would we need to sacrifice to guarantee inclusion in our cities? After all, our cities are not populated exclusively by grasping landlords and their impoverished tenants. All of us participate in the housing market, and many of us benefit from it in ways that hurt our neighbors.

In “Home and Hope,” the book’s epilogue, Desmond carefully lays out his policy prescription for America’s broken housing system. “The idea is simple,” Desmond writes. The government should guarantee rental subsidies to all low-income families struggling to pay rent. With vouchers in hand, families could choose where they wanted to live — “as long as their housing was neither too expensive, big, and luxurious nor too shabby and run-down” — without the fear of falling into debt and, inevitably, facing eviction. “A universal housing voucher program would carve a middle path between the landlord’s desire to make a living and the tenant’s desire, simply, to live.”

Desmond is fervent in his advocacy. The program would, he writes, “change the face of poverty”:

Evictions would plummet and become rare occurrences. Homelessness would almost disappear. Families would immediately feel the income gains and be able to buy enough food, invest in themselves and their children through schooling or job training, and start modest savings. They would find stability and have a sense of ownership over their home and community.


It is here, in the epilogue, that the limits of Desmond’s book come into view. He tucks a confession into a footnote. “A universal voucher program would not solve all our problems,” he writes — “especially in tight markets.” But where, we might ask, are Desmond’s loose markets? All across the United States, housing markets are tightening at an increasing clip — from Los Angeles to San Francisco, Austin to Houston, Washington to Boston. In those markets, while addressing some short-term problems, housing voucher programs create a host of new, long-term ones.

Consider Britain, where I have spent the last two years researching the ongoing housing crisis. Britain’s housing voucher system earns Desmond’s praise. “Great Britain’s Housing Benefit is available to so many households that a journalist recently reporting on the program asked, ‘Perhaps it is easier to say who does not get it?’ ‘Indeed,’ came the answer.” But Britain’s housing market remains one of the most exclusionary in Europe. Far from narrowing the gap between rich and poor in Britain, Housing Benefit has — in many ways — done the opposite.

First, Housing Benefit drives up demand in places like central London, where properties would otherwise be unaffordable to the vast majority of people. Vouchers in hand, renters can pursue high-rent properties — so long as they are not, to use Desmond’s phrase, “too big, expensive and luxurious.” With this rising demand, landlords can then raise their rents, knowing that the state will foot the bill. Often, this inflationary pressure — rather than preventing evictions — incentivizes them. Evictions in Britain’s private rental sector have soared over the last five years, even as tenant arrears have been in steep decline. The reason for the rising rate of eviction is landlord opportunism. Because they are certain that their properties will fetch increasingly higher rents, landlords make use of evictions to free up their property.

By driving up the value of local housing stock, Housing Benefit can also behave like a regressive tax on low-income renters. Homeowners reap huge windfall gains from house price inflation — the average house in London rose by £40,000 in value in 2015 alone. For these homeowners, the gains of house price inflation far outweigh the tax burden of housing benefit expenditure. So it is low-income renters that ultimately bear the cost of their vouchers — funding homeowners’ retirement along the way.

Desmond’s housing voucher system may very well “change the face of poverty,” but it will do nothing to challenge housing market exclusion in America’s major cities. There, the face of poverty will become even more segregated. Low-income renters will be funneled toward low-income neighborhoods, where at least — if new regulations are introduced, as Desmond hopes they will be — evictions will fall. High-income renters will be funneled toward high-income neighborhoods where — repeating “white flight” — they can reproduce systems of privilege. We can further consider the link between gentrification and displacement. With security of tenure, homeowners gain from gentrification: house prices rise, local amenities multiply, and neighborhood services improve. Without security of tenure, renters face displacement: rents rise, landlords evict, and local shops price them out.

The most powerful insight of Desmond’s book is, to quote its title, that there is profit to be made from poverty. The implication of my argument here, though, is that it is not just landlords who reap this profit — all local homeowners and wealthy renters stand to gain from housing market exclusion. Who, after all, cries loudest in the name of “not in my backyard”? Not the landlords. Instead, it is those wealthy renters and homeowners who seek to maintain the status quo.

This is the piece that is missing from Desmond’s Evicted: housing markets are broadly zero-sum. Accumulation for some is immiseration for others. We are all tied together — landlords and tenants, homeowners and homeless.

In Evicted, though, there is no confrontation between these groups. They do not confront each other in the street — unwinding gentrification or redressing school segregation. Nor do they confront each other in city hall — crafting policy for a fairer housing market.


And this raises the question: If we make solutions to social problems appear simple, noncontroversial, and non-zero-sum in the abstract — or, in this case, in the “loose market” of Milwaukee — when the implementation of these solutions will, in fact, threaten the resources and status of those in power, are we charting a course for a better world or soothing the conscience of elites?

Let’s imagine an inclusive housing market — a place where, Desmond hopes, the “basic right of all Americans” to affordable housing is balanced against “the right to make as much money as possible.” What would it look like? More importantly, what changes in our cities would be necessary to get there?

At the most basic level, inclusion would require that cash not rule everything around us. Wealthy residents would not have priority in the choice of apartments based on their economic advantage. They would, like the thousands and thousands on public housing lists across American cities, have to wait their turn. Gentrification would move at a snail’s pace. Pop-up shops could not descend on low-income communities, replacing affordable with luxury amenities. Public school districts would no longer segregate the privileged from the poor.

In a word, inclusion would require de-commodification — the transformation of our cities from sites of speculative investment to sites of rights-based community organization and development.

This is a sacrifice, however, that most wealthy (and white) urban residents are unwilling to make. Nikole Hannah-Jones, in her excellent work on public schools in Brooklyn, shows just how tightly her middle-class neighbors cling to the system of segregation that keeps low-income students on a separate campus. If applied in cities like New York, Desmond’s voucher program would have rippling effects for institutions like public schools, but Hannah-Jones’s work suggests that these would come with considerable resistance. Residents of large American cities are simply too attached to the distributive justice of the dollar. What do you mean, the banker will ask, that I cannot outbid my rivals for this house, this apartment, this bagel?

Desmond predicts some level of resistance to housing reform. “Those who profit from the current situation,” he writes, “will say that the housing market should be left alone to regulate itself.” But in reality, it’s not only free-market conservatives who will resist housing reform. Most likely, the American liberal will support regulation until it shows up in his backyard.

Evicted is indeed a masterpiece of “relational ethnography.” Desmond is thorough in his data collection, unearthing minute details that bring us deep into the lives of his subjects. He is careful in his depiction — never too sentimental — of the complex social and economic relations that produce and are products of eviction. And he is measured in his suggestion of a common-sense housing reform that would raise the welfare of millions of Americans.

If, however, we want to solve the problem of urban inequality on our doorstep, we need a whole new set of solutions. A housing voucher system will not suffice. We must think instead about what we are willing to give up on behalf of inclusion. We might start with tolerating the noisy construction next door, which will build the new units that are necessary to house our cities’ low-income residents. Or we might raise our property taxes, funding new housing developments with the balance. Or we might send our children to the local public school, finally following through on our constitutional promise of integration. But it will be hard, and it will be painful. There can be no true social reform, after all, without sacrifice.

The View From The Back Row

Journalist and photographer Chris Arnade discusses a country divided by meaning, morality, education, and economics.

In 2016, pundits speculated endlessly on that mysterious place called Trump Country. To many in the Beltway, much of America was a foreign country, to be analyzed statistically rather than in person. Chris Arnade, on the other hand, was determined to escape his coastal bubble. Arnade got into his old van, and has spent the last several years traveling hundreds of thousands of miles, interviewing people all over the country, discovering their joys, sorrows, discontents, and aspirations. In the process he has produced a set of photographs and stories, depicting the everyday Americans who are left out of the media’s understanding of the country, and who feel left out of the 21st century economy. Arnade spoke to Current Affairs editor Nathan J. Robinson about what he has learned in his travels.

NR: You’ve traveled over 100,000 miles across America talking to people from all walks of life. What are some of the misconceptions that people have about the country they live in? What are some things people think they know about America that are totally wrong?

CA: Everyone knows we’re a divided country, but I don’t think people understand exactly how deep that division is, and what the true nature of it is. I was a banker for 20 years. I lived in Brooklyn Heights, I sent my kids to private school. I was paid well; I had a Ph.D. in physics. I was kind of the New York neoliberal elite who valued science, valued rationality. And that elite built a world over the last 30 years that is massively unequal. I think everybody knows statistically that we have massive wealth inequality and continued racial inequality. But we kind of pat ourselves on the back and say we’re an egalitarian society in other ways. We’ve given equal legal status to gender, sexuality, and race. And so we kind of think we’ve addressed many of the issues. But when you go out in the country, you realize that we’re massively unequal, and we’re unequal beyond economics. We’re unequal in terms of the way we live, how we choose to live, unequal in our valuation framework, what we view as moral, what we view as right and wrong, what we view as the goals. And beyond the obvious racial differences, which are huge—I spent as much time in poor minority neighborhoods as I did in poor white working class neighborhoods—the most salient division I see beyond race is education.

NR: Yes, you’ve described this framework for thinking about educational inequality, what you call the “front row kids” versus the “back row kids.” The kids who did well in school and advanced to the top of the economic ranks, and the kids who were sort of left behind, and the differences that creates in their worldview. Could you talk a little bit about that framework and what that division in worldview really is?

CA: Right, the front row kids and the back row kids. Now within that there are some divisions and complexities obviously. But the most salient thing about it is that it’s not about political party. It’s non-partisan. “Front row kids” means both Jeb Bush and Hillary Clinton. The front row is anybody who comes from an elite school (Princeton, Harvard, the Ivies) or has a postgraduate degree, a Ph.D. They’re mobile, global, and well-educated. Their primary social network is via college and career. That’s how they define themselves, through their job. And within that world intellect is primary. They view the world through a framework of numbers and rational arguments. Faith is irrational, and they see themselves as beyond gender. You can describe this using other frameworks, like “the Acela corridor” types.

On the Democratic side, you can think of the Matt Yglesias types in the media, these kinds of global technocrats, policy wonks. Their framework is: “Give me a problem and I’ll devise a maximally optimal solution using my data.” Most importantly, though, they view their lives as having been better than their parents’, and they think their children’s lives will be better than their own. And for them, that’s still true.

The front row kids have won. They’re in charge of things. They are the donor class in politics, they’re the analysts and specialists who scream every time someone has a policy difference they disagree with. “You can’t do X, you’re going to cause a global world war.” Or “You can’t get rid of NAFTA,” “you can’t do Brexit.”

NR: What about the “back row kids,” then? What is that segment of society, and what is the difference in its worldview?

CA: It encompasses a lot of types of people, but it’s defined by its difference with the front row. It’s not just the “white working class,” it includes minorities, black kids who are stuck in east Buffalo or central Cleveland or the Bronx in New York. Mostly they don’t have an education beyond a high school degree, and if they do it’s kind of cobbled together through trade schools and community colleges and smaller state schools. Their primary social network is via institutions beyond work, such as family. And their community is defined geographically, meaning they generally don’t leave where they grew up. They might leave for 5-6 years to go to the military, take jobs that bring them to Alaska for a few years, but they’ll come back.

All photos © Chris Arnade 2017.

And they have different kinds of worldviews and values. They find meaning and morality through faith, which is also a form of community. And if you read the work of [Harvard sociologist] Michèle Lamont, she writes about the ethos of the decency of hard work. It’s the idea that you don’t necessarily use your brain to advance, you use your strength and you use your commitment. You’re going to play by the rules, you’re going to break a few rocks, you’re going to work hard. It’s also, and here’s where I’ll sweep a lot under the rug, a kind of traditional view of race and gender.

This group of people views their life as worse than their parents’, and they think their children’s lives will be worse than theirs. And that’s rational, from their perspective. After all, they’ve lost. Their kind of worldview has been devalued, because it’s the front row kids that have been in charge: the globalized, rational meritocracy versus the more traditional concepts of morality.

NR: You mention rationality. One of the things that seems to puzzle elites as they try to understand these other parts of society is that they feel the grievances there are genuinely irrational. From their perspective, free trade has been good for everybody, it’s made everybody better off than the alternative. And so they don’t understand these kinds of populist backlashes in the form of the support for Trump (or Bernie Sanders), because they feel like the rage and the desire to destroy the elite is a failure to recognize their own self-interest. After all, why would you vote for someone whose economic policies are irrational, or who, like Trump, might destroy the universe? It just doesn’t make sense. They don’t know why people hate experts, since experts have expertise, and expertise is good!

CA: Well, let me approach it this way. I think that when you talk about any group’s failings as being atavistic, because of laziness, because of weakness, because of some other failing, you’re doing it wrong as a progressive. So when we progressives look at poor minorities and, from a sociological perspective, the frustrations and deviances that are there, and when conservatives say “Hey, there’s more crime in black neighborhoods because they’re more violent” or “There’s higher unemployment because they’re lazier,” we liberals rightly push back. We say “Whoah, let’s look at the structural issues here. Let’s look at the structural racism that denies them access to jobs. Let’s look at the structural inequalities in the educational system which provide a harder route for them to leave.”

And I’d say you have to do that for all groups, instead of dismissing them as irrational. And that includes the white working class. You have to look at the context of what they’re facing. So from their perspective, knocking over the system probably makes sense because their worldview is being devalued. It’s being devalued monthly, has been devalued for 25 years.

Now, some of that devaluation I agree with; I believe the idea that you should get supremacy from being white and male should be devalued. But regardless of what you disagree with, that devaluation is happening. And they’re also being devalued economically. And then, even further, their whole worldview, their sense of place and meaning, is being eroded.

So let’s talk about NAFTA, you alluded to NAFTA and free trade. Mathematically it works, because the winners win more than the losers lose. So on a net basis, you say: “Hey look! The data says everybody wins.” There are three fundamental problems with that. One is that winners never share with the losers, that just doesn’t happen. Secondly, what you’re measuring is a very narrow framework of what’s valuable; you’re making the assumption that everybody wants more stuff, having more stuff is what meaning’s about. But the back row finds meaning through their connections, their community, through their structure. When they lose, they’ve lost everything. When the factories go, the town and community fall apart. Their churches hollow out. Their families start facing problems with drugs. So when your sense of meaning and place and valuation comes from your community, and your community gets eroded, that’s it. Game over.

NR: And this is something quite real, it’s not an illusion, it’s not just on paper. You’ve traveled all over, and there really are communities like that, that have just been hollowed out. And you’ve extensively covered the drug epidemic.

CA: I didn’t get into this because I wanted to write about politics. I got into this because I was writing about drugs. And I always kind of glibly say that wherever I went to find drugs, I found hope leaving. And where I found hope leaving I saw Trump entering, if it was a white community. Drugs don’t just go into a place because people are lazy; drugs go into a place because drugs work and help. They’re a get-meaning-quick scheme. So is fascism, so is populism. Both these things give a sense of meaning. People use drugs because they think their life is stuck. It’s a form of suicide, and for them, it’s a way of finding some relief from something that seems like it’s not working. That they’re humiliated and devalued, and they want to find a way to fight back against that. And drugs are just one way to do that, with another way being fascism and populism.

NR: So the rise of Trump is definitely some kind of response to despair and hopelessness, then.

CA: Oh, hell yeah. But I would go even further. First, just because I say I’m not surprised this happened, doesn’t mean I’m justifying it. But what I’m saying is: if you want to put a recipe together to create populist fascist white identity politics, we’ve done it over the past 20-30 years. We’ve created a system that’s immensely unequal, created a ruling class, which is educated and uses their education to elevate themselves and demean anybody else. And we’ve rendered it not simply economic, but cultural as well. These divisions are massive. You can blindfold me and put me in any town in the United States and I can tell you within five minutes if it has a college in it or not.

There are these marches across the country that are taking place against Trump. And they’re great. I approve. I don’t like Trump. But there’s a meme that’s going around now that says: “Look, it’s all across America. It’s even happening in Texas! And Arkansas!” But it’s happening on a goddamn college campus in Texas and Arkansas. I spent a week and a half in two towns, Kalamazoo and Battle Creek, Michigan, separated by 35 miles. One has a college, one doesn’t. Which one do you think voted for Trump? First time they ever voted for a Republican.


To go back to the question of the rationale for being “irrational”: you have to put people, and the way they think, in context. When people are faced with constraints, or when they view the world as having a different goal from their own, from their perspective they make the right choices. So in my mind, the people voting for Trump felt like they had limited options. They’re backed into a corner, and they’re looking at a system that they feel is devaluing them every year, so they’re just going to take a hammer and break it.

NR: Which is actually a kind of rational thing to do in that situation, given the set of values they hold.

CA: I even put it in mathematical terms for people, because I used to be a Ph.D. in math. I can give you the economic framework for it. If you look at their probability outcomes, their downside is limited, the upside is not limited. So you break the system, you want volatility.

Now you can ask the question, what about the black working class? Why aren’t they doing it? Well, there are some huge differences there. One is that the front row kids have made a very valiant attempt to elevate minority communities, and that’s great. I applaud that. So blacks, minorities know who butters their bread and they say, “Ok, I’m gonna go for that.” But in addition, if you look at this election, one of the things I wish I had written more about: I spent time in black working-class neighborhoods, and I didn’t hear a lot of enthusiasm for Hillary. I heard a lot more distaste for Trump on college campuses than I did in poor black communities. They rendered their frustration not by voting for Trump, but by not voting. Or by a mute cynicism. They’ve been so, so eroded for such a long time that there has been pressure to just kind of throw their hands up, and give up on the political process. The black back row is frustrated, but they’ve been frustrated for 80-100 years.

NR: So there’s a class divide in non-white communities, too, and the front-row/back-row framework isn’t just about the white working class versus a kind of racially diverse elite. And perhaps the difference in expectations makes a difference to the amount of rage there is.

CA: And their lives are getting marginally better. Marginally. If you look at the rate of change, it’s going up from a very low base. In many cases, that’s what matters.

But if I had to kind of get one point across about the elite, it’s this: they speak a different language. They don’t know how different their worldview is. They have no clue. And it took me 3 ½ years to figure it out.


NR: You’ve suggested that that is actually going to prevent them from understanding when Trump is succeeding and failing, because what he does will send different messages to different groups of people.

CA: Yeah. So, for example, right now, this immigration action, from the measure of the front row, has been a disaster. But measured from the other valuation framework, not so much. He’s doing what he said he was going to do. The outrage is not shared everywhere. They like that Trump drives the media and the elites crazy. Trump is a genius at knowing how to find that gap and exploit it.

NR: There’s actually a quote from him where he says something like: “There are two audiences. There’s New York society bullshit, and I don’t care what they think because they’ve always hated me. And then there’s America, and America has always loved Donald Trump.” So that’s what he says.

CA: Think about this: what does he spend his life doing? He spends his life selling cheap meaning to people, people who feel meaningless or humiliated. The biggest buzzword I would use to describe what I’ve found in Trump country is “humiliation.” And a desire for pride.

NR: You wrote a piece suggesting that “respect” was the big thing that they all cited as wanting.

CA: At our core, everybody wants to feel valued as a part of something larger. And right now the front row has that. At least up until this election, they had that. They generally can look at their lives and say: “I’m an adjunct professor of Greek History at Bumblefuck University…” Uh, don’t use Bumblefuck.

NR: We can change it.

CA: At Cornell. Anyway, they have a source of pride. But that person has a lot more in common with a bond trader than a truck driver.

NR: Liberal professors definitely don’t think they have more in common with bond traders…

CA: Well, that’s my whole frustration. That was the revelation I had over the last 2½ years. You have to view it from a framework of valuation and morality. And also culture, it’s not about economics. You have to use the old framework of is something banal or sacred? Is it profane or is it sacred?

I often use my favorite example, which is McDonald’s. I grew up in a white working-class town, so for me, it’s kind of rediscovering what I already knew. But McDonald’s, which is viewed with contempt, is actually a center of community, it’s where people gather. McDonald’s is not a joke.

And actually, I can link this back to Trump and explain how he exploits this. Remember when he sent his VP to eat at Chili’s in Times Square? The front row kids went ballistic. Fast food is profane, it’s low culture, it’s banal. It’s without meaning. And they went insane. But viewed from the back row’s perspective, McDonald’s and Chili’s and Applebee’s and Wal-Marts are a central part of the community.


NR: I seem to remember there was a moment during the campaign when Trump said something like “Oh, Melania is a great cook, she makes the most wonderful meatloaf.” And then people said “That’s not being a chef! Anyone can make that.”

CA: He does that intentionally. Because he knows getting the front row to scream will cause them to do what they do when they get mad. They’ll use scorn and derision. They’ll mock. Because that’s what you do when you’re an educated person. To engage with someone, to even bother to argue with them is beneath you. So they mock. Look at Jon Stewart. Look at all the fucking Comedy Central people. You mock the opponent because to engage with them is beneath you. Now when you’re at the bottom, in the back row, your form of engagement is anger, is bitterness, is violence. Because the people above you refuse to engage, what are you going to do?

NR: Well, if you’re not mocking them you’re fact-checking them. That’s the other weapon.

CA: Right, because that’s your valuation framework. Let me give you another example. I was a banker. I liked TARP. For however many fucking years of my life, I supported TARP. I supported all the goddamn neoliberal acronyms: NAFTA, TARP, TPP, all of it. So I can have an argument with a macro person. I go into town to McDonald’s, because I hang out in churches and McDonald’s when I go into town. So if I go in there and I say, “Well, TARP will help,” they’ll say, “Yeah, but why are you giving 20 billion dollars to Wall Street?” And I can say, “Well, actually, the money was used to buy assets, and the assets increased in value, and then we got paid back.” And they’d say: “Well, what the fuck? Look at that factory over there: that’s been kind of sitting there.” And you look out the window and there’s a factory that’s all rusted and boarded up. “That used to employ lots of people. Where was our bailout?” And you have those conversations 30 times and you say: “Maybe I should stop saying ‘Well, actually.’ Maybe I should listen.” It’s always a “Well, actually.” And these are clever arguments, but ultimately they just benefit you.

NR: That’s how I feel about a lot of these arguments for why things like the TPP benefit people in the statistical aggregate. Because even if that’s the case, you’re still not really granting people their humanity, because you’re treating them as numbers on a balance sheet, and you’re the one who is in charge of moving the numbers around and doing what’s best for them, and you don’t care if they understand, they’re just supposed to be grateful. 

CA: Again, you’re judging things within a framework that benefits you, a data framework. This mentality says: “We want data geeks. We’re rational people, so we want to do two things: We want to maximize GDP, and we want to do it efficiently.” That’s the neoliberal mantra, which is Larry Summers, Robert Rubin, Bill Clinton. And when you take that worldview, and you take that framework, the natural thing to do is to hand that power to businesses, to deregulate, because that’s how you can maximize GDP and be most efficient. Let’s give industry whatever it wants. And you maximize GDP but you steamroll everything in the process, forgetting about the consequences.  Forgetting that that may not be what everybody wants. People don’t just necessarily want uber-efficiency and more stuff. They might think meaning comes from having a community, having a network. Being valued, not just having 5 iPods, but having one iPod and four friends!


NR: I saw something similar in the way some Democrats were frustrated that people didn’t appreciate Obamacare enough. “You’ve all been made better off, I don’t see why you’re upset.” But if it’s complicated to use, and it’s policy being made from afar, and people aren’t being engaged in politics or included, they can get better off in the narrow statistical aggregate, and still not appreciate it, for a very rational reason.

CA: One thing elites don’t get about the working class—and there are differences, but in the aggregate—is that they don’t want handouts from above. They would much rather have good jobs than handouts. And both conservatives and liberals have misused this notion. But it’s true that people want things that give them a role, that respect them. Obamacare is complicated. It did get a bad rap, because this tribal division in the U.S. means things can get knocked just because they have the wrong label attached. But I’m on Obamacare, and it’s a nightmare to use. I can’t tell you how much I just want to kill myself every time I have to engage with it. It’s not easy to use.

NR: I think about the difference between the way that policies look on paper, versus the way that people actually experience them. One of the major problems with a kind of technocratic attitude is that it’s not sympathetic to the real-life frustrations that people have, because these are often things that are never going to show up in the numbers. So unemployment rates might be going down, and that’s great, but the kinds of jobs available might be qualitatively worse.

Anyway, your writings are not particularly hopeful about the prospects for the divide. And post-election, you don’t seem to have much hope that the media is going to help. Their realization seems to have been “Oh, we should have visited more parts of the country,” but there’s not really a change in how well they understand people different from them, just a sort of recognition that there is another America and it’s powerful and angry. And so you don’t think the front row has much hope.

CA: Nope, not much, and also, just to make this clear, I don’t have much hope the back row is going to understand the front row either. It’s a two-way street. I happen to believe the front row is in power so there’s more of an obligation for them to understand the back row. Although currently, the back row has gained power for a short period here.

NR: Well, they’ve sort of gained power. They elected Trump, but Trump isn’t exactly “back row.” I mean, elite Democrats are furious. But all the people that Trump appoints, and all the people that are going to be running the country, they’re not necessarily people from the angry working class.

CA: I do think he is going to burn the very people that voted for him, not so much because he doesn’t have intentions of working for them as because he’s just incompetent himself. But I also disagree because, despite the people that he has around him, I think his overall arc is towards his supporters’ valuation framework more than it is towards the front row valuation framework. I just think he’s personally corrupt, and he’s incompetent, and he’ll get taken advantage of by the people around him.

NR: Also he doesn’t actually care about people.

CA: Oh no, he doesn’t. I mean, this whole thing is just another scam. He’s been doing that all his life. But he’s certainly not helping the front row with his policies, and he has no intention of doing that. He may help his buddies, some front row people might be smart enough to glom onto him and sell out and be corrupt. But overall 8 years of a Trump administration is not going to do the front row well. It will do the back row better than the front row, I would speculate, if he wasn’t incompetent.

But I think ultimately the division we have is close to unsolvable. There’s no policy that’s going to address it, because I think it is so social and cultural. It requires almost a national kumbaya, the front row going back and living in different communities and opening their mind, and it requires the back row to drop a little bit of their anger. I just don’t see that happening in either case.

NR: Well, we’ll leave it on that somewhat hopeless, discouraging note.

CA: I hope that wasn’t too negative.

The Clinton Comedy of Errors

What can we learn from the disaster depicted in “Shattered”?

It would be very nice never to think about the 2016 election again. It was miserable, and it is over. What is done will never be undone, and there is no sense “re-litigating” yesterday’s arguments. We should, to use a popular formulation, look forward not backward. Instead of dwelling on which persons may have made what catastrophic mistakes, opponents of Trump should be spending their time thinking about what to do next and how to do it.

Yet reexamining the forces that led to Donald Trump’s defeat of Hillary Clinton is essential for understanding how to prevent a similar result from occurring again. This means that the most useful examinations of the 2016 race are those conducted with an eye toward drawing lessons. Divvying up responsibility is not a worthwhile exercise for its own sake; it only needs to be done insofar as figuring out causes is a way of preventing future effects.

It’s important to be careful, then, in looking back on Hillary Clinton’s unsuccessful campaign for the presidency. We can ask whose fault Clinton’s loss was, and assign percentages of blameworthiness to James Comey’s letter, Bernie Sanders’ criticisms, Vladimir Putin’s machinations, Bill Clinton’s libido, and Hillary’s own ineptitude. But that’s only useful to the extent that it’s useful, and a better question than “Whose fault was this debacle?” might be “What should we gather from this if 2020 is to be different?” Those two questions overlap (if you know whose fault it is, you can try to make sure they stay in the woods and out of public life). But the point is that for anyone who has progressive political values, the exercise of examining 2016 should be constructive rather than academic.

This need to avoid gratuitously flogging dead horses for one’s own satisfaction is important to keep in mind while reading Jonathan Allen and Amie Parnes’ new book, Shattered: Inside Hillary Clinton’s Doomed Campaign (Crown, $28.00). Allen and Parnes had access to numerous Clinton insiders, and their book is full of sumptuous campaign gossip. But while Clinton-haters will be tempted to relish the book’s tales of hubris and incompetence, there’s no point conducting a needless exercise in schadenfreude. For progressives, the issue is whether the story told in Shattered can yield any useful lessons. And it can.

Shattered depicts a calamity of a campaign. While on the surface, Hillary Clinton’s team were far more unified and capable than their counterparts in 2008 had been, behind the scenes there was utter discord. The senior staff engaged in constant backstabbing and intrigue, jockeying for access to the candidate and selectively keeping information from one another. Clinton herself never made it exactly clear who had responsibility for what, meaning that staff were in a constant competition to take control. Worse, Clinton was so sealed off from her own campaign that many senior team members had only met her briefly, and interacted with her only when she held conference calls to berate them for their failures. Allen and Parnes call the situation “an unholy mess, fraught with tangled lines of authority, petty jealousies, distorted priorities, and no sense of general purpose,” in which “no one was in charge.”


Clinton campaign manager Robby Mook comes across very badly indeed, and appears to have been the wrong man for the job. First, he had a Machiavellian streak (the authors call him a “professional political assassin” bent on “neutralizing” competitors), which he seems to have directed less towards defeating Donald Trump than towards squelching his power rivals within the campaign team by selectively depriving them of knowledge.

Second, and worse, he appears to have been an idiot. Mook was a numbers nerd obsessed with data analytics, but had such blind confidence in his statistical calculations that he followed along when they told him to send Hillary to spend the last stretch of the campaign in Arizona rather than Wisconsin. Every single decision he made was based on the elaborate analyses of campaign stats guru Elan Kriegel (a man whose name should live in infamy), from which Mook concluded that it was a “waste of time and energy” to try to persuade undecided voters or to go to rural areas. Mook ignored pleas from state-level organizers for adequate organizing and advertising budgets, and rebuffed everyone who dared to question the algorithm’s superior wisdom. To Mook, they were fools who didn’t understand the superiority of cold hard math to fuzzy intuition. Thus every time Bill Clinton warned that the campaign was dangerously losing support among the white working class, and “underestimating the significance of Brexit,” Mook responded that “the data run counter to your anecdotes.” After the election, asked to explain what the hell had happened, Mook blamed the data. (I can’t help but be reminded of Michael Scott obediently following his GPS as it directs him to drive into a lake, because “the machine knows.”)

Numerous tactical decisions were simply inscrutable. A planned rally in Green Bay, which would have paired Clinton with Barack Obama, was canceled after the Orlando nightclub shooting and never rescheduled. Mook “declined to use pollsters to track voter preferences in the final three weeks of the campaign” even though some advisors warned him that it was an “unwise decision because it robbed him of another data point against which to check the analytics.” Bernie Sanders recorded a TV spot promoting Clinton, but the campaign declined to air it, which some insiders thought was a “real head-scratcher” given the difficulty Clinton was having in swaying former Bernie voters. A campaign staffer confirms that “our failure to reach out to white voters, like literally from the New Hampshire primary on… never changed.” Mook was so confident they would win, however, that he had already been considering how to get himself appointed to head the DNC afterwards. The arrogance was infectious: phone-banking volunteers, who realized there was little enthusiasm for Clinton among the electorate, were puzzled that “campaign staffers were so confident” and “acting like they had this in the bag.”

But it would be a mistake to pin too much blame on Robby Mook as an individual. Allen and Parnes say that Clinton herself was an adherent of the “facts over feelings” dogma, and was so “driven by math… that she couldn’t, or wouldn’t, see that she was doing nothing to inspire the poor, rural, and working-class white voters.” Clinton favored evidence-based decision-making, but often to the point of absurdity. Everything she said or did was focus grouped, calculated, and reworked by committee in order to be mathematically optimal. A vast speechwriting bureaucracy watered down every public utterance to the point of total vapidity (they even “deliberated over the content of tweets for hours on end,” an especially galling revelation when one considers the quality of the resulting tweets). Yet Clinton was somehow puzzled as to why the public found her robotic and inauthentic! Her team even proudly told the New York Times of their brand-new plan to make Hillary appear warmer and more likable, and were then somehow surprised to discover that the idea of an “authenticity strategy” was considered hilariously oxymoronic.

In writing about Clinton’s selection of Tim Kaine as Vice President, I wrote that he was so bland that he seemed to have been selected by algorithm. This turns out to be almost exactly what happened; Clinton didn’t know or care much about Kaine, but he was simply the end result of a formulaic process of elimination. Nobody had any notion that he would energize voters; he was merely logically inevitable, having met the maximum number of designated criteria. (Note that if Clinton had picked Bernie Sanders she would have won the election, but this was never even seriously considered.)


Many of Hillary Clinton’s supporters have been resentful over the attention paid to the infamous “email scandal,” suggesting that Clinton was unfairly damaged in the press over something trivial. But by Shattered’s account, Clinton’s own poor management of the situation helped drag the story out. Even Barack Obama was exasperated with Clinton. He “couldn’t understand what possessed Hillary to set up the private email server” in the first place, and then thought “her handling of the scandal—obfuscate, deny, and evade—amounted to political malpractice.” Clinton did make factually untrue statements to the public about whether she sent or received classified documents on the private email server and her campaign tried to mislead the press into treating the FBI’s investigation as less serious than it actually was. (The Clinton campaign falsely insisted that the investigation was a mere “security review” rather than a criminal investigation, and even got the New York Times to partially go along.) She spent months refusing to apologize as donors and allies “furiously” pressured her to engage in some public contrition to defuse the issue, and Clinton ally Neera Tanden wrote in an email that “her inability to just do a national interview and communicate genuine feelings of remorse and regret is now, I fear, becoming a character problem.” Sometimes Hillary Clinton’s public relations instincts were almost unbelievably poor: when a reporter asked her if she had wiped her email server, Clinton replied “What, like with a cloth or something?” This did not exactly scream forthrightness and seriousness.

Clinton did know that she was clueless about the psychology of the American voter, at one point admitting “I don’t understand what’s happening with the country. I can’t get my arms around it” and knew she “couldn’t grasp the sentiment of the electorate.” But throughout the process, she disregarded the advice of those who cautioned her about getting on the wrong side of the prevailing populist tides. She had “ignored warnings from friends not to give the paid speeches” to Goldman Sachs that would ultimately create months of bad press when she pointlessly refused to release the (relatively benign) transcripts. She insisted that one speech should retain a “sappy” reference to the $2400-a-ticket Broadway musical Hamilton, despite several suggestions from speechwriters that it “connected with her liberal donors and cosmopolitan millennial aides but perhaps not the rest of the country.” (Note that she did this even after Current Affairs had carefully explained how the idea of a nationwide mania for Hamilton is a myth that exists only among political and cultural elites.) And she spent August hanging out in the Hamptons with wealthy donors and celebrities, attending a swanky fundraiser with Calvin Klein, Jimmy Buffett, Jon Bon Jovi, and Paul McCartney, and joining them for a celebrity sing-along of “Hey Jude.” (The New York Times ran a story explaining to voters why Hillary had disappeared from the campaign trail entitled “Where Has Hillary Clinton Been? Ask the Ultra-Rich…”) To the parts of the country seething with resentment of coastal elites, this was probably the worst possible way for Clinton to pass the summer months.

By far the largest problem with Clinton’s campaign, however, and the one that recurs consistently throughout Allen and Parnes’ narrative, is the team’s total inability to craft a compelling message for the campaign. “There wasn’t a real clear sense of why she was in” the race to begin with, and she was consistently “unable to prove to many voters that she was running for the presidency because she had a vision for the country rather than visions of power.” Despite Clinton’s vow to learn from the mistakes of her loss against Obama, “no one had figured out how to make the campaign about something bigger than Hillary.” A speechwriter assigned to draft an address laying out the reasons for Hillary’s candidacy found the task nearly impossible; Clinton simply couldn’t provide a good reason why she was running. She literally did feel as if it was simply “her turn,” and campaign staffers even floated the possibility of using “it’s her turn” as a public justification for her candidacy. Just as many people suspected, Clinton didn’t run because she had a real idea of how she wanted to change the country (after all, “America Is Already Great”), but simply felt as if she was the most qualified and deserving person for the job. Pressured to come up with a slogan to capture the essence of Clinton’s run, the team finally settled on “Breaking Barriers,” which the campaign staff all hated and the public instantly forgot.

The one area in which Clinton appears to have truly shined is debate preparation. Allen and Parnes reveal that she obsessively prepared for her televised encounters with Donald Trump, conducting multiple intensive drills and meticulously memorizing policy details. Staff recalled that “she needed to theorize everything to the ground.” Her advisor Philippe Reines went to extraordinary lengths to perfect his Trump impersonation, even considering dyeing himself orange. Clinton’s practice rounds paid off. She was widely seen as having mashed Trump into dust; her carefully polished and intelligent answers presented a dignified contrast to Trump’s sniffing and blustering. (It’s amusing to think of how much effort Trump probably put into his own preparation, having given us possibly the most revealing example in U.S. history of what “just going ahead and winging it” in a nationally-televised presidential debate would look like.)


But even Clinton’s excessive attention to the debates reveals one of the campaign’s core weaknesses. Clinton comes across as subscribing to what Luke Savage classifies as the “West Wing view” of political power, namely that success in politics is produced by having the best argument in favor of your position. On this view, if you win the debates, you are supposed to become president. Thus Kennedy beat Nixon by beating him in a debate, and Bill Clinton beat George H.W. Bush the same way. It’s a perspective that seems to have infected both the Obama administration and the Clinton campaign, each of which appears to have been blindsided by the fact that their right-wing opponents could not be defeated by polite discourse and appeals to reason. As Savage points out, this was the mistake made by Ezra Klein, who wrote that Clinton’s three debate performances “left the Trump campaign in ruins,” conflating “the debate” with “the campaign” and contributing to the media consensus that because Hillary had proven Trump to be wrong and unqualified, she was therefore somehow likely to win. In reality, the debates are theater and do not matter. (Or if they do, it is not because of the quality of their arguments but the quality of their persuasive power.) A similar critique can be made of late-night political comedy; it may be satisfying when John Oliver “eviscerates” Donald Trump, but it can also leave us with the false sense that Trump has somehow been “taken down” in some actual meaningful sense, even though it’s perfectly possible for someone’s power to grow even as they are rhetorically humiliated night after night.

This is the sort of lesson from Shattered that goes well beyond Clinton. And in analyzing the book’s account, it’s important to distinguish between those failings that are unique to Clinton and her 2016 political team and those that represent wider tendencies in the Democratic Party. The Clinton-specific traits are less relevant, since she is gone from the political world (unless, God forbid, she actually does run for Mayor of New York or Chelsea Clinton takes a break from occupying a string of vague sinecures to pursue a congressional seat). But some things are deep-rooted and will come back again and again until Democrats wake up and fix them.

The defects that are Clinton-specific (or, at least, not fundamental to contemporary Democratic politics) are managerial incompetence and Nixonian levels of cronyism and paranoia. Clinton was obsessed with loyalty, “prizing [it] most among human traits” (above, e.g., virtue). She had downloaded and rooted through the emails of all her 2008 campaign staff to determine who had screwed her, and tried to sniff out “acts of betrayal.” She even assigned “loyalty scores” to various members of Congress, “from one for the most loyal to seven for those who had committed the most egregious acts of treachery.” She and Bill had worked to unseat those who made the list of traitors. Even among trusted staff, secrets were kept closely guarded. When Hillary Clinton became sick with pneumonia, important campaign officials were kept in the dark, causing them to send mixed messages to the press and look as if they were hiding something. After the 2008 campaign, Clinton had wondered what had created the campaign’s destructive atmosphere of suspicion and mutual hostility, and she decided to reset in 2016 with a whole new group of people. This time it happened again, yet she still found herself perplexed as to what or who the common denominator could be. (Another theme of Shattered is that the Clintons never, ever blame themselves for anything that goes wrong.)

On the incompetence front, as other reviewers have noted, much of Shattered reads like a discarded story outline from Veep. In one of the book’s more amusing moments, a Clinton staffer mishears a request to book a major TV interview with “Bianna.” The staffer hears “Brianna” instead, and books the interview with the tough-minded Brianna Keilar of CNN, rather than the desired Bianna Golodryga of Yahoo! News, who is married to a Clinton advisor and thus expected to be a soft touch. The resulting encounter did not go well. Actually, while this anecdote has been widely commented on, it’s a little unfair to read too much into it. All politics is Veep-like to one extent or another, and misunderstandings and bunglings are the Washington way. The true case for incompetence comes from Clinton’s inability to manage a campaign team or plot an electoral strategy.

These particular aspects of the Clinton campaign can theoretically be corrected for in the future, without changing the party much. Barack Obama demonstrated that Wall Street-friendly Democratic centrism can be politically deft and free of Nixonism. It can even be somewhat inspiring, despite ultimately being vacuous. But some of the tendencies displayed in Shattered are inevitable, and bound to recur without serious structural reforms to the Democratic Party.


First, the Clinton campaign’s inability to forge a coherent vision for the country was no accident. Goodness knows they tried; dozens of smart people sat around in rooms for months trying to figure out why Hillary Clinton was running and what she wanted to do. But it was an unanswerable question, because the answer is that she didn’t really want to do anything and wasn’t really running for any good reason. She couldn’t give them a good answer, so obviously they couldn’t give her one. And that’s honestly not because Hillary Clinton is a uniquely egotistical and myopic person. Instead, she’s simply one of many adherents to a kind of “managerial” liberalism, which sees its aspirations for governance less in terms of some clear vision for how the world ought to be, and more as an enterprise in which small groups of smart, qualified, decent-but-pragmatic people should be appointed to preside over the status quo, perhaps tweaking here and there as they see fit. This philosophy means politics is not a contest to enact serious and principled moral commitments, but is little more than a resume-measuring contest. The Democratic Party doesn’t stand for anything in particular, other than the fact that it isn’t vulgar, irrational, racist, and unqualified like Donald Trump.

Politics thereby becomes hollow, drained of its center, with a lot of expertise but without an underlying set of core values. The Clinton campaign puzzled over the fact that they had “laid out a million detailed policies” without the public being able to remember a single one of them. But that shouldn’t have been surprising; if you’re not motivated by a coherent set of principles, then your ideas won’t be coherent either. One reason Republicans are highly effective at messaging is that their worldview holds together and is intelligible. Freedom is good, markets are freedom, therefore markets are good and government is bad. Once you know what you stand for and why, it’s easy to deliver a clear message, and even Herman Cain, with his colossally stupid “9-9-9” tax plan, produced a more memorable policy proposal than anything to come from the squabbling of Clinton’s Authenticity Committees. (And it would be a mistake to think that Republicans are unfairly advantaged by the fact that dumb, oversimplified policies are the easily communicated ones. The Civil Rights movement paired demands for complex legislation with elementary appeals to morality, and Martin Luther King’s speeches are things of both great intellectual subtlety and astonishing clarity and cogency. Heck, the original Martin Luther also managed to get his theses across, even though there were 95 of them.)

At no point in Shattered does anyone in the Clinton campaign display a sign of caring about anything beyond the narrow goal of getting elected. The decision of whether to promise criminal justice reform is not taken based on whether it’s morally reprehensible for a country to keep multiple millions of its own people in cages, but on a calculus of whether it would make African American millennials marginally more likely to turn up to the polls. Clinton did not emphasize issues of gender and race in the campaign because she cared about them the most (after all, in 2008, she had been equally happy to cast her appeal explicitly toward white people instead). Rather, Robby Mook’s algorithm had concluded that each dollar spent on encouraging black and Hispanic Democrats to vote was more likely to yield a return than a dollar spent trying to persuade an undecided working-class white voter.

This is what can happen when you stay in politics too long. You get in because you want to do some good. Then, for the sake of expediency, you make a moral compromise here and there. Yet if you don’t have a clear sense of what you’re ultimately firmly committed to, sooner or later you’ll just be doing whatever it takes in order to reach higher office. You begin by rationalizing that the ends justify the means. But if you’re not careful, things will soon become all means and no ends. Politics will become about itself rather than about whatever it is you started off trying to do. Of course, political ideas must be pragmatic and grounded. But Clintonian politics takes this to its amoral extreme, never taking a stand for reasons of conviction rather than because it polls well. This is what gives you things like Clinton’s infamously mealy-mouthed public statement on the Dakota Access Pipeline, which pleased neither side. Ezra Klein euphemistically refers to this as Hillary Clinton’s desire to listen to and incorporate all people’s perspectives, but it’s actually just a cowardly refusal to stand for anything. (Bill Clinton is even more unprincipled in this respect; see Superpredator: Bill Clinton’s Use and Abuse of Black America.)

There’s something else missing from the world depicted in Shattered: democracy. That is, for the Clinton campaign, people are voters. They are there to elect you, and they mostly exist as boxes on a spreadsheet. Outside the campaign cycle, they are nonentities. Inside the campaign cycle, you only talk to them if you have to. Mook wasn’t trying to engage people in a larger political project; he was trying to coax as many as possible into dragging themselves to the polls and filling in a bubble for Hillary. There was no sense of trying to get people to join in; on-the-ground organizing was only done to the degree absolutely necessary, with television advertising frequently preferred. But if the Democratic Party is actually going to take back power, it can’t simply consist of a small team of elite campaign operatives and an electorate whose only function is to vote every two to four years. Ordinary people have to be encouraged to participate in the political life of their communities, and the fact that they haven’t is one reason that Democratic representation in state governments has been plummeting.

Perhaps the things the Democrats need at the moment can be summed up as follows:

  1. Vision
  2. Authenticity
  3. Strategy

In other words: What do you care about? Are you the sort of person people should trust to do something about it? And do you have a plan for how to do it? Clinton’s answers to these three questions, respectively, were “Nothing,” “No,” and “Yes.” She had a plan, but it wasn’t really a plan for anything, because neither she nor anybody on her team actually had an underlying animating vision of what they were trying to help the world become. Democrats would do well to think about the Vision-Authenticity-Strategy formulation, because unless they can convince the public that they possess these things, it’s hard to see how the Republican dominance of government can be reversed. (Further elaboration on how to introduce these elements into progressive politics can be found in the final chapter of Trump: Anatomy of a Monstrosity.)

Now, let me just deal briefly with what I’m sure will be the principal objection to the various above critiques and suggestions: Hillary Clinton’s loss was not the fault of Clinton herself or her campaign team or the Democratic Party. Instead, she was subject to external sabotage from James Comey and the Russians. Democrats should not be looking inward and examining themselves but outward at the unfair interventions that turned a popular vote victory into an Electoral College loss. This appears to have been Clinton’s own perspective on the reasons for her defeat; in conversations after the election, according to Allen and Parnes, she “kept pointing her finger at Comey and Russia.”

But ultimately, there’s a simple response to this objection: Very well. You’re completely correct. Also it doesn’t matter.

First, let’s be clear on what we mean by identifying something that “caused” the result. Because the election was extremely close, and well under 100,000 people would have had to change their minds for the result to be different, hundreds and hundreds of factors can be identified as “but for” causes of the result, i.e. but for the existence of Factor X, Clinton would have won. So, say we narrow those hundreds of “but for” causes down to four: the Clinton campaign’s incompetence, the Russian leaking of embarrassing internal documents, obstinate voters who refused to come out for Clinton, and James Comey’s letter. If we assume for the moment that each of these had an equal effect, we can see that in the absence of any one of them, the result would have changed.


That means that the decision of which factor to pick out for blame is subjective. Since Comey’s letter and Clinton’s incompetence are equal causes, in that removing either one would have tipped the result in the other direction, the person who blames Comey and the person who blames Clinton are equally correct. Again, the full accounting would contain hundreds of causes rather than four. But the point is that we have to decide which of these causes to focus our attention on.
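The but-for arithmetic can be made concrete with a toy calculation. Everything below is hypothetical and purely illustrative: the factor names come from the text, but the numbers are invented solely so that each factor is individually decisive.

```python
# Toy illustration of "but for" causation (hypothetical numbers):
# four equal factors, each costing the candidate 1 point, in a race
# she would otherwise win by 3.5 points.
FACTORS = ["campaign incompetence", "Russian leaks",
           "stay-home voters", "Comey letter"]
BASELINE_MARGIN = 3.5  # invented margin if none of the factors existed
EFFECT = 1.0           # invented, equal cost of each factor

def wins(excluded=None):
    """Does the candidate win, with one factor counterfactually removed?"""
    active = [f for f in FACTORS if f != excluded]
    return BASELINE_MARGIN - EFFECT * len(active) > 0

print(wins())  # False: with all four factors present (3.5 - 4.0), she loses
for f in FACTORS:
    # True for every factor: remove any ONE (3.5 - 3.0) and she wins,
    # so each factor is, with equal validity, "the" cause of the loss.
    print(f, wins(excluded=f))
```

On these assumed numbers, every factor is a but-for cause: singling out any one of them for blame is arithmetically defensible, which is exactly why the choice of which to dwell on is a judgment call rather than a finding.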

Thus the statement “The Clinton campaign lost because it lacked vision, authenticity, and strategy” is consistent with the statement “If it wasn’t for James Comey’s letter, Hillary Clinton would have won the election.” But personally, I believe it’s far more important to focus on the causes that you can change in the future. You don’t know what the FBI director will do, and you can’t affect whether he does it or not. What you can do is affect what your side does. So the Democrats cannot determine whether James Comey will choose to give a damning statement implying their candidate is a criminal. But they can determine whether or not to run a candidate who is under FBI investigation in the first place.

Note that even if you think Comey was the major cause of Clinton’s loss, it still might be advisable to turn your attention elsewhere.


If you fix the other things, then even a highly impactful Comey letter won’t tip the election. And correspondingly, even if you prove that Clinton’s own actions were 99% responsible for her loss, a Clinton supporter would be technically correct in identifying Comey as causing the outcome.


In any scenario, it’s probably best to figure out what your party itself can do to address the situation. After all, if we’re really adding up causes, Donald Trump himself is probably the primary one, yet it would be a waste of time to sit around blaming Donald Trump, if it’s also true that you ran a horrible campaign that alienated people.

You can also think certain things acted as precipitating causes without necessarily being at fault. For example, you might think that WikiLeaks was a direct cause of the result, but not think them at fault because it’s their job to post the material they receive. The same goes for the New York Times covering the email story; it might have contributed to the outcome, but you might think this isn’t their fault because they’re journalists and that’s what they do. Likewise James Comey; you might believe he was doing his job as he saw fit. And Bernie Sanders: Clinton may have lost both because she gave speeches to Goldman Sachs and because Bernie Sanders repeatedly criticized her for it, but you might think that one of those things is more justified than the other. There’s a question of which things you can change to improve outcomes, and then there’s a question of which things you should change. In 1992, for example, Bill Clinton realized that Democrats could win more elections if they adopted the Republican platform of slashing welfare and locking up young black men. This did change outcomes. But it was also heinous. And personally, I think that if you’re changing something about the party, you should change “Democrats enriching themselves from Wall Street speeches” rather than “people pointing out that Democrats are enriching themselves from Wall Street speeches.”

Shattered is both tragic and comic. It’s tragic because Donald Trump becomes president at the end. But it’s comic in that it depicts a bunch of egotistical and hyper-confident people arrogantly pursuing an obviously foolish strategy, dismissing every critic as irrational and un-pragmatic, only to completely fall on their faces. There was, Allen and Parnes tell us, “nothing like the aimlessness and dysfunction of Hillary Clinton’s second campaign for the presidency—except maybe those of her first bid for the White House.” And however horrible it may be to have Donald Trump as commander in chief (it is incredibly, deeply horrible and threatens all of human civilization), reading Shattered one cannot help but get a tiny amount of satisfaction from the fact that Mook and Clinton’s cynical and contemptuous attitude toward the American public didn’t actually produce the result that they were certain it would. One wishes they had won, but one is also a tiny bit glad that they lost.

Vision, authenticity, strategy. You need to have a clear sense of what you want to do and why you want to do it. You need to show people that you mean it and believe in it. And you need to have an idea of how to get from here to there. The Clinton campaign had no vision, was inauthentic, and botched its strategy. But that’s not a problem unique to Hillary Clinton, and singling her out for too much criticism is unfair and, yes, sexist (especially because Bill is much worse). This is a party-wide failure, and it will require more than just banishing the Clintons from politics. If the Democrats are to have a future, they must offer something better, more honest, and more inspiring. With Republicans dominating the government, we cannot afford to end up shattered again.

I Don’t Care How Good His Paintings Are, He Still Belongs In Prison

George W. Bush committed an international crime that killed hundreds of thousands of people.

Critics from the New Yorker and the New York Times agree: George W. Bush may have been an inept head of state, but he is a more than capable artist. In his review of Bush’s new book Portraits of Courage: A Commander in Chief’s Tribute to America’s Warriors (Crown, $35.00), New Yorker art critic Peter Schjeldahl says Bush’s paintings are of “astonishingly high” quality, and his “honestly observed” portraits of wounded veterans are “surprisingly likable.” Jonathan Alter, in a review titled “Bush Nostalgia Is Overrated, but His Book of Paintings Is Not,” agrees: Bush is “an evocative and surprisingly adept artist.” Alter says that while he used to think the Iraq War was “the right war with the wrong commander in chief,” he now thinks that it was the “wrong war” but with “the right commander in chief, at least for the noble if narrow purpose of creatively honoring veterans through art.”

Alter and Schjeldahl have roughly the same take on Bush: he is a decent person who made some dreadful mistakes. Schjeldahl says that while Bush “made, or haplessly fronted for, some execrable decisions…hating him took conscious effort.” Alter says that while the Iraq War was a “colossal error” and Bush “has little to show for his dream of democratizing the Middle East,” there is a certain appeal to Bush’s “charming family, warm relationship with the Obamas, and welcome defense of the press,” and his paintings of veterans constitute a “message of love” and a “step toward bridging the civilian-military divide.” Alter and Schjeldahl both see the new book as a form of atonement. Schjeldahl says that with his “never-doubted sincerity and humility,” Bush “obliviously made murderous errors [and] now obliviously atones for them.” Alter says that Bush is “doing penance,” and that the book testifies to “our genuine, bipartisan determination to do it better this time—to support healing in all of its forms.”

This view of Bush as a “likable and sincere man who blundered catastrophically” seems to be increasingly popular among some American liberals. They are horrified by Donald Trump, and Bush is beginning to seem vastly preferable by comparison. If we must have Republicans, let them be Bushes, since Bush at least seems good at heart while Trump is a sexual predator. Jonathan Alter insists he is not becoming nostalgic, but his gauzy tributes to Bush’s “love” and “warmth” fully endorse the idea of Bush’s essential goodness. Now that Bush spends his time painting puppies and soldiers, having mishaps with ponchos and joking about it on Ellen, more and more people may be tempted to wonder why anyone could ever have hated the guy.

Nostalgia takes root easily, because history is easy to forget. But in Bush’s case, the history is easily accessible and extremely well-documented. George W. Bush did not make a simple miscalculation or error. He deliberately perpetrated a war crime, intentionally misleading the public in order to do so, and showed callous indifference to the suffering that would obviously result. His government oversaw a regime of brutal torture and indefinite detention, violating every conceivable standard for the humane treatment of prisoners. And far from trying to “atone,” Bush has consistently misrepresented history, reacting angrily and defensively to those who confront him with the truth. In a just world, he would be painting from a prison cell. And through Alter and Schjeldahl’s effort to impute to Bush a repentance and sensitivity that he does not actually possess, they fabricate history and erase the sufferings of Bush’s victims.

First, it’s important to be clear what Bush actually did. There is a key number missing from both Alter and Schjeldahl’s reviews: 500,000, the number of Iraqi civilians who perished as a result of the U.S. war there. (That’s a conservative estimate, and stops in 2011.) Nearly 200,000 are confirmed to have died violently, blown to pieces by coalition air strikes or suicide bombers, shot by soldiers or insurgents. Others died as a result of the disappearance of medical care, with doctors fleeing the country by the score as their colleagues were killed or abducted. Childhood mortality and infant mortality shot up, as well as malnutrition and starvation, and toxins introduced by American bombardment led to “congenital malformations, sterility, and infertility.” There was mass displacement, by the millions. An entire “generation of orphans” was created, with hundreds of thousands of children losing parents and wandering the streets homeless. The country’s core infrastructure collapsed, and centuries-old cultural institutions were destroyed, with libraries and museums looted, and the university system “decimated” as professors were assassinated. For years and years, suicide bombings became a regular feature of life in Baghdad, and for every violent death, scores more people were left injured or traumatized for life. (Yet in the entire country, there were fewer than 200 social workers and psychiatrists combined to tend to people’s psychological issues.) Parts of the country became a hell on earth; in 2007 the Red Cross said that there were “mothers appealing for someone to pick up the bodies on the street so their children will be spared the horror of looking at them on their way to school.” The amount of death, misery, suffering, and trauma is almost inconceivable.

These were the human consequences of the Iraq War for the country’s population. They generally go unmentioned in the sympathetic reviews of George W. Bush’s artwork. Perhaps that’s because, if we dwell on them, it becomes somewhat harder to appreciate Bush’s impressive use of line, color, and shape. If you begin to think about Iraq as a physical place full of actual people, many of whom have watched their children die in front of them, Bush’s art begins to seem ghoulish and perverse rather than sensitive and accomplished. There is a reason Schjeldahl and Alter do not spend even a moment discussing the war’s consequences for Iraqis. Doing so requires taking stock of an unimaginable series of horrors, one that makes Bush’s colorful brushwork and daytime-TV bantering seem more sickening than endearing.

But perhaps, we might say, it is unfair to linger on the subject of the war’s human toll. All war, after all, is hell. We must base our judgment of Bush’s character not on the ultimate consequences of his decisions, but on the nature of the decisions themselves. After all, Schjeldahl and Alter do not deny that the Iraq War was calamitous, with Alter calling it one of “the greatest disasters in American history,” a “historic folly” with “horrific consequences,” and Schjeldahl using that curious phrase “murderous error.” It’s true that both obscure reality by using vague descriptors like “disaster” rather than acknowledging what the invasion meant for the people on whom it was inflicted. But their point is that Bush meant well, even though he may have accidentally ended up causing the birth of ISIS and plunging the people of Iraq into an unending nightmare.


Viewing Bush as inept rather than malicious means rejecting the view that he “lied us into war.” If we accept Jonathan Alter’s perspective, it was not that Bush told the American people that Iraq had weapons of mass destruction when he knew that it did not. Rather, Bush misjudged the situation, relying too hastily and carelessly on poor intelligence, and planning the war incompetently. The war was a “folly,” a bad idea poorly executed, but not an intentional act of deceit or criminality.

This view is persuasive because it’s partially correct. Bush did not “lie that there were weapons of mass destruction,” and it’s unfortunate that anti-war activists have often suggested that this was the case. Bush claims, quite plausibly, that he believed that Iraq possessed WMDs, and there is no evidence to suggest that he didn’t believe this. That supports the “mistake” view, because a lie is an intentional false statement, and Bush may have believed he was making a true statement, thus being mistaken rather than lying.

But the debate over whether Bush lied about WMDs misstates what the actual lie was. It was not when Bush said “the Iraq regime continues to possess and conceal some of the most lethal weapons ever devised” that he lied to the American people. Rather, it was when he said Iraq posed a “threat” and that by invading it the United States was “assuring its own national security.” Bush could not have reasonably believed that the creaking, isolated Saddam regime posed the kind of threat to the United States that he said it did. WMDs or not, there was nothing credible to suggest this. He therefore lied to the American people, insisting that they were under a threat that they were not actually under. He did so in order to create a pretext for a war he had long been intent on waging.

This is not to say that Bush’s insistence that Saddam Hussein had WMDs was sincere. It may or may not have been. The point is not that Bush knew there weren’t WMDs in Iraq, but that he didn’t care whether there were or not. This is the difference between a lie and bullshit: a lie is saying something you know to be untrue, bullshit is saying something without caring to find out if it’s true. The former highest-ranking CIA officer in Europe told 60 Minutes that the Bush White House intentionally ignored evidence contradicting the idea that Saddam had WMDs. According to the officer, when intelligence was provided that contradicted the WMD story, the White House told the officer that “this isn’t about intel anymore. This is about regime change,” from which he concluded that “the war in Iraq was coming and they were looking for intelligence to fit into the policy.” It’s not, then, that Bush knew there were no WMDs. It’s that he kept himself from finding out whether there were WMDs, because he was determined to go to war.

The idea that Saddam posed a threat to the United States was laughable from the start. The WMDs that he supposedly possessed were not nuclear weapons, but chemical and biological ones. WMD is a catch-all category, but the distinction is important; mustard gas is horrific, but it is not a “suitcase nuke.” Bashar al-Assad, for example, possesses chemical weapons, but does not pose a threat to the U.S. mainland. (To Syrians, yes. To New Yorkers, no.) In fact, according to former Saddam aide Tariq Aziz, “Saddam did not consider the United States a natural adversary, as he did Iran and Israel, and he hoped that Iraq might again enjoy improved relations with the United States.” Furthermore, by the time of the U.S. invasion, Saddam “had turned over the day-to-day running of the Iraqi government to his aides and was spending most of his time writing a novel.” There was no credible reason to believe, even if Saddam possessed certain categories of weapons prohibited by international treaty, that he was an active threat to the people of the United States. Bush’s pre-war speeches used terrifying rhetoric to leap from the premise that Saddam was a monstrous dictator to the conclusion that Americans needed to be scared. That was simple deceit.

In fact, Bush had long been committed to removing Saddam, and was searching for a plausible justification. Just “hours after the 9/11 attacks,” Donald Rumsfeld and the Vice Chairman of the Joint Chiefs of Staff were pondering whether they could “hit Saddam at the same time” as Osama bin Laden as part of a strategy to “move swiftly, go massive.” In November of 2001, Rumsfeld and Tommy Franks began plotting the “decapitation” of the Iraqi government, pondering various pretexts for “how [to] start” the war. Possibilities included “US discovers Saddam connection to Sept. 11 attack or to anthrax attacks?” and “Dispute over WMD inspections?” Worried that they wouldn’t find any hard evidence against Saddam, Bush even thought of painting a reconnaissance aircraft in U.N. colors and flying it over Iraqi airspace, goading Saddam into shooting it down and thereby justifying a war. Bush “made it clear” to Tony Blair that “the U.S. intended to invade… even if UN inspectors found no evidence of a banned Iraqi weapons program.”

Thus Bush’s lie was not that there were weapons of mass destruction. The lie was that the war was about weapons of mass destruction. The war was about removing Saddam Hussein from power, and asserting American dominance in the Middle East and the world. Yes, that was partially to do with oil (“People say we’re not fighting for oil. Of course we are… We’re not there for figs,” said former Defense Secretary Chuck Hagel, while Bush CENTCOM commander John Abizaid admitted “Of course it’s about oil, we can’t really deny that”). But the key point is that Bush detested Saddam and was determined to show he could get rid of him; according to those who attended National Security Council meetings, the administration wanted to “make an example of Hussein” to teach a lesson to those who would “flout the authority of the United States.” “Regime change” was the goal from the start, with “weapons of mass destruction” and “bringing democracy” just convenient pieces of rhetoric.

Nor was the war about the well-being of the people of Iraq. Jonathan Alter says that Bush had a “dream of democratizing the Middle East” but simply botched it; Bush’s story is almost that of a romantic utopian and tragic hero, undone by his hubris in just wanting to share democracy too much. In reality, the Bush White House showed zero interest in the welfare of Iraqis. Bush had been warned that invading the country would lead to a bloodbath; he ignored the warning, because he didn’t care. The typical line is that the occupation was “mishandled,” but this implies that Bush tried to handle it well. In fact, as Patrick Cockburn’s The Occupation and Rajiv Chandrasekaran’s Imperial Life in The Emerald City show, American officials were proudly ignorant of the Iraqi people’s needs and desires. Decisions were made in accordance with U.S. domestic political considerations rather than concern for the safety and prosperity of Iraq. Bush appointed totally inexperienced Republican Party ideologues to oversee the rebuilding effort, rather than actual experts, because the administration was more committed to maintaining neoconservative orthodoxies than actually trying to figure out how to keep the country from self-destructing. When Bush gave Paul Bremer his criteria for who should be the next Iraqi leader, he was emphatic that he wanted someone who would “stand up and thank the American people for their sacrifice in liberating Iraq.”

As the situation in Iraq deteriorated into exactly the kind of sectarian violence that the White House had been warned would occur, the Bush administration tried to hide the scale of the disaster. Patrick Cockburn reported that while Bush told Congress that fourteen out of eighteen Iraqi provinces “are completely safe,” this was “entirely untrue” and anyone who had gone to these provinces to try and prove it would have immediately been kidnapped or killed. In tallies of body counts, “U.S. officials excluded scores of people killed in car bombings and mortar attacks from tabulations measuring the results of a drive to reduce violence in Baghdad.” Furthermore, according to the Guardian “U.S. authorities failed to investigate hundreds of reports of abuse, torture, rape and even murder by Iraqi police and soldiers” because they had “a formal policy of ignoring such allegations.” And the Bush administration silently presided over atrocities committed by both U.S. troops (who killed almost 700 civilians for coming too close to checkpoints, including pregnant women and the mentally ill) and hired contractors (in 2005 an American military unit observed as Blackwater mercenaries “shot up a civilian vehicle” killing a father and wounding his wife and daughter).

Then, of course, there was torture and indefinite detention, both of which were authorized at the highest levels. Bush’s CIA disappeared countless people to “black sites” to be tortured, and while the Bush administration duplicitously portrayed the horrific abuses at Abu Ghraib as isolated incidents, the administration was actually deliberately crafting its interrogation practices around torture and attempting to find legal loopholes to justify it. Philippe Sands reported that the White House tried to pin responsibility for torture on “interrogators on the ground,” a “false” explanation that ignored the “actions taken at the very highest levels of the administration” approving 18 new “enhanced interrogation” techniques, “all of which went against long-standing U.S. military practice as presented in the Army Field Manual.” Notes from 20-hour interrogations reveal the unimaginable psychological distress undergone by detainees:

Detainee began to cry. Visibly shaken. Very emotional. Detainee cried. Disturbed. Detainee began to cry. Detainee bit the IV tube completely in two. Started moaning. Uncomfortable. Moaning. Began crying hard spontaneously. Crying and praying. Very agitated. Yelled. Agitated and violent. Detainee spat. Detainee proclaimed his innocence. Whining. Dizzy. Forgetting things. Angry. Upset. Yelled for Allah. Urinated on himself. Began to cry. Asked God for forgiveness. Cried. Cried. Became violent. Began to cry. Broke down and cried. Began to pray and openly cried. Cried out to Allah several times. Trembled uncontrollably.

Indeed, the U.S. Senate Select Intelligence Committee’s report on CIA interrogation tactics concluded that they were “brutal and far worse than the CIA represented to policymakers.” They included “slamming detainees into walls,” “telling detainees they would never leave alive,” “Threats to harm the children of a detainee, threats to sexually abuse the mother of a detainee, threats to cut a detainee’s mother’s throat,” waterboardings that sometimes “evolved into a series of near drownings,” and the terrifyingly clench-inducing “involuntary rectal feedings.” Sometimes they would deprive detainees of all heat (which “likely contributed to the death of a detainee”) or perform what was known as a “rough takedown,” a procedure by which “five CIA officers would scream at a detainee, drag him outside of his cell, cut his clothes off, and secure him with Mylar tape. The detainee would then be hooded and dragged up and down a long corridor while being slapped and punched.” All of that is separate from the outrage of indefinite detention in itself, which kept people in cages for years upon years without ever being able to contest the charges against them. At Guantanamo Bay, detainees became “so depressed, so despondent, that they no longer had an appetite and stopped eating to the point where they had to be force-fed with a tube that is inserted through their nose.” Their mental and emotional conditions would deteriorate until they were reduced to a childlike babbling, and they frequently attempted self-harm and suicide. The Bush administration even arrested the Muslim chaplain at Guantanamo Bay, U.S. Army Captain James Yee, throwing him in leg irons, threatening him with death, and keeping him in solitary confinement for 76 days after he criticized military practices.

Thus President Bush was not a good-hearted dreamer. He was a rabid ideologue who would spew any amount of lies or B.S. in order to achieve his favored goal of deposing Saddam Hussein, and who oversaw serious human rights violations without displaying an ounce of compunction or ambivalence. There was no “mistake.” Bush didn’t “oops-a-daisy” his way into Iraq. He had a goal, and he fulfilled it, without consideration for those who would suffer as a result.

It should be mentioned that most of this was not just immoral. It was illegal. The Bush Doctrine explicitly claimed the right to launch a preemptive war against a party that had not actually attacked the United States, a violation of the core Nuremberg principle that “to initiate a war of aggression…is not only an international crime; it is the supreme international crime, differing only from other war crimes in that it contains within itself the accumulated evil of the whole.” Multiple independent inquiries have criticized the flimsy legal justifications for the war. Former U.N. Secretary General Kofi Annan openly declared the war illegal, and even Tony Blair’s former Deputy Prime Minister concurred. In fact, it’s hard to see how the Iraq War could be anything but criminal, since no country—even if it gathers a “coalition of the willing”—is permitted to simply depose a head of state at will. The Iraq War made the Nuremberg principles even more empty and selective than they have always been, and Bush’s escape from international justice delegitimizes all other war crimes prosecutions. A core aspect of the rule of law is that it applies equally to all, and if the United States is free to do as it pleases regardless of its international legal obligations, it is unclear what respect anybody should hold for the law.

George W. Bush may therefore be a fine painter. But he is a criminal. And when media figures try to redeem him, or portray him as lovable-but-flawed, they ignore the actual record. In fact, Bush has not even made any suggestion that he is trying to “atone” for a great crime, as liberal pundits have suggested he is. On the contrary, he has consistently defended his decision-making, and the illegal doctrine he espoused. He even wrote an entire book of self-justifications. Bush is not a haunted man. And since any good person, if he had Bush’s record, would be haunted, Bush is not a good person. Kanye West had Bush completely right. He simply does not think very much about the lives of people darker than himself. That sounds like an extreme judgment, but it’s true. If he cared about them, he wouldn’t have put them in cages. George Bush may love his grandchildren, he may paint with verve and soul. But he does not care about black or brown people.

It’s therefore exasperating to see liberals like Alter and Schjeldahl offer glowing assessments of Bush’s book of art, and portray him as soulful and caring. Schjeldahl says that Bush is so likable that hating him “takes conscious effort.” But it only takes conscious effort if you don’t think about the lives of Iraqis. If you do think about the lives of Iraqis, hating him takes no conscious effort at all; it is automatic. Anyone who truly appreciates the scale of what Bush inflicted on the world will feel rage course through their body whenever they hear his voice, or see him holding up a paintbrush, with that perpetual simpering grin on his face.

Alter and Schjeldahl are not alone in being captivated by Bush the artiste. The Washington Post’s art critic concluded that “the former president is more humble and curious than the Swaggering President Bush he enacted while in office [and] his curiosity about art is not only genuine but relatively sophisticated.” This may be the beginning of a critical consensus. But it says something disturbing about our media that a man can cause 500,000 deaths and then have his paintings flatteringly profiled, with the deaths unmentioned. George W. Bush intentionally offered false justifications for a war, destroyed an entire country, and committed an international crime. He tortured people, sometimes to death.

But would you look at those brushstrokes? And have you seen the little doggies?