
Harry Truman’s Atomic Bomb Decision: After 70 Years We Need to Get Beyond the Myths 

HNN August 2, 2015

President Truman’s decision to use the atomic bomb against Japan in 1945 is arguably the most contentious issue in all of American history. The bombings of Hiroshima and Nagasaki have generated an acrimonious debate that has raged with exceptional intensity for five decades. The spectrum of differing views ranges from unequivocal assertions that the atomic attacks were militarily and morally justified to claims that they were unconscionable war crimes. The highly polarized nature of the controversy has obscured the reasons Truman authorized the dropping of the bomb and the historical context in which he acted.

The dispute over the atomic bomb has focused on competing myths that have received wide currency but are seriously flawed. The central question is: Was the bomb necessary to end the war as quickly as possible on terms that were acceptable to the United States and its allies?

The “traditional” view answers the question with a resounding “Yes.” It maintains that Truman either had to use the bomb or order an invasion of Japan that would have cost hundreds of thousands of American lives, and that he made the only reasonable choice. This interpretation prevailed with little dissent among scholars and the public for the first two decades after the end of World War II. It still wins the support of a majority of Americans. A Pew Research Center poll published in April 2015 showed that 56% of those surveyed, including 70% aged 65 and over, agreed that “using the atomic bomb on Japanese cities in 1945 was justified,” while 34% thought it was unjustified.

The “revisionist” interpretation that rose to prominence after the mid-1960s answers the question about whether the bomb was necessary with an emphatic “No.” Revisionists contend that Japan was seeking to surrender on the sole condition that the emperor, Hirohito, be allowed to remain on his throne. They claim that Truman elected to use the bomb despite his awareness that Japan was in desperate straits and wanted to end the war. Many revisionists argue that the principal motivation was not to defeat Japan but to intimidate the Soviet Union with America’s atomic might in the emerging cold war.

It is now clear that the conflicting interpretations are unsound in their pure forms. Both are based on fallacies that have been exposed by the research of scholars who have moved away from the doctrinaire arguments at the poles of the debate.

The traditional insistence that Truman faced a stark choice between the bomb and an invasion is at once the most prevalent myth and the easiest to dismiss. U.S. officials did not regard an invasion of Japan, which was scheduled for November 1, 1945, as inevitable. They were keenly aware of other possible means of achieving a decisive victory without an invasion. Their options included allowing the emperor to remain on the throne with sharply reduced power, continuing the massive conventional bombing and naval blockade that had destroyed many cities and threatened the entire Japanese nation with mass starvation, and waiting for the Soviets to attack Japanese troops in Manchuria. Traditionalists have generally played down the full range of options for ending the war and failed to explain why Truman regarded the bomb as the best alternative.

A staple of the traditional interpretation is that an invasion of Japan would have caused hundreds of thousands of American deaths, as Truman and other officials claimed after the war. But it is not supported by contemporaneous evidence. Military chiefs did not provide estimates in the summer of 1945 that approached numbers of that magnitude. When Truman asked high-level administration officials to comment on former president Herbert Hoover’s claim that an invasion would cost 500,000 to 1,000,000 American lives, General Thomas T. Handy, General Marshall’s deputy chief of staff, reported that those estimates were “entirely too high.” Hoover apparently based his projections on an invasion of the entire Japanese mainland, but military planners were convinced that landings on southern Kyushu and perhaps later on Honshu, if they became necessary, would force a Japanese surrender.

The revisionist interpretation suffers from even more grievous flaws. Japanese sources opened in the past few years have shown beyond reasonable doubt that Japan had not decided to surrender before Hiroshima. It is also clear from an abundance of evidence that U.S. officials were deeply concerned about how to end the war and how long it would take. The arguments that Japan was seeking to surrender on reasonable terms and that Truman knew it are cornerstones of the revisionist thesis. They have been refuted by recent scholarship, though impressing the Soviets was a secondary incentive for using the bomb.

The answer to the question about whether the bomb was necessary is “Yes”. . . and “No.” Yes, it was necessary to end the war at the earliest possible moment, and that was Truman’s primary concern. Without the bomb, the war would have lasted longer than it did. Nobody in a position of authority told Truman that the bomb would save hundreds of thousands of American lives, but saving a far smaller number was ample reason for him to use it. He hoped that the bomb would end the war quickly and in that way reduce American casualties to zero.

No, the bomb was not necessary to avoid an invasion of Japan. The war would almost certainly have ended before the scheduled invasion. A combination of the Soviet invasion of Manchuria, the effects of conventional bombing and the blockade, the steady deterioration of conditions in Japan, and growing concerns among the emperor’s advisers about domestic unrest would probably have brought about a Japanese surrender before November 1. And no, the bomb was not necessary to save hundreds of thousands of American lives.

The controversy over Truman’s decision seems certain to continue. The use of a bomb that killed tens of thousands instantaneously needs to be constantly re-examined and re-evaluated. This process should be carried out on the basis of documentary evidence and not on the basis of myths that have taken hold and dominated the discussion for 70 years.

J. Samuel Walker is the author of Prompt and Utter Destruction: Truman and the Use of Atomic Bombs against Japan (University of North Carolina Press, 1997, second edition, 2004). He is now working on a third edition of the book.

President Truman and the Atom Bomb Decision: “Preventing an Okinawa from One End of Japan to Another” 

HNN August 3, 2015

What did President Harry S. Truman and his senior advisers believe an invasion of Japan would cost in American dead? For many years this has been a matter of heated historical controversy, with Truman’s critics maintaining that the huge casualty estimates he later cited were a “postwar creation” designed to justify his use of nuclear weapons against a beaten nation already on the verge of suing for peace. The real reasons, they maintain, range from a desire to intimidate the Russians to sheer bloodlust. One historian wrote in the New York Times: “No scholar of the war has ever found archival evidence to substantiate claims that Truman expected anything close to a million casualties, or even that such large numbers were conceivable.” Another skeptic insisted on the total absence of “any high-level supporting archival documents from the Truman administration in the months before Hiroshima that, in unalloyed form, provides even an explicit estimate of 500,000 casualties, let alone a million or more.”

A series of documents discovered at the Harry S. Truman Presidential Library and Museum in Independence, Missouri, and described by this author in an article in the Pacific Historical Review, tells a different story.

In the midst of the bloody fighting on Okinawa, which began in April 1945, President Truman received a warning that an invasion of Japan could cost as many as 500,000 to 1,000,000 American lives. The document containing this estimate, “Memorandum on Ending the Japanese War,” was one of a series of papers written by former President Herbert Hoover at Truman’s request in May 1945.

The Hoover memorandum is well known to students of the era, but they have generally assumed that Truman solicited it purely as a courtesy to Hoover and Secretary of War Henry Stimson, who had been Hoover’s Secretary of State. What had lain buried in the Truman Library archives, however, was Harry Truman’s reaction to Hoover’s memoranda and the “Truman-Grew-Hull-Stimson-Vinson exchange” that it prompted.

Truman reviewed the material from the former president and, after writing “From Herbert Hoover” across the top of memo 4, “Memorandum on Ending the Japanese War,” forwarded the original copy to his manpower czar, Fred M. Vinson, on or about Monday, June 4. The War Mobilization and Reconversion director had no quarrel with the casualty estimate when he responded on Thursday, June 7, suggesting that Hoover’s paper be sent to Secretary Stimson and Acting Secretary of State Joseph C. Grew, as well as former Secretary of State Cordell Hull, who was then a patient at the Bethesda Naval Medical Center.

Truman agreed and had his staff type up additional copies of memo 4 on Saturday, June 9, and sent them to Stimson, Grew, and Hull, asking each for a written analysis and telling both Grew and Stimson that he wished to discuss their individual analyses personally, eye to eye, after they submitted their responses. Stimson subsequently sent his copy to the Deputy Chief of Staff, General Thomas T. Handy, because he wanted to get “the reaction of the Operations Division Staff to it” and mentioned in his diary that he “had a talk both with Handy and [General George C.] Marshall on the subject.” Handy’s staff then produced a briefing paper for Stimson which drew attention to the fact that memo 4’s figure of potentially 1,000,000 American dead was fully double the Army’s estimates. It was “entirely too high under the present plan of campaign,” which entailed only the seizure of southern Kyushu, the Tokyo region, and several key coastal areas. The pointed disclaimer “under the present plan of campaign” was, however, literally the only part of the 550-word analysis, excluding headlines, that carried a typed underline, and it was an ominous reminder that the battle then raging on Okinawa was itself not playing out as planned.

Hull was the first to respond directly to Truman. In his June 12 letter he branded memo 4 Hoover’s “appeasement proposal” because it suggested that the Japanese be offered lenient terms to entice them to the negotiating table, but he did not take issue with the casualty estimate. Neither did Grew, whose June 13 memorandum confirmed that the Japanese “are prepared for prolonged resistance” and that “prolongation of the war will cost a large number of human lives.”

Grew’s opinion would not have come as any surprise to the president, since he had told Truman, ironically just hours after the meeting with Hoover, that “The Japanese are a fanatical people capable of fighting to the last man. If they do this, the cost in American lives will be unpredictable.” One can readily surmise that Hoover’s and Grew’s statements, hitting virtually back-to-back in the midst of the Okinawa fighting, America’s costliest campaign of the Pacific war, were not of much comfort to the new commander in chief.

Grew’s memorandum, messengered by government courier, and Hull’s letter both arrived on Wednesday, June 13, and Truman subsequently met with Admiral William D. Leahy on the matter. Leahy, who was the president’s personal representative on the Joint Chiefs of Staff and acted as unofficial chairman at their meetings, sent a memorandum, stamped “URGENT” in capital letters, to the other JCS members as well as Secretary of War Stimson and Secretary of the Navy James Forrestal. The president wanted a meeting the following Monday afternoon, June 18, 1945, to discuss, “the losses in dead and wounded that will result from an invasion of Japan proper,” and Leahy stated unequivocally that “It is his intention to make his decision on the campaign with the purpose of economizing to the maximum extent possible in the loss of American lives. Economy in the use of time and in money cost is comparatively unimportant.” The night before the momentous meeting, Truman wrote in his diary that the decision whether to “invade Japan [or] bomb and blockade” would be his “hardest decision to date.”

The “Truman-Grew-Hull-Stimson-Vinson exchange” not only places the very high casualty numbers squarely on the President’s desk long before Hiroshima, but, says Robert Ferrell, editor of Truman’s private papers, it demonstrates that Truman “was exercised about the 500,000 figure, no doubt about that.” Ferrell adds that the exchange answers the question of why Truman called the June 18 meeting with the Joint Chiefs, Navy Secretary Forrestal, and Stimson. Said the senior archivist at the Truman Library, Dennis Bilger, when shown the documents: “This is as close to a one-to-one relationship as I have ever seen in the historical record.” Yet another discovery, by the Hoover Presidential Library’s former senior archivist, Dwight M. Miller, indicates that the huge casualty estimate likely originated during Hoover’s regular briefings by Pentagon intelligence officers.

The possible cost in American blood was of paramount importance. Entering the war “late,” and separated by sheer distance from both Europe and the western Pacific, the United States did not begin to experience casualties comparable to those of the other belligerents until the conflict’s final year. By then the U.S. Army alone was losing soldiers at a rate that Americans today would find astounding, suffering an average of 65,000 killed, wounded, and missing every month during the “casualty surge” of 1944-45, with the November, December, and January figures standing at 72,000, 88,000, and 79,000 respectively in postwar tabulations.

Most of these young men were lost battling the Nazis, but Truman was greatly disturbed by the casualty figures from the ongoing Okinawa campaign and the Marines’ recent battle on Iwo Jima. Even though the United States was by now several months into the steep increase in draft calls implemented under President Franklin D. Roosevelt to produce a 140,000-man-per-month “replacement stream” for the now one-front war, Truman wanted to address this matter directly with his most senior advisors.

The president’s meeting with the Joint Chiefs and service secretaries took place before one of the recipients of Truman’s directive, Stimson, had submitted a written response. It was not until after the meeting, and after several drafts, that Stimson wrote: “The terrain, much of which I have visited several times, has left the impression on my memory of being one which would be susceptible to a last ditch defense such as has been made on Iwo Jima and Okinawa and which of course is very much larger than either of those two areas. . . . We shall in my opinion have to go through a more bitter finish fight than in Germany [and] we shall incur the losses incident to such a war.”

At the Monday meeting, all the participants agreed that an invasion of the Home Islands would be extremely costly, but that it was essential for the defeat of Imperial Japan. Said Marshall: “It is a grim fact that there is not an easy, bloodless way to victory.” There was also considerable discussion of the tactical and operational aspects surrounding the opening invasion of Kyushu, the southernmost of Japan’s Home Islands, with the emphasis on their effects on American casualties. The meeting transcript says that: “Admiral Leahy recalled that the President had been interested in knowing what the price in casualties for Kyushu would be and whether or not that price could be paid. He pointed out that the troops on Okinawa had lost 35 percent in casualties.”

Leahy noted that “If this percentage were applied to the number of troops to be employed in Kyushu, he thought from the similarity of the fighting to be expected, that this would give a good estimate of the casualties to be expected. He was interested therefore in finding out how many troops are to be used in Kyushu.”

Leahy did not believe that the dated and narrowly constructed figure of 34,000 ground force battle casualties in a ratio table accompanying General Marshall’s opening presentation offered a true picture of losses on Okinawa, which, depending on the accounting method used, actually ran from 65,631 to 72,000, in part because of losses to extreme exhaustion and combat-related psychosis. He used the total number of Army-Marine casualties to formulate the 35 percent figure, a figure which excluded the U.S. Navy’s brutal losses to Japanese kamikaze suicide aircraft. Since Leahy, as well as the other participants including Truman, already knew that ground force casualties on Okinawa were far higher than 34,000, and knew approximately how many men were to be committed to the Kyushu fight, he was obviously making an effort, commonly done in such meetings, to focus the participants’ attention on the statistical consequences of the disparity. General Marshall presented the most recent figure for the troop commitment in this first (and smaller) operation of the two-phase invasion, 766,700, and allowed those around the table, including Leahy, to draw their own conclusions as to long-term implications; applied to that figure, Leahy’s 35 percent ratio works out to roughly 268,000 casualties.

A discussion then ensued on the sizes of the opposing Japanese and American forces, which was fundamental to understanding how Leahy’s 35 percent might play out. Finally, Truman, who was continuing to monitor the rising casualty figures from Okinawa on a daily basis, cut to the bottom line, since the initial assault, Operation Olympic against the island of Kyushu, would in fact be dwarfed by the spring 1946 strike directly at Tokyo, Operation Coronet: “The President expressed the view that it was practically creating another Okinawa,” to which “the Chiefs of Staff agreed.”

More discussion ensued and Truman asked “if the invasion of Japan by white men would not have the effect of more closely uniting the Japanese?” Stimson stated that “there was every prospect of this.” He added that he “agreed with the plan proposed by the Joint Chiefs of Staff as being the best thing to do, but he still hoped for some fruitful accomplishment through other means.” The “other means” included a range of measures from increased political pressure brought to bear through a display of Allied unanimity at the upcoming conference in Potsdam to the as yet untested atomic weapons that it was hoped would “shock” the Japanese into surrender.

Continued discussion touched on military considerations and the merits of unconditional surrender, and the president moved to wrap up the meeting: “The President reiterated that his main reason for this conference with the Chiefs of Staff was his desire to know definitely how far we could afford to go in the Japanese campaign. He was clear on the situation now and was quite sure that the Joint Chiefs of Staff should proceed with the Kyushu operation” and expressed the hope that “there was a possibility of preventing an Okinawa from one end of Japan to the other.”

D. M. Giangreco is the author of Hell to Pay: Operation Downfall and the Invasion of Japan, 1945-1947 (Naval Institute Press, 2009), and his Journal of Military History article “Casualty Projections for the U.S. Invasions of Japan: Planning and Policy Implications” was awarded the Society for Military History’s Moncado Prize in 1998. The preceding article is abridged from his Pacific Historical Review article, “‘A Score of Bloody Okinawas and Iwo Jimas’: President Truman and Casualty Estimates for the Invasion of Japan,” which is available from the University of California Press.


Police photos of Julius and Ethel Rosenberg (Source: Exhibits from the Julius and Ethel Rosenberg Case File, 03/13/1951 – 03/27/1951)

New Rosenberg Grand Jury Testimony Released!

David Greenglass Transcript Opened by Court Order in Case Brought by National Security Archive and Historical/Archival Associations
Greenglass testimony at trial helped send his sister Ethel Rosenberg to the electric chair, but to the grand jury he said the opposite: “I never spoke to my sister about this at all.”
Edited by Thomas Blanton
Posted – July 14, 2015
Updated – July 15, 2015, 1 p.m.

Washington, D.C., July 15, 2015 – The newly released grand jury testimony by Ethel Rosenberg’s brother David Greenglass suggests he committed perjury on the witness stand in the Rosenberg spy trial, according to experts who analyzed the documents released today and posted by the National Security Archive.

The grand jury testimony from August 1950 shows Greenglass resisting prosecutors’ questions implicating his sister, in one case (page 30) insisting: “I said before, and say it again, honestly, this is a fact: I never spoke to my sister about this at all.”

But at trial in March 1951, Greenglass and his wife Ruth put Ethel at the center of the conspiracy, describing her as typing up handwritten notes for delivery to the Soviets and operating a microfilm camera hidden in a console table, neither of which is mentioned in the grand jury statements.

Decades later, after Greenglass served nearly 10 years in prison and his wife was not even indicted, Greenglass admitted to journalist Sam Roberts that he had lied on the stand to protect his wife, who, the grand jury testimony shows, was far more central to the spying than was Ethel.

Experts participating in a briefing today at the National Security Archive decried the prosecutors’ behavior as either having suborned false testimony, or presenting testimony they had reason to know was false. Attorney David Vladeck, who led the litigation to open the Rosenberg grand jury records on behalf of petitioners including the National Security Archive and the major historical associations, pointed out that prosecutors intended to use Ethel to put pressure on Julius to confess, but neither did so and thus “called the Justice Department’s bluff” in a miscarriage of justice.

Legal scholar Brad Snyder described the mistake of the U.S. Supreme Court in not accepting cert in the Rosenberg case in 1953, thus enabling their execution, when the contrast between grand jury testimony and trial testimony showed “reversible error.”

Author Steve Usdin, whose book Engineering Communism (Yale University Press) describes two of the Rosenberg spy ring’s members who went on to build the Soviet Union’s Silicon Valley, commented that the Greenglass testimony was “the last important evidence we’re likely ever to have on the Rosenberg case.” Usdin pointed out that the documents provided answers to three key questions: Were the Rosenbergs guilty of spying? Yes. Was their trial fair? Probably not. Did they deserve the death penalty? No.

Archive director Tom Blanton summed up the discussion by describing the Cold War narrative of the Rosenberg case as a black-and-white argument: supporters said they were framed, critics called them traitors. The evidence now shows both were right, a much more nuanced and difficult story. Yes, Julius Rosenberg led an active spy ring; no, Ethel Rosenberg was not an active spy, even though she was witting. Blanton commented that the case should be a warning about the perils of unchecked prosecutorial power.

* * * * *

[Press Advisory]

New Rosenberg Grand Jury Transcripts To Be Released Wednesday, July 15, 2015

Key Testimony from Ethel’s Brother, David Greenglass, May Show Perjury

Result of open records lawsuit by National Security Archive and historical associations

Briefing scheduled by plaintiffs, experts, at 2 p.m., Gelman Library, George Washington University

Posted – July 14, 2015

For more information:
National Security Archive, nsarchiv@gwu.edu, 202.994.7000

Washington D.C., July 14, 2015 – Tomorrow the public will see for the first time the actual transcripts of previously secret grand jury testimony by Ethel Rosenberg’s brother, David Greenglass, in the espionage trial from the early 1950s that sent Ethel and Julius Rosenberg to the electric chair on charges of spying for the Soviet Union.

To explain the documents and provide context, the National Security Archive will host a press briefing at 2 p.m. on Wednesday, July 15, at Gelman Library, George Washington University, 7th floor (where the Archive is located), 2130 H Street NW, Washington D.C. 20037.

The U.S. government has decided not to appeal the federal court decision of May 19, 2015, ordering the release of the Greenglass testimony in a lawsuit brought by the National Security Archive and major historical and archival associations.

Previously in 2008, the petitioners succeeded in winning release of most of the other Rosenberg grand jury testimony, but Greenglass – who was still alive at the time – objected and the court declined to include his transcripts. Greenglass passed away in 2014 and the plaintiffs re-opened the case before Judge Alvin Hellerstein in federal district court in New York.

Police mugshots of David and Ruth Greenglass (public domain)

The transcripts will show whether Greenglass mentioned to the grand jury what became his most incendiary charge at trial against his sister, that she had typed up his handwritten notes for delivery to the Soviets. Historians have now concluded that he lied on the witness stand.

Copies of the transcripts will be available on the Archive’s web site, www.nsarchive.org, and at the press briefing at 2 p.m. The government has announced that the National Archives and Records Administration will also post the transcripts starting at noon on July 15 at www.nara.gov.

Together with the Archive, the petitioners included the American Historical Association, the American Society of Legal History, the Organization of American Historians, the Society of American Archivists, and journalist Sam Roberts who authored a biography of Greenglass. Representing the petitioners are Georgetown University Law Center professor David C. Vladeck and Debra L. Raskin of the New York firm Vladeck, Waldman, Elias & Engelhard, who also authored the original 2008 petitions that opened the previous Rosenberg grand jury records.

Participating in the briefing will be Rosenberg case experts Brad Snyder and Steve Usdin, together with Archive director Tom Blanton and the petitioners’ lead attorney David Vladeck.

For background on the case, and previous news-making releases, see http://nsarchive.gwu.edu/news/20150519/

“Everybody’s talking bout…” – The Music of Nina Simone for Today’s Frustrations

It has been a year since the deaths of Eric Garner in Staten Island, New York, and Michael Brown in Ferguson, Missouri. Their deaths kicked off a movement challenging police brutality. From the deaths of Garner and Brown, slogans like “Hands Up, Don’t Shoot!,” “I Can’t Breathe!,” and most notably “Black Lives Matter” arose to proclaim the value of Black lives in the midst of an overwhelming tide of racial violence. One year later, the list of victims keeps growing. Freddie Gray’s death in Baltimore, the murder of the Charleston Nine at Emanuel A.M.E. Church, Sandra Bland in Texas, Sam DuBose in Cincinnati, and many others keep forcing the conversation on the value of Black lives. New hashtags like #SayHerName and #IfIDieInPoliceCustody reflect what has become a disturbing reality. In one year the list, as well as people’s frustration, keeps growing.

On any given day, I can open up my Facebook newsfeed and see a diverse list of postings, ranging from militant outrage, to statements proclaiming that love, compassion, and understanding are the answers, to people decrying that Black people must value their own lives first, to questions about whether the victims truly met the standards of etiquette dictated by the politics of respectability. A post by one of my friends punched through the noise of Facebook. She recently attended an event in a local national park. When she left the party, she unwittingly drove in the wrong direction and was stopped by Park Police. In that moment, she was absolutely terrified. In that moment, she realized that any action might be misconstrued by the police officer, and she could become the next victim. The traffic stop went surprisingly well, yet my friend’s experience reflects the nature of the times.

My colleague Rhon Manigault-Bryant posted “‘Life Goes On’: A Meditation from Howard Thurman” as a source of solace. I, too, have found solace in the writings of Howard Thurman, but at this particular moment I find myself in need of a stronger expression of what I can best describe as righteous indignation. Approximately 50 years ago, Nina Simone captured her frustration with the violence against Blacks in her iconic song “Mississippi Goddam.” The song is a melodic indictment of the violence, the calls for Blacks to act respectably, and the requests for slow, methodical change. The song would ultimately have a deleterious effect on Simone’s career, but it remains a significant musical expression of the Civil Rights era.

Noelle Trent

Noelle Trent recently earned her doctorate in American history at Howard University. Her dissertation, “Frederick Douglass and the Making of American Exceptionalism,” examines how the noted African-American abolitionist and activist Frederick Douglass influenced the development of the American ideas of liberty, equality, and individualism, which later coalesced to form the ideology of American exceptionalism. Dr. Trent also holds a Master’s degree in Public History from Howard University and is a member of Phi Beta Kappa. She has worked with several noted organizations and projects, including the National Archives and Records Administration, the National Park Service, the Catherine B. Reynolds Civil War Washington Teacher’s Fellows, and the Smithsonian Institution’s National Museum of African American History and Culture and National Museum of American History. She has presented papers and lectures at the American Historical Association, the Association for the Study of African American Life and History, the Lincoln Forum, and the Frederick Douglass National Historic Site. She currently resides in the suburbs of Washington, DC.

Celebrating Emancipation

Frederick Douglass and the story of New York City’s 1865 “Emancipation Jubilee.”

 
Jacobin August 1, 2015

An 1865 illustration in Harper’s magazine celebrating Emancipation.

The ongoing campaign to eradicate Confederate symbols marks an important moment in American public memory, perhaps allowing the scars of slavery and segregation to start healing. Yet while the actions of Bree Newsome and company have been truly inspiring, the collective feeling when the flags began to come down seemed mainly to be a sigh of relief.

One hundred fifty years earlier, in the first summer after the actual downfall of the Confederacy, African Americans across the land were more upbeat. Emancipation did not immediately bring full equality, but the war’s end was still cause for optimism. The shackles had come off in the South, while in the North, blacks no longer had to fear being sent back to slavery. It was time for celebration.

In New York, their previous efforts to do so had sparked controversy. Just a few weeks after Gen. Robert E. Lee’s surrender in April 1865, the New York Common Council had denied blacks the right to formally participate in President Lincoln’s funeral procession. At a Cooper Union event in early June, an indignant Frederick Douglass called the council’s action “the most disgraceful and scandalous proceeding ever exhibited by people calling themselves civilized.”

But on August 1, both Douglass and Manhattan’s African-American community were in a far better mood as they traveled across the East River for an “Emancipation Jubilee” in Brooklyn. And though he only spoke for a few minutes at the gathering, Douglass again memorably captured the spirit of the moment.

The jubilee was timed to coincide with West Indian Emancipation Day, which marked the end of slavery in the British Empire in 1834. Initially celebrated in abolitionist centers like Philadelphia, Boston, and Upstate New York, by the 1850s Emancipation Day events could be found across the frontier, from Indiana to California.

Douglass had regularly attended such events near his home in Rochester. But while he had close ties to many Brooklyn abolitionists, Douglass hadn’t yet journeyed down for one of the local jubilees, which had been held regularly since the early 1850s. Everyone knew that the first one after the Civil War would be grand, though.

At just over 5,000 (or 1.5% of the city total), Brooklyn’s black population was still relatively small in 1865. Yet over the preceding two decades, black communities in Williamsburg and Weeksville had served as abolitionist strongholds. During and after the Draft Riots of July 1863, many blacks from Manhattan had also taken refuge on the other side of the East River.

The August 1 festivities took place in what is now Bedford-Stuyvesant, at two sites that have since been demolished — the vast Hanft’s Myrtle Avenue Park and the nearby, smaller Lefferts Park.

Despite their racist caricatures of “exultant darkies” or “dancing darkies,” lengthy accounts in the Democratic Brooklyn Daily Eagle and the Republican New York Times conveyed the mood of the attendees. “Twenty thousand men, women and children of sable hue yesterday mingled their joys and experiences in the suburban parks of the city of churches,” the Times wrote. At stands outside Myrtle Avenue Park, the Eagle reported, “quaint-looking damsels in gorgeously striped dresses with brilliant turbans on their heads” dispensed peaches and pigs’ feet, with sides of corn, cabbage, apple dumplings, and chicken potpie.

Writing in Horace Greeley’s New York Tribune, Sydney Howard Gay, a leading white abolitionist and longtime friend of Douglass, maintained a more genteel tone. “Colored people” turned out in great numbers in their “Sunday best,” Gay noted. He described a range of activities on display, from formal dancing to less high-brow amusements like a Jefferson Davis knock-down game, with three tosses costing a nickel.

In addition to live bands, carnival attractions, and sporting events (including a game played by the Weldenken Colored Baseball Club of Williamsburg), there were also talks given by an array of distinguished African-American speakers. At Myrtle, Professor William Howard Day (who had challenged segregation in Michigan in the late 1850s) explained the history of West Indian emancipation; while at Lefferts, two leading local abolitionist ministers, James Pennington and James Gloucester, urged receptive listeners to continue the fight for full equality.

Jacobin-Series-3bdd91b95cfc219305403acaa1630163

When Douglass addressed the Myrtle gathering, the great orator was surprisingly brief. But what he said was also surprising, as illustrated by the divergent reports found in the various daily newspapers.

By most accounts, Douglass cheerfully told the enthusiastic crowd, “No man here wants to know whether liberty is a good thing or slavery a bad thing; we all know it already; we don’t want any instruction.” After all, he said, the main message of abolitionists had always been that “every man is his own master; every man belongs to himself.”

But what Douglass said next remains open to dispute. According to the Times (and the Eagle), he stated: “Every man has the right to do as he pleases, to come and go, to make love, get married, and do all sorts of things that are pleasant and profitable. [Applause.] We are here to enjoy ourselves — to sing, dance and make merry. I am not going to take up your time; go on; enjoy yourselves. [Prolonged cheering.]” The Tribune account by Douglass’s friend Sydney Gay, however, says nothing about love or marriage, and skips right to “[w]e are here…to sing, dance, and make merry.”

Perhaps the most convincing reportage can be found in the New York Herald. James Gordon Bennett’s paper — which had the largest circulation in the US — may have been a house organ of the War Democrats (who supported the Union but opposed Lincoln). But during the Civil War, the Herald bolstered its journalistic reputation by sending numerous correspondents into the field.

Near the end of its lengthy August 2, 1865 recap of the preceding day’s Jubilee events, the Herald presented Douglass’s statements as follows:

The only thing abolitionists ever taught the American people was that every man is himself. That is all. Every man belongs to himself — can belong to nobody else. We are not here for instruction. We are here to enjoy ourselves, to play ball, to dance, to make merry, to make love (laughter and applause), and to do everything that is pleasant. I am not going to take up your time. Go on, and enjoy yourselves.

The moral instruction to “get married” is conspicuously absent here. Yet of the various reports, the Herald’s is the one that most reads like an impromptu direct address. Such carefree comments by Douglass ultimately seem most befitting for an ecstatic day-long jubilee, one filled with joy in every sense of the word.

Beyond simply playful encouragement, Douglass in his brief remarks urged African Americans in Brooklyn and elsewhere to start envisioning their own future, and to fully enjoy their freedom. Any hopes for a bright future would be short-lived, of course. But in the summer after the war, blacks everywhere could echo Douglass’s insistence that at last, “every man belongs to himself.”

Theodore Hamm is chair of journalism and new media studies at St Joseph’s College in Clinton Hill, Brooklyn.


American Cinema after 9/11

Linda Mokdad

Estudios de Política Exterior, AFKAR/IDEAS No. 46, Summer 2015

Since 2001, Hollywood has turned to realism, history, and personification to embed a discourse of victimization and trauma in its films.

In light of the immediate and lasting consequences of September 11, 2001, both national and international, studying the historical construction of the terrorist attacks remains a difficult but pressing task. Whether 9/11 is seen as justification for the “war on terror” or as an illegitimate pretext for the unlimited expansion of executive power (the signing of the USA Patriot Act, the practices of indefinite detention and extraordinary rendition, the resort to torture, or the invasions of Iraq and Afghanistan), the cinematic narration of that day is the subject of complex debates and confrontations over history. Some post-9/11 films have opted for an American or national frame in order to isolate or contain the meaning of the attacks, while others have insisted on placing that day in 2001 within a global context that highlights the history of U.S. foreign policy and American intervention elsewhere in the world. Other narratives have oscillated ambivalently between these two positions, accommodating but also readjusting history in order to reshape the role of the United States beyond its borders. Starting from these plots, this article describes three important tendencies in post-9/11 Hollywood cinema’s representations of the Middle East and of Arabs/Muslims.

First, several of these films deploy strategies that signal a preoccupation with “realism.” Second, they also show a tendency to situate 9/11 and the “war on terror” within other settings and histories habitually used to frame the troubled relationship between the Middle East and Washington (including Islamic fundamentalism, the invasions of Afghanistan and Iraq, the Arab-Israeli conflict, and the struggle over oil). Finally, post-9/11 cinema suggests the erasure of personhood from the figure of the Arab/Muslim, replaced by a growing investment in American trauma and psychotrauma. Ultimately, these patterns suggest that Hollywood has used and abused 9/11 and the subsequent “war on terror” as opportunities to moderate, regulate, and often rework the historical encounters and confrontations between Washington and the Middle East.

The commitment to ‘realism’

The emphasis that post-9/11 films place on realism is not explained solely by what happened that day. In fact, as a film shot years before the attacks (but after the first attack on the Twin Towers), The Siege (Edward Zwick, 1998) has often been described as a pioneer for the way it uses history in constructing and representing Arab and Arab-American characters. With a plot built around a wave of attacks carried out by a fundamentalist Islamic terrorist network in response to the U.S. military’s capture of an Iraqi cleric, The Siege addresses the Middle East and the problem of terrorism within a broader frame of American foreign policy and geopolitics. Indeed, unlike the Hollywood action genre, with titles such as True Lies (James Cameron, 1994), Executive Decision (Stuart Baird, 1996), and Rules of Engagement (William Friedkin, 2000), which recklessly presented cartoonish villains and, by extension, easily disposable images of Arab or Muslim terrorists, the producers of The Siege clearly strove to avoid accusations of racial or religious intolerance. Ironically, this consciously sober treatment of terrorism may be partly responsible for the boycott of the film and for the uproar it caused among Arab and Muslim advocacy groups. Hussein Ibish of the ADC (American-Arab Anti-Discrimination Committee) distinguished the film from many other Hollywood products that fell back on stereotypes of Arabs and Muslims, stating: “[Earlier representations] were silly and one-dimensional. The Siege purports to be a socially responsible product” (“Muslims feel besieged,” 1998).

While The Siege demonstrates that the major film industry had already begun to respond to and incorporate postcolonial critique and multiculturalism, such strategies increasingly regulate and manage the now “intrusive” histories of the Arab/Muslim “Other” after the 2001 attacks. That is, post-9/11 American cinema points to an even more fetishistic interest in realism and history. By turning to or relying on embedded journalism, titles such as Gunner Palace (Petra Epperlein and Michael Tucker, 2004), In the Valley of Elah (Paul Haggis, 2007), Restrepo (Sebastian Junger and Tim Hetherington, 2010), and Zero Dark Thirty (Kathryn Bigelow, 2012) seek to lend authority to their “truths” about Arabs/Muslims and the “war on terror.” In addition, many of these films borrow techniques and strategies from documentary cinema (including jerky handheld camera work and vertiginous war photography) to foreground and reinforce identification with the perspective of American soldiers.

Back to the past

The investment in realism in post-9/11 cinema that engages with the Middle East, consolidated by the use of embedded journalism and documentary techniques, is reinforced by references to historical and geopolitical situations, information often suppressed in pre-9/11 films. For example, Syriana (Stephen Gaghan, 2005), inspired by former CIA agent Robert Baer’s autobiographical bestseller See No Evil, draws explicit connections between Washington’s militarism and foreign policy, monopolies over oil resources, the exploitation of Pakistani workers in the Middle East, and Islamic fundamentalism. Body of Lies (Ridley Scott, 2008) also reproduces debates about the contribution of the CIA and U.S. foreign policy to the rise of terrorism in the Middle East. A Mighty Heart (Michael Winterbottom, 2007) returns to the real-life murder of Wall Street Journal reporter Daniel Pearl by religious extremists. Argo goes back to the Iran hostage crisis of 1980. And Munich (Steven Spielberg, 2005) travels even further back, to the hostage crisis at the 1972 Munich Olympics, in order to formulate a thesis about 9/11 and terrorism today.

In some ways, we might consider the attention these films pay to history and geopolitics, or to the complicity of U.S. foreign policy, an advance. Unlike Black Sunday (John Frankenheimer, 1977), The Delta Force (Menahem Golan, 1986), or Navy Seals (Lewis Teague, 1990), these films can suggest more complex and reflective analyses of the conflicts between the United States and the Middle East than the earlier monolithic representations of Arabs and Muslims, which often showed them, and their relationship to violence, in isolation from history. Yet by folding 9/11 into other flashpoints of the Middle East conflict, post-9/11 cinema has repeatedly generated simplistic and self-serving historical analogies and teleological narratives. Munich, for example, rather than foregrounding the immediate historical context that gave rise to the 1972 massacre at the Munich Olympics, situates and filters “Munich” through the memory of the Holocaust and the events of 9/11. In other words, the Holocaust, the Munich massacre, and 9/11 are presented as instances of violence belonging to the same historical trajectory. Mike Chopra-Gant has argued that the final image of the World Trade Center (with which Munich ends) establishes a “direct causal relationship” that is “simplistic and reductionist.” The insistence of the film’s Golda Meir that the massacre in the German city “is something new” and that “what happened in Munich changes everything” recalls the familiar post-9/11 rhetoric of American exceptionalism, which gave the White House license to invade Afghanistan and Iraq while imposing and endorsing Israeli violence as a response to terrorism.

A growing investment in American trauma and psychotrauma

The tendency of post-9/11 American cinema to produce more realistic images of the Middle East, and to supply more historical detail (however reformulated and revised) when portraying Arabs or Muslims, cannot be considered apart from the third strategy commonly used in these films. Even if we accept that, at their best, they may address the role of Washington’s foreign policy in its struggle against violence emanating from the Middle East, we must ask to what extent their critical potential is undercut by the way they position and counterpose Americans and Arabs/Muslims.

For most post-9/11 films centered on the Middle East, it would be almost impossible to ignore the violence caused by the invasion of Afghanistan or Iraq (Green Zone, Paul Greengrass, 2010) or the torture that took place in the CIA’s clandestine prisons (Rendition, Gavin Hood, 2007, or Zero Dark Thirty, 2012). What must be examined, however, is how this violence is represented and distributed among differently coded groups. If the mobilization or reproduction of strategies tied to embedded journalism has often prioritized the experiences of Americans, Arabs/Muslims appear heavily mediated, technologized, and tied to the past and to history. The most recent, and perhaps most extreme, example of a post-9/11 film in which the person (and psyche) of the American soldier prevails over the Arabs/Muslims he kills is American Sniper (Clint Eastwood, 2014). Based on the true story of Navy SEAL sniper Chris Kyle, its critique of the war is limited essentially to the negative impact the war has had on the soldier, while it downplays or rationalizes the loss of Arab lives. American Sniper drew enormous publicity (positive and negative), but it is only one of many post-9/11 war films that distinguish between American citizens and Arabs, asking audiences to immerse themselves completely on the side of the former. A common strategy for fostering identification with Americans after 9/11 has been to make war-and-homecoming films centered on post-traumatic stress disorder. Titles such as Badland (Francesco Lucente, 2007), Homeland (Christopher C. Young, 2009), The Hurt Locker (Kathryn Bigelow, 2008), and In the Valley of Elah grant the American an exceptional status that sets him apart from the Arab/Muslim.

Unlike the invulnerable, militarized male protagonist of the Hollywood action cinema of the 1980s and 1990s, many post-9/11 military plots emphasize the vulnerability of the American soldier. Even a film like In the Valley of Elah, which addresses the abuse suffered by Iraqi civilians at the hands of soldiers, relativizes those abuses by attributing them to the trauma and post-trauma suffered by American troops. As the title itself suggests, the American soldier is the brave but vulnerable David confronting a powerful, monstrous Goliath. These post-9/11 films may thus show an even greater willingness to depict abuse and torture, even when committed by Americans themselves, but such acts are rationalized as an acceptable response and understandable behavior on the part of a traumatized person and a traumatized nation. The abuse of Iraqi civilians shown in In the Valley of Elah is captured, mediated, and held at a distance by the technology of the video camera, and is marked as belonging to the “past.” The traumatized and post-traumatized American soldier, by contrast, invites emotional identification and generates far more affect, situated as he is in the film’s “present” and endowed with a voice and a character arc denied to the Arab/Muslim. In such cases, the immediate concerns of the present outweigh the sins of a past that has been abstracted and mediated.

Post-9/11 war-and-homecoming cinema, which alternates between memories of battle and the problems of readjustment after the conflict, becomes the perfect vehicle for distinguishing between Americans and Arabs/Muslims by assigning them such disparate temporal registers. Likewise, the viewing and re-viewing of the recorded torture scenes in Zero Dark Thirty not only produces and reproduces the criminalized, legible Muslim character but also strips him of any humanity.

In conclusion, although post-9/11 cinema may suggest more sensitive portraits of Arabs/Muslims by acknowledging history and even Washington’s complicity, whatever critique these films offer is ultimately clouded by the way history is readjusted to redeem and prioritize Americans. In the end, Hollywood has turned to realism, history, and personification in post-9/11 cinema to embed a discourse of victimization and trauma.

Linda Mokdad is Assistant Professor of English and Film Studies at St. Olaf College, Northfield.


What Trump Doesn’t Get About Vietnam

The conflict was an internal class war as well as a war against a foreign enemy.

Politico Magazine July 20, 2015

Vietnam wasn’t supposed to rear its head in 2016. With the election of Barack Obama, the first president to have come of age after the war’s close, many political observers expected that the quadrennial debate over who served and who dodged (an issue in every presidential election from 1992 through 2004) was at last over. Leave it to Trump to drag it back into the public square on Saturday, when he derogated the wartime service of Sen. John McCain, a combat veteran who endured five years of torture as a POW in the notorious Hanoi Hilton. “I like people that weren’t captured,” he said.

The Donald, who received a medical deferment in 1968 for bone spurs in his heels, seems genuinely confused by the backlash. It would be easy to write his nescience off as a form of adolescent self-absorption (though, in fairness to adolescents, most probably know how to recognize a war hero when they see one).

But part of his problem owes to a lasting historical legacy of the Vietnam War. Simply put, Vietnam was an internal class war as well as a war against a foreign belligerent. Unlike all American conflicts that preceded it, Vietnam drew sharp lines between those with means and those without. Young men from privileged backgrounds who served in Vietnam, like John McCain and John Kerry, usually did so electively, and as officers. Most working-class men, on the other hand, had no choice. They could join or be drafted, and almost always they served as enlisted men.

We tend to lump the “sixties generation” into one undifferentiated cohort. But there was considerable divergence between the experiences of working-class men and those of their more privileged peers. This divide explains much about politics in the 1970s and 1980s, as well as some of Donald Trump’s current struggle.

***

In September 1967 the New York Times spent several days following a group of 18-year-old students as they arrived at area colleges. Freshman orientation, the paper observed, was a wonderland of “boat rides, excursions and get-together dinners.” From the moment of their arrival, freshmen were greeted with open arms and made to feel like important members of the collegiate community. At Columbia University, volunteers helped them move into their dorm rooms. University administrators hosted teas and lunch receptions to welcome them to campus. At nearby schools like Vassar and Hofstra, students learned that they were free to attend faculty and administration meetings. At Baruch College, part of the City University of New York system, the associate dean assured freshmen that if they had “any problems or complaints, come and talk to me about it. My door is always open.”

Hundreds of miles and many worlds away, young men like Ron Kovic experienced an altogether different rite of passage. Filing off a military bus at Parris Island, South Carolina, in the pitch dark of night, Kovic and his fellow Marine recruits were greeted by a tall, muscular drill instructor who gave them three seconds to line up on yellow-painted footprints spanning the hard concrete parade deck. “Awright, ladies!” the DI barked. “My name is Staff Sergeant Joseph. This is Sergeant Mullins. I am your senior drill instructor. You will obey both of us. You will listen to everything we say. You will do everything we tell you to do. Your souls today may belong to God, but your asses belong to the United States Marine Corps.”

While college deans invited incoming students to join them for sandwiches and orientation lectures, Staff Sergeant Joseph berated his trainees. “There are eighty of you, eighty young warm bodies,” he yelled, “eighty sweatpeas … and I want you maggots to know today that you belong to me … until I have made you into marines.”

Roughly 27 million young men came of draft age between 1964 and 1973—the peak years of American military engagement in Southeast Asia. Of that total, 2.5 million men served in the Vietnam War. Roughly 25 percent of all enlisted men who served in Vietnam were from poor families, 55 percent from working-class families, and 20 percent from the ranks of the middle class. In an era when half of all Americans claimed at least some post-secondary education, only 20 percent of Vietnam War servicemen had been to college, while a staggering 19 percent had not completed 12th grade. “When I was in high school, I knew I wasn’t going to college,” remembered a typical recruit. “It was really out of the question. Even graduating from high school was a big thing in my family.”

Among enlisted men who fought in Vietnam, roughly one-third were drafted, one-third joined entirely out of choice and one-third were “draft-motivated” enlistees who expected to be swept up by the Selective Service and volunteered in hopes of choosing the branch and location of their service. Many recruits who joined of their own volition had few alternative options. Unemployment rates for young men hovered around 12.5 percent in the late 1960s (over double that figure for young black men), and even in places where unemployment was low, companies were reluctant to hire and train young working-class men, for fear they would soon be drafted. “You try to get a job,” explained one such unemployed man, “and the first thing they ask you is if you fulfilled your military service.”

By contrast, middle-class boomers enjoyed a host of options in avoiding the draft. The government extended deferments to students enrolled in college or graduate school, but only to those who were full-time students. For one draftee who was working his way through the University of Hartford, the deferment system proved useless. “I was in school,” he recalled. “But I was only carrying a course load of nine credits. You had to have 12 or 15 back then [to earn a deferment]. But I was working two jobs and didn’t have time for another three credits.” Selective Service snatched him up.

Potential conscripts could also avoid the draft if they furnished military authorities with proof of psychiatric or medical ineligibility, but as a general rule, few working-class families enjoyed regular access to private physicians who could furnish or fabricate evidence of long-term treatment for a qualifying disability. Even something as simple as orthodontic braces was grounds for ineligibility, but few working-class men could afford to pay $2,000 for elective dental work.

Because of the built-in bias in the draft system, Vietnam split Americans by class and geography. Three affluent towns in Massachusetts—Milton, Lexington and Wellesley—lost 11 young men in the war out of a total population of roughly 100,000. Nearby Dorchester, a working-class enclave with a comparable population, saw 42 of its sons die in southeast Asia. A study conducted in Illinois found that young men from working-class neighborhoods were four times as likely to be killed in the war as men from middle-class neighborhoods, while in New York, Newsday studied the backgrounds of 400 Long Island men who died in Vietnam and concluded that they “were overwhelmingly white, working-class men. Their parents were typically blue collar or clerical workers, mailmen, factory workers, building tradesmen, and so on.” In 1970, where a man lived, who his parents were, and how he grew up mattered enormously.

***

For most enlisted men who fought on the front lines in Vietnam, boot camp followed a predictable pattern. “They strip you, first your hair,” one veteran recalled. “I never saw myself bald before. … Guys I had been talking to not an hour before—we were laughing and joking—I didn’t recognize no more. … It’s weird how different people look without their hair. That’s the first step.” New servicemen then entered a grueling routine of physical and mental conditioning that started each day at 4:00 a.m. and lasted until after sunset. Long hours of pushups, sit-ups, marches and outdoor infantry training were de rigueur.

After basic, new servicemen underwent several weeks of training for their military occupational specialty (MOS) and then shipped off for the balance of their service. For many enlisted men, this meant 12 or 13 months in Vietnam, followed by another six months of stateside service.

From the very start, the war was surreal. Rather than send servicemen by military transport, the government contracted with commercial airlines to shuttle fresh troops to Southeast Asia. The sleek civilian jets were “all painted in their designer colors, puce and canary yellow,” remembered one veteran. “There were stewardesses on the plane, air conditioning. You would think we were going to Phoenix or something.” Another veteran remembered that “you could cut the fear on that plane with a knife. You could smell it.”


Reimagining the Welfare State

The New Deal welfare state was exclusionary and inequitable. We must envision and organize for something better.

 
Franklin Roosevelt signs the Social Security Act into law. Wikimedia Commons

Since the creation of the free-market Liberty League by the DuPont brothers in 1934, hostile corporate leaders, financiers, economists, and lawmakers have been bent on destroying Franklin Roosevelt’s New Deal welfare state.

Wisconsin workers have seen their right to collective bargaining outlined in the New Deal’s Wagner Act gutted, while public pensions, created during the Great Depression to bolster public employment and ensure long-term economic security, have been attacked from Alaska to Florida. Congress also continues to chip away at the state-sponsored provision of basic needs, recently targeting the food stamp program (originally created under FDR) by proposing that all recipients hold jobs, suffer lifetime limits, and receive lower overall benefits.

To many observers, it appears that the New Deal and its safety net have been shredded. Political scientists and others have argued that the perilous individual economic risk that Americans faced before the New Deal has been foisted back on them as its collective protections have withered. With the shocking growth in economic inequality that has arisen alongside cuts to the New Deal, freedom from want — the keystone of Roosevelt’s “Four Freedoms” — has been chipped away to a pebble. It’s enough to make Americans long for a revival of the politics of the 1930s.

But we should be clear-eyed rather than nostalgic about the demise of the welfare state.

The New Deal was a flawed welfare system. It was built through exclusions and inequities, embracing some Americans while cutting out many others. Though its programs enveloped a wider swath of citizens over time — more non-whites, more women, and more marginal workers — their entrance into the safety net was hard fought and politically controversial. The fractured inequities the New Deal produced among the populace never really disappeared and, in some ways, widened and sharpened the divide between those inside of the New Deal’s protective web and those beyond it.

This was in part because New Deal programs were not the only vehicles for social welfare, but existed alongside other programs that also differentiated among citizens and their entitlements. The New Deal was never synonymous with the welfare state as many European countries developed it: comprehensive and universal social welfare programs for populations enjoying roughly equal citizenship rights.

Instead, the New Deal was part of a hodgepodge of varied and sometimes hidden social welfare programs — some public, some private — that rewarded different groups of Americans for different reasons.

Seeing the New Deal alongside the unwieldy and unequal panoply of American social welfare prevents us from indulging in an exceptionalist narrative of its history, or embracing a misplaced nostalgia for a glorious historical moment.

For Whom Was the New Deal a Deal?

For workers in steady industrial jobs, working year-round, the New Deal provided economic security through unionization, labor protections, and social insurance. The lucky Americans who held these jobs were largely male and white, beneficiaries of the sex and race-segregated labor markets of the time.

Reinforced by the economic growth of World War II, the GI Bill, postwar prosperity, and the union-corporate accords of the 1950s, New Deal supports afforded these men — and their families — a higher standard of living, even when they were too old or sick to work, than any common citizens of the United States had ever experienced.

Their unions protected them in the workplace. Their bank accounts were insured. For some, the Federal Housing Administration provided loans. Unemployment insurance offered unprecedented protection from the vagaries of the volatile capitalist economy. And Social Security offered the promise of retirement or, upon death, the protection of wives and children — a historic first for the working class.

The New Deal cast the net of economic security wider than ever before, but not wide enough to bring in vast numbers of Americans who labored outside of the steady, salaried primary labor market.

Unskilled non-industrial workers never made it inside the original New Deal’s safety net. Southern and Western representatives of agricultural interests would not abide social protections and entitlements for the largely non-white agricultural workforce in their states.

Southerners lobbied for the right to discriminate against African Americans, whom they feared would leave plantations and domestic work for higher paid public works jobs. One DuPont vice president, an early member of the Liberty League, wrote angrily to a political sympathizer about the “Five negroes on my place in South Carolina [who] refused work this spring . . . saying they had easy jobs with the government.”

If applied equally, the New Deal’s public works and public welfare programs could offer economic alternatives to poorly paid, exploited African-American agricultural and domestic labor. Southerners traded their votes for the white, male industrial programs of the New Deal in order to prevent such eventualities in their states.

Domestic workers, employed throughout the country and largely non-white and female, were not entitled either to labor protections or social insurance. Non-whites, largely African Americans and Mexican Americans, were denied insurance or union protections because of their low status in the secondary labor market, and they were also discriminated against in New Deal recovery and public works programs.

The National Recovery Administration gave hiring preference to whites and sanctioned separate, lower pay scales for African Americans. The Public Works Administration and Works Progress Administration offered fewer programs in the agricultural areas of the country where non-whites were concentrated.

The Civilian Conservation Corps operated racially segregated camps, and the New Deal’s agricultural programs offered incentives to white landowners to throw African-American tenants and sharecroppers off the land. In the South and West, state and local leaders used the discretionary powers granted by the federal public assistance programs to limit cash assistance to African Americans.

Women, like non-whites, found that the New Deal did not provide them a very good deal, at least not directly. In the 1930s, women constituted between 24 and 30 percent of workers in the labor market (their share increased over the course of the decade). And although the Wagner Act legalized unionization, sex-segregated labor markets untouched by the New Deal meant that women had access to only about 10 to 15 percent of unionized jobs.

Fired on the premise that men had a greater need for the scarce jobs of the 1930s, many women sought public works jobs to support themselves. But women received only about 12 percent of New Deal public works program jobs — less than half their share of the labor market. The New Deal public works jobs that were open to them often placed them in the traditional sex-segregated female positions in which they labored in the private sector, like domestic service, sewing, and nursing.

Imagining them as secondary and non-essential, the New Deal cast women as less than full economic citizens, failing to offer even regular, salaried full-time female workers the same access to social and economic security as men. New Deal policymakers imagined women as fundamentally dependent on male breadwinners, and constructed New Deal social welfare programs around that image.

Social Security — the keystone of the New Deal welfare programs — also yoked women to their husbands in old age. Married women who had paid into the Social Security program would have to share the payments of their higher paid husbands and forfeit their own. Policymakers instead siphoned off married women’s payments into the general revenues of the program.

Networks of Exclusion

The New Deal was not just limited — it was also only one of numerous coexisting systems of welfare provision. Over the course of the twentieth century, millions of Americans derived social and economic support through myriad other government “welfare states” outside the New Deal orbit. These programs tended to accentuate the inequities institutionalized in the New Deal, bringing greater economic security to white, male breadwinners in the primary labor market.

The military welfare state for veterans and active duty personnel shored up the economic and social security of the millions of Americans — overwhelmingly men — who served in the wars of the twentieth century.

The post–World War II GI Bill was practically a New Deal of its own. It vaulted millions of American men and their families into the middle class through tuition payments and stipends, and home, farm, and small-business loans. GI Bills for the veterans of Korea and Vietnam, while not as generous as the original, continued the tradition of veterans’ support.

With the creation of the all-volunteer armed forces in 1973, the military began to offer generous social and economic welfare programs in order to recruit and retain the mostly male personnel it needed. For the over ten million personnel who have served since then, and their tens of millions of spouses and children, the military has offered what might be the most comprehensive social welfare system in the United States.

The post–World War II tax system formed another bulwark, providing write-offs for heterosexual marriage, children, and home ownership. Often unrecognized, these provisions operated as what Suzanne Mettler has called a hidden welfare state, and their credits helped build the Ozzie and Harriet suburbs that sustained millions of white men and their families.

Many American men with good jobs in the primary labor market were also able to access a private safety net in addition to a public one. White-collar salaried workers for America’s large blue chip corporations — overwhelmingly male and white — as well as unionized blue-collar workers in America’s postwar factories — again, mostly male and white — negotiated private employer–provided insurance and medical programs. Subsidized and encouraged by the government through corporate tax incentives, private employee benefits supplied the largely male managerial and unionized industrial workforce a private supplement to the New Deal welfare state under which they were already covered.

Franklin Roosevelt’s New Deal thus provided one important avenue of social welfare rather than the sole path to welfare provision. But these patchwork welfare states all worked in a kind of herky-jerky synchronicity to shore up the well-being of the New Deal’s initial beneficiaries, while leaving most non-whites and women with second-class social and economic citizenship.

The New Deal Legacy

There is now a vigorous debate among historians about the New Deal’s legacy. Some, like Jefferson Cowie and Nick Salvatore, argue that the New Deal’s exclusions, while real, should not diminish its achievements.

The Wagner Act, fair labor standards, Social Security, unemployment insurance, public assistance, and public works programs — all provided greater “collective economic security” to more Americans than ever before. The New Deal programs established the basis for a principle of social and economic protection that, they argue, could in theory be expanded to others.

But a wealth of scholarship, by people like Ira Katznelson and Alice Kessler-Harris, reveals that sanguine analyses like these overlook the compromised foundation of the New Deal’s achievements: it was precisely the exclusion of blacks and Mexicans, and the imagining of women as dependent wives, that allowed for the creation of a New Deal welfare state for white male breadwinners in regularized industrial and union jobs. The architecture of protection for white men was built in part on the backs of those who were denied full economic and social citizenship.

Good, protected jobs and social welfare now existed as the laudable opposite of lesser jobs — and lesser citizens. Southern and Western landowners could still exploit non-white labor in the fields or on the docks. African Americans and women would face barriers to challenging white men in the primary labor market, while married women would continue to be reliant on male breadwinners and provide needed domestic labor in those homes. The limited citizenship of many non-whites and women was traded for — and literally made possible by — the granting of full social and economic citizenship to white men.

Over time New Deal programs did expand to include more Americans. Social Security was extended to nearly 90 percent of American workers, and by the mid-1970s poverty among older Americans had dramatically declined. Unemployment insurance also expanded significantly, softening some of the hardship of the business cycle. New programs covering disabilities of various kinds, both through insurance and public assistance, were created from the 1940s through the 1970s, and in the past twenty years have constituted the fastest-growing realm of social protection.

As these expansions took place, marginal workers, women, and African Americans began to finally demand their own “New Deal.” In fact, entitlement to social support and economic protection constituted one of the central goals of both the Black Freedom Movement and the feminist movement. Women and nonwhites argued that the New Deal’s support programs were hallmarks of equal citizenship. In these ways, “rights” movements actually functioned as fights for equal access to the safety net that white men already enjoyed.

But those already on the inside of the New Deal met the requests of women and non-whites for access to entitlements with sharp rebukes.

Historians like Thomas Sugrue, Lisa Levenstein, and David Freund have documented how white, mostly male communities of workers and homeowners rejected African Americans’ claims to social protections they enjoyed, such as union-protected jobs, FHA loans, and access to public hospitals and schools. Marisa Chappell and Donald Critchlow have likewise demonstrated the ferocious backlash against women’s claims to equal employment opportunities and the feminist movement’s requests for social protections like childcare or maternity leave.

Traditional New Deal supporters balked at including non-whites and women, who now sought first-class citizenship. Indeed, they turned on those aspects of the welfare state most likely to benefit non-whites and women — public assistance, food stamps, and public housing.

Some in the traditional New Deal coalition colored the War on Poverty as a “black” program and rejected it even as Lyndon Johnson’s Great Society brought more highways, parks, and college loans to their suburban communities. They charged social movements demanding equal social protection with being divisive “individual rights” movements that undermined the imagined “collective spirit” of the “universal” New Deal and its legacy.

Even today, the charge that social movements’ focus on “individual rights” somehow fractured the liberal left and killed the New Deal coalition and its social welfare legacies carries weight among liberal and left scholars, even though it patently echoes the original unequal exclusion and entitlement of the New Deal.

Of course, this charge adds even more fuel to the fire for opponents of the New Deal, whose resistance to the inclusion of blacks and women formed an important prong of their assault.

Conservative Republicans and Southern Democrats had already beaten back additions to the New Deal — they soundly rejected legislation for full employment and universal health care in the 1940s. But their plans for rollback of the existing welfare state accelerated in the 1960s and 1970s amid the breakdown of the postwar economic order, just as African Americans and women began to gain access to the more inclusive programs of the New Deal as well as additional modes of social welfare.

In the 1980s, the frenzy over the “underclass” purportedly created by the War on Poverty and the obsession with eliminating Aid to Families with Dependent Children (AFDC) could not be understood without reference to the ways that racist and sexist ideas merged with philosophical support for “free” markets and opposition to “socialism” in the face of economic crisis.

In the past ten years, the Right has skillfully employed gendered and racialized dog whistles to delegitimize government itself through a strategy of “welfare-ization” of the state. Republicans liken public school teachers and road builders to “welfare queens” who bilk the taxpayers through their bloated benefits and dependency on government.

As Gov. Scott Walker said when he eliminated the right of Wisconsin’s public workers to collectively bargain for their wages and benefits: “We can no longer live in a society where public sector workers are the haves and the taxpayers who foot the bill are the have nots.” The jobs and pensions of government workers are now fair game.

Something From Nothing

Many of the social and political movements of today, on both the Left and the Right, lay bare the legacies of the New Deal’s opportunities and limitations. Those long cut out of the New Deal — and other social welfare programs — are still trying to secure first-class citizenship.

Retail and service workers enduring low wages, irregular hours, lack of benefits and time off, and layoffs are now involved in an increasingly recognized national movement: the $15 per hour movement. Minimum wage laws are being approved in municipalities and states, with successful democratic referenda behind many of them. Unorganized workers in various economic sectors are circumventing unionization and joining worker centers. Notably, these centers focus on entire communities of low-wage workers, and the communities’ needs, endeavoring to organize for social welfare, not just on-the-job protections.

The immigration movement — a literal fight for full citizenship — seeks access to all the social and economic protections undocumented people are now denied. Even the Black Lives Matter movement, which is primarily about policing, reflects the crises faced by communities that lack social and economic security and full citizenship. First-class citizenship means protection from police brutality along with rights to social and economic protections that can and should be shared among all citizens.

Yet the timing for these movements’ claims to the welfare state is precarious. The most marginal Americans are grasping for victories at the same moment that the long-term New Deal programs that first built a white, male middle class are coming under fire — the gutting of collective bargaining, the assault on public pensions, and even the continued threat of Social Security privatization. Whether the new movements will lay the basis for a fuller welfare state or are a last gasp before a full unraveling remains to be seen.

In this context New Deal nostalgia is a trap. It deludes us about happier times that were not in fact happy for many Americans. While the New Deal offered an unprecedented safety net for many, its holes allowed at least half of the population to fall through. And its dependence on unjust social arrangements accentuated inequalities among the population, as other parts of America’s piecemeal social welfare system amplified original exclusions.

Nostalgia’s backward-looking wistfulness discourages the vision necessary for change. New Dealers themselves never called on nostalgia for inspiration. With no existing welfare state, they could only look forward.

Those of us who value social and economic security, and embrace a radical program of social provision that challenges the drives of capital, must also look forward. We face a challenge just as difficult as the one facing activists and reformers in the 1930s — but of a far different kind.

Today we confront anti-state, pro-corporate politics stronger and more pervasive than during the Depression. We must  incorporate the claims of a far more diverse set of Americans — all those still waiting on their New Deal — and we are not inventing welfare, but taking on the unprecedented task of building from the ashes after its end.

If we are to realize the long overdue New Deal for everyone — and then go beyond even that — we’ll need an abundance of imagination rather than nostalgia.

Jennifer Mittelstadt is an associate professor of history at Rutgers University. Her latest book, The Rise of the Military Welfare State, will be published by Harvard University Press this fall.

Killing Haitian Democracy

The US’s repeated imperialist interventions in Haiti have left a legacy of despotism.

 
US Marines marching in Haiti in 1934. Bettmann / CORBIS

On July 28, 1915 the United States invaded Haiti, and imposed its diktat on the nation for close to two decades. The immediate pretext for the military intervention was the country’s chronic political instability that had culminated in the overthrow, mob killing, and bloody dismemberment of President Jean Vilbrun Guillaume Sam.

The American takeover was in tune with the Monroe Doctrine, first declared in 1823, which justified the United States’ presumption that it had the unilateral right to interfere in the domestic affairs of Latin America. But it was not until the late 1800s, when America had become a major world capitalist power, that it actually acquired the capacity to fulfill its extra-continental imperial ambitions. In 1898 it seized Cuba, Puerto Rico, and Guam, and soon afterwards took control of the Philippines, the Dominican Republic, and Haiti.

The US’s goal was to transform the Caribbean into an “American Mediterranean” inoculated against the influence of French, German, and Spanish power.

The 1915 invasion was in fact the culmination of America’s earlier interferences in Haiti — on eight separate occasions US marines had temporarily landed to allegedly “protect American lives and property.” The latter part of this claim was more accurate than the former, for these earlier skirmishes served to solidify and enhance the presence of American financial banking interests.

This priority became clear when, on December 17, 1914, US marines, acting on the orders of US Secretary of State William Jennings Bryan, forcibly removed Haiti’s entire gold reserve — valued at $500,000 — from the vaults of Banque Nationale. The bullion was transported to New York on the gunboat Machias and deposited in the National City Bank.

American imperialism had thus announced its designs; it was bent on undercutting French and German economic dominance as well as signaling to Haitian authorities that they would be forced to pay their debt to US private banks. From Washington’s perspective, Haiti had to establish a political order serving American economic and strategic objectives. Ultimately, the means to that end was an occupation.

The first task of the occupiers was to select a new president to replace Sam. Rosalvo Bobo, who headed a caco army that led the insurrection ending with Sam’s brutal demise, was on the verge of moving into the Palais National. The United States, however, had other ideas. Washington viewed Bobo as too nationalistic to assume the reins of power.

While Capt. Edward Beach, the chief of staff of Adm. Caperton who led the Marines’ takeover of Haiti, acknowledged Bobo’s immense popularity, he deemed him “utterly unsuited to be Haiti’s President” because he was “an idealist and dreamer.” In fact, Beach informed Bobo that the United States considered him “a menace and a curse to [Haiti]” and thus forbade him to stand as a candidate for the presidency.

A revolutionary nationalist like Bobo was inimical to American interests. While he was being forced into exile and his cacos were launching a futile uprising against the occupying forces, Adm. Caperton installed a new president who would “realize that Haiti must agree to any terms laid down by the United States.” This new president was Philippe Sudré Dartiguenave.

The US not only imposed the unpopular Dartiguenave on Haiti, it also compelled Haitian authorities to sign a treaty legalizing the occupation. Caperton had orders “to remove all opposition” to the treaty’s ratification. If that failed, the United States had every intention to “retain control” and “proceed to complete the pacification of Haiti.”

Not surprisingly, on November 11, 1915 the Haitian Senate ratified the treaty and placed the country under an American protectorate. The United States was to take full control of the country’s military, law enforcement, and financial system. The repressive and fraudulent means by which the occupation was rendered officially “legal” symbolized what “democracy” and “constitutional rule” meant under imperial rule.

Not satisfied with the mere ratification of the treaty, the United States sought to compel the Haitian National Assembly to adopt a new constitution made in Washington. Faced with the assembly’s opposition, Maj. Smedley Butler, the head of the Gendarmerie d’Haiti — the military contingent created by the United States to replace the Haitian army that it had disbanded — arbitrarily dissolved the assembly.

Having no room to maneuver, Dartiguenave signed the decree of dissolution. In waging their own coup d’état, the occupying forces continued a long-held practice of Haitian politics, but they modernized it. As Butler proclaimed, the gendarmerie had to dissolve the assembly “by genuinely Marine Corps methods” because it had become “so impudent.”

The “impudence” of the assembly partly stemmed from its refusal to grant foreigners the right to own property in Haiti. The US found this refusal unacceptable and decided that a coup was warranted to impose the laws of the capitalist market.

Armed with military power, imbued with an imperial mentality, and convinced of their “manifest destiny” and racial superiority, the American occupiers expected deference and obedience from Haitians. In fact, the key American policymakers in both Washington and Port-au-Prince entertained racist phobias and stereotypes and were bewildered by Haitian culture.

At best, the occupiers regarded Haitians as the product of a bizarre mixture of African and Latin cultures who had to be treated like children lacking the education, maturity, and discipline for self-government. At worst, Haitians were like their African forbears, inferior human beings, “savages,” “cannibals,” “gooks,” and “niggers.”

Robert Lansing, the secretary of state in the Woodrow Wilson administration, exemplified the racist American view:

The experience of Liberia and Haiti show that the African race are devoid of any capacity for political organization and lack genius for government. Unquestionably there is in them an inherent tendency to revert to savagery and to cast aside the shackles of civilization which are irksome to their physical nature . . . It is that which makes the Negro problem practically unsolvable.

For the occupiers, Haitians thus had no capacity to run their own affairs or even appreciate the alleged benefits of America’s invasion. As High Commissioner Russell put it, “Haitian mentality only recognizes force, and appeal to reason and logic is unthinkable.”

And indeed, the American-led gendarmerie used brutal force to impose its grip on Haitian society and squash all opposition. Adm. Caperton declared martial law on September 3, 1915. It would last fourteen years, facilitating the establishment of a new regime of corvée (forced, unpaid labor), as well as the brutal suppression of the caco guerrilla resistance against American forces.

Overseen by the repressive control of the gendarmerie, the unpopular corvée system compelled peasants to work as virtual “slave gangs.” The massive mobilization of coerced labor helped build roads that reached remote areas of the territory; the creation of a viable network of transportation was not merely a means of spurring economic and commercial development, but a result of American strategic considerations.

Putting down the cacos who had supported Bobo and joined the popular guerrillas of Charlemagne Péralte required the penetration of the countryside to prevent any further recruitment of peasants into the forces of resistance.

The corvée system of forced labor extraction and the military repression of the guerrillas were thus symbiotically connected. Riddled with abuse, the corvée failed to stifle opposition, however. Instead, coercing the peasantry to labor on infrastructural projects just fueled greater resistance to the occupation.

Popular support for the cacos grew, and soon there was an embryonic movement of national liberation with an increasingly sophisticated guerrilla force under the leadership of Péralte. Péralte, who called himself Chef Suprême de la Révolution en Haïti, explained that he was fighting the occupiers to gain Haiti’s liberation from American imperialism.

In the eyes of American authorities, however, the cacos, Péralte, and his supporters were nothing but “bandits,” “criminals,” and “killers” who had to be thoroughly “pacified.” And so they were. Péralte was shot on November 1, 1919, and his successor, Benoît Batraville, suffered a similar fate on May 19 of the following year. By 1921 the American pacification of the country was virtually complete. Some 2,000 insurgents had been killed, and more than 11,000 of their sympathizers had been incarcerated.

Still, pacification did not imply popular acquiescence. It is true that the traditional Haitian elites initially collaborated with and even welcomed American imperialism. But as they experienced the unmitigated racism of the occupying forces, the elites turned against them and espoused varied forms of nationalist resistance.

While not inclined to back the caco insurgents, these elites developed a sense of nationhood that curbed the significance of color but had little impact on the salience of class identities. In the eyes of most Haitians, those who had participated actively in the occupation machinery, like President Dartiguenave or his successor, Louis Borno, were opportunistic collaborators or simply traitors.

In fact, many of these collaborators had authoritarian reflexes and shared some of the paternalistic and racist ideology of their American overlords. Convinced that Haitians were not prepared for any democratic form of self-government, these elites believed in the despotisme éclairé of the plus capables (the enlightened despotism of the most capable).

In addition, from their privileged class position they regarded the rest of their compatriots — especially the peasantry — with contempt. In an official letter to the nation’s prefects, President Borno openly expressed this disdain:

Our rural population, which represents nine-tenths of the Haitian population, is almost totally illiterate, ignorant and poor . . . it is still incapable of exercising the right to vote, and would be the easy prey of those bold speculators whose conscience hesitates at no lie.

[The] present electoral body . . . is characterized by a flagrant inability to assume . . . the heavy responsibilities of a political action.

Borno was a dictator, but a dictator under American control. His rule embodied what Haitians called la dictature bicéphale, the “dual despotism” of American imperialism and its domestic clients. This regime of repression had unintended consequences. It intensified the level of nationalist resistance to the occupation and contributed to a convergence of interests between intellectuals, students, public workers, and peasants.

This growing mobilization against the occupation precipitated the 1929 Marchaterre massacre, when some fifteen hundred peasants protesting high taxation confronted armed marines who then opened fire on the crowd. Twenty-four Haitians died and fifty-one were wounded. The massacre set in motion a series of events that would eventually lead the United States to reassess its policies and presence in Haiti.

President Herbert Hoover created a commission whose primary objective was to investigate “when and how we are to withdraw from Haiti.” The commission — which took the name of its chair, Cameron Forbes, who had served in the Philippines as chief of the constabulary and then as governor — acknowledged that the US had not accomplished its mission and that it had failed “to understand the social problems of Haiti.”

While the commission astonishingly claimed that the occupation’s failure was due to the “brusque attempt to plant democracy there by drill and harrow” and to “its determination to set up a middle class,” it ultimately recommended the withdrawal of the United States from Haiti.


The commission advised, however, that the withdrawal not be immediate, but rather that it should take place only after the successful “Haitianization” of the public services as well as the gendarmerie. Forbes also understood that President Borno had no legitimacy and could be sacrificed. Borno was forced to retire and arrange the election of an interim successor who would in turn organize general elections. Sténio Vincent, a moderate nationalist who favored a gradual, negotiated ending to the occupation, thus became president in November 1930.

Vincent’s gradualism was in tune with the Forbes Commission’s recommendation for the accelerated Haitianization of the commanding ranks of the government and the eventual withdrawal of all American troops. While Forbes and Vincent operated on the assumption that the United States’ withdrawal would not occur until 1936, the election of Franklin Roosevelt in 1932 altered events.

Roosevelt’s new “Good Neighbor” strategy toward Latin America was rooted in the premise that direct occupation through military intervention was expensive, counterproductive, and in most instances unnecessary. It was not that the forceful occupation of another country was precluded; it simply became a last resort.

Roosevelt understood that in Latin America, the United States could impose its hegemony through local allies and surrogates, especially through military corps and officers that it had trained, organized, and equipped. It is this perspective that explains the American decision to withdraw from Haiti. In fact, what Haitians came to call “second independence” arrived two months earlier than expected. On a visit to Cap-Haïtien, in the north of the country, Roosevelt announced that the American occupation would end on August 15, 1934.

After close to twenty years of dual dictatorship, Haitians were left with a changed nation. American rule had contributed to the centralization of power in Port-au-Prince and the modernization of the monarchical presidentialism that had always characterized Haitian politics. With the American occupation, praetorian power came to reside in the barracks of the capital, which had supplanted the regional armed bands that had hitherto been decisive in the making, and unmaking, of political regimes.

Moreover, the subordination of the Haitian president to American marine forces had nurtured a politics of military vetoes and interference that would eventually undermine civilian authority and help incite the numerous coups of post-occupation Haiti. To remain in office, the executive would have to depend on the support of the military, which had been centralized in Port-au-Prince.

The supremacy of Port-au-Prince also implied the privileging of urban classes to the detriment of the rural population. Peasants continued to be excluded from the moral community of les plus capables, and they came under a strict policing regime of law and order.

The occupation never intended to cut the roots of authoritarianism; instead, it planted them in a more rational and modern terrain. By establishing a communication network that became a means of policing and punishing the population, and by creating a more effective and disciplined coercive force, American rule left a legacy of authoritarian and centralized power. It suppressed whatever democratic and popular forms of accountability and protests it confronted, and nurtured the old patterns of fraudulent electoral practices, giving the armed forces ultimate veto on who would rule Haiti.

Elections during the occupation, and for more than seventy years afterward, were never truly free and fair. In most cases, the outcome of elections had less to do with the actual popular vote than with compromises reached between Haiti’s ruling classes and imperial forces. Thus, elections lacked the degree of honesty and openness required to define a democratic order. The occupation imposed its rule through fraud, violence, and deceit, and little changed after it ended.

It is true that the imperial presence from 1915 to 1934 contributed to the building of a modest infrastructure of roads and clinics, but it did so with the most paternalistic and racist energy. American authorities convinced themselves that their mission was to bring development and civilization to Haiti. They presumed that Haitians were utterly incapable of doing so on their own. Not surprisingly, they used methods of command and control to achieve their project, a practice that reinforced the existing authoritarian patterns of unaccountable, undemocratic governance.

Interestingly, when one examines the strategy and rhetoric from the 1915–1934 occupation, one can see that it foreshadowed the contemporary “modernization” and “failed states” theories that have justified western interventionism during and after the Cold War era. Except for its unmitigated racism, the old interventionism differs little from the twenty-first century doctrines of “humanitarian militarism” and “responsibility to protect.”

In fact, since the fall of the US-backed Duvalier dictatorship in 1986 and the catastrophic earthquake of 2010, the country has been involved in an unending democratic transition marred by persistent imperial interventions that have transformed it into a quasi-protectorate of the international community.

Foreign powers, particularly the United States and to a lesser extent France and Canada, have regarded Haiti as a “failed state” that could not function without the massive political, military, and economic presence of outsiders.

One hundred years after the first American occupation and three decades after Jean-Claude Duvalier’s popular ouster, Haiti has been reoccupied twice by American marines, who have paved the way for the current, interminable, and humiliating presence of a United Nations “peace-keeping” force. The imperial language has barely changed. American rhetoric justifies occupation in the name of “stability,” “domestic security,” and the dangers of “populist and anti-market political forces.” The US continues to promise the development of a modern capitalist economy, a middle class society, and a democratic order.

That all of these occupations failed miserably to achieve these goals indicates the obdurate limits and contradictions of any project of development sponsored and imposed by imperial forces. These occupations also warn us about the justifications, dangers, and vicissitudes of interventions in the current era of neoliberal globalization.

Facilitated by the corruption of Haiti’s ruling classes, old and new imperial interventions have consistently failed to deliver what they promise; in fact, they have condemned Haiti to virtual trusteeship, turning it into a vassal country suffering from a recurring emergency syndrome.

Robert Fatton Jr is a professor in the Department of Politics at the University of Virginia. His most recent book is Haiti: Trapped in the Outer Periphery.

Disneyland’s 60th Anniversary: July 17, 1955

Walt Disney shows Disneyland plans to Orange County officials, Dec. 1954 (Photo: Orange County Archives)

“To all who come to this happy place: Welcome.” With those words, Walter Elias Disney officially dedicated Disneyland on July 17, 1955. But Disneyland almost didn’t happen, and its opening day was nearly a total disaster. If not for Disney’s indomitable will and savvy deal-making, “The Happiest Place on Earth” would never have succeeded.

Walt Disney was a man always on the lookout for “the next big thing.” He had burst onto the entertainment scene in the 1920s with the first sound-synchronized cartoons. He then pioneered the first color cartoons and, thanks to the invention of the multiplane camera, the first cartoons with visual depth. Then, of course, in 1937 he produced Snow White and the Seven Dwarfs, the first feature-length cel-animated film and a spectacular technological and artistic achievement that became a worldwide sensation. By the late 1940s, however, Disney was growing tired of animation: the stock studio characters were growing stale, the shorts were increasingly formulaic and uninspired, and the feature films paled in comparison to his great achievements of Snow White and Fantasia (1940). Moreover, Disney’s controversial foray into war-time propaganda cost the studio financially and hurt his reputation as an innovative artist.

Disappointed and bored, Disney turned to a different kind of entertainment: amusement parks. Since boyhood, he had always been fascinated by magic, theater, and public fairs. His father Elias had been a construction worker on the famed Chicago World’s Fair in 1893, and it is not too far-fetched to imagine that young Walt heard many tales of that Fair’s fantastical wonders. As early as the 1930s, Disney had toyed with mechanical “flea circuses” and miniature attractions that depicted scenes from American history. After World War Two, though, the tinkering became more serious. Disney selected his favorite animators, design artists, and story writers to form a separate company: WED Enterprises (taking the name from his initials). Under the leadership of engineer and retired navy admiral Joe Fowler, these teams of “Imagineers,” as Disney called them, produced concept art and attraction designs for the would-be park.

Simultaneously, Disney delved into live action films, and the Imagineers who worked on them gained valuable experience with set design and staging. Highly stylized cinematic successes such as Treasure Island (1950) became templates for the park. Disneyland, as Disney envisioned it, would essentially be an immersive experience in which guests would participate in a live-action show. Employees would be “cast members,” and time on the job would be called “on stage.” Even the entrance to the park was designed to be theatrical, with attraction posters, popcorn, and “red carpet” concrete.

Crafting concept art and set designs was easy, but actually building the park presented enormous problems. In July 1953, Disney hired the Stanford Research Institute to study the park’s potential profitability and to scout possible locations. After an exhaustive search, the team concluded the venture would make money and should be located in sleepy Anaheim, California, near a new freeway. Next up was getting the cash to begin construction. Walt’s older brother Roy, who handled the studio’s finances and who had a knack for finding ways to fund Walt’s dreams, was deeply skeptical – there would be no big brother bailout this time, as there had been for past projects. Instead, Disney had to turn to outside investors, namely ABC Television, TWA, Richfield Oil, Monsanto, Kodak, Carnation, and Pepsi. ABC would front the initial cash in exchange for producing and airing a new weekly Disneyland television show, while the other corporations agreed to sponsor individual attractions. With impressive ease and speed, by July 1954 Disney had assembled the deals and the money needed to break ground.

Construction was frantic. Before the first shovel touched soil, Disney had agreed to an opening date of July 17, 1955, and every episode of the phenomenally successful Disneyland show reinforced that deadline and built tremendous anticipation around the world. The hectic pace and the unprecedented nature of what was effectively a massive urban planning project (entailing a 160-acre city with a main street, town hall, shops, and restaurants; a river with passenger boats; a castle; and four unique “lands”) resulted in a plethora of problems: money shortages, labor strikes, scarcity of asphalt, and rivers that would not hold water, just to name a few. As the summer of 1955 neared, construction crews were operating round-the-clock. Men, material, and money were exhausted to meet the deadline.

Finally, the day arrived. Disney, along with his entertainment buddies Art Linkletter and Ronald Reagan, went on the air for an unprecedented two-hour live broadcast of Disneyland. A smashing 90 million viewers tuned in, enthralled by the combination of live television and Disney magic. The audience never saw the disastrous failures of that momentous occasion. The reality was that the park was barely finished. The asphalt had been so recently poured that women’s heels sank into the streets; “Tomorrowland” was incomplete, and banners had to be hung at the last moment to hide the construction; gas leaks temporarily shut down “Fantasyland”; restaurants ran out of food; there were not enough drinking fountains and restrooms; and forged tickets resulted in suffocating crowds that outstripped the park’s capacity.

“Black Sunday,” as it was known by cast members and Imagineers, was a day of crises, but in the following weeks and months, Disney and his crew patched up the problems, finished the park, and set about creating new, even more exciting attractions. Influential urban planner James Rouse would soon call Disneyland “the greatest piece of urban design in the U.S. today,” and millions of visitors from around the globe flocked to see Disney’s marvel. Even celebrities and world leaders were eager to experience the excitement: Vice President Richard Nixon and his family visited a month after Disneyland opened, with Nixon chirping, “This is a paradise for children and grown-ups, too. My children have been after me for weeks to bring them here.” In 1957, the King of Morocco loved the park so much he snuck out of his hotel for a second visit, and that same year, former president Harry Truman joined the fun, joking that he would not ride the Dumbo attraction since elephants were a symbol of the Republican Party. Two years later, Senator John Kennedy and King Hussein of Jordan made the trip. The list goes on and on. And it is worth noting that the park itself has changed dramatically from those early days, improving (“plussing,” in Disney’s words) old attractions, adding new rides, and debuting cutting-edge technology, such as the Matterhorn Bobsleds in 1959, the world’s first metal-tubing roller coaster.

Disneyland today is both the park of Disney’s 1955 dream and a very different one. Visitors can still feel the special touch and presence of “Uncle Walt,” but park attractions and the technological innovations now surpass anything Disney could have imagined. Just as important as the thrills and pixie dust, however, is Disneyland’s role in Disney’s growing interest in urban planning and evolving partnership with major corporations. The park’s tremendous success (financially and logistically) gave Disney and his studio the momentum and money needed to delve into an even bigger project: an entire “city of the future” in central Florida. There, with corporate backing and unprecedented political autonomy granted by the state, Disney would build more than a Disneyland; he would build a Disney World.

Michael Landis

Michael Todd Landis is an Assistant Professor of History at Tarleton State University. He is the author of Northern Men with Southern Loyalties: The Democratic Party and the Sectional Crisis (Cornell, 2014).