
Archive for November 2014

Remember the Sand Creek Massacre


NEW HAVEN — Many people think of the Civil War and America’s Indian wars as distinct subjects, one following the other. But those who study the Sand Creek Massacre know different.

On Nov. 29, 1864, as Union armies fought through Virginia and Georgia, Col. John Chivington led some 700 cavalry troops in an unprovoked attack on peaceful Cheyenne and Arapaho villagers at Sand Creek in Colorado. They murdered nearly 200 women, children and older men.

Sand Creek was one of many assaults on American Indians during the war, from Patrick Edward Connor’s massacre of Shoshone villagers along the Idaho-Utah border at Bear River on Jan. 29, 1863, to the forced removal and incarceration of thousands of Navajo people in 1864 known as the Long Walk.

In terms of sheer horror, few events matched Sand Creek. Pregnant women were murdered and scalped, genitalia were paraded as trophies, and scores of wanton acts of violence characterized the accounts of the few Army officers who dared to report them. Among them was Capt. Silas Soule, who had been with Black Kettle and Cheyenne leaders at the September peace negotiations with Gov. John Evans of Colorado, the region’s superintendent of Indian affairs (as well as a founder of both the University of Denver and Northwestern University). Soule publicly exposed Chivington’s actions and, in retribution, was later murdered in Denver.

After news of the massacre spread, Evans and Chivington were forced to resign from their appointments. But neither faced criminal charges, and the government refused to compensate the victims or their families in any way. Indeed, Sand Creek was just one part of a campaign to take the Cheyenne’s once vast land holdings across the region. A territory that had hardly any white communities in 1850 had, by 1870, lost many Indians, who were pushed violently off the Great Plains by white settlers and the federal government.

These and other campaigns amounted to what is today called ethnic cleansing: an attempted eradication and dispossession of an entire indigenous population. Many scholars suggest that such violence conforms to other 20th-century categories of analysis, like settler colonial genocide and crimes against humanity.

Sand Creek, Bear River and the Long Walk remain important parts of the Civil War and of American history. But in our popular narrative, the Civil War obscures such campaigns against American Indians. In fact, the war made such violence possible: The paltry Union Army of 1858, before its wartime expansion, could not have attacked, let alone removed, the fortified Navajo communities in the Four Corners, while Southern secession gave a powerful impetus to expand American territory westward. Territorial leaders like Evans were given more resources and power to negotiate with, and fight against, powerful Western tribes like the Shoshone, Cheyenne, Lakota and Comanche. The violence of this time was fueled partly by the lust for power by civilian and military leaders desperate to obtain glory and wartime recognition.

The United States has yet to fully recognize the violent destruction wrought against indigenous peoples by the Civil War and the Union Army. Connor and Evans have cities, monuments and plaques in their honor, as well as two universities and even Colorado’s Mount Evans, home to the highest paved road in North America.

Saturday’s 150th anniversary will be commemorated in many ways: at the National Park Service’s Sand Creek Massacre National Historic Site, the descendant Cheyenne and Arapaho communities, other Native American community members and their non-Native supporters will mark the massacre. An annual memorial run will trace the route of Chivington’s troops from Sand Creek to Denver, where an evening vigil will be held Dec. 2.

The University of Denver and Northwestern are also reckoning with this legacy, creating committees that have recognized Evans’s culpability. Like many academic institutions, both are deliberating how to expand Native American studies and student service programs. Yet the near-absence of Native American faculty members, administrators and courses reflects their continued failure to take more than partial steps.

While the government has made efforts to recognize individual atrocities, it has a long way to go toward recognizing how deeply the decades-long campaign of eradication ran, let alone recognizing how, in the face of such violence, Native American nations and their cultures have survived. Few Americans know of the violence of this time, let alone the subsequent violation of Indian treaties, of reservation boundaries and of Indian families by government actions, including the half-century of forced removal of Indian children to boarding schools.

One symbolic but necessary first step would be a National Day of Indigenous Remembrance and Survival, perhaps on Nov. 29, the anniversary of Sand Creek. Another would be commemorative memorials, not only in Denver and Evanston but in Washington, too. We commemorate “discovery” and “expansion” with Columbus Day and the Gateway Arch, but nowhere is there national recognition of the people who suffered from those “achievements” — and have survived amid continuing cycles of colonialism.

Correction: November 27, 2014
An earlier version of this article incorrectly stated that the American Indian leader Black Kettle was killed in the Sand Creek Massacre. He died at the Battle of Washita in Oklahoma in 1868. 

Read Full Post »

America’s Founding Myths

This Thanksgiving, it’s worth remembering that the narrative we hear about America’s founding is wrong. The country was built on genocide.

Massacre of American-Indian women and children in Idaho.

Under the crust of that portion of Earth called the United States of America — “from California . . . to the Gulf Stream waters” — are interred the bones, villages, fields, and sacred objects of American Indians. They cry out for their stories to be heard through their descendants who carry the memories of how the country was founded and how it came to be as it is today.

It should not have happened that the great civilizations of the Western Hemisphere, the very evidence of the Western Hemisphere, were wantonly destroyed, the gradual progress of humanity interrupted and set upon a path of greed and destruction. Choices were made that forged that path toward destruction of life itself—the moment in which we now live and die as our planet shrivels, overheated. To learn and know this history is both a necessity and a responsibility to the ancestors and descendants of all parties.

US policies and actions related to indigenous peoples, though often termed “racist” or “discriminatory,” are rarely depicted as what they are: classic cases of imperialism and a particular form of colonialism—settler colonialism. As anthropologist Patrick Wolfe writes, “The question of genocide is never far from discussions of settler colonialism. Land is life — or, at least, land is necessary for life.”

The history of the United States is a history of settler colonialism — the founding of a state based on the ideology of white supremacy, the widespread practice of African slavery, and a policy of genocide and land theft. Those who seek history with an upbeat ending, a history of redemption and reconciliation, may look around and observe that such a conclusion is not visible, not even in utopian dreams of a better society.

That narrative is wrong or deficient, not in its facts, dates, or details but rather in its essence. Inherent in the myth we’ve been taught is an embrace of settler colonialism and genocide. The myth persists, not for a lack of free speech or poverty of information but rather for an absence of motivation to ask questions that challenge the core of the scripted narrative of the origin story.

Woody Guthrie’s “This Land Is Your Land” celebrates that the land belongs to everyone, reflecting the unconscious manifest destiny we live with. But the extension of the United States from sea to shining sea was the intention and design of the country’s founders.

“Free” land was the magnet that attracted European settlers. Many were slave owners who desired limitless land for lucrative cash crops. After the war for independence but before the US Constitution, the Continental Congress produced the Northwest Ordinance. This was the first law of the incipient republic, revealing the motive for those desiring independence. It was the blueprint for gobbling up the British-protected Indian Territory (“Ohio Country”) on the other side of the Appalachians and Alleghenies. Britain had made settlement there illegal with the Proclamation of 1763.

In 1801, President Jefferson aptly described the new settler-state’s intentions for horizontal and vertical continental expansion, stating, “However our present interests may restrain us within our own limits, it is impossible not to look forward to distant times, when our rapid multiplication will expand itself beyond those limits and cover the whole northern, if not the southern continent, with a people speaking the same language, governed in similar form by similar laws.”

Origin narratives form the vital core of a people’s unifying identity and of the values that guide them. In the United States, the founding and development of the Anglo-American settler-state involves a narrative about Puritan settlers who had a covenant with God to take the land. That part of the origin story is supported and reinforced by the Columbus myth and the “Doctrine of Discovery.”

The Columbus myth suggests that from US independence onward, colonial settlers saw themselves as part of a world system of colonization. “Columbia,” the poetic, Latinate name used in reference to the United States from its founding throughout the nineteenth century, was based on the name of Christopher Columbus.

The “Land of Columbus” was—and still is—represented by the image of a woman in sculptures and paintings, by institutions such as Columbia University, and by countless place names, including that of the national capital, the District of Columbia. The 1798 hymn “Hail, Columbia” was the early national anthem and is now used whenever the vice president of the United States makes a public appearance, and Columbus Day is still a federal holiday despite Columbus never having set foot on any territory ever claimed by the United States.

To say that the United States is a colonialist settler-state is not to make an accusation but rather to face historical reality. But indigenous nations, through resistance, have survived and bear witness to this history. The fundamental problem is the absence of the colonial framework.

Settler colonialism, as an institution or system, requires violence or the threat of violence to attain its goals. People do not hand over their land, resources, children, and futures without a fight, and that fight is met with violence. In employing the force necessary to accomplish its expansionist goals, a colonizing regime institutionalizes violence. The notion that settler-indigenous conflict is an inevitable product of cultural differences and misunderstandings, or that violence was committed equally by the colonized and the colonizer, blurs the nature of the historical processes. Euro-American colonialism had from its beginnings a genocidal tendency.

The term “genocide” was coined following the Shoah, or Holocaust, and its prohibition was enshrined in the United Nations convention adopted in 1948: the UN Convention on the Prevention and Punishment of the Crime of Genocide.

The convention is not retroactive but is applicable to US-indigenous relations since 1988, when the US Senate ratified it. The terms of the genocide convention are also useful tools for historical analysis of the effects of colonialism in any era. In the convention, any one of five acts is considered genocide if “committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group”:

  • killing members of the group;
  • causing serious bodily or mental harm to members of the group;
  • deliberately inflicting on the group conditions of life calculated to bring about its physical destruction in whole or in part;
  • imposing measures intended to prevent births within the group;
  • forcibly transferring children of the group to another group.

Settler colonialism is inherently genocidal in terms of the genocide convention. In the case of the British North American colonies and the United States, not only extermination and removal were practiced but also the disappearing of the prior existence of indigenous peoples—and this continues to be perpetuated in local histories.

Anishinaabe (Ojibwe) historian Jean O’Brien names this practice of writing Indians out of existence “firsting and lasting.” All over the continent, local histories, monuments, and signage narrate the story of first settlement: the founder(s), the first school, first dwelling, first everything, as if there had never been occupants who thrived in those places before Euro-Americans. On the other hand, the national narrative tells of “last” Indians or last tribes, such as “the last of the Mohicans,” “Ishi, the last Indian,” and End of the Trail, as a famous sculpture by James Earle Fraser is titled.

From the Atlantic Ocean to the Mississippi River and south to the Gulf of Mexico lay one of the most fertile agricultural belts in the world, crisscrossed with great rivers. Naturally watered, teeming with plant and animal life, temperate in climate, the region was home to multiple agricultural nations. In the twelfth century, the Mississippi Valley region was marked by one enormous city-state, Cahokia, and several large ones built of earthen, stepped pyramids, much like those in Mexico. Cahokia supported a population of tens of thousands, larger than that of London during the same period.

Other architectural monuments were sculpted in the shape of gigantic birds, lizards, bears, alligators, and even a 1,330-foot-long serpent. These feats of monumental construction testify to the levels of civic and social organization. Called “mound builders” by European settlers, the people of this civilization had dispersed before the European invasion, but their influence had spread throughout the eastern half of the North American continent through cultural influence and trade.

What European colonizers found in the southeastern region of the continent were nations of villages with economies based on agriculture, with corn as the mainstay. This was the territory of the nations of the Cherokee, Chickasaw, and Choctaw and of the Muskogee Creek and Seminole, along with the Natchez Nation in the western part of the region, the Mississippi Valley.

To the north, a remarkable federal state structure, the Haudenosaunee Confederacy — often referred to as the Six Nations of the Iroquois Confederacy — was made up of the Seneca, Cayuga, Onondaga, Oneida, and Mohawk Nations and, from early in the eighteenth century, the Tuscaroras. This system incorporated six widely dispersed and unique nations of thousands of agricultural villages and hunting grounds from the Great Lakes and the St. Lawrence River to the Atlantic, and as far south as the Carolinas and inland to Pennsylvania.

The Haudenosaunee peoples avoided centralized power by means of a clan-village system of democracy based on collective stewardship of the land. Corn, the staple crop, was stored in granaries and distributed equitably in this matrilineal society by the clan mothers, the oldest women from every extended family. Many other nations flourished in the Great Lakes region where now the US-Canada border cuts through their realms. Among them, the Anishinaabe Nation (called by others Ojibwe and Chippewa) was the largest.

In the beginning, Anglo settlers organized irregular units to brutally attack and destroy unarmed indigenous women, children, and old people using unlimited violence in unrelenting attacks. During nearly two centuries of British colonization, generations of settlers, mostly farmers, gained experience as “Indian fighters” outside any organized military institution.

Anglo-French conflict may appear to have been the dominant factor of European colonization in North America during the eighteenth century, but while large regular armies fought over geopolitical goals in Europe, Anglo settlers in North America waged deadly irregular warfare against the indigenous communities.

The chief characteristic of irregular warfare is extreme violence against civilians, in this case the tendency to seek the utter annihilation of the indigenous population. “In cases where a rough balance of power existed,” observes historian John Grenier, “and the Indians even appeared dominant—as was the situation in virtually every frontier war until the first decade of the 19th century—[settler] Americans were quick to turn to extravagant violence.”

“Indeed, only after seventeenth- and early-eighteenth-century Americans made the first way of war a key to being a white American could later generations of ‘Indian haters,’ men like Andrew Jackson, turn the Indian wars into race wars.” By then, the indigenous peoples’ villages, farmlands, towns, and entire nations formed the only barrier to the settlers’ total freedom to acquire land and wealth. Settler colonialists again chose their own means of conquest. Such fighters are often viewed as courageous heroes, but killing the unarmed women, children, and old people and burning homes and fields involved neither courage nor sacrifice.

US history, as well as inherited indigenous trauma, cannot be understood without dealing with the genocide that the United States committed against indigenous peoples. From the colonial period through the founding of the United States and continuing in the twenty-first century, this has entailed torture, terror, sexual abuse, massacres, systematic military occupations, removals of indigenous peoples from their ancestral territories, and removals of indigenous children to military-like boarding schools.

Once in the hands of settlers, the land itself was no longer sacred, as it had been for the indigenous. Rather, it was private property, a commodity to be acquired and sold. Later, when Anglo-Americans had occupied the continent and urbanized much of it, this quest for land and the sanctity of private property were reduced to a lot with a house on it, and “the land” came to mean the country, the flag, the military, as in “the land of the free” of the national anthem, or Woody Guthrie’s “This Land Is Your Land.”

Those who died fighting in foreign wars were said to have sacrificed their lives to protect “this land” that the old settlers had spilled blood to acquire. The blood spilled was largely indigenous.

Adapted from An Indigenous Peoples’ History of the United States, out now from Beacon Press.

Read Full Post »

This 75th Anniversary’s Been Overlooked. It Shouldn’t Be

Paula Rabinowitz  

HNN   November 22, 2014

Seventy-five years ago, paperback books returned to the United States under the brand name Pocket Books, which began publishing its mass-market paperbacks, sold at a quarter each, with ten titles, among them Frank Buck’s Bring ‘Em Back Alive, Bambi by Felix Salten, James Hilton’s Lost Horizon and Emily Brontë’s Wuthering Heights. Returned, because nineteenth-century printers often bound books in paper, yet the practice had all but disappeared during the early part of the twentieth century. It may seem odd to commemorate the advent of cheap pulpy books instead of the far more significant anniversary: the signing of the Hitler-Stalin Pact on August 23, 1939. But the saga of cheap paperbacks’ arrival on American soil is intimately tied to the Second World War and its aftermath in a number of ways, deriving from and contributing to wartime innovation, necessity, mobility and censorship.

Modern paperbacks were the Depression-era brainchild of English editor Allen Lane, who developed Penguin Books in 1935 in order to provide high-quality literary works as cheaply as a pack of cigarettes. Publishing on such a massive scale depended on huge supplies of paper, which, once Britain declared war on Germany, were sharply curtailed in the UK. But the US still had an abundance of trees and paper mills, and whether Lane’s assistant Ian Ballantine and others stole the idea, as E.L. Doctorow remembers in Reporting the Universe, or Lane shipped the enterprise overseas with Kurt Enoch and Victor Weybright (as he recalled in his memoir The Making of a Publisher), the new medium, appearing on drugstore racks, in bus stations and at corner candy stores, became a kitschy icon that indelibly altered American tastes and habits during the mid-twentieth century. Within a few months of their initial arrival, paperbacks were everywhere. Despite the ubiquity of radio and the Hollywood banner year of 1939, when Gone with the Wind and The Wizard of Oz swept into movie theaters with lush colors, books were the mass media of wartime America. The advent of color assured a renewed love affair with the movies, even as the 1939 World’s Fair in New York marked the introduction of television, the next frontier in mass communications, which would come into its own in the 1950s. But the ability to own a book, one printed by the millions, connected Americans to new ideas in science, economics, art—not to mention new sensations about reading and the self and each other.

These new objects, emblazoned with lurid cover art and risqué tag lines, were priced to sell and, once the US entered the war, were imprinted with an admonition to send the volumes overseas to servicemen. War spurs technological breakthroughs, usually in weaponry or communication; paperbacks were part of this process, a new technology that transformed both the battlefield and home front. Books, unlike other mass media, such as the radio or movies, were tangible things that could be purchased and, like a salami from Katz’s Delicatessen, then sent “to your boy in the army.” Paperback books participated directly in the war effort when publishers and booksellers banded together to produce the Armed Services Editions—millions of books distributed free to the Army and Navy, which left a legacy that influenced a generation of Japanese and Italian scholars to study American literature when they found these handy yellow-covered books or their companions, the Overseas Editions, among their grandfathers’ war surplus (the ASE books could not be brought back to the US, so they were dumped overseas). Books are always causing trouble; even this patriotic gesture ran afoul of Congressional attempts, through amendments to the 1944 Soldier’s Voting Act, to limit the use of certain words in publications distributed to troops that might appear to sway their political opinions.

By the 1950s, after paperback publishing exploded to encompass many imprint houses and augmented reprints with PBOs (paperback originals), the books’ provocative covers—which had been a crucial design element meant to spur sales but also to bring vernacular modernist visual culture into private life—sparked police departments to impound books and Congress to investigate “Current Pornographic Materials,” including paperbacks, during the 1952 Gathings subcommittee hearings. What had been allowed to proliferate during the Second World War, when millions were in uniform and social mores superintending men’s and women’s behaviors loosened considerably, needed to be reined in during the Korean War and the Cold War.

Paperback book publishers had long been aware of real and potential censorship efforts mounted in the United States, most notably the 1933 case United States v. One Book Called “Ulysses.” Its 1934 appeal decision by Augustus Hand declared that the book must be “taken as a whole,” so that even patently “obscene” portions “relevant to the purpose of depicting the thoughts of the characters … are introduced to give meaning to the whole.” This decision was aimed at the literary merits of the work and its “sincerity” of portraying characters, but because the law was aimed at “one book,” the book itself, as a total package from cover to cover, might be considered “as a whole.” Paperback publishers exploited the pulpy aspects of their product, with louche and debauched cover art attracting visual attention; but the covers rigorously conformed to the Ulysses ruling: each depicted a scene found within the book—even if only in a few words. The paperback was a complete work consisting not only of words but art as well.

This handy package, arriving on American shores in the midst of war’s horrors—offering its owners a “complete and unabridged” work, easily carried in pocket or pocketbook, complete with a visually compelling cover—helped usher readers into new sensations through art, science and literature. As objects that circulated along with their owners during and after WWII, they brought modernism to Main Street.

Paula Rabinowitz is the author of AMERICAN PULP: HOW PAPERBACKS BROUGHT MODERNISM TO MAIN STREET and Editor-in-Chief of the Oxford Encyclopedia of Literature. She is a Professor of English at the University of Minnesota, where she teaches courses on twentieth-century American literature, film and visual cultures, and material culture.

Read Full Post »

 


 

 

 

 

Slavery in America: Back in the Headlines

Daina Ramey Berry

HNN  November 23, 2014

 

People think they know everything about slavery in the United States, but they don’t. They think the majority of African slaves came to the American colonies, but they didn’t. They talk about 400 years of slavery, but it wasn’t. They claim all Southerners owned slaves, but they didn’t. Some argue it was a long time ago, but it wasn’t.

Slavery has been in the news a lot lately. Perhaps it’s because of the increase in human trafficking on American soil or the headlines about income inequality, the mass incarceration of African Americans or discussions about reparations to the descendants of slaves. Several publications have fueled these conversations: Ta-Nehisi Coates’ The Case for Reparations in The Atlantic Monthly, French economist Thomas Piketty’s Capital in the Twenty-First Century, historian Edward Baptist’s The Half Has Never Been Told: Slavery and the Making of American Capitalism, and law professor Bryan A. Stevenson’s Just Mercy: A Story of Justice and Redemption.

As a scholar of slavery at the University of Texas at Austin, I welcome the public debates and connections the American people are making with history. However, there are still many misconceptions about slavery.

I’ve spent my career dispelling myths about “the peculiar institution.” The goal in my courses is not to victimize one group and celebrate another. Instead, we trace the history of slavery in all its forms to make sense of the origins of wealth inequality and the roots of discrimination today. The history of slavery provides deep context to contemporary conversations and counters the distorted facts, internet hoaxes and poor scholarship I caution my students against.

Four myths about slavery

Myth One: The majority of African captives came to what became the United States.

Truth: Only 380,000, or 4-6%, came to the United States. The majority of enslaved Africans went to Brazil, followed by the Caribbean. A significant number of enslaved Africans arrived in the American colonies by way of the Caribbean where they were “seasoned” and mentored into slave life. They spent months or years recovering from the harsh realities of the Middle Passage. Once they were forcibly accustomed to slave labor, many were then brought to plantations on American soil.

Myth Two: Slavery lasted for 400 years.

Popular culture is rich with references to 400 years of oppression. There seems to be confusion between the Transatlantic Slave Trade (1440-1888) and the institution of slavery, confusion only reinforced by the Bible, Genesis 15:13:

Then the Lord said to him, ‘Know for certain that for four hundred years your descendants will be strangers in a country not their own and that they will be enslaved and mistreated there.’

Listen to Lupe Fiasco – just one Hip Hop artist to refer to the 400 years – in his 2011 imagining of America without slavery, “All Black Everything”:

[Hook]

You would never know

If you could ever be

If you never try

You would never see

Stayed in Africa

We ain’t never leave

So there were no slaves in our history

Were no slave ships, were no misery, call me crazy, or isn’t he

See I fell asleep and I had a dream, it was all black everything

[Verse 1]

Uh, and we ain’t get exploited

White man ain’t feared so he did not destroy it

We ain’t work for free, see they had to employ it

Built it up together so we equally appointed

First 400 years, see we actually enjoyed it


A plantation owner with his slaves. (National Media Museum from UK)

Truth: Slavery was not unique to the United States; it is a part of almost every nation’s history from Greek and Roman civilizations to contemporary forms of human trafficking. The American part of the story lasted fewer than 400 years.

How do we calculate it? Most historians use 1619 as a starting point: 20 Africans referred to as “servants” arrived in Jamestown, VA, on a Dutch ship. It’s important to note, however, that they were not the first Africans on American soil. Africans first arrived in America in the late 16th century not as slaves but as explorers, alongside the Spanish and Portuguese. One of the best known of these African “conquistadors” was Estevanico, who traveled throughout the Southeast from present-day Florida to Texas. As for the institution of chattel slavery – the treatment of slaves as property – in the United States, if we use 1619 as the beginning and the 1865 Thirteenth Amendment as its end, then it lasted 246 years, not 400.

Myth Three: All Southerners owned slaves.

Truth: Roughly 25% of all Southerners owned slaves. The fact that one quarter of the Southern population were slaveholders is still shocking to many. This truth brings historical insight to modern conversations about the Occupy Movement, its challenge to the inequality gap and its slogan “we are the 99%.”

Take the case of Texas. When it established statehood, the Lone Star State had a shorter period of Anglo-American chattel slavery than other Southern states – only 1845 to 1865 – because Spain and Mexico had occupied the region for almost one half of the 19th century with policies that either abolished or limited slavery. Still, the number of people impacted by wealth and income inequality is staggering. By 1860, the Texas enslaved population was 182,566, while slaveholders represented 27% of the population, controlled 68% of the government positions and held 73% of the wealth. These are shocking figures, but today’s income gap in Texas is arguably more stark, with 10% of tax filers taking home 50% of the income.

Myth Four: Slavery was a long time ago.

Truth: African-Americans have been free in this country for less time than they were enslaved. Do the math: Blacks have been free for 149 years, which means that most Americans are two to three generations removed from slavery. However, former slaveholding families have built their legacies on the institution and generated wealth that African-Americans have not been privy to because enslaved labor was forced; segregation maintained wealth disparities; and overt and covert discrimination limited African-American recovery efforts.

The value of slaves

Economists and historians have examined detailed aspects of the enslaved experience for as long as slavery existed. Recent publications related to slavery and capitalism explore economic aspects of cotton production and offer commentary on the amount of wealth generated from enslaved labor.

My own work enters this conversation looking at the value of individual slaves and the ways enslaved people responded to being treated as a commodity. They were bought and sold just like we sell cars and cattle today. They were gifted, deeded and mortgaged the same way we sell houses today. They were itemized and insured the same way we manage our assets and protect our valuables.


Extensive Sale of Choice Slaves, New Orleans 1859, Girardey, C.E. (Natchez Trace Collection, Broadside Collection, Dolph Briscoe Center for American History)


Enslaved people were valued at every stage of their lives, from before birth until after death. Slaveholders examined women for their fertility and projected the value of their “future increase.” As they grew up, enslavers assessed their value through a rating system that quantified their work. An “A1 Prime hand” was one term used for a “first rate” slave who could do the most work in a given day. Values decreased on a quarter scale, from three-fourths hands to one-fourth hands, down to a rate of zero, which was typically reserved for elderly or differently abled bondpeople (another term for slaves).

Guy and Andrew, two prime males sold at the largest auction in US history in 1859, commanded different prices. Although similar in “all marketable points in size, age, and skill,” Guy commanded $1,240 while Andrew sold for $1,040 because “he had lost his right eye.” A reporter from the New York Tribune noted “that the market value of the right eye in the Southern country is $240.” Enslaved bodies were reduced to monetary values assessed from year to year, and sometimes from month to month, for their entire lifespan and beyond. By today’s standards, Andrew and Guy would be worth about $33,000-$40,000.

Slavery was an extremely diverse economic institution, one that extracted unpaid labor from people in a variety of settings, from small single-crop farms and plantations to urban universities. This diversity is also reflected in their prices. Enslaved people understood they were treated as commodities.

“I was sold away from mammy at three years old,” recalled Harriett Hill of Georgia. “I remembers it! It lack selling a calf from the cow,” she shared in a 1930s interview with the Works Progress Administration. “We are human beings,” she told her interviewer. Those in bondage understood their status. Though Hill was too little to remember her price when she was three, she recalled being sold for $1,400 at age 9 or 10: “I never could forget it.”

Slavery in popular culture

Slavery is part and parcel of American popular culture, but for more than 30 years the television mini-series Roots was the primary visual representation of the institution, except for a handful of independent (and not widely known) films such as Haile Gerima’s Sankofa or the Brazilian Quilombo. Today Steve McQueen’s 12 Years a Slave is a box office success, actress Azie Mira Dungey has a popular web series called Ask a Slave, and in Cash Crop sculptor Stephen Hayes compares the slave ships of the 18th century with third-world sweatshops.

From the serious – PBS’s award-winning Many Rivers to Cross – and the interactive Slave Dwelling Project – whereby school-aged children spend the night in slave cabins – to the comic on Saturday Night Live, slavery is today front and center.

The elephant that sits at the center of our history is coming into focus. American slavery happened — we are still living with its consequences.

Daina Ramey Berry, Ph.D. is an associate professor of history and African and African Diaspora Studies at the University of Texas at Austin. She is also a Public Voices Fellow and the author and award-winning editor of three books, currently at work on a book about slave prices in the United States, funded in part by the National Endowment for the Humanities. Follow her on Twitter: @lbofflesh. This article was first published by Not Even Past.

Read Full Post »

What Happened the Last Time Republicans Had a Majority This Huge? They lost it.

Josh Zeitz

Politico.com    November 15, 2014

Since last week, many Republicans have been feeling singularly nostalgic for November 1928, and with good reason. It’s the last time that the party won such commanding majorities in the House of Representatives while also dominating the Senate. And, let’s face it, 1928 was a good time.

America was rich—or so it seemed. Charles Lindbergh was on the cover of Time. Amelia Earhart became the first woman to fly across the Atlantic. Jean Lussier went over Niagara Falls in a rubber ball (thus trumping the previous year’s vogue for flagpole sitting). Mickey Mouse made his first appearance in a talkie (“Steamboat Willie”). Irving Aaronson and His Commanders raised eyebrows with the popular—and, for its time, scandalous—song, “Let’s Misbehave,” and presidential nominee Herbert Hoover gave his Democratic opponent, Al Smith, a shellacking worthy of the history books.

The key takeaway: It’s been a really, really long time since Republicans have owned Capitol Hill as they do now.

But victory can be a fleeting thing. In 1928, Republicans won 270 seats in the House. They were on top of the world. Two years later, they narrowly lost their majority. Two years after that, in 1932, their caucus shrank to 117 members and the number of Republican-held seats in the Senate fell to just 36. To borrow the title of a popular 1929 novel (which had nothing whatsoever to do with American politics): Goodbye to all that.

A surface explanation for the quick rise and fall of the GOP House majority of 1928 is the Great Depression. As the party in power, Republicans owned the economy, and voters punished them for it. In this sense, today’s Republicans have no historical parallel to fear. Voters—at least a working majority of the minority who turned out last week—clearly blame Barack Obama for the lingering aftershocks of the recent economic crash.

But what if the Republicans of 1928 owed their demise to a more fundamental force? What if it was demography, not economics, that truly killed the elephant?

In fact, the Great Depression was just one factor in the GOP’s stunning reversal of fortune, and in the 1930 cycle that saw Republicans lose their commanding House majority it was probably a minor factor. To be sure, the Republicans of yesteryear were victims of historical contingency (the Great Depression), but they also failed to appreciate and prepare for a long-building trend—the rise of a new urban majority made up of more than 14 million immigrants, and many millions more of their children. Democrats did see the trend, and they built a majority that lasted half a century.

The lesson for President Obama and the Democrats is to go big—very, very big—on immigration reform. Like the New Dealers, today’s Democrats have a unique opportunity to build a majority coalition that dominates American politics well into the century.

***

For the 1928 GOP House majority, victory was unusually short-lived. About one in five GOP House members elected in the Hoover landslide served little more than a year and a half before losing their seats in November 1930.

On a surface level, the Great Depression was to blame.

The stock market crash of October 1929 destroyed untold wealth. Shares in Eastman Kodak plunged from a high of $264.75 to $150. General Electric, $403 to $168.13. General Motors, $91.75 to $33.50. In the following months, millions of men and women were thrown out of work. Tens of thousands of businesses shut their doors and never reopened.

But in the 1920s—before the rise of pensions and 401(k)s, college savings accounts and retail investment vehicles—very few Americans were directly implicated in the market. Moreover, in the context of their recent experience, the sudden downtick of 1929-1930 was jarring but not altogether unusual. Hoover later recalled that “for some time after the crash,” most businessmen simply did not perceive “that the danger was any more than that of run-of-the-mill, temporary slumps such as had occurred at three-to-seven year intervals in the past.”

By April 1930, stocks had recouped 20 percent of lost value and seemed on a steady course to recovery. Bank failures, though vexing, were occurring at no greater a clip than the decade’s norm. Yes, gross national product fell 12.6 percent in just one year, and roughly 8.9 percent of able-bodied Americans were out of work. But events were not nearly as dire as in 1921, when a recession sent GNP plunging 24 percent and 11.9 percent of workers were unemployed.

In fact, Americans in the Jazz Age were accustomed to a great deal of economic volatility and risk exposure. It was the age of Scott and Zelda, Babe Ruth, the Charleston, Clara Bow and Colleen Moore—the Ford Model T and the radio set. But it was also an era of massive wealth and income inequality. In these days before the emergence of the safety net—before unemployment and disability insurance—most industrial workers expected to be without work for several months of each year. For farm workers, the entire decade was particularly unforgiving, as a combination of domestic over-production and foreign competition drove down crop prices precipitously.

In hindsight, we know that voters in November 1930 were standing on the edge of a deep canyon. But in the moment, hard times struck many Americans as a normal, cyclical part of their existence.

Unsurprisingly, then, many House and local races in 1930 hinged more on cultural issues—especially on Prohibition, which in many districts set “wet” Democrats against “dry” Republicans—than economic ones.

If the Depression was not a singular determinant in the 1930 elections, neither had Herbert Hoover yet acquired an undeserved reputation for callous indifference to human suffering. Today, we think of Hoover as the laissez-faire foil to Franklin Roosevelt’s brand of muscular liberalism. But in 1930, Hoover was still widely regarded as a progressive Republican who, in his capacity as U.S. relief coordinator, saved Europe from starvation during World War I. When he was elected president, recalled a prominent journalist, we “were in a mood for magic … We had summoned a great engineer to solve our problems for us; now we sat back comfortably and confidently to watch problems being solved.”

In 1929 and 1930, Hoover acted swiftly to address what was still a seemingly routine economic emergency. He jawboned business leaders into maintaining job rolls and wages. He cajoled the Federal Reserve System into easing credit. He requested increased appropriations for public works and grew the federal budget to its largest-ever peacetime levels. In most contemporary press accounts, he had not yet acquired the stigma of a loser.

Still, in 1930 Hoover’s party took a beating. Republicans lost eight seats in the Senate and 52 seats in the House. By the time the new House was seated in December 1931, several deaths and vacancies resulted in a razor-thin Democratic majority.

If the election was not exclusively or even necessarily about economics, the same cannot be said of FDR’s historic landslide two years later. As Europe plunged headlong into the Depression in 1931 and 1932, the American banking and financial system all but collapsed. With well over 1,000 banks failing each year, millions of depositors lost their life savings. By the eve of the election, more than 50 percent of American workers were unemployed or under-employed.

In response to the crisis, Hoover broke with decades of Republican economic orthodoxy. He stepped up work on the Boulder Dam and Grand Coulee Dam (popular lore notwithstanding, these were not first conceived as New Deal projects). He signed legislation outlawing anti-union (“yellow dog”) clauses in work agreements. And he chartered the Reconstruction Finance Corporation, a government-sponsored entity that loaned money directly to financial institutions, railroads and agricultural stabilization agencies, thereby helping them maintain liquidity. The RFC was in many ways the first New Deal agency, though Herbert Hoover pioneered it. Even the editors of the New Republic, among the president’s sharpest liberal critics, admitted at the time, “There has been nothing quite like it.”

Read Full Post »

If Obama Faces Impeachment over Immigration, Roosevelt, Truman, Eisenhower and Kennedy Should Have as Well

HNN   November 16, 2014

 

When President Obama announced last week, following the mid-term elections, that he would use his executive powers to make immigration changes, the incoming Senate Majority Leader Mitch McConnell warned that doing so “would be like waving a red flag in front of a bull.”  Representative Joe Barton from Texas already saw red, claiming such executive action would be grounds for impeachment.

If so, then Presidents Roosevelt, Truman, Eisenhower and Kennedy should all have been impeached.   All four skirted Congress, at times overtly flaunting their administrative prerogative, to implement a guest worker program.

This was the “Bracero” agreement with the Government of Mexico to recruit workers during World War II, starting in 1942 but lasting beyond the war, all the way until 1964.  At its height in the mid-1950s, this program accounted for 450,000 Mexicans per year coming to the U.S. to work, primarily as agricultural workers.

Several aspects of the Bracero program stand out as relevant to the impasse on immigration reform over the last 15 years.  First, the program began with executive branch action, without Congressional approval.  Second, negotiations with the Mexican government occurred throughout the program’s duration, with the State Department taking the lead in those talks.  Finally, this guest worker initiative, originally conceived as a wartime emergency, evolved into a program in the 1950s that served specifically to dampen illegal migration.

Even before Pearl Harbor, growers in the southwest faced labor shortages in their fields and had lobbied Washington to allow for migrant workers, but unsuccessfully.  It took less than five months following the declaration of war to reverse U.S. government intransigence on the need for temporary workers.  Informal negotiations had been taking place between the State Department and the Mexican government, so that an agreement could be signed on April 4, 1942 between the two countries.  By the time legislation had passed authorizing the program seven months later, thousands of workers had already arrived in the U.S.

The absence of Congress was not just due to a wartime emergency.  On April 28, 1947, Congress passed Public Law 40, declaring an official end to the program by the end of January the following year.   Hearings were held in the House Agriculture Committee to deal with the closure, but its members proceeded to propose ways to keep guest workers in the country and extend the program, despite the law closing it down.  Further, without the approval of Congress, the State Department negotiated a new agreement with Mexico, signed on February 21, 1948, weeks after Congress had mandated the program’s termination.  Another seven months later, though, Congress gave its stamp of approval to the new program and authorized it for another year.  When the year lapsed, the program continued without Congressional approval or oversight.

The Bracero Program started out as a wartime emergency, but by the mid-1950s, its streamlined procedures made it easier for growers to hire foreign labor without having to resort to undocumented workers.  Illegal border crossings fell.

Still, there were many problems making the Bracero Program an unlikely model for the current immigration reforms.  Disregard for the treatment of the contract workers tops the list of problems and became a primary reason for shutting the program down.  However, the use of executive authority in conceiving and implementing an immigration program is undeniable.

The extent of the executive branch involvement on immigration was best captured in 1951, when a commission established by President Truman to review the status of migratory labor concluded that “The negotiation of the Mexican International Agreement is a collective bargaining situation in which the Mexican Government is the representative of the workers and the Department of State is the representative of our farm employers.”  Not only was the executive branch acting on immigration, but they were negotiating its terms and conditions, not with Congress, but with a foreign country.  Remarkable language, especially looking forward to 2014 when we are told that such action would be an impeachable offense.

Senator McConnell used the bullfighting analogy because the red flag makes the bull angry; following the analogy to its inevitable outcome is probably not what he had in mind.  The poor but angry bull never stands a chance.  In this case, though, it won’t be those in Congress who don’t stand a chance; it will be those caught in our messy and broken immigration system.

John Dickson was Deputy Chief of Mission in Mexico and Director of the Office of Mexican Affairs at the Department of State and is a recent graduate of the University of Massachusetts public history program.

Read Full Post »

Henry Kissinger’s ‘World Order’: An Aggressive Reshaping of the Past

Henry Kissinger


The Washington Free Beacon October 11, 2014

Henry Kissinger projects the public image of a judicious elder statesman whose sweeping knowledge of history lets him rise above the petty concerns of today, in order to see what is truly in the national interest. Yet as Kissinger once said of Ronald Reagan, his knowledge of history is “tailored to support his firmly held preconceptions.” Instead of expanding his field of vision, Kissinger’s interpretation of the past becomes a set of blinders that prevent him from understanding either his country’s values or its interests. Most importantly, he cannot comprehend how fidelity to those values may advance the national interest.

So far, Kissinger’s aggressive reshaping of the past has escaped public notice. On the contrary, World Order has elicited a flood of fawning praise. The New York Times said, “It is a book that every member of Congress should be locked in a room with — and forced to read before taking the oath of office.” The Christian Science Monitor declared it “a treat to gallivant through history at the side of a thinker of Kissinger’s caliber.” In a review for the Washington Post, Hillary Clinton praised Kissinger for “his singular combination of breadth and acuity along with his knack for connecting headlines to trend lines.” The Wall Street Journal and U.K. Telegraph offered similar evaluations.

Kissinger observes that “Great statesmen, however different as personalities, almost invariably had an instinctive feeling for the history of their societies.” Correspondingly, the lengthiest component of World Order is a hundred-page survey of American diplomatic history from 1776 to the present. In those pages, Kissinger persistently caricatures American leaders as naïve amateurs, incapable of thinking strategically. Yet an extensive literature, compiled by scholars over the course of decades, paints a very different picture. Kissinger’s footnotes give no indication that he has read any of this work.

If one accepts Kissinger’s narrative at face value, then his advice seems penetrating. “America’s moral aspirations,” Kissinger says, “need to be combined with an approach that takes into account the strategic element of policy.” This is a cliché masquerading as a profound insight. Regrettably, World Order offers no meaningful advice on how to achieve this difficult balance. It relies instead on the premise that simply recognizing the need for balance represents a dramatic improvement over the black-and-white moralism that dominates U.S. foreign policy.

America’s Original Sin

John Quincy Adams

“America’s favorable geography and vast resources facilitated a perception that foreign policy was an optional activity,” Kissinger writes. This was never the case. When the colonies were British possessions, the colonists understood that their security was bound up with British success in foreign affairs. When the colonists declared independence, they understood that the fate of their rebellion would rest heavily on decisions made in foreign capitals, especially Paris, whose alliance with the colonists was indispensable.

In passing, Kissinger mentions that “the Founders were sophisticated men who understood the European balance of power and manipulated it to the new country’s advantage.” It is easy to forget that for almost fifty years, the new republic was led by its Founders. They remained at the helm through a series of wars against the Barbary pirates, a quasi-war with France begun in 1798, and a real one with Britain in 1812. Only in 1825 did the last veteran of the Revolutionary War depart from the White House—as a young lieutenant, James Monroe had crossed the Delaware with General Washington before being severely wounded.

Monroe turned the presidency over to his Secretary of State, John Quincy Adams. The younger Adams was the fourth consecutive president with prior service as the nation’s chief diplomat. With Europe at peace, the primary concern of American foreign policy became the country’s expansion toward the Pacific Ocean, a project that led to a war with Mexico as well as periodic tensions with the British, the Spanish, and even the Russians, who made vast claims in the Pacific Northwest. During the Civil War, both the Union and Confederacy recognized the vital importance of relations with Europe. Not long after the war, the United States would enter its brief age of overseas expansion.

One of Kissinger’s principal means of demonstrating his predecessors’ naïve idealism is to approach their public statements as unadulterated expressions of their deepest beliefs. With evident disdain, Kissinger writes, “the American experience supported the assumption that peace was the natural condition of humanity, prevented only by other countries’ unreasonableness or ill will.” The proof-text for this assertion is John Quincy Adams’ famous Independence Day oration of 1821, in which Adams explained, America “has invariably, often fruitlessly, held forth to [others] the hand of honest friendship, of equal freedom, of generous reciprocity … She has, in the lapse of nearly half a century, without a single exception, respected the independence of other nations.” This was a bold assertion, given that Adams was in the midst of bullying Spain on the issue of Florida, which it soon relinquished.

Kissinger spends less than six pages on the remainder of the 19th century, apparently presuming that Americans of that era did not spend much time thinking about strategy or diplomacy. Then, in 1898, the country went to war with Spain and acquired an empire. “With no trace of self-consciousness,” Kissinger writes, “[President William McKinley] presented the war…as a uniquely unselfish mission.” Running for re-election in 1900, McKinley’s campaign posters shouted, “The American flag has not been planted in foreign soil to acquire more territory, but for humanity’s sake.” The book does not mention that McKinley was then fighting a controversial war to subdue the Philippines, which cost as many lives as the war in Iraq and provoked widespread denunciations of American brutality. Yet McKinley’s words—from a campaign ad, no less—are simply taken at face value.

Worshipping Roosevelt and Damning Wilson

Theodore Roosevelt

For Kissinger, the presidency of Theodore Roosevelt represents a brief and glorious exception to an otherwise unbroken history of moralistic naïveté. Roosevelt “pursued a foreign policy concept that, unprecedentedly for America, based itself largely on geopolitical considerations.” He “was impatient with many of the pieties that dominated American thinking on foreign policy.” With more than a hint of projection, Kissinger claims, “In Roosevelt’s view, foreign policy was the art of adapting American policy to balance global power discretely and resolutely, tilting events in the direction of the national interest.”

The Roosevelt of Kissinger’s imagination is nothing like the actual man who occupied the White House. Rather than assuming his country’s values to be a burden that compromised its security, TR placed the concept of “righteousness” at the very heart of his approach to world politics. Whereas Kissinger commends those who elevate raison d’etat above personal morality, Roosevelt subscribed to the belief that there is one law for the conduct of both nations and men. At the same time, TR recognized that no authority is capable of enforcing such a law. In world politics, force remains the final arbiter. For Kissinger, this implies that ethics function as a restraint on those who pursue the national interest. Yet according to the late scholar of international relations, Robert E. Osgood, Roosevelt believed that the absence of an enforcer “magnified each nation’s obligation to conduct itself honorably and see that others did likewise.” This vision demanded that America have a proverbial “big stick” and be willing to use it.

Osgood’s assessment of Roosevelt is not atypical. What makes it especially interesting is that Osgood was an avowed Realist whose perspective was much closer to that of Kissinger than it was to Roosevelt. In 1969, Osgood took leave from Johns Hopkins to serve under Kissinger on the National Security Council staff. Yet Osgood had no trouble recognizing the difference between Roosevelt’s worldview and his own.

For Kissinger, the antithesis of his imaginary Roosevelt is an equally ahistoric Woodrow Wilson. Wilson’s vision, Kissinger says, “has been, with minor variations, the American program for world order ever since” his presidency. “The tragedy of Wilsonianism,” Kissinger explains, “is that it bequeathed to the twentieth century’s decisive power an elevated foreign policy doctrine unmoored from a sense of history or geopolitics.” Considering Theodore Roosevelt’s idealism, it seems that Wilson’s tenure represented a period of continuity rather than a break with tradition. Furthermore, although Wilson’s idealism was intense, it was not unmoored from an appreciation of power. To demonstrate Wilson’s naïveté, Kissinger takes his most florid rhetoric at face value, a tactic employed earlier at the expense of William McKinley and John Quincy Adams.

The pivotal moment of Wilson’s presidency was the United States’ declaration of war on Germany. “Imbued by America’s historic sense of moral mission,” Kissinger says, “Wilson proclaimed that America had intervened not to restore the European balance of power but to ‘make the world safe for democracy’.” In addition to misquoting Wilson, Kissinger distorts his motivations. In his request to Congress for a declaration of war, Wilson actually said, “The world must be made safe for democracy.” John Milton Cooper, the author of multiple books on Wilson, notes that Wilson employed the passive voice to indicate that the United States would not assume the burden of vindicating the cause of liberty across the globe. Rather, the United States was compelled to defend its own freedom, which was under attack from German submarines that were sending American ships and their crewmen to the bottom of the Atlantic. (Kissinger makes only one reference to German outrages in his discussion.)

If Wilson were the crusader that Kissinger portrays, why did he wait almost three years to enter the war against Germany alongside the Allies? The answer is that Wilson was profoundly apprehensive about the war and its consequences. Even after the Germans announced they would sink unarmed American ships without warning, Wilson waited two more months, until a pair of American ships and their crewmen lay on the ocean floor as a result of such attacks.

According to Kissinger, Wilson’s simple faith in the universality of democratic ideals led him to fight, from the first moments of the war, for regime change in Germany. In his request for a declaration of war, Wilson observed, “A steadfast concert for peace can never be maintained except by a partnership of democratic nations. No autocratic government could be trusted to keep faith within it or observe its covenants.” This was more of an observation than a practical program. Eight months later, Wilson asked for a declaration of war against Austria-Hungary, yet explicitly told Congress, “we do not wish in any way to impair or to rearrange the Austro-Hungarian Empire. It is no affair of ours what they do with their own life, either industrially or politically.” Clearly, in this alleged war for liberty, strategic compromises were allowed, something one would never know from reading World Order.

Taking Ideology Out of the Cold War

John F. Kennedy

Along with the pomp and circumstance of presidential inaugurations, there is plenty of inspirational rhetoric. Refusing once again to acknowledge the complex relationship between rhetoric and reality, Kissinger begins his discussion of the Cold War with an achingly literal interpretation of John F. Kennedy’s inaugural address, in which he called on his countrymen to “pay any price, bear any burden, support any friend, oppose any foe, in order to assure the survival and the success of liberty.” Less well known is Kennedy’s admonition to pursue “not a balance of power, but a new world of law,” in which a “grand and global alliance” would face down “the common enemies of mankind.”

Kissinger explains, “What in other countries would have been treated as a rhetorical flourish has, in American discourse, been presented as a specific blueprint for global action.” Yet this painfully naïve JFK is—like Kissinger’s cartoon versions of Roosevelt or Wilson—nowhere to be found in the literature on his presidency.

In a seminal analysis of Kennedy’s strategic thinking published more than thirty years ago, John Gaddis elucidated the principles of JFK’s grand strategy, which drew on a careful assessment of Soviet and American power. Gaddis concludes that Kennedy may have been willing to pay an excessive price and bear too many burdens in his efforts to forestall Soviet aggression, but there is no question that JFK embraced precisely the geopolitical mindset that Kissinger recommends. At the same time, Kennedy comprehended, in a way Kissinger never does, that America’s democratic values are a geopolitical asset. In Latin America, Kennedy fought Communism with a mixture of force, economic assistance, and a determination to support elected governments. His “Alliance for Progress” elicited widespread applause in a hemisphere inclined to denunciations of Yanqui imperialism. This initiative slowly fell apart after Kennedy’s assassination, but he remains a revered figure in many corners of Latin America.

Kissinger’s fundamental criticism of the American approach to the Cold War is that “the United States assumed leadership of the global effort to contain Soviet expansionism—but as a primarily moral, not geopolitical endeavor.” While admiring the “complex strategic considerations” that informed the Communist decision to invade South Korea, Kissinger laments that the American response to this hostile action amounted to nothing more than “fighting for a principle, defeating aggression, and a method of implementing it, via the United Nations.”

It requires an active imagination to suppose that President Truman fought a war to vindicate the United Nations. He valued the fig leaf of a Security Council resolution (made possible by the absence of the Soviet ambassador), but the purpose of the war was to inflict a military and psychological defeat on the Soviets and their allies, as well as to secure Korean freedom. Yet Kissinger does not pause, even for a moment, to consider that the United States could (or should) have conducted its campaign against Communism as both a moral and a geopolitical endeavor.

An admission of that kind would raise the difficult question of how the United States should integrate both moral and strategic imperatives in its pursuit of national security. On this subject, World Order has very little to contribute. It acknowledges that legitimacy and power are the prerequisites of order, but prefers to set up and tear down an army of strawmen rather than engaging with the real complexity of American diplomatic history.

Forgetting Reagan

Ronald Reagan

In 1976, while running against Gerald Ford for the Republican nomination, Ronald Reagan “savaged” Henry Kissinger for his role as the architect of Nixon and Ford’s immoral foreign policy. That is how Kissinger recalled things twenty years ago in Diplomacy, his 900-page treatise on world politics in the 20th century. Not surprisingly, Kissinger devoted a long chapter of that book to returning the favor. Yet in World Order, there is barely any criticism to leaven its praise of Reagan. Perhaps this change reflects a gentlemanly concern for speaking well of the dead. More likely, Kissinger recognizes that Reagan’s worldview has won the heart of the Republican Party. Thus, to preserve his influence, Kissinger must create the impression that he and Reagan were not so different.

In Diplomacy, Kissinger portrays Reagan as a fool and an ideologue. “Reagan knew next to no history, and the little he did know he tailored to support his firmly held preconceptions. He treated biblical references to Armageddon as operational predictions. Many of the historical anecdotes he was so fond of recounting had no basis in fact.” In World Order, one learns that Reagan “had read more deeply in American political philosophy than his domestic critics credited” him with. Thus, he was able to “combine America’s seemingly discordant strengths: its idealism, its resilience, its creativity, and its economic vitality.” Just as impressively, “Reagan blended the two elements—power and legitimacy” whose combination Kissinger describes as the foundation of world order.

Long gone is the Reagan who was bored by “the details of foreign policy” and whose “approach to the ideological conflict [with Communism] was a simplified version of Wilsonianism” while his strategy for ending the Cold War “was equally rooted in American utopianism.” Whereas Nixon had a deep understanding of the balance of power, “Reagan did not in his own heart believe in structural or geopolitical causes of tension.”

In contrast, World Order says that Reagan “generated psychological momentum with pronouncements at the outer edge of Wilsonian moralism.” Alone among American statesmen, Reagan receives credit for the strategic value of his idealistic public statements, instead of having them held up as evidence of his ignorance and parochialism.

Kissinger observes that while Nixon did not draw inspiration from Wilsonian visions, his “actual policies were quite parallel and not rarely identical” to Reagan’s. This statement lacks credibility. Reagan wanted to defeat the Soviet Union. Nixon and Kissinger wanted to stabilize the Soviet-American rivalry. They pursued détente, whereas Reagan, according to Diplomacy, “meant to reach his goal by means of relentless confrontation.”

Kissinger’s revised recollections of the Reagan years amount to a tacit admission that a president can break all of the rules prescribed by the Doctor of Diplomacy, yet achieve a more enduring legacy as a statesman than Kissinger himself.

The Rest of the World

Henry Kissinger

Three-fourths of World Order is not about the United States of America. The book also includes long sections on the history of Europe, Islam, and Asia. The sections on Islam and Asia are expendable, although for different reasons.

The discussion of Islamic history reads like a college textbook. When it comes to the modern Middle East, World Order has the feel of a news clipping service, although the clippings favor the author’s side of the debate. In case you didn’t already know, Kissinger is pro-Israel and pro-Saudi, highly suspicious of Iran, and dismissive of the Arab Spring. The book portrays Syria as a quagmire best avoided, although it carefully avoids criticism of Obama’s plan for airstrikes in 2013. Kissinger told CNN at the time that the United States ought to punish Bashar al-Assad for using chemical weapons, although he opposed “intervention in the civil war.”

The book’s discussion of China amounts to an apologia for the regime in Beijing. To that end, Kissinger is more than willing to bend reality. When he refers to what took place in Tiananmen Square in 1989, he calls it a “crisis”—not a massacre or an uprising. Naturally, there are no references to political prisoners, torture, or compulsory abortion and sterilization. There is a single reference to corruption, in the context of Kissinger’s confident assertion that President Xi Jinping is now challenging it and other vices “in a manner that combines vision with courage.”

Whereas Kissinger’s lack of candor is not surprising with regard to human rights, one might expect an advocate of realpolitik to provide a more realistic assessment of how China interacts with foreign powers. Yet the book only speaks of “national rivalries” in the South China Sea, not of Beijing’s ongoing efforts to intimidate its smaller neighbors. It also portrays China as a full partner in the effort to denuclearize North Korea. What concerns Kissinger is not the ruthlessness of Beijing, but the potential for the United States and China to be “reinforced in their suspicions by the military maneuvers and defense programs of the other.”

Rather than an aggressive power with little concern for the common good, Kissinger’s China is an “indispensable pillar of world order” just like the United States. If only it were so.

In its chapters on Europe, World Order recounts the history that has fascinated Kissinger since his days as a doctoral candidate at Harvard. It is the story of “the Westphalian order,” established and protected by men who understood that stability rests on a “balance of power—which, by definition, involves ideological neutrality”—i.e. a thorough indifference to the internal arrangements of other states.

“For more than two hundred years,” Kissinger says, “these balances kept Europe from tearing itself to pieces as it had during the Thirty Years War.” To support this hypothesis, Kissinger must explain away the many great wars of that era as aberrations that reflect poorly on particular aggressors—like Louis XIV, the Jacobins, and Napoleon—rather than failures of the system as a whole. He must even exonerate the Westphalian system from responsibility for the war that crippled Europe in 1914. But this he does, emerging with complete faith that balances of power and ideological neutrality remain the recipe for order in the 21st century.

Wishing Away Unipolarity


Together, Kissinger’s idiosyncratic interpretations of European and American history have the unfortunate effect of blinding him to the significance of the two most salient features of international politics today. The first is unipolarity. The second is the unity of the democratic world, led by the United States.

Fifteen years ago, Dartmouth Professor William Wohlforth wrote that the United States “enjoys a much larger margin of superiority over the next powerful state or, indeed, all other great powers combined, than any leading state in the last two centuries.” China may soon have an economy of comparable size, but it has little prospect of competing militarily in the near- or mid-term future. Six of the next ten largest economies belong to American allies. Only one belongs to an adversary—Vladimir Putin’s Russia—whose antipathy toward the United States has not yielded a trusting relationship with China, let alone an alliance. (Incidentally, Putin is not mentioned in World Order, a significant oversight for a book that aspires to a global field of vision.)

The United States is able to maintain a globe-spanning network of alliances precisely because it has never had a foreign policy based on ideological neutrality. That network continues to endure and expand, even in the absence of a Soviet threat, because of shared democratic values. Of course, the United States has partnerships with non-democratic states as well. It has never discarded geopolitical concerns, pace Kissinger. Yet the United States and its principal allies in Europe and Asia continue to see their national interests as compatible because their values play such a prominent role in defining those interests. Similarly, America’s national interest entails a concern for spreading democratic values, because countries that make successful transitions to democracy tend to act in a much more pacific and cooperative manner.

These are the basic truths about world order that elude Kissinger because he reflexively exaggerates and condemns the idealism of American foreign policy. In World Order, Kissinger frequently observes that a stable order must be legitimate, in addition to reflecting the realities of power. If he were less vehement in his denunciations of American idealism, he might recognize that it is precisely such ideals that provide legitimacy to the order that rests today on America’s unmatched power.

Rather than functioning as a constraint on its pursuit of the national interest, America’s democratic values have ensured a remarkable tolerance for its power. Criticism of American foreign policy may be pervasive, but inaction speaks louder than words. Rather than challenging American power, most nations rely on it to counter actual threats. At the moment, with the Middle East in turmoil, Ukraine being carved up, and Ebola spreading rapidly, the current world order may not seem so orderly. Yet no world order persists on its own. Those who have power and legitimacy must fight to preserve it.

I thank my friend Luis Ponce for bringing this article to my attention.

Read Full Post »

Mabini in America

John Nery

Philippine Daily Inquirer, November 18, 2014

Late in December 1899, an advertisement appeared in the pages of at least two New York newspapers. It was a notice that the January 1900 issue of the North American Review, a journal of letters and opinion pieces, was already on sale.

The advertisement included a package of six essays on the Second Boer War, which had just broken out in South Africa. There was a “character study” by the influential critic Edmund Gosse, an account of the Anglican crisis by the controversial Protestant theologian Charles Augustus Briggs, and a book review of the letters of Robert Louis Stevenson, by the eminent novelist Henry James.

Between the essays on the Boer War, packaged under the rubric “The War for an Empire,” and the review by Henry James was “A Filipino Appeal to the American People,” by Apolinario Mabini. Of the 14 authors listed in the advertisement, only three were new or under-known enough to warrant an identifying label. Mabini’s is “Formerly Prime Minister in Aguinaldo’s Cabinet.”

It might be a useful exercise to speculate on the editorial decision-making that led to the inclusion of Mabini’s appeal in the journal’s first issue of the year. At that time, the North American Review was very much a Boston publication (today it is published by the University of Northern Iowa), and Boston was a capital of anti-imperialist sentiment. By January 1900, US military forces had occupied parts of the Philippines for some 18 months. The Philippine-American War—a mere insurrection in the American view—was a month short of its first anniversary. Gen. Emilio Aguinaldo was on the defensive but remained at large. (As the leader of Philippine forces, he was possibly the best-known Asian of the time; note how Mabini’s label assumes general knowledge about Aguinaldo.) Not least, the appeal was a pained, patient presentation of American perfidy, beginning with Admiral George Dewey’s effusive promises to Aguinaldo. And it was written by Mabini—a man gaining a reputation as the Philippines’ leading intellectual and America’s “chief irreconcilable,” and who had just been arrested by US cavalry in the Philippines.

I would like to explore “the idea of Mabini” from the American perspective. Since my research is only in its preliminary stages, I wish to trace the reception of this idea, this image of Mabini as that rare thing, a revolutionary intellectual, through three moments: his incarceration in Guam, his death from cholera, and his funeral—the first recorded instance of a massively attended political funeral in the Philippines.

[Key excerpts from American “readings” of Mabini follow, beginning with a warrior-writer who looked on him with disdain.]

Theodore Roosevelt to Sen. George Frisbie Hoar, Jan. 12, 1903: “I have not wished to discuss my view of Mabini’s character and intellect, but perhaps I ought to say, my dear Senator, that it does not agree with yours. Mabini seems to me to belong to a very ordinary type common among those South American revolutionists who have worked such mischief to their fellow-countrymen.”

The historian James LeRoy took a more nuanced view: “But … he was the real power, first at Bakoor, then at Malolos, in framing a scheme of independent government, and then in resisting every step toward peaceful conciliation with the United States… Aguinaldo was plainly not averse to accommodation, on several occasions; but Mabini was, from first to last, inflexible in opposition to the efforts of the party of older and more conservative Filipinos to establish a modus vivendi with the Americans. Whoever may be said to have carried on the war, he chiefly made war inevitable …”

The news of Mabini’s death on May 13, 1903, was duly noted in American newspapers…. In the July 5, 1903 issue of the Springfield Republican, we read the “sympathetic standpoint” of anti-imperialist Canning Eyot, in praise of “the eminent Filipino patriot.” The prose is purple, but instructively so:

“… there is some alleviation in the thought that at last Mabini has found freedom—that his serene soul is beyond the reach of tyranny, beyond the power of every one and everything that is sordid and selfish and time-serving …. Crucify the reformer and the good in his cause is assured of success; kill or imprison the patriot and the true in his ideal may become real.”

The day Mabini was buried saw an unprecedented outpouring of support and sympathy [I have written on this before]. Thousands of people joined the funeral procession. A visiting American woman, whose name I [still] have not yet been able to determine, wrote a vivid account for a Boston newspaper [which included this extraordinary passage]: “It seemed as though the whole city of Manila had gathered, and I could not help noticing the large proportion of strong and finely intelligent faces, especially among Mabini’s more intimate friends. Most noticeable, also, and with a certain suggestiveness for the futrue (sic), was the extraordinary number of young men, many of them evidently students, keen, thoughtful and intelligent looking.” [She saw Mabini in his mourners.]

Mabini was never in America, of course. At the turn of the 20th century, Guam [his place of exile] was a new possession of the United States, American soil-in-the-making. So the man whom LeRoy called the “chief irreconcilable,” whom Gen. Elwell Otis labelled the “masterful spirit” behind Philippine resistance to American occupation, was only present in the United States in the sense that he represented a new idea—an intellectual at the head of a revolution, an ideologue.

In Mabini’s America, he was the un-Aguinaldo.

Excerpts from a paper read on Nov. 13 at the 2014 national conference of the Philippine Studies Association, convened by the indispensable Dr. Bernardita Churchill.

Read Full Post »

Cameristas

Disunion

Roughly 150 years ago, in March or April 1863, a shocking photograph was taken in Louisiana. Unlike most photos, it was given a title, “The Scourged Back,” as if it were a painting hanging in an art museum. Indeed, it fit inside a recognizable painter’s category — the nude — but this was a nude from hell. The sitter, an African-American male named Gordon, had been whipped so many times that a mountainous ridge of scar tissue was climbing out of his back. It was detailed, like a military map, and resulted from so many whippings that the scars had to form on top of one another. Gordon had escaped from a nearby Mississippi plantation to a camp of federal soldiers, supporting the great Vicksburg campaign of the spring. Medical officers examined him, and after seeing his back, asked a local photography firm, McPherson and Oliver, to document the scar tissue.

The image made its way back to New England, where it was converted by an artist into a wood engraving, a backwards technological step that allowed it to be published in the newspapers. On July 4, 1863, the same day that Vicksburg fell, “The Scourged Back” appeared in a special Independence Day issue of Harper’s Weekly. All of America could see those scars, and feel that military and moral progress were one. The Civil War, in no way a war to exterminate slavery in 1861, was increasingly just that in 1863. “The Scourged Back” may have been propaganda, but as a photograph, which drew as much from science as from art, it presented irrefutable evidence of the horror of slavery. Because those scars had been photographed, they were real, in a way that no drawing could be.

The original photograph of “The Scourged Back” is one of hundreds on display in a new exhibit that opened on April 2 at the Metropolitan Museum of Art in New York, entitled “Photography and the American Civil War.” Curated by Jeff L. Rosenheim, the show offers a stunning retrospective, proving how inextricably linked the war and the new medium were.

It was not possible then, nor is it now, to tell the story of the conflict without recourse to the roughly one million images that were created in darkrooms around America. All historians are indebted to the resourceful Americans who left this priceless record to later generations. The war was captured, nearly instantaneously, by photographers as brave as the soldiers going into battle. Indeed, the photographers were going into battle; they pitched their tents alongside those of the armies, they heard the whistle of bullets, and they recorded the battle scenes, including the dead, as soon as the armies left the field.

Soldiers were themselves photographers, and photographs could be found in every place touched by the war: in the pockets of those who fought and fell, and above the hearths of the families that waited desperately for their return. Cameras caught nearly all of it, including the changes wrought on non-combatants — the Americans who seemed to age prematurely during those four years (none more so than the Commander in Chief), the families that survived despite losing a member, and the bodies that survived despite losing a limb. The very land seemed to age, as armies passed like locusts through Southern valleys, devouring forests and livestock.

The Civil War was not the first war photographed; a tiny number of photographs were taken of the Mexican War, and a larger number of the Crimean War. But the medium had evolved a great deal across the 1850s, and America’s leading photographers sprang into action when the attack on Fort Sumter came in 1861. Many, like Mathew Brady, threw all of their resources at the gigantic task of capturing the war. On Aug. 30, 1862, the Times of London commented, “America swarms with the members of the mighty tribe of cameristas, and the civil war has developed their business in the same way that it has given an impetus to the manufacturers of metallic air-tight coffins and embalmers of the dead.”

There are so many cameristas in the Met’s show. The Southern perspective is well represented, in the faces of young Confederates brandishing knives menacingly, and in numerous landscape photographs that convey a haunting beauty, deepened by our knowledge that horrific violence is about to happen in these Edenic vales. For generations, American intellectuals had lamented that the United States had no picturesque ruins as Europe did; suddenly, there were ruins everywhere one cared to look. Photographs of Richmond and Charleston from the war’s end retain the power to shock for their utter desolation; this could be Carthage or Tyre, a thousand years after their glory.

But of course, this was still the United States of America, a very busy country to begin with, accelerated by the incessant demands of the war. One gets a sense of that urgency from the show — trains chugging in the background, people moving so quickly that they become blurs, and a huge array of participants crowding into the picture — contraband former slaves who have fled to Northern lines but are not yet free; old men and children trying to get a taste of the action; regiments in training, looking very young in 1861, and spectral four years later. Some turned into seasoned veterans, some became ghastly prisoners of war, barely able to sit for a photograph; and of course many didn’t come back at all. Fortunately, they still existed in these images. To this day, some people feel the old superstition that a photograph robs the soul of its vitality. But during the war, it had an opposite, life-giving effect. With just a few dabs of silver, iodine and albumen (from egg whites), these dabblers in the dark arts could confer a form of immortality.

The camera’s unblinking eye also turns to the medical aspect of the war; the amputations and bullet wounds and gangrenous injuries that overwhelmed the doctors who also followed the battles. An entire room forces the viewer to confront this unavoidable result of the war; it offers a healthy antidote to our tendency to romanticize the conflict. But the show contains beauty and trauma in equal measure. There is considerable artistry in many of the photographs, especially the landscapes, delicate compositions in black and white that reveal that the medium was becoming something more than just a documentary record. Some rooms seem like parlors, Victorian spaces where we behold the elaborate efforts Americans made to turn photographs into something more decorative than they were. They become objects of furniture, and albums, and stylized wall hangings, sometimes with paint added to the photograph — flashes of color enliven a Zouave or two. Many of the photos in the show remain in their original casings, elaborate brass and velvet contraptions designed to protect the photograph, and perhaps the viewer as well, from losing too much innocence.

If photography was essential to recording the war, it was no less essential in remembering it. Generations of historians have depended on the photographers to revivify the conflict, from Brady, who published his photos long after the fact, to Ken Burns, whose nine-part documentary on the Civil War was utterly dependent on the old photographs. The Disunion series has benefited from them as well.

Reflecting on the enormity of the Civil War, and the problem of how to remember it accurately, Walt Whitman thought the photographers came as close as possible. Like him, they had been in the thick of it. In their uncompromising realism, they offered “the best history — history from which there could be no appeal.”

Photographs can still testify, as “The Scourged Back” did in the spring of 1863. A recent New York Times piece described photographs of violence, taken in 1992 in Bosnia, that are still furnishing evidence to the war crimes tribunal of The Hague.

For as long as wars are fought, we will need photographs to understand how and why we are fighting, and to reflect on the meaning of war, long after the fact. These evanescent objects, composed of such delicate chemicals, bear enduring witness.

Toward that end, for the benefit of Disunion readers who cannot easily visit New York, we offer a few images from the show, with commentary from its curator, Jeff Rosenheim.

Ted Widmer

Ted Widmer is assistant to the president for special projects at Brown University. He edited, with Clay Risen and George Kalogerakis, a forthcoming volume of selections from the Disunion series, to be published this month.

Read Full Post »


The Civil War’s Environmental Impact

The Civil War was the most lethal conflict in American history, by a wide margin. But the conventional metric we use to measure a war’s impact – the number of human lives it took – does not fully convey the damage it caused. This was an environmental catastrophe of the first magnitude, with effects that endured long after the guns were silenced. It could be argued that they have never ended.

All wars are environmental catastrophes. Armies destroy farms and livestock; they go through forests like termites; they foul waters; they spread disease; they bombard the countryside with heavy armaments and leave unexploded shells; they deploy chemical poisons that linger far longer than they do; they leave detritus and garbage behind.

As this paper recently reported, it was old rusted-out chemical weapons from the 1980s that harmed American soldiers in Iraq – chemical weapons designed in the United States, and never properly disposed of. World War II’s poisons have been leaching into the earth’s waters and atmosphere for more than half a century. In Flanders, farmers still dig up unexploded shells from World War I.

Now, a rising school of historians has begun to go back further in time, to chronicle the environmental impact of the Civil War. It is a devastating catalog. The war may have begun haltingly, but it soon became total, and in certain instances, a war upon civilians and the countryside as well as upon the opposing forces. Gen. William T. Sherman famously explained that he wanted the people of the South to feel “the hard hand of war,” and he cut a wide swath on his march to the sea in November and December 1864. “We devoured the land,” he wrote in a letter to his wife.

Gen. Philip H. Sheridan pursued a similar scorched-earth campaign in the Shenandoah Valley in September and October 1864, burning farms and factories and anything else that might be useful to the Confederates. Gen. Ulysses S. Grant told him to “eat out Virginia clear and clean as far as they go, so that crows flying over it for the balance of the season will have to carry their provender with them.”

But the war’s damage was far more pervasive than that. In every theater, Northern and Southern armies lived off the land, helping themselves to any form of food they could find, animal and vegetable. These armies were huge, mobile communities, bigger than any city in the South save New Orleans. They cut down enormous numbers of trees for the wood they needed to warm themselves, to cook, and to build military structures like railroad bridges. Capt. Theodore Dodge of New York wrote from Virginia, “it is wonderful how the whole country round here is literally stripped of its timber. Woods which, when we came here, were so thick that we could not get through them any way are now entirely cleared.”

Fortifications and bomb-proof structures in Petersburg, Va., 1865. Credit Mathew Brady/George Eastman House/Getty Images

Northern trees were also cut in prodigious numbers to help furnish railroad ties, corduroy roads, ship masts and naval stores like turpentine, resin, pitch and tar. The historian Megan Kate Nelson estimates that two million trees were killed during the war. The Union and Confederate armies annually consumed 400,000 acres of forest for firewood alone. With no difficulty, any researcher can find photographs from 1864 and 1865 that show barren fields and a landscape shorn of vegetation.

When the armies discharged their weapons, it was even worse. In the aftermath of a great battle, observers were dumbstruck at the damage caused to farms and forests. A New York surgeon, Daniel M. Holt, was at the Battle of Spotsylvania Court House in 1864, and wrote, “Trees are perfectly riddled with bullets.” Perhaps no battle changed the landscape more than the Battle of the Crater, in which an enormous, explosive-packed mine was detonated underneath Confederate lines and left 278 dead, and a depression that is still visible.

Still, the weapons used were less terrible than the weapons contemplated. Chemical weapons were a topic of considerable interest, North and South. A Richmond newspaper reported breathlessly on June 4, 1861, “It is well known that there are some chemicals so poisonous that an atmosphere impregnated with them, makes it impossible to remain where they are by filling larges shells of extraordinary capacity with poisonous gases and throwing them very rapidly.” In May 1862, Lincoln received a letter from a New York schoolteacher, John W. Doughty, urging that he fill heavy shells with a choking gas of liquid chlorine, to poison the enemy in their trenches. The letter was routed to the War Department, and never acted upon, but in 1915, the Germans pursued a similar strategy at Ypres, to devastating effect.

But the land fought back in its way. Insects thrived in the camps, in part because the armies destroyed the forest habitats of the birds, bats and other predators that would keep pest populations down. Mosquitoes carried out their own form of aerial attack upon unsuspecting men from both sides. More than 1.3 million soldiers in the Union alone were affected by mosquito-borne illnesses like malaria and yellow fever. An Ohio private, Isaac Jackson, wrote, “the skeeters here are – well, there is no use talking … I never seen the like.” Flies, ticks, maggots and chiggers added to the misery.

The army camps were almost designed to attract them. Fetid latrines and impure water bred disease and did more to weaken the ranks than actual warfare. Some 1.6 million Union troops suffered from diarrhea and dysentery; Southern numbers were surely proportional. Rats were abundantly present on both sides, carrying germs and eating their way through any food they could find.

Probably the worst places of all were the prisoner camps. A Massachusetts private, Amos Stearns, wrote a two-line poem from his confinement in South Carolina: “A Confederate prison is the place/Where hunting for lice is no disgrace.” Some Alabama prisoners in a New York prison made a stew of the prison’s rat population. (“They taste very much like a young squirrel,” wrote Lt. Edmund D. Patterson.)

Smart soldiers adapted to the land, using local plants as medicines and food and taking shelter behind canebrakes and other natural formations. In this, the Southerners surely had an advantage (a Georgia private, William R. Stillwell, wrote his wife facetiously of Northern efforts to starve the South: “You might as well try to starve a black hog in the piney woods”). But the better Northern soldiers adapted, too, finding fruits, nuts and berries as needed. A Vermont corporal, Rufus Kinsley, making his way through Louisiana, wrote, “not much to eat but alligators and blackberries: plenty of them.” Shooting at birds was another easy way to find food; a Confederate sergeant stationed in Louisiana, Edwin H. Fay, credited local African-Americans with great skill at duck-hunting, and wrote his wife, “Negroes bring them in by horseback loads.”

Nevertheless, the Northern effort to reduce the food available to Southern armies did take a toll. In the spring of 1863, Robert E. Lee wrote, “the question of food for this army gives me more trouble than anything else combined.” His invasion of Pennsylvania was driven in part by a need to find new ways to feed his troops, and his troops helped themselves to food just as liberally as Sherman’s did in Georgia, appropriating around 100,000 animals from Pennsylvania farms.

While the old economy was adapting to the extraordinary demands of the war, a new economy was also springing up alongside it, in response to a never-ceasing demand for energy – for heat, power, cooking and a thousand other short-term needs. As the world’s whale population began to decline in the 1850s, a new oily substance was becoming essential. Petroleum was first discovered in large quantities in northwestern Pennsylvania in 1859, on the eve of the war. As the Union mobilized for the war effort, it provided enormous stimulus to the new commodity, whose uses were not fully understood yet, but included lighting and lubrication. Coal production also rose quickly during the war. The sudden surge in fossil fuels altered the American economy permanently.

Every mineral that had an industrial use was extracted and put to use, in significantly larger numbers than before the war. A comparison of the 1860 and 1870 censuses reveals a dramatic surge in all of the extractive industries, and every sector of the American economy, with one notable exception – Southern agriculture, which would need another decade to return to prewar levels. These developments were interpreted as evidence of the Yankee genius for industry, and little thought was given to after-effects. The overwhelming need to win the war was paramount, and outweighed any moral calculus about the price to be borne by future generations. Still, that price was beginning to be calculated – the first scientific attempt to explain heat-trapping gases in the earth’s atmosphere and the greenhouse effect was made in 1859 by an Irish scientist, John Tyndall.

Other effects took more time to be noticed. It is doubtful that any species loss was sustained during the war, despite the death of large numbers of animals who wandered into harm’s way: It has been speculated that more than a million horses and mules were casualties of the war. But we should note that the most notable extinction of the late 19th century and early 20th century – that of the passenger pigeon – began to occur as huge numbers of veterans were returning home, at the same time the arms industry was reaching staggering levels of production, and designing new weapons that nearly removed the difficulty of reloading. The Winchester Model 1866 repeating rifle debuted the year after the war ended, firing 30 times a minute. More than 170,000 would be sold between 1866 and 1898. Colt’s revolvers sold in even higher numbers; roughly 200,000 of the Model 1860 Army Revolver were made between 1860 and 1873. Gun clubs sprang up nearly overnight; sharpshooters became popular heroes, and the National Rifle Association was founded by two veterans in 1871.

History does not prove that this was the reason for the demise of the passenger pigeon, a species that once astonished observers for flocks so large that they darkened the sky. But a culture of game-shooting spread quickly in the years immediately after the war, accelerated not only by widespread gun ownership, but by a supply-and-demand infrastructure developed during the war, along the rails. When Manhattan diners needed to eat pigeon, there were always hunters in the upper Midwest willing to shoot at boundless birds – until suddenly the birds were gone. They declined from billions to dozens between the 1870s and the 1890s. One hunt alone, in 1871, killed 1.5 million birds. Another, three years later, killed 25,000 pigeons a day for five to six weeks. The last known passenger pigeon, Martha, died on Sept. 1, 1914.

That was only one way in which Americans ultimately came to face the hard fact of nature’s limits. It was a fact that defied most of their cultural assumptions about the limitless quality of the land available to them. But it was a fact all the same. Some began to grasp it, even while the war was being fought. If the fighting left many scars upon the land, it also planted the seeds for a new movement, to preserve what was left. As the forests vanished, a few visionaries began to speak up on their behalf, and argue for a new kind of stewardship. Though simplistic at first (the word “ecology” would not be invented until 1866), it is possible to see a new vocabulary emerging, and a conservation movement that would grow out of these first, halting steps. Henry David Thoreau would not survive the war – he died in 1862 – but he borrowed from some of its imagery to bewail a “war on the wilderness” that he saw all around him. His final manuscripts suggest that he was working on a book about the power of seeds to bring rebirth – not a great distance from what Abraham Lincoln would say in the Gettysburg Address.

Soldiers escaping a forest fire during the Battle of the Wilderness, 1864. Credit Library of Congress

Another advocate came from deep within Lincoln’s State Department – his minister to Italy, George Perkins Marsh, a polymath who spent the Civil War years working on his masterpiece, “Man and Nature,” which came out in 1864. With passion and painstaking evidence, it condemned the unthinking, unseeing way in which most Americans experienced their environment, dismissing nature as little more than a resource to be used and discarded. Marsh was especially eloquent on American forests, which he had studied closely as a boy growing up in Vermont, and then as a businessman in lumber. With scientific precision, he affirmed all of their life-giving properties, from soil improvement to species diversification to flood prevention to climate moderation to disease control. But he was a philosopher too, and like Thoreau, he worried about a consumerist mentality that seemed to be conducting its own form of “war” against nature. In a section on “The Destructiveness of Man,” he wrote, “Man has too long forgotten that the earth was given to him for usufruct alone, not for consumption, still less for profligate waste.”

Slowly, the government began to respond to these voices. After some agitation by the landscape architect Frederick Law Olmsted, then living in California, a bill to set aside the land for Yosemite National Park was signed by Abraham Lincoln on June 30, 1864. The land was given to California on the condition that the land “shall be held for public use, resort, and recreation” and shall, like the rights enshrined by the Declaration, be “inalienable for all time.” In 1872, even more land would be set aside for Yellowstone.

Southerners, too, expressed reverence for nature. On Aug. 4, 1861, General Lee wrote his wife from what is now West Virginia, “I enjoyed the mountains, as I rode along. The views are magnificent – the valleys so beautiful, the scenery so peaceful. What a glorious world Almighty God has given us. How thankless and ungrateful we are, and how we labour to mar his gifts.”

But neither he nor his fellow Southerners were able to resist a second invasion of the South that followed the war – the rush by Northern interests to buy huge quantities of forested land in order to fill the marketplace for lumber in the decades of rebuilding and westward migration that ensued, including the fences that were needed to mark off new land, the railroads that were needed to get people there, and the telegraph lines that were needed to stay in communication with them. Railroad tracks nearly tripled between 1864 and 1875, to 90,000 miles in 1875 from 32,000 miles in 1864. Between 1859 and 1879 the consumption of wood in the United States roughly doubled, to 6.84 billion cubic feet a year from 3.76 billion. Roughly 300,000 acres of forests a year needed to be cut down to satisfy this demand.

Fort Sumter

The historian Michael Williams has called what followed “the assault on Southern forests.” As the industry exhausted the forests of the upper Midwest (having earlier exhausted New England and New York), it turned to the South, and over the next generation reduced its woodlands by about 40 percent, from 300 million acres to 178 million acres, of which only 39 million acres were virgin forest. By about 1920, the South had been sufficiently exploited that the industry largely moved on, leaving a defoliated landscape behind, and often found loopholes to avoid paying taxes on the land it still owned. In 1923, an industry expert, R.D. Forbes, wrote, “their villages are Nameless Towns, their monuments huge piles of saw dust, their epitaph: The mill cut out.”

Paradoxically, there are few places in the United States today where it is easier to savor nature than a Civil War battlefield. Thanks to generations of activism in the North and South, an extensive network of fields and cemeteries has been protected by state and federal legislation, generally safe from development. These beautiful oases of tranquility have become precisely the opposite of what they were, of course, during the heat of battle. (Indeed, they have become so peaceful that Gettysburg officials have too many white-tailed deer, requiring what is euphemistically known as “deer management,” as shots again ring out on the old battlefield.) They promote a reverence for the land as well as our history, and in their way, have become sacred shrines to conservation.

Perhaps we can do more to teach the war in the same way that we walk the battlefields, conscious of the environment, using all of our senses to hear the sounds, see the sights and feel the great relevance of nature to the Civil War. Perhaps we can do even better than that, and summon a new resolve before the environmental challenges that lie ahead. As Lincoln noted, government of the people did not perish from the earth. Let’s hope that the earth does not perish from the people.


Ted Widmer is director of the John Carter Brown Library at Brown University.


Sources: Joseph K. Barnes, ed., “The Medical and Surgical History of the War of the Rebellion”; Andrew McIlwaine Bell, “Mosquito Soldiers: Malaria, Yellow Fever and the Course of the American Civil War”; Lisa Brady, “The Future of Civil War Era Studies: Environmental Histories”; Lisa M. Brady, “War Upon the Land: Military Strategy and the Transformation of Southern Landscapes During the American Civil War”; Robert V. Bruce, “Lincoln and the Tools of War”; Eighth Census of the United States (1860); Drew Gilpin Faust, “This Republic of Suffering: Death and the American Civil War”; Paul H. Giddens, “The Birth of the Oil Industry”; Frances H. Kennedy, ed., “The Civil War Battlefield Guide”; Jack Temple Kirby, “The American Civil War: An Environmental View”; David Lowenthal, “George Perkins Marsh: Prophet of Conservation”; Manufactures of the United States in 1860, Compiled from the Original Returns of the Eighth Census; George P. Marsh, “Man and Nature: or, Physical Geography as Modified by Human Action”; Kathryn Shively Meier, “Nature’s Civil War: Common Soldiers and the Environment in 1862 Virginia”; Megan Kate Nelson, “Ruin Nation: Destruction and the American Civil War”; Kelby Ouchley, “Flora and Fauna of the Civil War”; Jennifer Price, “Flight Maps: Adventures with Nature in Modern America”; Jeff L. Rosenheim, “Photography and the American Civil War”; Henry D. Thoreau, “Faith in a Seed: The Dispersion of Seeds and Other Late Natural History Writings”; Michael Williams, “Americans and their Forests: A Historical Geography”; Harold F. Williamson, “Winchester, the Gun that Won the West”; R.L. Wilson, “Colt: An American Legend.” Thanks to Sam Gilman for his excellent research. Thanks also to Tony Horwitz and Adam Goodheart.

Read Full Post »
