
If Obama Faces Impeachment over Immigration, Roosevelt, Truman, Eisenhower and Kennedy Should Have as Well

HNN   November 16, 2014

 

When President Obama announced last week following the mid-term elections that he would use his executive powers to make immigration changes, the incoming Senate Majority leader Mitch McConnell warned that “would be like waving a red flag in front of a bull.”  Representative Joe Barton from Texas already saw red, claiming such executive action would be grounds for impeachment.

If so, then Presidents Roosevelt, Truman, Eisenhower and Kennedy should all have been impeached. All four skirted Congress, at times overtly flaunting their administrative prerogative, to implement a guest worker program.

This was the “Bracero” agreement with the Government of Mexico to recruit workers during World War II, starting in 1942 but lasting beyond the war, all the way until 1964. At its height in the mid-1950s, this program accounted for 450,000 Mexicans per year coming to the U.S. to work, primarily as agricultural workers.

Several aspects of the Bracero program stand out as relevant to the impasse on immigration reform over the last 15 years.  First, the program began with executive branch action, without Congressional approval.  Second, negotiations with the Mexican government occurred throughout the program’s duration, with the State Department taking the lead in those talks.  Finally, this guest worker initiative, originally conceived as a wartime emergency, evolved into a program in the 1950s that served specifically to dampen illegal migration.

Even before Pearl Harbor, growers in the Southwest faced labor shortages in their fields and had lobbied Washington, unsuccessfully, to allow migrant workers. It took less than five months following the declaration of war to reverse U.S. government intransigence on the need for temporary workers. Informal negotiations had been taking place between the State Department and the Mexican government, and an agreement between the two countries was signed on April 4, 1942. By the time legislation authorizing the program passed seven months later, thousands of workers had already arrived in the U.S.

The absence of Congress was not just due to a wartime emergency. On April 28, 1947, Congress passed Public Law 40 declaring an official end to the program by the end of January the following year. Hearings were held in the House Agriculture Committee to deal with the closure, but its members proceeded to propose ways to keep guest workers in the country and extend the program, despite the law closing it down. Further, without the approval of Congress, the State Department was negotiating a new agreement with Mexico, signed on February 21, 1948, weeks after Congress mandated its termination. Another seven months later, though, Congress gave its stamp of approval to the new program and authorized it for another year. When the year lapsed, the program continued without Congressional approval or oversight.

The Bracero Program started out as a wartime emergency, but by the mid-1950s, its streamlined procedures made it easier for growers to hire foreign labor without having to resort to undocumented workers.  Illegal border crossings fell.

Still, there were many problems making the Bracero Program an unlikely model for the current immigration reforms.  Disregard for the treatment of the contract workers tops the list of problems and became a primary reason for shutting the program down.  However, the use of executive authority in conceiving and implementing an immigration program is undeniable.

The extent of executive branch involvement in immigration was best captured in 1951, when a commission established by President Truman to review the status of migratory labor concluded that “The negotiation of the Mexican International Agreement is a collective bargaining situation in which the Mexican Government is the representative of the workers and the Department of State is the representative of our farm employers.” Not only was the executive branch acting on immigration, but it was negotiating the terms and conditions, not with Congress, but with a foreign country. Remarkable language, especially looking forward to 2014, when we are told that such action would be an impeachable offense.

Senator McConnell used the bullfighting analogy because the red flag makes the bull angry; following the analogy to its inevitable outcome is probably not what he had in mind. The poor but angry bull never stands a chance. In this case, though, it won’t be those in Congress who don’t stand a chance; it will be those caught in our messy and broken immigration system.

John Dickson was Deputy Chief of Mission in Mexico and Director of the Office of Mexican Affairs at the Department of State and is a recent graduate of the University of Massachusetts public history program.


Henry Kissinger’s ‘World Order’: An Aggressive Reshaping of the Past

Henry Kissinger


The Washington Free Beacon October 11, 2014

Henry Kissinger projects the public image of a judicious elder statesman whose sweeping knowledge of history lets him rise above the petty concerns of today, in order to see what is truly in the national interest. Yet as Kissinger once said of Ronald Reagan, his knowledge of history is “tailored to support his firmly held preconceptions.” Instead of expanding his field of vision, Kissinger’s interpretation of the past becomes a set of blinders that prevent him from understanding either his country’s values or its interests. Most importantly, he cannot comprehend how fidelity to those values may advance the national interest.

So far, Kissinger’s aggressive reshaping of the past has escaped public notice. On the contrary, World Order has elicited a flood of fawning praise. The New York Times said, “It is a book that every member of Congress should be locked in a room with — and forced to read before taking the oath of office.” The Christian Science Monitor declared it “a treat to gallivant through history at the side of a thinker of Kissinger’s caliber.” In a review for the Washington Post, Hillary Clinton praised Kissinger for “his singular combination of breadth and acuity along with his knack for connecting headlines to trend lines.” The Wall Street Journal and U.K. Telegraph offered similar evaluations.

Kissinger observes that “Great statesmen, however different as personalities, almost invariably had an instinctive feeling for the history of their societies.” Correspondingly, the lengthiest component of World Order is a hundred-page survey of American diplomatic history from 1776 to the present. In those pages, Kissinger persistently caricatures American leaders as naïve amateurs, incapable of thinking strategically. Yet an extensive literature, compiled by scholars over the course of decades, paints a very different picture. Kissinger’s footnotes give no indication that he has read any of this work.

If one accepts Kissinger’s narrative at face value, then his advice seems penetrating. “America’s moral aspirations,” Kissinger says, “need to be combined with an approach that takes into account the strategic element of policy.” This is a cliché masquerading as a profound insight. Regrettably, World Order offers no meaningful advice on how to achieve this difficult balance. It relies instead on the premise that simply recognizing the need for balance represents a dramatic improvement over the black-and-white moralism that dominates U.S. foreign policy.

America’s Original Sin

John Quincy Adams

“America’s favorable geography and vast resources facilitated a perception that foreign policy was an optional activity,” Kissinger writes. This was never the case. When the colonies were British possessions, the colonists understood that their security was bound up with British success in foreign affairs. When the colonists declared independence, they understood that the fate of their rebellion would rest heavily on decisions made in foreign capitals, especially Paris, whose alliance with the colonists was indispensable.

In passing, Kissinger mentions that “the Founders were sophisticated men who understood the European balance of power and manipulated it to the new country’s advantage.” It is easy to forget that for almost fifty years, the new republic was led by its Founders. They remained at the helm through a series of wars against the Barbary pirates, a quasi-war with France begun in 1798, and a real one with Britain in 1812. Only in 1825 did the last veteran of the Revolutionary War depart from the White House—as a young lieutenant, James Monroe had crossed the Delaware with General Washington before being severely wounded.

Monroe turned the presidency over to his Secretary of State, John Quincy Adams. The younger Adams was the fourth consecutive president with prior service as the nation’s chief diplomat. With Europe at peace, the primary concern of American foreign policy became the country’s expansion toward the Pacific Ocean, a project that led to a war with Mexico as well as periodic tensions with the British, the Spanish, and even the Russians, who made vast claims in the Pacific Northwest. During the Civil War, both the Union and Confederacy recognized the vital importance of relations with Europe. Not long after the war, the United States would enter its brief age of overseas expansion.

One of Kissinger’s principal means of demonstrating his predecessors’ naïve idealism is to approach their public statements as unadulterated expressions of their deepest beliefs. With evident disdain, Kissinger writes, “the American experience supported the assumption that peace was the natural condition of humanity, prevented only by other countries’ unreasonableness or ill will.” The proof-text for this assertion is John Quincy Adams’ famous Independence Day oration of 1821, in which Adams explained, America “has invariably, often fruitlessly, held forth to [others] the hand of honest friendship, of equal freedom, of generous reciprocity … She has, in the lapse of nearly half a century, without a single exception, respected the independence of other nations.” This was a bold assertion, given that Adams was in the midst of bullying Spain on the issue of Florida, which it soon relinquished.

Kissinger spends less than six pages on the remainder of the 19th century, apparently presuming that Americans of that era did not spend much time thinking about strategy or diplomacy. Then, in 1898, the country went to war with Spain and acquired an empire. “With no trace of self-consciousness,” Kissinger writes, “[President William McKinley] presented the war…as a uniquely unselfish mission.” Running for re-election in 1900, McKinley’s campaign posters shouted, “The American flag has not been planted in foreign soil to acquire more territory, but for humanity’s sake.” The book does not mention that McKinley was then fighting a controversial war to subdue the Philippines, which cost as many lives as the war in Iraq and provoked widespread denunciations of American brutality. Yet McKinley’s words—from a campaign ad, no less—are simply taken at face value.

Worshipping Roosevelt and Damning Wilson

Theodore Roosevelt

For Kissinger, the presidency of Theodore Roosevelt represents a brief and glorious exception to an otherwise unbroken history of moralistic naïveté. Roosevelt “pursued a foreign policy concept that, unprecedentedly for America, based itself largely on geopolitical considerations.” He “was impatient with many of the pieties that dominated American thinking on foreign policy.” With more than a hint of projection, Kissinger claims, “In Roosevelt’s view, foreign policy was the art of adapting American policy to balance global power discreetly and resolutely, tilting events in the direction of the national interest.”

The Roosevelt of Kissinger’s imagination is nothing like the actual man who occupied the White House. Rather than assuming his country’s values to be a burden that compromised its security, TR placed the concept of “righteousness” at the very heart of his approach to world politics. Whereas Kissinger commends those who elevate raison d’etat above personal morality, Roosevelt subscribed to the belief that there is one law for the conduct of both nations and men. At the same time, TR recognized that no authority is capable of enforcing such a law. In world politics, force remains the final arbiter. For Kissinger, this implies that ethics function as a restraint on those who pursue the national interest. Yet according to the late scholar of international relations, Robert E. Osgood, Roosevelt believed that the absence of an enforcer “magnified each nation’s obligation to conduct itself honorably and see that others did likewise.” This vision demanded that America have a proverbial “big stick” and be willing to use it.

Osgood’s assessment of Roosevelt is not atypical. What makes it especially interesting is that Osgood was an avowed Realist whose perspective was much closer to that of Kissinger than it was to Roosevelt. In 1969, Osgood took leave from Johns Hopkins to serve under Kissinger on the National Security Council staff. Yet Osgood had no trouble recognizing the difference between Roosevelt’s worldview and his own.

For Kissinger, the antithesis of his imaginary Roosevelt is an equally ahistoric Woodrow Wilson. Wilson’s vision, Kissinger says, “has been, with minor variations, the American program for world order ever since” his presidency. “The tragedy of Wilsonianism,” Kissinger explains, “is that it bequeathed to the twentieth century’s decisive power an elevated foreign policy doctrine unmoored from a sense of history or geopolitics.” Considering Theodore Roosevelt’s idealism, it seems that Wilson’s tenure represented a period of continuity rather than a break with tradition. Furthermore, although Wilson’s idealism was intense, it was not unmoored from an appreciation of power. To demonstrate Wilson’s naïveté, Kissinger takes his most florid rhetoric at face value, a tactic employed earlier at the expense of William McKinley and John Quincy Adams.

The pivotal moment of Wilson’s presidency was the United States’ declaration of war on Germany. “Imbued by America’s historic sense of moral mission,” Kissinger says, “Wilson proclaimed that America had intervened not to restore the European balance of power but to ‘make the world safe for democracy’.” In addition to misquoting Wilson, Kissinger distorts his motivations. In his request to Congress for a declaration of war, Wilson actually said, “The world must be made safe for democracy.” John Milton Cooper, the author of multiple books on Wilson, notes that Wilson employed the passive voice to indicate that the United States would not assume the burden of vindicating the cause of liberty across the globe. Rather, the United States was compelled to defend its own freedom, which was under attack from German submarines that were sending American ships and their crewmen to the bottom of the Atlantic. (Kissinger makes only one reference to German outrages in his discussion.)

If Wilson were the crusader that Kissinger portrays, why did he wait almost three years to enter the war against Germany alongside the Allies? The answer is that Wilson was profoundly apprehensive about the war and its consequences. Even after the Germans announced they would sink unarmed American ships without warning, Wilson waited two more months, until a pair of American ships and their crewmen lay on the ocean floor as a result of such attacks.

According to Kissinger, Wilson’s simple faith in the universality of democratic ideals led him to fight, from the first moments of the war, for regime change in Germany. In his request for a declaration of war, Wilson observed, “A steadfast concert for peace can never be maintained except by a partnership of democratic nations. No autocratic government could be trusted to keep faith within it or observe its covenants.” This was more of an observation than a practical program. Eight months later, Wilson asked for a declaration of war against Austria-Hungary, yet explicitly told Congress, “we do not wish in any way to impair or to rearrange the Austro-Hungarian Empire. It is no affair of ours what they do with their own life, either industrially or politically.” Clearly, in this alleged war for liberty, strategic compromises were allowed, something one would never know from reading World Order.

Taking Ideology Out of the Cold War

John F. Kennedy

Along with the pomp and circumstance of presidential inaugurations, there is plenty of inspirational rhetoric. Refusing once again to acknowledge the complex relationship between rhetoric and reality, Kissinger begins his discussion of the Cold War with an achingly literal interpretation of John F. Kennedy’s inaugural address, in which he called on his countrymen to “pay any price, bear any burden, support any friend, oppose any foe, in order to assure the survival and the success of liberty.” Less well known is Kennedy’s admonition to pursue “not a balance of power, but a new world of law,” in which a “grand and global alliance” would face down “the common enemies of mankind.”

Kissinger explains, “What in other countries would have been treated as a rhetorical flourish has, in American discourse, been presented as a specific blueprint for global action.” Yet this painfully naïve JFK is—like Kissinger’s cartoon versions of Roosevelt or Wilson—nowhere to be found in the literature on his presidency.

In a seminal analysis of Kennedy’s strategic thinking published more than thirty years ago, John Gaddis elucidated the principles of JFK’s grand strategy, which drew on a careful assessment of Soviet and American power. Gaddis concludes that Kennedy may have been willing to pay an excessive price and bear too many burdens in his efforts to forestall Soviet aggression, but there is no question that JFK embraced precisely the geopolitical mindset that Kissinger recommends. At the same time, Kennedy comprehended, in a way Kissinger never does, that America’s democratic values are a geopolitical asset. In Latin America, Kennedy fought Communism with a mixture of force, economic assistance, and a determination to support elected governments. His “Alliance for Progress” elicited widespread applause in a hemisphere inclined to denunciations of Yanqui imperialism. This initiative slowly fell apart after Kennedy’s assassination, but he remains a revered figure in many corners of Latin America.

Kissinger’s fundamental criticism of the American approach to the Cold War is that “the United States assumed leadership of the global effort to contain Soviet expansionism—but as a primarily moral, not geopolitical endeavor.” While admiring the “complex strategic considerations” that informed the Communist decision to invade South Korea, Kissinger laments that the American response to this hostile action amounted to nothing more than “fighting for a principle, defeating aggression, and a method of implementing it, via the United Nations.”

It requires an active imagination to suppose that President Truman fought a war to vindicate the United Nations. He valued the fig leaf of a Security Council resolution (made possible by the absence of the Soviet ambassador), but the purpose of the war was to inflict a military and psychological defeat on the Soviets and their allies, as well as to secure Korean freedom. Yet Kissinger does not pause, even for a moment, to consider that the United States could (or should) have conducted its campaign against Communism as both a moral and a geopolitical endeavor.

An admission of that kind would raise the difficult question of how the United States should integrate both moral and strategic imperatives in its pursuit of national security. On this subject, World Order has very little to contribute. It acknowledges that legitimacy and power are the prerequisites of order, but prefers to set up and tear down an army of straw men rather than engage with the real complexity of American diplomatic history.

Forgetting Reagan

Ronald Reagan

In 1976, while running against Gerald Ford for the Republican nomination, Ronald Reagan “savaged” Henry Kissinger for his role as the architect of Nixon and Ford’s immoral foreign policy. That is how Kissinger recalled things twenty years ago in Diplomacy, his 900-page treatise on world politics in the 20th century. Not surprisingly, Kissinger employed a long chapter in his book to return the favor. Yet in World Order, there is barely any criticism to leaven its praise of Reagan. Perhaps this change reflects a gentlemanly concern for speaking well of the dead. More likely, Kissinger recognizes that Reagan’s worldview has won the heart of the Republican Party. Thus, to preserve his influence, Kissinger must create the impression he and Reagan were not so different.

In Diplomacy, Kissinger portrays Reagan as a fool and an ideologue. “Reagan knew next to no history, and the little he did know he tailored to support his firmly held preconceptions. He treated biblical references to Armageddon as operational predictions. Many of the historical anecdotes he was so fond of recounting had no basis in fact.” In World Order, one learns that Reagan “had read more deeply in American political philosophy than his domestic critics credited” him with. Thus, he was able to “combine America’s seemingly discordant strengths: its idealism, its resilience, its creativity, and its economic vitality.” Just as impressively, “Reagan blended the two elements—power and legitimacy” whose combination Kissinger describes as the foundation of world order.

Long gone is the Reagan who was bored by “the details of foreign policy” and whose “approach to the ideological conflict [with Communism] was a simplified version of Wilsonianism” while his strategy for ending the Cold War “was equally rooted in American utopianism.” Whereas Nixon had a deep understanding of the balance of power, “Reagan did not in his own heart believe in structural or geopolitical causes of tension.”

In contrast, World Order says that Reagan “generated psychological momentum with pronouncements at the outer edge of Wilsonian moralism.” Alone among American statesmen, Reagan receives credit for the strategic value of his idealistic public statements, instead of having them held up as evidence of his ignorance and parochialism.

Kissinger observes that while Nixon did not draw inspiration from Wilsonian visions, his “actual policies were quite parallel and not rarely identical” to Reagan’s. This statement lacks credibility. Reagan wanted to defeat the Soviet Union. Nixon and Kissinger wanted to stabilize the Soviet-American rivalry. They pursued détente, whereas Reagan, according to Diplomacy, “meant to reach his goal by means of relentless confrontation.”

Kissinger’s revised recollections of the Reagan years amount to a tacit admission that a president can break all of the rules prescribed by the Doctor of Diplomacy, yet achieve a more enduring legacy as a statesman than Kissinger himself.

The Rest of the World

Henry Kissinger

Three-fourths of World Order is not about the United States of America. The book also includes long sections on the history of Europe, Islam, and Asia. The sections on Islam and Asia are expendable, although for different reasons.

The discussion of Islamic history reads like a college textbook. When it comes to the modern Middle East, World Order has the feel of a news clipping service, although the clippings favor the author’s side of the debate. In case you didn’t already know, Kissinger is pro-Israel and pro-Saudi, highly suspicious of Iran, and dismissive of the Arab Spring. The book portrays Syria as a quagmire best avoided, although it carefully avoids criticism of Obama’s plan for airstrikes in 2013. Kissinger told CNN at the time that the United States ought to punish Bashar al-Assad for using chemical weapons, although he opposed “intervention in the civil war.”

The book’s discussion of China amounts to an apologia for the regime in Beijing. To that end, Kissinger is more than willing to bend reality. When he refers to what took place in Tiananmen Square in 1989, he calls it a “crisis”—not a massacre or an uprising. Naturally, there are no references to political prisoners, torture, or compulsory abortion and sterilization. There is a single reference to corruption, in the context of Kissinger’s confident assertion that President Xi Jinping is now challenging it and other vices “in a manner that combines vision with courage.”

Whereas Kissinger’s lack of candor is not surprising with regard to human rights, one might expect an advocate of realpolitik to provide a more realistic assessment of how China interacts with foreign powers. Yet the book only speaks of “national rivalries” in the South China Sea, not of Beijing’s ongoing efforts to intimidate its smaller neighbors. It also portrays China as a full partner in the effort to denuclearize North Korea. What concerns Kissinger is not the ruthlessness of Beijing, but the potential for the United States and China to be “reinforced in their suspicions by the military maneuvers and defense programs of the other.”

Rather than an aggressive power with little concern for the common good, Kissinger’s China is an “indispensable pillar of world order” just like the United States. If only it were so.

In its chapters on Europe, World Order recounts the history that has fascinated Kissinger since his days as a doctoral candidate at Harvard. It is the story of “the Westphalian order,” established and protected by men who understood that stability rests on a “balance of power—which, by definition, involves ideological neutrality”—i.e. a thorough indifference to the internal arrangements of other states.

“For more than two hundred years,” Kissinger says, “these balances kept Europe from tearing itself to pieces as it had during the Thirty Years War.” To support this hypothesis, Kissinger must explain away the many great wars of that era as aberrations that reflect poorly on particular aggressors—like Louis XIV, the Jacobins, and Napoleon—rather than failures of the system as a whole. He must even exonerate the Westphalian system from responsibility for the war that crippled Europe in 1914. But this he does, emerging with complete faith that balances of power and ideological neutrality remain the recipe for order in the 21st century.

Wishing Away Unipolarity


Together, Kissinger’s idiosyncratic interpretations of European and American history have the unfortunate effect of blinding him to the significance of the two most salient features of international politics today. The first is unipolarity. The second is the unity of the democratic world, led by the United States.

Fifteen years ago, Dartmouth Professor William Wohlforth wrote that the United States “enjoys a much larger margin of superiority over the next powerful state or, indeed, all other great powers combined, than any leading state in the last two centuries.” China may soon have an economy of comparable size, but it has little prospect of competing militarily in the near- or mid-term future. Six of the next ten largest economies belong to American allies. Only one belongs to an adversary—Vladimir Putin’s Russia—whose antipathy toward the United States has not yielded a trusting relationship with China, let alone an alliance. (Incidentally, Putin is not mentioned in World Order, a significant oversight for a book that aspires to a global field of vision.)

The United States is able to maintain a globe-spanning network of alliances precisely because it has never had a foreign policy based on ideological neutrality. These alliances continue to endure and expand, even in the absence of a Soviet threat, because of shared democratic values. Of course, the United States has partnerships with non-democratic states as well. It has never discarded geopolitical concerns, pace Kissinger. Yet the United States and its principal allies in Europe and Asia continue to see their national interests as compatible because their values play such a prominent role in defining those interests. Similarly, America’s national interest entails a concern for spreading democratic values, because countries that make successful transitions to democracy tend to act in a much more pacific and cooperative manner.

These are the basic truths about world order that elude Kissinger because he reflexively exaggerates and condemns the idealism of American foreign policy. In World Order, Kissinger frequently observes that a stable order must be legitimate, in addition to reflecting the realities of power. If he were less vehement in his denunciations of American idealism, he might recognize that it is precisely such ideals that provide legitimacy to the order that rests today on America’s unmatched power.

Rather than functioning as a constraint on its pursuit of the national interest, America’s democratic values have ensured a remarkable tolerance for its power. Criticism of American foreign policy may be pervasive, but inaction speaks louder than words. Rather than challenging American power, most nations rely on it to counter actual threats. At the moment, with the Middle East in turmoil, Ukraine being carved up, and Ebola spreading rapidly, the current world order may not seem so orderly. Yet no world order persists on its own. Those who have power and legitimacy must fight to preserve it.

I thank my friend Luis Ponce for bringing this article to my attention.


Mabini in America

John Nery

Philippine Daily Inquirer      November 18, 2014

Late in December 1899, an advertisement appeared in the pages of at least two New York newspapers. It was a notice that the January 1900 issue of the North American Review, a journal of letters and opinion pieces, was already on sale.

The advertisement included a package of six essays on the Second Boer War, which had just broken out in South Africa. There was a “character study” by the influential critic Edmund Gosse, an account of the Anglican crisis by the controversial Protestant theologian Charles Augustus Briggs, and a book review of the letters of Robert Louis Stevenson, by the eminent novelist Henry James.

Between the essays on the Boer War, packaged under the rubric “The War for an Empire,” and the review by Henry James was “A Filipino Appeal to the American People,” by Apolinario Mabini. Of the 14 authors listed in the advertisement, only three were new or under-known enough to warrant an identifying label. Mabini’s read “Formerly Prime Minister in Aguinaldo’s Cabinet.”

It might be a useful exercise to speculate on the editorial decision-making that led to the inclusion of Mabini’s appeal in the journal’s first issue of the year. At that time, the North American Review was very much a Boston publication (today it is published by the University of Northern Iowa), and Boston was a capital of anti-imperialist sentiment. By January 1900, US military forces had occupied parts of the Philippines for some 18 months. The Philippine-American War—a mere insurrection in the American view—was a month short of its first anniversary. Gen. Emilio Aguinaldo was on the defensive but remained at large. (As the leader of Philippine forces, he was possibly the best-known Asian of the time; note how Mabini’s label assumes general knowledge about Aguinaldo.) Not least, the appeal was a pained, patient presentation of American perfidy, beginning with Admiral George Dewey’s effusive promises to Aguinaldo. And it was written by Mabini—a man gaining a reputation as the Philippines’ leading intellectual and America’s “chief irreconcilable,” and who had just been arrested by US cavalry in the Philippines.

I would like to explore “the idea of Mabini” from the American perspective. Since my research is only in its preliminary stages, I wish to trace the reception of this idea, this image of Mabini as that rare thing, a revolutionary intellectual, through three moments: his incarceration in Guam, his death from cholera, and his funeral—the first recorded instance of a massively attended political funeral in the Philippines.

[Key excerpts from American “readings” of Mabini follow, beginning with a warrior-writer who looked on him with disdain.]

Theodore Roosevelt to Sen. George Frisbie Hoar, Jan. 12, 1903: “I have not wished to discuss my view of Mabini’s character and intellect, but perhaps I ought to say, my dear Senator, that it does not agree with yours. Mabini seems to me to belong to a very ordinary type common among those South American revolutionists who have worked such mischief to their fellow-countrymen.”

The historian James LeRoy took a more nuanced view: “But … he was the real power, first at Bakoor, then at Malolos, in framing a scheme of independent government, and then in resisting every step toward peaceful conciliation with the United States… Aguinaldo was plainly not averse to accommodation, on several occasions; but Mabini was, from first to last, inflexible in opposition to the efforts of the party of older and more conservative Filipinos to establish a modus vivendi with the Americans. Whoever may be said to have carried on the war, he chiefly made war inevitable …”

The news of Mabini’s death on May 13, 1903, was duly noted in American newspapers…. In the July 5, 1903 issue of the Springfield Republican, we read the “sympathetic standpoint” of anti-imperialist Canning Eyot, in praise of “the eminent Filipino patriot.” The prose is purple, but instructively so:

“… there is some alleviation in the thought that at last Mabini has found freedom—that his serene soul is beyond the reach of tyranny, beyond the power of every one and everything that is sordid and selfish and time-serving …. Crucify the reformer and the good in his cause is assured of success; kill or imprison the patriot and the true in his ideal may become real.”

The day Mabini was buried saw an unprecedented outpouring of support and sympathy [I have written on this before]. Thousands of people joined the funeral procession. A visiting American woman, whose name I [still] have not yet been able to determine, wrote a vivid account for a Boston newspaper [which included this extraordinary passage]: “It seemed as though the whole city of Manila had gathered, and I could not help noticing the large proportion of strong and finely intelligent faces, especially among Mabini’s more intimate friends. Most noticeable, also, and with a certain suggestiveness for the futrue (sic), was the extraordinary number of young men, many of them evidently students, keen, thoughtful and intelligent looking.” [She saw Mabini in his mourners.]

Mabini was never in America, of course. At the turn of the 20th century, Guam [his place of exile] was a new possession of the United States, American soil-in-the-making. So the man whom LeRoy called the “chief irreconcilable,” whom Gen. Elwell Otis labelled the “masterful spirit” behind Philippine resistance to American occupation, was only present in the United States in the sense that he represented a new idea—an intellectual at the head of a revolution, an ideologue.

In Mabini’s America, he was the un-Aguinaldo.

Excerpts from a paper read on Nov. 13 at the 2014 national conference of the Philippine Studies Association, convened by the indispensable Dr. Bernardita Churchill.


Cameristas

Roughly 150 years ago, in March or April 1863, a shocking photograph was taken in Louisiana. Unlike most photos, it was given a title, “The Scourged Back,” as if it were a painting hanging in an art museum. Indeed, it fit inside a recognizable painter’s category — the nude — but this was a nude from hell. The sitter, an African-American male named Gordon, had been whipped so many times that a mountainous ridge of scar tissue was climbing out of his back. It was detailed, like a military map, and resulted from so many whippings that the scars had to form on top of one another. Gordon had escaped from a nearby Mississippi plantation to a camp of federal soldiers, supporting the great Vicksburg campaign of the spring. Medical officers examined him, and after seeing his back, asked a local photography firm, McPherson and Oliver, to document the scar tissue.

The image made its way back to New England, where it was converted by an artist into a wood engraving, a backwards technological step that allowed it to be published in the newspapers. On July 4, 1863, the same day that Vicksburg fell, “The Scourged Back” appeared in a special Independence Day issue of Harper’s Weekly. All of America could see those scars, and feel that military and moral progress were one. The Civil War, in no way a war to exterminate slavery in 1861, was increasingly just that in 1863. “The Scourged Back” may have been propaganda, but as a photograph, which drew as much from science as from art, it presented irrefutable evidence of the horror of slavery. Because those scars had been photographed, they were real, in a way that no drawing could be.

The original photograph of “The Scourged Back” is one of hundreds on display in a new exhibit that opened on April 2 at the Metropolitan Museum of Art in New York, entitled “Photography and the American Civil War.” Curated by Jeff L. Rosenheim, the show offers a stunning retrospective, proving how inextricably linked the war and the new medium were.

It was not possible then, nor is it now, to tell the story of the conflict without recourse to the roughly one million images that were created in darkrooms around America. All historians are indebted to the resourceful Americans who left this priceless record to later generations. The war was captured, nearly instantaneously, by photographers as brave as the soldiers going into battle. Indeed, the photographers were going into battle; they pitched their tents alongside those of the armies, they heard the whistle of bullets, and they recorded the battle scenes, including the dead, as soon as the armies left the field.

Soldiers were themselves photographers, and photographs could be found in every place touched by the war: in the pockets of those who fought and fell, and above the hearths of the families that waited desperately for their return. Cameras caught nearly all of it, including the changes wrought on non-combatants — the Americans who seemed to age prematurely during those four years (none more so than the Commander in Chief), the families that survived, despite losing a member; the bodies that survived, despite losing a limb. The very land seemed to age, as armies passed like locusts through Southern valleys, devouring forests and livestock.

The Civil War was not the first war photographed; a tiny number of photographs were taken of the Mexican War, and a larger number of the Crimean War. But the medium had evolved a great deal across the 1850s, and America’s leading photographers sprang into action when the attack on Fort Sumter came in 1861. Many, like Mathew Brady, threw all of their resources at the gigantic task of capturing the war. On Aug. 30, 1862, the Times of London commented, “America swarms with the members of the mighty tribe of cameristas, and the civil war has developed their business in the same way that it has given an impetus to the manufacturers of metallic air-tight coffins and embalmers of the dead.”

There are so many cameristas in the Met’s show. The Southern perspective is well represented, in the faces of young Confederates brandishing knives menacingly, and in numerous landscape photographs that convey a haunting beauty, deepened by our knowledge that horrific violence is about to happen in these Edenic vales. For generations, American intellectuals had lamented that the United States had no picturesque ruins as Europe did; suddenly, there were ruins everywhere one cared to look. Photographs of Richmond and Charleston from the war’s end retain the power to shock for their utter desolation; this could be Carthage or Tyre, a thousand years after their glory.

But of course, this was still the United States of America, a very busy country to begin with, accelerated by the incessant demands of the war. One gets a sense of that urgency from the show — trains chugging in the background, people moving so quickly that they become blurs, and a huge array of participants crowding into the picture — contraband former slaves who have fled to Northern lines but are not yet free; old men and children trying to get a taste of the action; regiments in training, looking very young in 1861, and spectral four years later. Some turned into seasoned veterans, some became ghastly prisoners of war, barely able to sit for a photograph; and of course many didn’t come back at all. Fortunately, they still existed in these images. To this day, some people feel the old superstition that a photograph robs the soul of its vitality. But during the war, it had an opposite, life-giving effect. With just a few dabs of silver, iodine and albumen (from egg whites), these dabblers in the dark arts could confer a form of immortality.

The camera’s unblinking eye also turns to the medical aspect of the war; the amputations and bullet wounds and gangrenous injuries that overwhelmed the doctors who also followed the battles. An entire room forces the viewer to confront this unavoidable result of the war; it offers a healthy antidote to our tendency to romanticize the conflict. But the show contains beauty and trauma in equal measure. There is considerable artistry in many of the photographs, especially the landscapes, delicate compositions in black and white that reveal that the medium was becoming something more than just a documentary record. Some rooms seem like parlors, Victorian spaces where we behold the elaborate efforts Americans made to turn photographs into something more decorative than they were. They become objects of furniture, and albums, and stylized wall hangings, sometimes with paint added to the photograph — flashes of color enliven a Zouave or two. Many of the photos in the show remain in their original casings, elaborate brass and velvet contraptions designed to protect the photograph, and perhaps the viewer as well, from losing too much innocence.

If photography was essential to recording the war, it was no less essential in remembering it. Generations of historians have depended on the photographers to revivify the conflict, from Brady, who published his photos long after the fact, to Ken Burns, whose nine-part documentary on the Civil War was utterly dependent on the old photographs. The Disunion series has benefited from them as well.

Reflecting on the enormity of the Civil War, and the problem of how to remember it accurately, Walt Whitman thought the photographers came as close as possible. Like him, they had been in the thick of it. In their uncompromising realism, they offered “the best history — history from which there could be no appeal.”

Photographs can still testify, as “The Scourged Back” did in the spring of 1863. A recent New York Times piece described photographs of violence, taken in 1992 in Bosnia, that are still furnishing evidence to the war crimes tribunal in The Hague.

For as long as wars are fought, we will need photographs to understand how and why we are fighting, and to reflect on the meaning of war, long after the fact. These evanescent objects, composed of such delicate chemicals, bear enduring witness.

Toward that end, for the benefit of Disunion readers who cannot easily visit New York, we offer a few images from the show, with commentary from its curator, Jeff Rosenheim.

Ted Widmer

Ted Widmer is assistant to the president for special projects at Brown University. He edited, with Clay Risen and George Kalogerakis, a forthcoming volume of selections from the Disunion series, to be published this month.



The Civil War’s Environmental Impact

The Civil War was the most lethal conflict in American history, by a wide margin. But the conventional metric we use to measure a war’s impact – the number of human lives it took – does not fully convey the damage it caused. This was an environmental catastrophe of the first magnitude, with effects that endured long after the guns were silenced. It could be argued that they have never ended.

All wars are environmental catastrophes. Armies destroy farms and livestock; they go through forests like termites; they foul waters; they spread disease; they bombard the countryside with heavy armaments and leave unexploded shells; they deploy chemical poisons that linger far longer than they do; they leave detritus and garbage behind.

As this paper recently reported, it was old rusted-out chemical weapons from the 1980s that harmed American soldiers in Iraq – chemical weapons designed in the United States, and never properly disposed of. World War II’s poisons have been leaching into the earth’s waters and atmosphere for more than half a century. In Flanders, farmers still dig up unexploded shells from World War I.

Now, a rising school of historians has begun to go back further in time, to chronicle the environmental impact of the Civil War. It is a devastating catalog. The war may have begun haltingly, but it soon became total, and in certain instances, a war upon civilians and the countryside as well as upon the opposing forces. Gen. William T. Sherman famously explained that he wanted the people of the South to feel “the hard hand of war,” and he cut a wide swath on his march to the sea in November and December 1864. “We devoured the land,” he wrote in a letter to his wife.

Gen. Philip H. Sheridan pursued a similar scorched-earth campaign in the Shenandoah Valley in September and October 1864, burning farms and factories and anything else that might be useful to the Confederates. Gen. Ulysses S. Grant told him to “eat out Virginia clear and clean as far as they go, so that crows flying over it for the balance of the season will have to carry their provender with them.”

But the war’s damage was far more pervasive than that. In every theater, Northern and Southern armies lived off the land, helping themselves to any form of food they could find, animal and vegetable. These armies were huge, mobile communities, bigger than any city in the South save New Orleans. They cut down enormous numbers of trees for the wood they needed to warm themselves, to cook, and to build military structures like railroad bridges. Capt. Theodore Dodge of New York wrote from Virginia, “it is wonderful how the whole country round here is literally stripped of its timber. Woods which, when we came here, were so thick that we could not get through them any way are now entirely cleared.”

Fortifications and bomb-proof structures in Petersburg, Va., 1865. Credit: Mathew Brady/George Eastman House/Getty Images

Northern trees were also cut in prodigious numbers to help furnish railroad ties, corduroy roads, ship masts and naval stores like turpentine, resin, pitch and tar. The historian Megan Kate Nelson estimates that two million trees were killed during the war. The Union and Confederate armies annually consumed 400,000 acres of forest for firewood alone. With no difficulty, any researcher can find photographs from 1864 and 1865 that show barren fields and a landscape shorn of vegetation.

When the armies discharged their weapons, it was even worse. In the aftermath of a great battle, observers were dumbstruck at the damage caused to farms and forests. A New York surgeon, Daniel M. Holt, was at the Battle of Spotsylvania Court House in 1864, and wrote, “Trees are perfectly riddled with bullets.” Perhaps no battle changed the landscape more than the Battle of the Crater, in which an enormous, explosive-packed mine was detonated underneath Confederate lines and left 278 dead, and a depression that is still visible.

Still, the weapons used were less terrible than the weapons contemplated. Chemical weapons were a topic of considerable interest, North and South. A Richmond newspaper reported breathlessly on June 4, 1861, “It is well known that there are some chemicals so poisonous that an atmosphere impregnated with them, makes it impossible to remain where they are by filling large shells of extraordinary capacity with poisonous gases and throwing them very rapidly.” In May 1862, Lincoln received a letter from a New York schoolteacher, John W. Doughty, urging that he fill heavy shells with a choking gas of liquid chlorine, to poison the enemy in their trenches. The letter was routed to the War Department, and never acted upon, but in 1915, the Germans pursued a similar strategy at Ypres, to devastating effect.

But the land fought back in its way. Insects thrived in the camps, in part because the armies destroyed the forest habitats of the birds, bats and other predators that would keep pest populations down. Mosquitoes carried out their own form of aerial attack upon unsuspecting men from both sides. More than 1.3 million soldiers in the Union alone were affected by mosquito-borne illnesses like malaria and yellow fever. An Ohio private, Isaac Jackson, wrote, “the skeeters here are – well, there is no use talking … I never seen the like.” Flies, ticks, maggots and chiggers added to the misery.

The army camps were almost designed to attract them. Fetid latrines and impure water bred disease and did more to weaken the ranks than actual warfare. Some 1.6 million Union troops suffered from diarrhea and dysentery; Southern numbers were surely proportional. Rats were abundantly present on both sides, carrying germs and eating their way through any food they could find.

Probably the worst places of all were the prisoner camps. A Massachusetts private, Amos Stearns, wrote a two-line poem from his confinement in South Carolina: “A Confederate prison is the place/Where hunting for lice is no disgrace.” Some Alabama prisoners in a New York prison made a stew of the prison’s rat population. (“They taste very much like a young squirrel,” wrote Lt. Edmund D. Patterson.)

Smart soldiers adapted to the land, using local plants as medicines and food and taking shelter behind canebrakes and other natural formations. In this, the Southerners surely had an advantage (a Georgia private, William R. Stillwell, wrote his wife facetiously of Northern efforts to starve the South: “You might as well try to starve a black hog in the piney woods”). But the better Northern soldiers adapted, too, finding fruits, nuts and berries as needed. A Vermont corporal, Rufus Kinsley, making his way through Louisiana, wrote, “not much to eat but alligators and blackberries: plenty of them.” Shooting at birds was another easy way to find food; a Confederate sergeant stationed in Louisiana, Edwin H. Fay, credited local African-Americans with great skill at duck-hunting, and wrote his wife, “Negroes bring them in by horseback loads.”

Nevertheless, the Northern effort to reduce the food available to Southern armies did take a toll. In the spring of 1863, Robert E. Lee wrote, “the question of food for this army gives me more trouble than anything else combined.” His invasion of Pennsylvania was driven in part by a need to find new ways to feed his troops, and his troops helped themselves to food just as liberally as Sherman’s did in Georgia, appropriating around 100,000 animals from Pennsylvania farms.

While the old economy was adapting to the extraordinary demands of the war, a new economy was also springing up alongside it, in response to a never-ceasing demand for energy – for heat, power, cooking and a thousand other short-term needs. As the world’s whale population began to decline in the 1850s, a new oily substance was becoming essential. Petroleum was first discovered in large quantities in northwestern Pennsylvania in 1859, on the eve of the war. As the Union mobilized for the war effort, it provided enormous stimulus to the new commodity, whose uses were not fully understood yet, but included lighting and lubrication. Coal production also rose quickly during the war. The sudden surge in fossil fuels altered the American economy permanently.

Every mineral that had an industrial use was extracted and put to use, in significantly larger numbers than before the war. A comparison of the 1860 and 1870 censuses reveals a dramatic surge in all of the extractive industries, and every sector of the American economy, with one notable exception – Southern agriculture, which would need another decade to return to prewar levels. These developments were interpreted as evidence of the Yankee genius for industry, and little thought was given to after-effects. The overwhelming need to win the war was paramount, and outweighed any moral calculus about the price to be borne by future generations. Still, that price was beginning to be calculated – the first scientific attempt to explain heat-trapping gases in the earth’s atmosphere and the greenhouse effect was made in 1859 by an Irish scientist, John Tyndall.

Other effects took more time to be noticed. It is doubtful that any species loss was sustained during the war, despite the death of large numbers of animals who wandered into harm’s way: It has been speculated that more than a million horses and mules were casualties of the war. But we should note that the most notable extinction of the late 19th century and early 20th century – that of the passenger pigeon – began to occur as huge numbers of veterans were returning home, at the same time the arms industry was reaching staggering levels of production, and designing new weapons that nearly removed the difficulty of reloading. The Winchester Model 66 repeating rifle debuted the year after the war ended, firing 30 times a minute. More than 170,000 would be sold between 1866 and 1898. Colt’s revolvers sold in even higher numbers; roughly 200,000 of the Model 1860 Army Revolver were made between 1860 and 1873. Gun clubs sprang up nearly overnight; sharpshooters became popular heroes, and the National Rifle Association was founded by two veterans in 1871.

History does not prove that this was the reason for the demise of the passenger pigeon, a species that once astonished observers with flocks so large that they darkened the sky. But a culture of game-shooting spread quickly in the years immediately after the war, accelerated not only by widespread gun ownership, but by a supply-and-demand infrastructure developed during the war, along the rails. When Manhattan diners wanted to eat pigeon, there were always hunters in the upper Midwest willing to shoot at the seemingly boundless birds – until suddenly the birds were gone. They declined from billions to dozens between the 1870s and the 1890s. One hunt alone, in 1871, killed 1.5 million birds. Another, three years later, killed 25,000 pigeons a day for five to six weeks. The last known passenger pigeon, Martha, died on Sept. 1, 1914.

That was only one way in which Americans ultimately came to face the hard fact of nature’s limits. It was a fact that defied most of their cultural assumptions about the limitless quality of the land available to them. But it was a fact all the same. Some began to grasp it, even while the war was being fought. If the fighting left many scars upon the land, it also planted the seeds for a new movement, to preserve what was left. As the forests vanished, a few visionaries began to speak up on their behalf, and argue for a new kind of stewardship. Though simplistic at first (the word “ecology” would not be coined until 1866), it is possible to see a new vocabulary emerging, and a conservation movement that would grow out of these first, halting steps. Henry David Thoreau would not survive the war – he died in 1862 – but he borrowed from some of its imagery to bewail a “war on the wilderness” that he saw all around him. His final manuscripts suggest that he was working on a book about the power of seeds to bring rebirth – not a great distance from what Abraham Lincoln would say in the Gettysburg Address.

Soldiers escaping a forest fire during the Battle of the Wilderness, 1864. Credit: Library of Congress

Another advocate came from deep within Lincoln’s State Department – his minister to Italy, George Perkins Marsh, a polymath who spent the Civil War years working on his masterpiece, “Man and Nature,” which came out in 1864. With passion and painstaking evidence, it condemned the unthinking, unseeing way in which most Americans experienced their environment, dismissing nature as little more than a resource to be used and discarded. Marsh was especially eloquent on American forests, which he had studied closely as a boy growing up in Vermont, and then as a businessman in lumber. With scientific precision, he affirmed all of their life-giving properties, from soil improvement to species diversification to flood prevention to climate moderation to disease control. But he was a philosopher too, and like Thoreau, he worried about a consumerist mentality that seemed to be conducting its own form of “war” against nature. In a section on “The Destructiveness of Man,” he wrote, “Man has too long forgotten that the earth was given to him for usufruct alone, not for consumption, still less for profligate waste.”

Slowly, the government began to respond to these voices. After some agitation by the landscape architect Frederick Law Olmsted, then living in California, a bill setting aside the land that would become Yosemite National Park was signed by Abraham Lincoln on June 30, 1864. The land was given to California on the condition that it “shall be held for public use, resort, and recreation” and shall, like the rights enshrined in the Declaration, be “inalienable for all time.” In 1872, even more land would be set aside for Yellowstone.

Southerners, too, expressed reverence for nature. On Aug. 4, 1861, General Lee wrote his wife from what is now West Virginia, “I enjoyed the mountains, as I rode along. The views are magnificent – the valleys so beautiful, the scenery so peaceful. What a glorious world Almighty God has given us. How thankless and ungrateful we are, and how we labour to mar his gifts.”

But neither he nor his fellow Southerners were able to resist a second invasion of the South that followed the war – the rush by Northern interests to buy huge quantities of forested land to feed the market for lumber in the decades of rebuilding and westward migration that ensued, including the fences needed to mark off new land, the railroads needed to get people there, and the telegraph lines needed to stay in communication with them. Railroad mileage nearly tripled between 1864 and 1875, from 32,000 miles to 90,000. Between 1859 and 1879 the consumption of wood in the United States roughly doubled, from 3.76 billion cubic feet a year to 6.84 billion. Roughly 300,000 acres of forest a year had to be cut down to satisfy this demand.

The historian Michael Williams has called what followed “the assault on Southern forests.” As the industry exhausted the forests of the upper Midwest (having earlier exhausted New England and New York), it turned to the South, and over the next generation reduced its woodlands by about 40 percent, from 300 million acres to 178 million, of which only 39 million acres were virgin forest. By about 1920, the South had been sufficiently exploited that the industry largely moved on, leaving a defoliated landscape behind and often finding loopholes to avoid paying taxes on the land it still owned. In 1923, an industry expert, R.D. Forbes, wrote, “their villages are Nameless Towns, their monuments huge piles of saw dust, their epitaph: The mill cut out.”

Paradoxically, there are few places in the United States today where it is easier to savor nature than a Civil War battlefield. Thanks to generations of activism in the North and South, an extensive network of fields and cemeteries has been protected by state and federal legislation, generally safe from development. These beautiful oases of tranquility have become precisely the opposite of what they were, of course, during the heat of battle. (Indeed, they have become so peaceful that Gettysburg officials have too many white-tailed deer, requiring what is euphemistically known as “deer management,” as shots again ring out on the old battlefield.) They promote a reverence for the land as well as our history, and in their way, have become sacred shrines to conservation.

Perhaps we can do more to teach the war in the same way that we walk the battlefields, conscious of the environment, using all of our senses to hear the sounds, see the sights and feel the great relevance of nature to the Civil War. Perhaps we can do even better than that, and summon a new resolve before the environmental challenges that lie ahead. As Lincoln noted, government of the people did not perish from the earth. Let’s hope that the earth does not perish from the people.


Ted Widmer is director of the John Carter Brown Library at Brown University.


Sources: Joseph K. Barnes, ed., “The Medical and Surgical History of the War of the Rebellion”; Andrew McIlwaine Bell, “Mosquito Soldiers: Malaria, Yellow Fever and the Course of the American Civil War”; Lisa Brady, “The Future of Civil War Era Studies: Environmental Histories”; Lisa M. Brady, “War Upon the Land: Military Strategy and the Transformation of Southern Landscapes During the American Civil War”; Robert V. Bruce, “Lincoln and the Tools of War”; Eighth Census of the United States (1860); Drew Gilpin Faust, “This Republic of Suffering: Death and the American Civil War”; Paul H. Giddens, “The Birth of the Oil Industry”; Frances H. Kennedy, ed., “The Civil War Battlefield Guide”; Jack Temple Kirby, “The American Civil War: An Environmental View”; David Lowenthal, “George Perkins Marsh: Prophet of Conservation”; Manufactures of the United States in 1860, Compiled from the Original Returns of the Eighth Census; George P. Marsh, “Man and Nature: or, Physical Geography as Modified by Human Action”; Kathryn Shively Meier, “Nature’s Civil War: Common Soldiers and the Environment in 1862 Virginia”; Megan Kate Nelson, “Ruin Nation: Destruction and the American Civil War”; Kelby Ouchley, “Flora and Fauna of the Civil War”; Jennifer Price, “Flight Maps: Adventures with Nature in Modern America”; Jeff L. Rosenheim, “Photography and the American Civil War”; Henry D. Thoreau, “Faith in a Seed: The Dispersion of Seeds and Other Late Natural History Writings”; Michael Williams, “Americans and their Forests: A Historical Geography”; Harold F. Williamson, “Winchester, the Gun that Won the West”; R.L. Wilson, “Colt: An American Legend.” Thanks to Sam Gilman for his excellent research. Thanks also to Tony Horwitz and Adam Goodheart.

Read Full Post »

The Politics of Thanksgiving Day

William Loren Katz 

HNN November 16, 2014

With family excitement building as Thanksgiving approaches, you would never know that November is Native American History Month. President Obama has publicly proclaimed the month, but many more Americans will be paying attention to his Thanksgiving proclamation.

Thanksgiving remains the most treasured holiday in the United States, honored by the White House since Abraham Lincoln initiated it to rouse northern patriotism for a war that was not going well.

Thanksgiving has often served political ends. In 2003, in the current age of US Middle East invasions, President George W. Bush flew to Baghdad, Iraq to celebrate Thanksgiving Day with U.S. troops. He sought to rally the public behind an invasion based on lies. A host of photographers came along to snap him carrying a glazed turkey to eager soldiers. Within three hours he had flown home, and TV brought his act of solidarity and generosity to millions of US living rooms. But the turkey the President carried to Baghdad was never eaten. It was cardboard, a stage prop.

Thanksgiving 2003 had a lot in common with the first Thanksgiving Day. In 1620, 149 English Pilgrims aboard the Mayflower landed at Plymouth and survived their first New England winter because Wampanoag people brought them corn, meat and other gifts, and taught them survival skills. In 1621 Governor William Bradford of Plymouth proclaimed a day of Thanksgiving – not for his Wampanoag saviors but for his brave Pilgrims: through resourcefulness and devotion to God, his Christians had defeated hunger.

We are still asked to see Thanksgiving through the eyes of Governor Bradford. But Bradford’s fable is an early example of “Euro think” — a concoction by Europeans that casts their conquest as heroic.

Bradford claims Native Americans were invited to the dinner. A seat at the table? Really? Since the Pilgrims classified their nonwhite saviors as “infidels” and inferiors, the Wampanoag — if invited at all — would have been asked to provide and serve the food, not to share it.

Pilgrim armies soon pushed westward. In 1637 Governor Bradford sent his troops to raid a Pequot village. As devout Christians locked in mortal combat with heathens, Pilgrims systematically destroyed a village of sleeping men, women and children.

Bradford was overjoyed:

“It was a fearful sight to see them frying in the fire and the streams of blood quenching the same and horrible was the stink and stench thereof. But the victory seemed a sweet sacrifice and they [the Pilgrim militia] gave praise thereof to God.”

Years later the Puritan minister Increase Mather asked his congregation to celebrate the “victory” and thank God “that on this day we have sent six hundred heathen souls to hell.”

School and scholarly texts still honor Bradford. The 1993 edition of the Columbia Encyclopedia [P. 351] states of Bradford, “He maintained friendly relations with the Native Americans.” The scholarly Dictionary of American History [P. 77] said, “He was a firm, determined man and an excellent leader; kept relations with the Indians on friendly terms; tolerant toward newcomers and new religions….”

The Mayflower, renamed the Meijbloom (Dutch for Mayflower), continued to carve its place in history. It became one of the first ships to carry enslaved Africans to the Americas.

Thanksgiving Day celebrates not justice or equality but aggression and enslavement. It affirms the genocidal beliefs that destroyed millions of Native American people and their cultures from the Pilgrim landings to the 20th century.

Americans proudly count themselves among the earliest to fight for freedom and independence. On Thanksgiving Americans could honor the first freedom fighters of the Americas – those who resisted foreign invasion – but those fighters were not Europeans, and their struggle began long before 1776.

Long before the Pilgrims landed at Plymouth, thousands of enslaved Africans and Native Americans united to fight the European invaders and slavers. In the age of Columbus and the Spanish invasion they were led by Taino leaders such as Anacaona, a woman poet who was captured at 29, and Hatuey, who in 1511 led his 400 followers from Hispaniola to Cuba to warn of the foreigners and was captured the next year. Both Anacaona and Hatuey were burned at the stake.

Before the Mayflower, thousands of runaway Africans and Indians in northeast Brazil had begun to unite in the Republic of Palmares, a three-walled maroon fortress that enabled Ganga Zumba’s 10,000 people of color to defeat Dutch and Portuguese armies. Palmares lasted until 1694 – almost a hundred years.

These early nonwhite freedom fighters kept no written records, but some of their ideas about freedom, justice and equality found their way into a sacred parchment Americans celebrate each July 4th.

The traditional and logical way to celebrate freedom fighters has been to start with the earliest. Anacaona and Hatuey would tell us Columbus and the Pilgrims do not qualify for anything but condemnation.

William Loren Katz is the author of “Black Indians: A Hidden Heritage” and 40 other books. His website is williamlkatz.com. This essay is adapted from the 2012 edition of “Black Indians.”

Read Full Post »

Why Naming John Marshall Chief Justice Was John Adams’s “Greatest Gift” to the Nation

Harlow Giles Unger

HNN   November 16, 2014

As the final hours of John Adams’s single-term administration ticked away, the President faced a critical last decision: the nomination of a new Chief Justice of the United States Supreme Court. Former Connecticut senator Oliver Ellsworth, who had helped write the Constitution, was ill and had resigned as the nation’s third Chief Justice.

Instinctively, the President turned to his old friend, New York’s John Jay, whom George Washington had appointed as the nation’s first Chief Justice in 1789. After five years, Jay was so bored with the job he resigned to become governor of his home state—and with good reason: The Supreme Court was not an important element of the American government, and its members had little or nothing to do.

The Constitution and four of ten amendments in the Bill of Rights had shorn the federal judiciary of power and left the Supreme Court a relatively impotent appellate court, with almost no original jurisdiction. By 1800, when Adams searched for a new Chief Justice, the Court had issued only eleven decisions during the federal government’s 11-year existence—one a year. There simply weren’t enough federal laws on the books to provoke much legal activity, and most Americans were more intent on plowing land than filing lawsuits.

Secretary of State John Marshall was in the President’s office when Jay’s letter of refusal arrived. Like Adams and Jay, the 45-year-old Marshall was a fervent Federalist intent on thwarting the radical changes in government that the anti-Federalist President-elect Thomas Jefferson was planning. In effect, Jefferson sought nothing less than a populist revolution, shifting power from the federal government to the states and extending the vote—then limited to property owners of means—to all white adult males. With Jefferson’s followers a majority in Congress, nothing stood in Jefferson’s way but the judiciary, and Marshall urged Adams to appoint as many Federalist judges as possible to frustrate Jefferson’s schemes.

Born and raised in Virginia, Marshall had fought heroically in the Revolutionary War—at Trenton, Brandywine, and Monmouth—and shivered through the bitter winter at Valley Forge. After the war, he studied law, became one of Richmond’s most prominent lawyers, and a fervent champion of constitutional ratification. He won election to Congress in the Federalist sweep that lifted Vice President Adams to the presidency in 1796, and Adams sent him to Paris to help negotiate an end to the Franco-American naval conflict then raging in the Caribbean. Marshall’s tough negotiating skills earned him a hero’s welcome on returning to America—and appointment as Secretary of State, then the second most important federal post.

“When I waited on the President with Mr. Jay’s letter declining the appointment,” Marshall recalled, “the President asked thoughtfully, ‘Whom shall I nominate now?’

“I replied that I could not tell.

“After a moment’s hesitation, he said, ‘I believe I must nominate you.’

“I had never before heard myself named for the office and had not even thought of it. I was pleased as well as surprised and bowed in silence. Next day I was nominated.”

From the first, Marshall saw the High Court as a bulwark against executive and legislative tyranny, with the Constitution as the Court’s primary weapon.

“He hit the Constitution much as the Lord hit the chaos, at a time when everything needed creating,” legal scholar John Paul Frank said of Marshall. “Only a first-class creative genius could have risen so magnificently to the opportunity of the hour.”

Like Moses, Marshall climbed the Mount and thundered commandments to those in government, asserting what “thou shall” and “shall not” do. Both Presidents Washington and Adams had violated constitutional restrictions on their power—each initiating wars without congressional authorization and, in Washington’s case, borrowing funds to finance government operations.

Jefferson planned even more radical usurpations of power—the replacement of judges appointed for life with anti-Federalist jurists who supported the Jefferson political program. Jefferson’s first victim was William Marbury, one of President Adams’s last-minute judicial appointees. When Marbury demanded that Secretary of State James Madison deliver the commission Adams had signed, Jefferson ordered Madison to withhold it while he found an anti-Federalist replacement.

Incensed, Marbury asked the Supreme Court for a court order, or writ of mandamus, to force Madison to give Marbury his commission. In 1803, John Marshall stunned Jefferson and the nation by declaring the President and Secretary of State in violation of the law. Under British rule, the king could do no wrong, Marshall conceded, but under the American Constitution, the President remained a citizen like every other American—subject to the law like every other American. A President of the United States had appointed Marbury to the bench and signed his commission, and a Secretary of State had embossed it with the Seal of the United States. For Jefferson and Madison to withhold the commission was a crime.

In a second, even more consequential ruling, however, Marshall refused to issue Marbury his writ, explaining that the Supreme Court was an appellate court, not a court of original jurisdiction, and that Marbury should have applied first to lower courts for the writ. Marbury cited a provision of the Judiciary Act of 1789 that specifically allowed plaintiffs to bypass lower courts in seeking writs of mandamus. As onlookers gasped, Marshall then declared the provision unconstitutional.

It was a declaration of historic proportions: For the first time in American history, the Supreme Court had exercised the power of judicial review and declared a federal law unconstitutional. Unmentioned in the Constitution, judicial review was John Marshall’s creation, asserting Supreme Court power to declare any law—federal, state, or local—unconstitutional.

Marbury v. Madison was one of nearly 1,200 decisions Marshall’s court would deliver during his thirty-five years as Chief Justice. The longest-serving Chief Justice in American history, he wrote nearly half the decisions himself, effectively appending them to the Constitution to form “the supreme law of the land” as a bulwark against tyranny by ambitious executives and legislators.

John Adams called his appointment of John Marshall as Chief Justice “the proudest act of my life.”

Harlow Giles Unger is the author of more than twenty books on the Founding Fathers and early American history. His latest book is “John Marshall: The Chief Justice Who Saved the Nation,” just published by Da Capo Press, a member of the Perseus Books Group.

Read Full Post »

Even George Washington couldn’t get along with the Senate
By Jonathan Zimmerman

Los Angeles Times, November 8, 2014

A sign for a private area for “Senators only” inside the Capitol in Washington, D.C. Credit: Saul Loeb / AFP/Getty Images

Will President Obama’s relations with the Senate change, now that Democrats have lost control of it? Probably not. And that’s because he didn’t have much of a relationship with it in the first place.

Neither did most of our previous presidents, even when the Senate was in their own party’s hands. Tension between the chief executive and the upper body of Congress is baked into our national DNA. And elections don’t seem to affect it all that much.

Before the nation’s first president took office, the Senate voted to bestow upon George Washington the title of “His Majesty, the President of the United States of America, and the Protector of the Same.” But Washington’s relationship with the Senate cooled just a few months later, when he visited the body to request its approval of a commission to negotiate land treaties with Native Americans.

Senators asked for time to consider the proposal, but Washington wanted their consent on the spot. He departed in a huff, leaving bad feelings on both sides. “I cannot be mistaken,” one senator wrote in his journal. “The President wishes to tread on the necks of the Senate.”

The new Constitution gave the Senate power to approve federal appointments, not just treaties. When the Senate rejected his nominee for a naval post in Georgia, Washington personally went to the body to ask why. One senator replied that its deliberations were secret, and they were none of the president’s business anyhow. After that, Washington resolved never to visit the Senate again.

Similar acrimony arose between 19th century presidents and the Senate, even when the president (like our current chief executive) had served in the body himself. After nine-year Senate veteran John Tyler became the country’s first unelected president, replacing the deceased William Henry Harrison, one senator proposed that Tyler be addressed as “The Vice President, on whom, by the death of the late President, the powers and duties of the office of President have devolved.” The Senate went on to reject four of Tyler’s Cabinet nominees and four of his appointments to the Supreme Court.

Nor did it matter that Tyler’s own party, the Whigs, controlled the Senate. Two decades later, as the Civil War raged, not a single member of the GOP-dominated Senate supported Abraham Lincoln’s 1864 reelection bid. Lincoln was locked in a battle over postwar Reconstruction with his fellow Republicans, many of whom believed that his assassination would pave their way to victory. “By the gods,” GOP Sen. Ben Wade told Lincoln’s vice president, Andrew Johnson, after Johnson assumed the presidency, “there will be no trouble now in running the government!”

But there was, of course, into the next century and beyond. Upon ascending to the White House in 1901, Theodore Roosevelt clashed with his GOP Senate colleagues over his plans for banking regulation, the construction of the Panama Canal and more. Privately, Roosevelt called one Republican senator “a well-meaning, pin-headed, anarchistic crank, of hirsute and slab-sided aspect.” As one of Roosevelt’s friends wrote, the president had “as much respect for the Senate as a dog has for a marriage license.” And the Senate returned the feelings, of course.

Woodrow Wilson got his taste of the Senate’s wrath after World War I, when it rejected his plea to join the League of Nations. “The senators of the United States have no use for their heads,” a bitter Wilson declared, “except to serve as a knot to keep their bodies from unraveling.”

And so it continued, from Franklin D. Roosevelt’s tangle with the Senate over his court-packing bill through Richard Nixon’s battle over White House tapes and Bill Clinton’s impeachment. During FDR’s failed bid to add justices to the Supreme Court, one of his Democratic foes in the Senate said the president was his own worst enemy. “Not as long as I am alive,” another Democratic senator quipped.

Unlike FDR, Obama will now have to deal with a GOP-led Senate. But it’s hard to imagine that Obama’s relationship with the body could get any chillier than it was when his party controlled it. Twelve Democratic senators were invited to the White House on St. Patrick’s Day, and exactly one showed up.

From the very start, the Senate has tried to show up the president — and vice versa. And that’s unlikely to change, no matter which party is in charge.

Jonathan Zimmerman is a professor of history and education at New York University. He is the author of the forthcoming «Too Hot to Handle: A Global History of Sex Education».

Read Full Post »

Why Did the British Try to Burn Down the White House?

Peter Snow

HNN   November 9, 2014

When I speak to British audiences about my latest book – «When Britain Burned the White House» – all but a very few express their astonishment that it ever happened. But that’s just what a small British force did to the American President’s mansion in Washington nearly 200 years ago. What’s more, hoping that their army would beat the British, President Madison and his wife had ordered a slap-up meal prepared for forty guests. Instead they found themselves fleeing for their lives.

When the British invaders in their bloodstained uniforms burst into the White House, they found the table elegantly laid for dinner, meat roasting on spits and the President’s favorite wine on the sideboard. They tucked in with delight. One young officer said the President’s Madeira tasted “like nectar to the palates of the gods.” Afterwards he dashed up to Madison’s bedroom and swopped his sweaty tunic for one of the President’s neatly ironed shirts. One of his comrades bundled up the silver White House cutlery in the tablecloth. The British commander then calmly told his men to pile the chairs on the tables and torch the building. Before the night was done they also burned both houses of Congress, the War Office, the State Department and the Treasury.

It’s the only time in US history – apart from the 9/11 terrorist attack – that outsiders have attacked the US capital. Just like President George W. Bush, who was airlifted to safety in 2001, James Madison, America’s fourth president, and his wife, Dolley, were fugitives in their own country.

The attack on Washington was one of the most audacious military enterprises ever and the single most destructive act in the almost forgotten war of 1812. So what drove Britain to do it?

In 1812 the United States, a nation only some thirty years of age and poorly equipped militarily, declared war on Britain, the most powerful country in the world. It was a hot-tempered reaction to Britain’s interference with American trade with France and to the British navy’s arrogant, forcible seizure of American sailors to man British ships. Britain for its part was fighting a war of survival with the French Emperor Napoleon. And when the US invaded British Canada and attacked Royal Navy ships on the high seas, London seethed with resentment. Enough troops were found to defend Canada, but the war effort in Europe precluded further action against the US for the first year.

Then in 1813 a fiery admiral, George Cockburn, was dispatched with a squadron of Royal Navy vessels to cause what damage he could in Chesapeake Bay. He made himself deeply unpopular with a whole series of depredations in the small towns around the bay, but he was unable to inflict damage painful enough to make America seek peace. It wasn’t until the summer of 1814, when Napoleon abdicated after his defeat by British forces in Spain and Portugal and by Russian, Austrian and Prussian forces in central Europe, that the British felt free to give the Americans what the British government called “a good drubbing.”

A force of some 4,500 grizzled British veterans arrived on the coast of Maryland in August 1814. Cockburn was delighted and immediately suggested an attack on Washington. It would massively humiliate the American government, take the pressure off the hard-pressed forces defending British Canada and avenge the American burning of the parliament in York (modern Toronto) a year earlier. What the British wanted was to see the American administration on its knees begging for peace. But that’s not quite the way it went.

There was indeed massive humiliation. James and Dolley Madison found themselves the most unpopular couple in the US in the aftermath of the burning of Washington. Mrs Madison, up to then the most popular first lady in America’s short history, found herself thrown out of a house in Virginia where she and her husband sought shelter. The words of the owner – “Damn you Mrs Madison, if that’s you, get out of my house” – rang in her ears for years afterwards. It was one of the most shameful episodes in American history, and James Madison was largely responsible for it. It had been his own incompetent appointees, War Secretary John Armstrong and army commander William Winder, who had lost the battle for Washington. On 24 August 1814 the British came close to achieving their objective, which was to hurt the Americans so much that the war, which the British saw as a tiresome sideshow, would be brought to a quick end.

But America’s honor and her will to fight on were saved by another commander in another city. Baltimore’s General Sam Smith made a pledge to his citizens: “What happened in Washington will not happen here.” He guessed, rightly, that Baltimore would be the task force’s next target. His prompt action and the courage of the men who defended Fort McHenry at the entrance to the city’s inner harbor restored America’s confidence and pride and rescued the President from utter disgrace. A young lawyer and poet, Francis Scott Key, saw to his astonishment and delight that it was not the Union Jack but America’s star-spangled banner that flew over the fort at the end of the British bombardment. And the poem he wrote about the red glare of the British rockets that failed to destroy Baltimore’s defenses would one day become America’s national anthem.

The war between Britain and the United States lasted only another four months, with no real gains for either side. Both soon saw the endless tit-for-tat bloodshed and destruction as futile, and the peace that followed laid the basis for the special relationship between Britain and America that has lasted ever since. The two English-speaking powers that had been bitter enemies became the closest of friends and never fought each other again.

Peter Snow is a British journalist, author, and broadcaster. He was ITN’s diplomatic and defence correspondent from 1966 to 1979 and presented Newsnight from 1980 to 1997. He is the author of the recently published book, «When Britain Burned the White House».

Read Full Post »

Why Americans Have Been Duped over the Use of the Atomic Bomb

Paul Ham

HNN   November 9, 2014

One day somebody in high office in Washington will have the intellectual honesty to acknowledge, if not apologise for, a grotesque distortion of the truth that the Truman Administration visited on the American people in the pages of Harper’s Magazine in 1947.

In an article bearing the name of Henry Stimson, the then octogenarian former War Secretary, and written by Truman fixers, the American government invented the notion that the atomic bombs dropped on Hiroshima and Nagasaki were ‘our least abhorrent choice’, avoided a land invasion of Japan and saved hundreds of thousands of American lives (a figure the media rounded off to ‘a million’ soon after publication).

This line of thinking has since insinuated itself into the public consciousness as the official version of the history of the nuclear destruction of two cities, in which 100,000 people, mostly civilians, were killed instantly and hundreds of thousands have since succumbed to cancers linked to radiation poisoning.

Yet, the Harper’s defense of the bomb was a gross political deception. It recast the story of the use of the weapon in soothing phrases the American public wanted to hear, and which have, for 70 years, been accepted as the atomic gospel, or, as historians like to say, the orthodox version of history (as distinct from revisionist versions that post-date Stimson’s original deception).

In fact, the Harper’s article was itself the first revision of history. It has since been replayed in thousands of news articles, history texts and online commentary, by a thoroughly gulled media and mainstream America, who have gorged themselves on this Hollywood ending to the war, the atomic slam dunk that avenged Pearl Harbor.

They all repeat, more or less, the Truman Administration’s original lie: that the atomic bombs forced Japan to surrender unconditionally, ended the war and saved hundreds of thousands if not a million American lives. So entrenched is this line of thinking in America that any deviation is branded ‘revisionist’ and hence inadmissible, perversely ignoring the fact that the ‘orthodox’ line grotesquely revises the facts and is the original travesty of the truth.

To demonstrate how far that travesty plays out, we need to compare the actual narrative of the last days of the war with Stimson’s 1947 reconstruction. In so doing, we do not expect to change the minds of the present and older generation, who will admit no deviation from their line on the bomb whatever evidence is thrown in their path. We hope merely to enlighten younger and/or future generations of Americans who are less susceptible to the lies of politicians and the compassionless hatred of the post-war generation.

In 1947, President Truman and members of his administration were concerned at the cumulative voices of churches, scientists, a few prominent journalists and the embryonic anti-nuclear movement, who felt they had been misled over what actually happened to the people of Hiroshima on 6th August and Nagasaki on 9th August, 1945, and who were alarmed by the evidence of radiation sickness in the cities’ populations two years after the end of the war (cases of lymphoma linked to the bombs would peak in the early 1950s).

The Truman administration, on the suggestion of James B. Conant, a Harvard professor who had been closely involved in the bomb’s development, decided to try to quell these concerns by commissioning a long article, in Stimson’s redoubtable name, sourced to a memorandum from his assistant, Harvey Bundy, and written largely by Harvey’s precociously clever son, McGeorge. General Leslie Groves, the head of the Manhattan Project (which built the bomb) and several senior officials edited the draft. The article, “The Decision to Use the Atomic Bomb,” first appeared in the February 1947 issue of Harper’s, was reprinted in major newspapers and magazines, and aired on mainstream radio. It purported to be a straight statement of the facts, and quickly gained legitimacy as the official, ‘orthodox’ case for the weapon.

The Harper’s article (and a parallel piece in the Atlantic Monthly by Karl Compton) introduced the American public to the tendentious idea that the atomic bomb ‘saved’ hundreds of thousands (perhaps ‘several millions’, Compton claimed) of American lives by preventing an invasion of Japan. The article’s central plank was that America had had no choice other than to use the weapon. There was no way to force the Japanese to surrender other than to drop atomic bombs on them. By this argument, the atomic bombings were not only a patriotic duty but also a moral expedient:

“In the light of the alternatives which, upon a fair estimate, were open to us,” Stimson/Truman wrote, “I believe that no man, in our position and subject to our responsibilities, holding in his hands a weapon of such possibilities for accomplishing this purpose and saving those lives, could have failed to use it and afterwards looked his countrymen in the face. The decision to use the atomic bomb brought death to over a hundred thousand Japanese. No explanation can change that fact and I do not wish to gloss over it. But this deliberate, premeditated destruction was our least abhorrent choice. The destruction of Hiroshima and Nagasaki put an end to the Japanese war. It stopped the fire raids, and the strangling blockade; it ended the ghastly specter of the clash of great land armies.”

Editors and the public warmly approved: here, they felt, was an honest justification for this horrific weapon: the A-Bomb did good, in the end. The Harper’s article put the American mind at ease, slipped into national folklore, and the Stimsonian spell appeared to tranquillize the nation’s critical faculties on the subject.

Yet the article’s case for the use of the weapon was profoundly flawed. Most erroneously, it argued that a land invasion of Japan and the atomic weapons were mutually exclusive – a case of either-or. This nexus was made up after the war. In 1945, it was never a case of “either the bomb or the invasion”; the question did not arise. The facts show that in early July 1945, about two weeks before the bomb was tested, Truman and senior military advisers abandoned plans to invade Japan. The success of the atomic test had no bearing on this decision. In fact, Truman had already decided that it made no sense to risk American lives invading a nation that was already comprehensively defeated – ringed by the US Navy’s blockade, short of supplies and raw materials, and flattened daily by General Curtis LeMay’s conventional firebombing raids, which had already burnt down 66 Japanese cities (including the air strike on Tokyo on 9-10 March 1945, which incinerated more than 100,000 civilians in a single night and is today remembered as the single most deadly bombing raid in history).

Basic errors of fact and sins of omission compounded this monstrous deception. The article was plain wrong, for example, to claim that the ‘direct military use’ of the bomb had destroyed ‘active working parts of the Japanese war effort’. This was post-facto propaganda. Nobody on the powerful Target Committee (set up in early 1945 to decide which cities to target with the first nuclear weapons) pretended that Hiroshima was a military target of any significance: its barracks were barely functioning in 1945 and more than 90 per cent of Hiroshima’s war-related factories were on the city’s periphery. Hiroshima was shortlisted for nuclear destruction for very different reasons: in mid-1945 it remained a pristine city, full of ‘working men’s homes’ as yet undisturbed by LeMay’s conventional air raids. Its annihilation would thus show off the weapon’s destructive force, and supposedly ‘shock Japan into submission’. The Harper’s article made no mention of this, peddling the notion that ‘workers’ homes’ could somehow be construed as legitimate military targets.

As to Stimson’s claim that America used the bomb reluctantly – ‘our least abhorrent choice’ – suggesting that Washington and the Pentagon had wrestled painfully with alternatives, the facts demonstrate precisely the opposite. Everyone involved in making the bomb wanted, indeed hoped, to use the weapon as soon as possible, and gave no serious consideration to any other course of action. The Target and Interim Committees (the latter set up to examine the control of nuclear weapons after the war) swiftly dispensed with alternatives – for example, a warning, a demonstration, or attacking a genuine military target. In fact, Secretary of State James Byrnes rejected most of these possibilities in a few minutes over lunch in the Pentagon. No doubt they were fraught with risks, and possibly unworkable, but if Truman was serious about considering alternatives to the bomb, he might have more closely examined them.

Byrnes argued that a prior demonstration of the bomb would imperil the lives of Allied POWs whom the Japanese would move to the target area (the US Air Force had shown no such restraint during the conventional air war, which daily endangered POWs); that a demonstration might be a dud (unlikely, given the successful test of the plutonium weapon near Alamogordo, and the fact that Manhattan scientists saw no need even to test the gun-type uranium bomb used on Hiroshima); that they had only two bombs and so had to use them (untrue – at least three were prepared for August, and several more were in line for September through November); and that there were no military targets big enough to contain the bomb (Truk Naval Base was considered and rejected; no other military target was seriously examined except Kokura, a city containing a large arsenal, and the attempt to bomb it was abandoned due to bad weather, with Bockscar, the delivery plane, dropping the weapon on Nagasaki instead). In short, the use of the bomb was an active choice, a desirable outcome, not the regrettable or painful last resort Truman insisted it was. Every high office-holder believed in, and supported, its use at the time: ‘I never had any doubt it should be used,’ Truman said on many occasions. The Harper’s phrase ‘our least abhorrent choice’ thus grossly misrepresents a gung-ho, diabolically zealous enterprise.

Stimson’s least persuasive claim was that the atomic bombs prevented hundreds of thousands of American casualties (dead, wounded and missing). This number has since been rounded up to 1 million or ‘millions’, and has become a particularly stubborn zombie. Yet a schoolchild’s arithmetic is enough to kill it: in 1945, the number of American (and allied) combat troops earmarked for the planned (but never approved) invasion of Japan numbered about 750,000 – well short of a million, of course. Yet for the sake of argument, let’s accept the post-war consensus of a million casualties. If true, that means every American soldier would have been killed, wounded or listed as missing during the land invasion of Japan. The notion is absurd, of course, and hardly reflects well on the fighting ability of the US armed forces, who would have confronted a hungry and demoralized nation whose air force and navy had been destroyed, and whose skies were totally controlled by American bombers and fighters. Yes, Japan retained substantial ground forces, as well as the fierce loyalty of its people, but they were undersupplied, ill-equipped and lacked artillery and air cover: sitting ducks, in other words, to US strafing raids.

In truth, the actual estimate of likely casualties in a land invasion, drawn up by the Joint Chiefs in a meeting with Truman in July 1945, was 31,000. The count was later lifted to between 60,000 and 90,000 – nowhere near the post-war estimate of up to one million, which can now be seen for what it was: a post-facto justification for the bomb, conjured by Washington out of thin air to ease America’s troubled conscience.

The Harper’s article also claimed, wrongly, that the atomic bombs had forced Japan to ‘unconditional surrender’. While the bombs obviously contributed to Japan’s general sense of defeat, not a shred of evidence supports the contention that the Japanese leadership surrendered in direct response to the atomic attacks. On the contrary, when they heard of the annihilation of Hiroshima and Nagasaki, Japan’s hardline militarists shrugged off the news – that a ‘special bomb’ had destroyed two more cities – and vowed to continue fighting.

If you disbelieve this, read the minutes of the epic meetings of the samurai leadership in August 1945. The ‘Big Six’, the ministers who ran Japan from a bunker beneath Tokyo at the time, barely acknowledged Nagasaki’s destruction when a messenger arrived with the news on 9th August. The messenger, who had interrupted their meeting – convened to discuss Russia’s invasion of Japanese-occupied territory the day before – was abruptly dismissed. The loss of another city of civilians was hardly of interest. In fact, state propaganda responded to Hiroshima and Nagasaki by girding the nation for a continuing war – against a nuclear-armed America.

Nor would a nuclear-battered Japan consider modifying its terms of ‘conditional surrender’. The Big Six clung stubbornly to their last condition – the retention of the Emperor – to the bitter end. A regime that cared so little for its people except insofar as they served as cannon fodder in a last, miserable act of national seppuku; a nation so fearful of the Soviet Union that it sent message after message to Moscow imploring it to intervene and start peace negotiations (on Japan’s terms, of course, which Truman rightly rejected); a people so steadfast in their refusal to yield that they were preparing to defend their cities against further atomic bombs – this was not a country easily ‘shocked into submission’ by the sight of a mushroom cloud in the sky (and it is worth remembering that, the day after, Tokyo had no film or photographs of the bomb; only US pamphlets and military reports claiming it had been used).

A greater threat – in Tokyo’s eyes – than nuclear weapons drove Japan finally to contemplate a (conditional) surrender: the regime’s suffocating fear of Russia. The Soviet invasion on 8 August crushed the Kwantung Army’s frontline units within days, and sent a crippling loss of confidence across Tokyo. The Japanese warlords despaired; Russia, their erstwhile ‘neutral’ partner, had turned into their worst nightmare: the invasion invoked the spectre of a Communist Japan, no less. Russia matched iron with iron, battalion with battalion. This was a war that Tokyo’s samurai leaders understood, a clash they respected – in stark contrast to America’s incendiary and atomic raids, which they saw as cowardly attacks on defenceless civilians.

In the end, Japan surrendered conditionally, on 14th August, after Washington had agreed to Tokyo’s final terms: that Emperor Hirohito would be allowed to live, and the Imperial Dynasty, to continue. This condition the US government effectively met in the Byrnes Note, sent on 11th August, two days after Nagasaki’s destruction. In sum, the atomic bombs had had no direct influence on Tokyo’s decision, despite Hirohito citing the ‘cruel weapon’ in his surrender speech (one of the more grotesque pieces of propaganda in this sorry episode).

In the end, what are we left with? America used the bomb, without warning, in an attempt to extract ‘unconditional surrender’ from a defeated foe, to ‘manage’ (i.e. draw a line against) Russian aggression in Europe and Asia, and to avenge Pearl Harbor, as Truman and Byrnes later said. The bomb achieved none of those goals (unless the neutron saturation of two cities is accepted as proportionate punishment for Pearl Harbor). In fact, Tokyo surrendered with its sole condition intact; and Russia, unperturbed by the first use of atomic weapons in anger, continued to stamp and snort and foment communist revolution around the world, before rushing to join the nuclear arms race.

In short, the Truman administration’s attempt, in Harper’s magazine, to justify the destruction of Hiroshima and Nagasaki has no basis in fact; it was merely a post-facto piece of propaganda. Yet it has been accepted as ‘orthodox’ history. Let us call it by its correct name: a ‘revision’ of the truth, which is a polite way of saying it was a pack of lies. This article asks the reader to reconsider the source of those lies. No twisted logic, ethical somersault or infantile cry of ‘they started it’ can justify the massacre of innocent civilians. We debase ourselves, and the history of civilisation, if we accept that Japanese atrocities warranted an American atrocity in reply.

Paul Ham is an Australian historian who specialises in the 20th century history of war, politics and diplomacy. His latest book is «Hiroshima Nagasaki» (Thomas Dunne, 2014).

Read Full Post »
