
Tuesday, September 22, 2009

The Beautiful People Myth

We're always told that a long time ago there lived beautiful people co-existing with nature in balanced eco-harmony, taking only what they needed, and giving back to Mother Earth what was left. Wonderful, caring stewards of the environment. No wars and few conflicts. The people were happy, living long and prosperous lives. Hunting, fishing, gathering. But then came the evil European White Males carrying the disease of imperialism, industrialism, capitalism, scientism, greed, carelessness, and short-term thinking. The environment became exploited, the rivers soiled, the air polluted, and the beautiful people were driven from their land. This is the Beautiful People Myth.

THE BEAUTIFUL PEOPLE MYTH
Why The Grass is Always Greener in the Other Century
by Michael Shermer

Long, long ago, in a century far, far away, there lived beautiful people co-existing with nature in balanced eco-harmony, taking only what they needed, and giving back to Mother Earth what was left. Women and men lived in egalitarian accord and there were no wars and few conflicts. The people were happy, living long and prosperous lives. The men were handsome and muscular, well-coordinated in their hunting expeditions as they successfully brought home the main meals for the family. The tanned barebreasted women carried a child in one arm and picked nuts and berries to supplement the hunt. Children frolicked in the nearby stream, dreaming of the day when they too would grow up to fulfill their destiny as beautiful people.

But then came the evil empire—European White Males carrying the disease of imperialism, industrialism, capitalism, scientism, and the other "isms" brought about by human greed, carelessness, and short-term thinking. The environment became exploited, the rivers soiled, the air polluted, and the beautiful people were driven from their land, forced to become slaves, or simply killed.

This tragedy, however, can be reversed if we just go back to living off the land where everyone would grow just enough food for themselves and use only enough to survive. We would then all love one another, as well as our caretaker Mother Earth, just as they did long, long ago, in a century far, far away.

Environmental Mythmaking

There are actually several myths packed into this fairy tale, proffered by no one in particular but compiled from many sources as mythmaking (in the literary sense) for our time. This genre of mythmaking, in fact, tucks nicely into the larger framework of golden age fantasies, and has a long and honorable history. The Greeks believed they lived in the Age of Iron, but before them there was the Age of Gold. Jews and Christians, of course, both believe in the golden age before the fall in the Garden. Medieval scholars looked back longingly to the biblical days of Moses and the prophets, while Renaissance scholars pursued a rebirth of classical learning, coming around full circle to the Greeks. Even Newt Gingrich offered his own version of the myth when he told the Boston Globe on May 20, 1995, that there were "long periods of American history where people didn't get raped, people didn't get murdered, people weren't mugged routinely."

I first encountered what I call the Beautiful People Myth (BPM) in a graduate course co-taught by an anthropologist and a historian in the late 1980s when both fields were being "deconstructed" by literary critics and social theorists. Anticipating the kind of anthropology done in the 1970s when I last studied the science—the customs, rituals, and beliefs of indigenous preindustrial peoples around the world—I was shocked, and soon dismayed, to find myself bogged down in such books as Michael Taussig's The Devil and Commodity Fetishism in South America (1980), with such chapters as "Fetishism and the Dialectical Deconstruction" and "The Devil and the Cosmogenesis of Capitalism." I couldn't figure out what was going on until the anthropologist announced his was a Marxist interpretation of history that sees the past in terms of class conflict and economic exploitation (the Beautiful People lived before capitalism). Taussig's anthropology of indigenous peoples of South America proclaims (229):
Marx's work strategically opposes the objectivist categories and culturally naive self-acceptance of the reified world that capitalism creates, a world in which economic goods known as commodities are, indeed, objects that appear not merely as things in themselves but as determinants of the reciprocating human relations that form them. Read this way the commodity labor-time and value itself become not merely historically relative categories but social constructions (and deceptions) of reality. The critique of political economy demands the deconstruction of that reality and the critique of that deception.
This is as clear as the waters of the Rio Negro to me. I gleaned from my professors' commentary on the book (for I simply could not get enough of it) that indigenous peoples lived in relative harmony with their environment until you-know-who came along. Fortunately the class was provided with some readable material that brought balance to the discussion, such as William Cronon's Changes in the Land: Indians, Colonists, and the Ecology of New England (1983). Cronon restates the Beautiful People Myth and explains why we must resist its temptation (12-13):
It is tempting to believe that when the Europeans arrived in the New World they confronted Virgin Land, the Forest Primeval, a wilderness which had existed for eons uninfluenced by human hands. Nothing could be further from the truth . . . . Indians had lived on the continent for thousands of years, and had to a significant extent modified its environment to their purposes . . . . The choice is not between two landscapes, one with and one without a human influence; it is between two human ways of living, two ways of belonging to an ecosystem . . . . All human groups consciously change their environments to some extent—one might even argue that this, in combination with language, is the crucial trait distinguishing people from other animals—and the best measure of a culture's ecological stability may well be how successfully its environmental changes maintain its ability to reproduce itself.
In the early 1990s when I was team-teaching courses in cultural studies I encountered two more versions of the BPM, both of which push the blame further back in time and put it on other entities. In Carolyn Merchant's The Death of Nature: Women, Ecology and the Scientific Revolution (1980), the author points her finger at science when, "between the sixteenth and seventeenth centuries the image of an organic cosmos with a living female earth at its center gave way to a mechanistic world view in which nature was reconstructed as dead and passive, to be dominated and controlled by humans" (xvi). The pre-scientific organic model of nature, says Merchant, was that of a nurturing mother, "a kindly beneficent female who provided for the needs of mankind in an ordered, planned universe" (2). Then the Dead White European Males (DWEMs) destroyed this organism, and with it egalitarianism. Hierarchy, patriarchy, commercialism, imperialism, exploitation, and environmental degradation soon followed. To prevent doom, Merchant concludes, we will have to adopt a new lifestyle: "Decentralization, nonhierarchical forms of organization, recycling of wastes, simpler living styles involving less-polluting 'soft' technologies, and labor-intensive rather than capital-intensive economic methods are possibilities only beginning to be explored" (295).

In Riane Eisler's The Chalice and the Blade (1987), the author goes back 13,000 years to find another bogeyman. Instead of DWEMs, perhaps they should be called DMOACs: Dead Males of All Colors. Before the DMOACs there was "a long period of peace and prosperity when our social, technological, and cultural evolution moved upward: many thousands of years when all the basic technologies on which civilization is built were developed in societies that were not male dominant, violent and hierarchic" (xvi). As Paleolithic hunting, gathering, and fishing gave way to Neolithic farming, this "partnership model" of equality between the sexes gave way to the "dominator model," and with it came wars, exploitation, slavery, and the like. The solution, says Eisler, is to return to the equalitarian partnership model where "not only will material wealth be shared more equitably, but this will also be an economic order in which amassing more and more property as a means of protecting oneself from, as well as controlling, others will be seen for what it is: a form of sickness or aberration" (201). The Beautiful People Myth lives. Why?

The grass seems greener in the other century for the same reason it does on the other side: the very human tendency to want what we don't have (hope springs eternal in the human breast), reinforced by a distorted and unfair comparison of the realities (warts and all) of what we do have. The BPM is just one manifestation of the greener-grass psychology, but it has special appeal to us because of a conjunction of two historical circumstances: (1) we know more about our past than anyone in history and, with the aid of books, films, and television, can envision that past—or at least a fantasy of it—as never before; (2) this normal historical fantasizing is exaggerated by the realities of modern overpopulation and environmental pollution. In other words, pressures on the environment are higher now than they were in the past, but that past was no idyllic Eden.

Debunking the Beautiful People Myth

While political demagogues and apologists for certain business interests may want to debunk the BPM as a way of cutting off discussion of the environmental pressures of modern industrial society and its impact on indigenous cultures and peoples, that is definitely not my purpose. Rather, I want to examine the anthropological and historical evidence that disproves the BPM and then show how holding to the myth in the face of contrary evidence actually stands in the way of our effectively solving environmental and societal problems.

In a fascinating 1996 study, University of Michigan ecologist Bobbi Low used Murdock and White's 1969 Standard Cross-Cultural Sample to test empirically the proposition that we can solve our ecological problems by returning to the BPM attitudes of reverence for (rather than exploitation of) the natural world, and opting for long-term group-oriented values (rather than short-term individual values). Her analysis of 186 Hunting-Fishing-Gathering (HFG) societies around the world showed that their use of the environment is driven by ecological constraints and not by attitudes (such as sacred prohibitions), and that their relatively low environmental impact is the result of low population density, inefficient technology, and the lack of profitable markets, not of conscious efforts at conservation. She also showed that in 32% of HFG societies, not only was conservation not practiced, but environmental degradation was severe.

University of Illinois anthropologist Lawrence Keeley's new book, War Before Civilization: The Myth of the Peaceful Savage (1996), examines one element of the BPM—that prehistoric warfare was rare, harmless, and little more than ritualized sport. Surveying primitive and civilized societies, he demonstrates that prehistoric war was at least as frequent (as measured in years at war versus peace), as deadly (as measured by percentage of conflict deaths), and as ruthless (as measured by killing and maiming of noncombatant women and children), as modern war. One prehistoric mass grave in South Dakota, for example, yielded the remains of 500 scalped and mutilated men, women, and children—a full 50 years before Columbus ever left port.

UCLA anthropologist Robert Edgerton, in his book Sick Societies: Challenging the Myth of Primitive Harmony (1992), surveys the anthropological record and finds clear evidence of drug addiction, abuse of women and children, bodily mutilation, economic exploitation of the group by political leaders, suicide, and mental illness in indigenous preindustrial peoples.

Richard Wrangham, in his book Demonic Males: Apes and the Origins of Human Violence (co-authored by Dale Peterson, 1996), traces the origins of patriarchy and violence across cultures and through history all the way back to our hominid origins long before the Neolithic Revolution.

In other words, centuries before and continents away from modern economies and technologies, and long before DWEMs, humans dramatically altered their environments. As we shall see below, the Beautiful People turned lush ecosystems into deserts (Southwestern America), caused the extinction of dozens of major species (North America, New Zealand), and even committed mass eco-suicide (Easter Island and possibly Machu Picchu).

The Beautiful People have never existed except in myth. Humans are neither Beautiful People nor Ugly People. Humans are only doing what any species does to survive; but we do it with a twist—instead of our environment shaping us through natural selection, we are shaping our environment through human selection. Since we have been doing this for several million years the solution is not to do less selecting, but higher quality selecting based on the best science and technology available. Demythologizing the BPM is one place to start.

The Eco-Survival Problem

The story begins two to three million years ago when ancient hominids in Olduvai Gorge in eastern Africa began chipping stones into tools. Archaeological evidence reveals an environmental mess of bones of large mammals scattered amongst hundreds of stone tools probably abandoned after use—in other words, our hominid ancestors littered the place. It is not for nothing that the Leakeys (1992) called this hominid Homo habilis—the handy man.

Around one million years ago Homo erectus added controlled fire to our technologies, and between half a million and 100,000 years ago Homo neanderthalensis and Homo heidelbergensis developed throwing-spears with finely crafted spear points, lived in caves, and had elaborate tool kits. It appears that many hominid species lived simultaneously and at this point we can only guess what speciation pressures these changing technologies put on natural selection. Perhaps human selection was already at work on itself (Gould, 1997).

Sometime around 30,000 to 35,000 years before present (BP), Neanderthals went extinct (for reasons hotly debated amongst paleoanthropologists) and Cro-Magnons flourished. By now the tool kits were complex and varied, clothing covered bodies, art adorned caves, bones and wood formed the structure of living abodes, language produced sophisticated communication, and anatomically modern humans began to wrap themselves in a blanket of crude but effective technology. The pace of technological change, along with the human selection and alteration of the environment, took another quantum leap (Tattersall, 1995).

From 35,000 to 13,000 BP humans had spread to nearly every region of the globe and all lived in a condition called HFG: hunting, fishing, gathering. Some were nomadic, while others stayed in one place. Small communities began to form, and with them possessions became valuable, rules of conduct grew more complex, and population numbers crept steadily upward. Then, at the end of the last Ice Age roughly 13,000 years ago, population pressures in numerous places around the globe grew too intense for the HFG lifestyle to support. The result was the Neolithic, or Agricultural Revolution. The simultaneous shift to farming was no accident; nor does it appear to be the invention of a single people from whence it diffused to others. Farming replaced HFG in too many places too far apart for diffusion to account for the change. The domestication of grains and large mammals produced the necessary calories to support the larger populations (Roberts, 1989). In other words, overpopulation triggered another quantum leap in human selection and alteration of the environment.

Now a feedback loop was established—a 13,000-year-long complex interaction between humans and the environment, with the rate of change accelerating dramatically beyond what stone tools, fire, and HFG could ever do. Around the globe peoples of all colors, races, and cultures altered their environments to meet their needs. And these altered environments, in turn, changed how humans survived: some continued destroying their ecosystems, some moved, some went extinct (Crosby, 1986). By the first year of the Common Era (2,000 years ago) the globe was filled with humans living in one of five conditions: (1) complex stratified agriculturalism, (2) simple peasant agriculturalism, (3) nomadic pastoralism, (4) general HFG, and (5) specialized HFG.

From the earliest civilizations at Babylon, Ur, Mesopotamia, and the Indus Valley, to Egypt, Greece, and Rome, all the way to the Early Modern Period, most people lived similar lifestyles: over 90% were farmers. They used the barter system or had crude forms of money. Small numbers of the elite had access to goods and services, but the majority did not (Gellner, 1988).

The problem these Neolithic farmers faced was, at the most basic level, the same one their Paleolithic HFG ancestors faced and, for that matter, the same one we face. I call this the Eco-Survival Problem: since humans need environmental products to survive, how can we meet the needs of our population without destroying our environment and causing our own extinction? In other words, how can human selection continue without selecting ourselves into oblivion?

The Trade Off

One of the problems in shattering the myth of the Beautiful People is that the alternative would seem to imply that civilization was a universally progressive step in cultural evolution. If they weren't the Beautiful People, then we must be. Not necessarily. One of the mysteries for archaeologists and environmental historians to solve is why our ancestors made the shift from HFG to farming. Writers like Jacob Bronowski (1973) see this step as the first great achievement in the "ascent of man." In reality, if judged solely by health and longevity, Paleolithic people were taller, bigger boned, ate better, lived longer, and had more free time than anyone from 13,000 BP to this century. The average height of HFGers at 13,000 BP, for example, was 5'10" for men, 5'6" for women. By 6,000 BP the average had dramatically dropped to 5'3" for men, 5'1" for women. Not until the 20th century did heights again begin to approach these marks, and they still haven't matched them. Studies of modern HFG societies also show that they have much more free time than Neolithic farmers (or any farmers up to the Industrial Revolution). Kalahari bushmen, for example, invest between 12 and 19 hours per week in food gathering and production, with an average daily intake of 2,140 calories and 93 grams of protein, higher than the FDA's Recommended Daily Allowance (Boserup, 1988, 31).

If HFG is so great, why did human groups take up farming? It requires more hours of work per day, it produces dependency on a narrower base of a less dependable food supply, and it generates greater populations through which diseases spread more rapidly (Crosby, 1994). For starters, in many parts of the world the Neolithic revolution was really an evolution. According to Ester Boserup, "It apparently took ancient Mesopotamia over four thousand years to pass from the beginning of food production to intensive, irrigated agriculture, and it took Europe still longer to pass from the introduction of forest fallow to the beginning of annual cropping a few hundred years ago" (29). Still, in the grand historical sweep of the last 100,000 years, something big happened at the start of the Neolithic that demands an explanation.

Archeologist Kent Flannery (1969) concludes, from his digs in 10,000-year-old Mesopotamian village sites, that humans turned to farming not to improve their diets or the stability of their diets (since it did neither), but to increase the carrying capacity of the environment in response to larger populations. Small, local populations had grown large enough that they had exceeded the carrying capacity of their ecosystem and so turned to farming in order to produce enough calories to survive. In his book The Food Crisis in Prehistory: Overpopulation and the Origins of Agriculture (1977), Mark Cohen argues that the planet held as many people as could be supported by Paleolithic technology. Another way to say this is that the Beautiful People overpopulated and exploited their environment and so were forced to turn to technology to save themselves. Or as Alfred Crosby put it so well: "Homo sapiens needed, not for the only time in the history of the species, to become either celibate or clever. Predictably, the species chose the latter course" (1986, 20). The Neolithic evolution was simply a human selection response to an Eco-Survival Problem. Whether the trade-off was worth it or not is irrelevant. We had to take that road or go extinct. We face a similar problem today.

Ecocide

Environmental history is the study of the effects of large-scale natural forces and contingent ecological events on human history, how human actions have altered the environment, and how these two forces interact (Worster, 1988). This is not drum-and-trumpet history—wars and politics, generals and kings—the proximate causes of history. This is the study of the currents and eddies upon which we ride like flotsam and jetsam on the historical sea of change—the ultimate causes of history.

Reconstructing environmental history reveals that the great rise and fall of civilizations, previously credited to "great men" or "class conflicts," was as often as not the result of human environmental destruction. In each of four geographical locales we find a form of ecological suicide—ecocide—where the people living there failed to solve the Eco-Survival Problem. That is, they were unable to meet the needs of their population without destroying their environment and thereby caused their own extinction. These four examples not only demonstrate that the BPM is wrong, but also show what could be in store for us if the global rate of population growth is not checked, and if solutions to environmental problems caused by human selection are not found.

1. New Zealand - If anyone fits the BPM it is the Polynesian peoples as portrayed in films, living in an Eden-like condition of endless summers and timeless love. Yet environmental history paints a different portrait. When Europeans arrived in New Zealand in the 1800s, the only native mammal was the bat. But they found bones and eggshells of large moa birds that were then already extinct. From skeletons and feather remains, we know they were ostrichlike birds of a dozen species, ranging from three feet tall and 40 pounds to 10 feet tall and 500 pounds. Preserved moa gizzards containing pollen and leaves of dozens of plant species give us a clue to the environment of New Zealand, and archaeological digs of Polynesian trash heaps reveal that ecocide was well under way before the DWEMs arrived (Crosby, 1986).

Moas are believed to have evolved their flightless condition in New Zealand over millions of years in a predatorless environment. Their sudden extinction around the time of the arrival of the first Polynesians—Maoris—offers a causal clue. Although many biologists have suggested a change in climate as the cause, or Maori hunting as the last straw for an already drastically changing environment, Jared Diamond (1992) makes the case that when the extinction occurred New Zealand was enjoying the best climate it had seen in a long time. If anything, the preceding Ice Age would be a more logical choice for an extinction trigger. Also, C14-dated bird bones from Maori archaeological sites prove that all known moa species were still present in abundance when the Maoris arrived around 1000 C.E. By 1200 C.E. they were all gone. What happened?

Archaeologists have uncovered Maori sites containing between 100,000 and 500,000 moa skeletons, 10 times the number living at any one time. In other words, they had been slaughtering moas for many generations, until they were all gone (Cassels, 1984). How did they do it so easily? As Darwin and hungry sailors discovered on the Galapagos islands, animals that evolved in an environment with no major predators often have no fear of newly introduced predators, including humans. It would appear that moas were to the Maori what buffalos were to armed American hunters: sitting ducks. In the process the Beautiful Maori People exterminated one of their major resources.

2. Native America - When anatomically modern humans crossed the Bering Strait from Asia into the Americas some 20,000 years ago (estimates vary considerably), they found a land teeming with big mammals: elephant-like mammoths and mastodons, ground sloths weighing up to three tons, one-ton armadillolike glyptodonts, bear-sized beavers, and beefy sabertooth cats, not to mention Native American lions, cheetahs, camels, horses, and many others. They are now all extinct. Why?

Reed (1970) has suggested that these species were unable to adapt during the period of rapid climatic change at the end of the last ice age. But the weather was getting warmer, not colder, meaning that as the glaciers receded there were more niches to fill, not fewer; plus, comparable climatic changes at the termini of previous ice ages resulted in no comparable megafaunal extinctions. Paul Martin and Richard Klein (1984) point to massive archeological "kill" sites where huge numbers of animal bones are found, accompanied by spear points buried in the rib cages of such animals as mammoths, bison, mastodons, tapirs, camels, horses, bears, and others—the obvious remains of multiple species hunted into oblivion. Since mammals adapted to both cold and warm weather went extinct, climate is an unlikely cause. Krantz (1970), on balance, argues for a combination of climate and hunting as the trigger of the megafaunal extinctions, showing how human hunters also took over the niche of the carnivores they hunted, and in the process threatened the niche of such herbivores as the now-extinct American Shasta ground sloth. Either way, the actions of the Beautiful Native American People were the deeper, ultimate cause since without the intervention of these sapient hunters such mass extinctions almost surely never would have occurred.

Archaeologists are also discovering that these indigenous Americans were no less destructive of their botanical resources. When the DWEMs first arrived in the American Southwest they found gigantic multi-story dwellings (pueblos) standing uninhabited in the middle of treeless desert. When I first visited these numerous sites in Arizona, Colorado, and New Mexico, I could not help but wonder how the Anasazi (Navajo for "Old Ones") could have survived in this desolate landscape. Pueblo Bonito in Chaco Canyon, New Mexico, is one of the most striking examples. Here you find a "D"-shaped structure that was originally five stories high, 670 feet long, 315 feet wide, containing no fewer than 650 rooms and supporting thousands of people, all nestled in a dry and treeless desert.

Construction at Pueblo Bonito began around 900 C.E. but occupation terminated a scant two centuries later. Why? Well-meaning tour guides there will tell you that a drought drove the Anasazi out. David Muench (Muench and Pike, 1974) dramatically closes his volume by concluding that the Anasazi "were a people escaping—from precisely what we cannot be sure—and they fanned out across the land like so many gypsies, carrying a few possessions on their backs and the cultural heritage of a thousand years in their heads" (161). We now have a fairly good idea of precisely what they were escaping from: their own ecocide. Archaeologists have calculated that the Anasazi would have needed well over 200,000 16-foot wooden beams to support the roofs of the multi-storied Pueblo Bonito. Paleobotanists Julio Betancourt and Thomas Van Devender (1981) used packrat "middens" in Chaco Canyon to identify the flora of the region before, during, and after the Anasazi occupation. C14 dating of the pollen and remains of plants in the midden reveals that when the Anasazi arrived in Chaco Canyon there was a dense pinyon-juniper woodland, with ponderosa pine forest nearby. This explains where the wood came from for building. As the population grew, the forest was denuded and the environment destroyed, leaving the desert we see today. As they destroyed their environment, the Anasazi built an extensive road system to reach farther for trees—upwards of 50 miles—until there were no more trees to cut. In addition, they built elaborate irrigation systems to channel water into the valley bottoms, but the erosion following the deforestation gouged out the land until the water table was below the level of the Anasazi fields, making irrigation impossible. Then, when the drought did hit, the Anasazi were unable to respond and their civilization collapsed.

3. Machu Picchu - The closest I have ever come to having a mystical experience was a 1986 trip to Machu Picchu, the so-called "Lost City of the Incas" in the Andes Mountains in south central Peru. Nestled in a narrow saddle between two peaks at 9,000 feet altitude and 50 miles northwest of 12,000-foot-high Cuzco (one of the world's highest cities), Machu Picchu is 4.5 hours away by train, after which you take a circuitously steep dirt road up to a tiny plateau hanging on the edge of a cliff. Clouds swirl around the adjacent peaks, and when dusk descends upon the stark ruins and the fog rolls in, you can almost sense the presence of the last of the Incas who escaped the predatory Spanish centuries ago.

(The experience was enhanced by the fact that a terrorist organization, the Shining Path, had started a major prison riot to free their comrades. Many were killed on both sides. The head of the Peruvian military convinced the terrorists to surrender, whereupon he had over 50 of them murdered. In response, the Shining Path blew up the train to Machu Picchu the day after I was on it. In addition, New Agers believed that there was a planetary "harmonic convergence" at that time, so they converged on Machu Picchu, forming circles and chanting New Age mantras. Finally, to my shock I discovered too late that I was not booked at the pleasant Machu Picchu Hotel at the top of the mountain, but rather I was at the corrugated tin-roofed Hotel Machu Picchu down in the Urubamba River Valley, next to the train station. My wife and I were rescued by a school teacher with retinitis pigmentosa who offered her spare bed to us if I would assist her to the top of the treacherously steep Huayna Picchu peak adjacent to Machu Picchu.)

No wonder it took the Yale archeologist Hiram Bingham so long to find the place in 1911. He spent decades attempting to determine if this was the famed 16th-century last citadel of the escaping Inca leaders—Vilcabamba—concluding, over the opposition of most archaeologists, that it was. It appears that it was not. So what was Machu Picchu and what happened to the people who lived there?

What Hiram Bingham (1948) discovered was a five-square-mile city with a temple, citadel, about 100 houses, and agricultural terraces linked by more than 3,000 steps and an elaborate irrigation system carved into granite, for what appears to be an extremely limited system of farming. The Incas had no cattle, horses, domestic pigs, poultry, or sheep, and most of their meat came from small animals such as guinea pigs, rabbits, and pigeons. Llamas were mostly beasts of burden and sources of wool rather than protein. The Incas were, therefore, highly dependent on agriculture. Yet here they had no native wheat or other cereals, no olives, rice, or grapes, and few green vegetables. Maize and potatoes were the primary sources of calories, so we can assume that this is what the Incas grew on these agricultural terraces surrounding Machu Picchu (Hemming, 1970). Strangely, 173 skeletons were found, perhaps 150 of which were women (Hemming says 135 skeletons, 102 females), but it is unlikely that the people of Machu Picchu were the victims of war because of the natural defenses of the geography. The Spanish never knew about Machu Picchu, and archeologist Paul Fejos (1944) believes no defense was necessary as this was most likely a sacred religious city, not a military outpost.

According to Hemming (1981), Machu Picchu was not a last refuge, but an older city that flourished at the height of the Inca empire. If so, and this point remains highly debated, what happened to the people of Machu Picchu? In light of what we have seen happen around the globe, particularly in places of limited agricultural and animal resources, it seems reasonable to consider the possibility that the extremely limited carrying capacity of Machu Picchu was exceeded by population and environmental pressures, and the people there were forced to abandon this magnificent outpost.

4. Easter Island - In 1722 the Dutch explorer Jakob Roggeveen came upon the most isolated hunk of real estate on the planet—an island 2,323 miles west of Chile, 4,000 miles east of New Zealand, and so remote that the nearest island is 1,400 miles away—Pitcairn, the desolate outpost where the Bounty mutineers took refuge. What he found when he arrived on Easter Sunday (hence the name) were hundreds of statues weighing up to 85 tons, some of which stood 37 feet tall. It appeared that these statues had been carved in volcanic quarries, transported several miles, and raised to an upright position without metal, wheels, or even an animal power source. Oddly, many of the statues were still in the quarries, unfinished. The whole scene looks as if the carvers quit in the middle of the job.

How and why did these Polynesian peoples carve and raise these statues and, more importantly, what happened to them? Thor Heyerdahl (1958) was shown by modern islanders that their ancestors used logs as rollers to transport the statues, and then as levers to erect them. Piecing together the history of the Easter Islanders from archeological and botanical remains, it appears that around the time of the fall of Rome—400 C.E.—eastward-migrating Polynesians discovered an island covered by a dense palm forest, which they gradually but systematically proceeded to clear in order to make land for farming, to provide logs for boats, and to transport statues from the quarry to their final destination (Bellwood, 1987). Between 1100 and 1650 C.E. the population had reached 7,000, packed fairly densely into the island's 103 square miles. The Easter Islanders had carved upwards of 1,000 statues, 324 of which had been transported and erected. By the time Roggeveen arrived the forests had been destroyed and not a single tree stood. What happened in between? As archeologist Paul Bahn and ecologist John Flenley conclude in their intriguing 1992 book Easter Island, Earth Island, they committed ecocide. "We consider that Easter Island was a microcosm which provides a model for the whole planet" (213). Initial deforestation led to greater population, but this triggered massive soil erosion, resulting in lower crop yield. Palm fruits would have been eaten by both humans and rats, initially introduced for food, attenuating the regeneration of felled palms. Without palms and palm fruits, the rats would have raided seabird nests, while humans would have eaten both birds and eggs. No logs for boats meant less fishing, so the people began to starve. This, coupled with limited land, led to internecine warfare and cannibalism. At that point a warrior class took over; battle spear-points were manufactured in huge quantities and littered the landscape. The defeated peoples were enslaved, killed, and some were even eaten. With no logs or ropes there would have been no point in carving additional statues, or finishing the ones already started. The statue cult lost its appeal, rival clans pulled down each other's statues, and the population crashed, leaving only a handful by 1722 (Flenley and King, 1984).

The lesson is clear but especially disturbing when you consider that on an island 10 by 11 by 13 miles it would have been possible to witness the last palm tree being cut down. They had to know it was the end of their most important resource, yet they did nothing to stop it. The Easter Islanders were not the Beautiful People, but neither were they any worse than the DWEMs. It would appear this is a very human problem. Easter Island may very well represent Earth Island.

What Will We Do?

Change in physical, biological, and human systems is inevitable, and the science of history records this change. Humans have been altering their environment for millions of years. As soon as a stone tool is chipped or a wooden spear is carved, the step toward ecological change by human selection has begun. While humans still lived in HFG societies, growing populations increased the pressures on, and changes to, the environment. It is not that there was just more change, but that the rate of change accelerated. Humans caused the extinction of large numbers of species for tens of thousands of years. Civilization has accelerated the rate of change even more, and for the last 10,000 years peoples of all races and geographic locales have significantly altered their environments.

Humans have been successful in changing the environment for productive uses that have led to a higher standard of living and a richer, more diverse lifestyle. We have also changed it for destructive uses leading to the extinction not only of whole species, but of whole peoples. Change, good and bad, cannot be stopped without stopping history because change is history. And as chaos and complexity theory have shown, small changes early in a historical sequence can trigger enormous changes later. So many quirky contingencies construct determining necessities, making it virtually impossible to reverse the change once it is under way (Shermer, 1993). Once the channel of change is dug, it is almost impossible to jump the berm to another channel. There is no going back in history; we can only go forward. The question is what type of change will be triggered by human actions, and in what direction will it go?

As for our future, it is very difficult to legislate historical change because of the impossibility of determining the consequences of legislative actions. Which change do we prohibit and which do we allow? Since all human actions cause change in the environment, once we start down the road of prohibiting actions that cause change, where do we stop? Obviously the majority of us have no desire to return humanity to an HFG state. Nor could the environment support our population under such a condition. We are animals, it's true, but we are thinking animals. All technologies alter the environment—from stone tools to nuclear power plants. Once we start down the road of technologically triggered change, there is no turning back. But we can go forward in a new direction.

One solution to environmental problems is more and better science and technology, and the application of them to solve problems that older science and technologies caused. My libertarian inclinations make me resistant to supporting government intervention to solve the problems. Free-market solutions are already available to many of these free-market-caused problems, if only the market could be allowed to act in a truly free manner. Yet my historical and scientific training leads me to fear that free markets could result in a planetary ecocide—Easter Island writ large. Perhaps the risks run too high for us to place our trust in the free market.

Are we on the verge of imitating our ancestors who were unable to solve their Eco-Survival Problem? There is compelling evidence that overpopulation, pollution, global warming, the ozone hole, chemical poisoning, and many other problems could threaten our very existence. But there is equally good evidence that we can adapt and solve problems. There is no need to cry doom yet. But we should be alert. If we are going to legislate change, it should be based on the best scientific knowledge available. The politics of this most political of sciences has muddied the issues. We need more data. It would seem that a truly "wise use" of government funding would be for more and better environmental science in order to determine with high confidence what needs to be legislated, where, and when. Will we be like the Easter Islanders, standing there staring at the last palm tree and saying "screw the future, let's cut the damn thing down"? Or will we heed the lessons of history and find a solution to our own Eco-Survival Problem? There is a difference between us and all those who failed to find this solution. We are the first humans to realize the consequences of our actions in time to do something about them. The question is, what will we do?

 
The Beautiful People Myth was written by Michael Shermer. This article was appropriated from Skeptic magazine, Vol. 5, No. 1, 1997, page 72. Skeptic magazine is a quarterly publication of the Skeptics Society, devoted to the investigation of extraordinary claims and revolutionary ideas and the promotion of science and critical thinking.


Monday, September 7, 2009

Congress doesn't read the bills they vote on

Do you think members of Congress actually read the bills they pass? No, they do not. The U.S. Congress routinely votes on bills that its members have not even read.

It is typical for Congress members to carelessly pass mammoth bills that none of them have read. Sometimes printed copies aren't even available when they vote. Also members of Congress typically do not even write the laws they pass. Most laws are written either by special interest lobbyists or by bureaucrats. And they routinely pass unpopular measures by combining them with popular bills that are completely unrelated.

Most Congressmen are lawyers, and many others are businessmen. They know what “fiduciary responsibility” is. For members of Congress, fiduciary responsibility means reading each word of every bill before they vote.

But Congress has not met this duty for a long time. Instead . . .
  • Often no one knows what these bills contain, or what they really do, or what they will really cost.
  • Additions and deletions are made at the last minute, in secrecy.
  • They combine unpopular proposals with popular measures that few in Congress want to oppose. (This practice is called “log-rolling.”)
  • And votes are held with little debate or public notice.
Once these bills are passed, and one of these unpopular proposals comes to light, they pretend to be shocked. “How did that get in there?” they ask. There's a basic principle at stake here. America was founded on the slogan, “No taxation without representation.” A similar slogan applies to this situation: “No LEGISLATION without representation.” We hold this truth to be self-evident, that those in Congress who vote on legislation they have not read, have not represented their constituents. They have misrepresented them.

Many American citizens are unaware of this Congressional behavior. Congress shirks its responsibilities in many ways: it doesn't read the bills it passes, and it doesn't even write them. Most laws are written either by special interest lobbyists or by bureaucrats. Politicians don’t have to sweat the details of the laws they pass. Most laws today contain only general directives. The regulatory details are left to unelected and unaccountable bureaucrats.

Politicians don’t have to read the bills they pass. Congress passes thousands of pages of legislation without knowing what it contains or does. Often this legislation is not even printed before a vote is held, and new items are sometimes added to bills after they have been passed.

Politicians don’t have to deliberate or debate the laws they create. Watch C-SPAN. Whenever Congress holds a so-called debate on a bill, there is one person talking, a few people waiting to talk, and the rest of the room is empty. What “debate” means in Congress is that a handful of politicians are playing to the camera while the rest are out talking to lobbyists.

Politicians don’t have to cast individual votes on individual laws. Congressional leaders routinely pass unpopular measures by combining them with popular bills that are completely unrelated. This gives politicians an excuse for passing bad bills. They claim they had to do it because the bad measure was part of a good bill they couldn’t possibly oppose.

Politicians rarely have to correct their mistakes. Some laws have sunset provisions, and come up for periodic reconsideration. But Congress routinely fails to figure out if a law or program has actually worked. They just renew it and, of course, increase the funding.

If politicians had to write the laws, sweat the details, read the bills, deliberate and debate, cast individual votes on individual proposals, and routinely revisit past legislation in the same methodical way, it would be a lot harder to make the federal government grow and grow and grow . . . But they don’t, so it isn’t.

Congress has repeatedly committed “legislation without representation,” and one organization, Downsize DC (DownsizeDC.org), is attempting to correct this with strong measures to prohibit these Congressional misrepresentations.

Downsize DC has created the Read the Bills Act (RTBA). RTBA requires that . . .
  • Each bill, and every amendment, must be read in its entirety before a quorum in both the House and Senate.
  • Every member of the House and Senate must sign a sworn affidavit, under penalty of perjury, that he or she has attentively either personally read, or heard read, the complete bill to be voted on.
  • Every old law coming up for renewal under the sunset provisions must also be read according to the same rules that apply to new bills.
  • Every bill to be voted on must be published on the Internet at least 7 days before a vote, and Congress must give public notice of the date when a vote will be held on that bill.
  • Passage of a bill that does not abide by these provisions will render the measure null and void, and establish grounds for the law to be challenged in court.
  • Congress cannot waive these requirements.

Read the Bills Act

Downsize DC is a non-partisan organization that aims to limit the size of government in the United States by raising public awareness and petitioning the government.

http://www.downsizedc.org/

http://en.wikipedia.org/wiki/Read_the_bills_act


 


Saturday, April 25, 2009

Old Wives’ Tales: Fact or Folklore?

Old wives' tales debunked by Allison Ford at Divine Caroline. Who exactly are these old wives, and why do they seem to have an opinion about everything? Toads, warts, eating, swimming, swallowed gum, carrots, eyesight, chocolate, and acne.

Before modern medicine and technology, women were the keepers of medical information. They delivered babies, healed the sick, and were considered experts in nutrition, children, folk medicine, herbs, and death. The “old wives” of these tales were most likely just wise village women—grandmothers, mothers, midwives, and healers. Perhaps once rooted in truth, now old wives’ tales are synonymous with unsubstantiated traditional beliefs and urban legends. They exist for everything from health to pregnancy to forecasting the weather. Some old wives’ tales are just silly superstitions, but some may just have a nugget of truth.

Touching Toads Will Give You Warts
Toads aren’t exactly the cuddliest animals, so maybe that’s why the old wives were so spooked by them. But contrary to popular belief, they can’t give you warts. Warts in humans are only spread by the human papillomavirus, or HPV, and amphibians like toads do not carry the virus. This fallacy may have originated because toads do have bumps on their backs that slightly resemble warts and people could have been sufficiently freaked out to believe that the toads were the cause. In fact, those bumps aren’t warts at all. They’re glands that store toxins to protect the toads from predators. So handling a toad won’t give you warts, but the toad might release a poison and teach you not to pick up toads anymore.

Don’t Swim for an Hour After Eating, Or You’ll Cramp Up
Just think of all the time wasted in the summer, waiting poolside for your lunch to digest. Mothers have been warning their kids not to swim after eating since at least the 1950s, despite the fact that there is not a single recorded instance of someone drowning after suffering a cramp. If you eat and then start rigorously exercising, the blood that should be rushing to your stomach to aid in digestion gets diverted to your arms and legs, causing a cramp or stitch. Cramps don’t usually happen to kids who are just splashing in a pool though. You’d need to be doing laps or seriously exerting yourself in order to be at risk, and even then, cramps are pretty easily ameliorated. This tall tale may have originated with overprotective parents who wanted an Adult Swim.

Swallowed Gum Takes Seven Years to Digest
Nope, not even close. Humans have chewed on plants and other natural substances for thousands of years and this specious claim might have been made up by mothers who were tired of hearing their kids make smacking noises all day, or who thought that gum chewing was low-class. Gum doesn’t break down in the digestive system, but it passes through like anything else. If you’re regularly swallowing wads of gum, then they could meld into a giant blob in your stomach and cause some problems, but the occasional swallower of gum has nothing to worry about.

Eat Carrots for Better Eyesight
Carrots do contain beta carotene, which is important for eye health among other things, but eating copious amounts of carrots doesn’t improve vision. There is some evidence that vitamin A and beta-carotene can reduce the risk of cataracts and macular degeneration, but a person would have to eat about 370 baby carrots per day and I don’t know anyone who loves carrots that much. Some say that in World War II, the English wanted to conceal their use of a new air radar system, so they claimed that Royal Air Force pilots ate lots of carrots, which is why they developed superior vision. It’s unclear whether or not this story is true. Although there’s typically no harm in eating lots of carrots, excess consumption can turn your skin an orange hue. There is one known death from eating WAY too many carrots.

Eating Bread Crusts Will Make Your Hair Curly
No, diet cannot make your hair curly (or straight). The bread crust myth is thought to have originated in Europe about 300 years ago, when many people lived on the brink of starvation. Curly hair was seen as a symbol of health and prosperity, as well as an indicator of youth. Those who had enough to eat (including bread) were generally healthier, so bread became associated with healthy, curly hair. Crusts actually tend to be the most nutrient-dense and healthful parts of bread. They contain more fiber and antioxidants than the rest of the loaf, so while eating them might not give you ringlets, it might make your hair shine a little brighter.

Chocolate Will Give You Acne
It’s possible that this myth originated when scientists discovered that overactive sebaceous glands produce a fatty substance called sebum, which can lead to acne. Chocolate is high in fat, and sebum is high in fat, so the thinking was that if you ate a lot of chocolate, more sebum (and acne) would be produced. It was a pretty big jump to a conclusion that’s given grief to teenagers for years. Chocolate does contain fat, but not the same type that’s found in our skin. The only way it can cause acne is if you rub it onto your face.

If You Pluck a Gray Hair, Two More Will Grow Back in Its Place
Gray hair can proliferate quickly, so it’s natural that once you see one gray hair, you start noticing them all over your head, as if they’ve multiplied overnight. But each follicle produces one strand of hair, no more, no less. Plucking a gray hair won’t cause more to grow. Actually, plucking can cause you to lose hair, since yanking can damage the follicle or destroy it completely. It’s okay to tweeze the occasional stray gray, but if your hair is already thin or thinning, getting it colored might be your best bet.

Even though many old wives’ tales have been debunked as superstitious myth, their origins as folk medicine remain important. Luckily, we know that we don’t have to be so superstitious. Go ahead, have a candy bar, take a dip in the pool, and pick up some toads.

From the web site, Divine Caroline:
Old Wives’ Tales: Fact or Folklore?
By Allison Ford
First published April 2009

 


Tuesday, April 21, 2009

The American Revolutionary War was not won by using guerrilla tactics

There is a popular notion that the American colonists strictly used guerrilla tactics and acted as snipers from the forest, hiding behind trees and rocks, picking off British Redcoats in ambush. This is a myth of the American Revolution.

Most people imagine that the British soldiers were the only ones marching in formation out in the open and following the rules of European warfare. Even though guerrilla tactics are not how the Americans won the Revolution, this myth is based on reality to a certain extent. In fact, according to Anthony J. Joes, the guerrillas' contribution was extremely important to American independence.

There were certainly instances of the Americans using guerrilla tactics, particularly following Lexington and Concord in Massachusetts and later in the South under such partisan leaders as Francis Marion. These guerrilla bands managed to wear down Cornwallis' force with hit-and-run tactics and the destruction of supplies, making his army more vulnerable when it finally confronted the main Continental Army at Yorktown. Furthermore, American riflemen, or rangers, when led by officers who knew how to utilize them correctly, such as Daniel Morgan and Nathanael Greene, were extremely effective.

But for the most part, it is untrue that the Americans won the war by using cover, while the hapless British stood in the open in ranks to be shot by the hidden Americans. In fact, the British already had 75 years of experience with this type of guerrilla warfare in North America, especially during the French and Indian War.

Both sides fought primarily in the open, in formation. The military gains made by the colonists increased after the Continental Army was trained in the traditional and more formal methods of European warfare. Baron Frederick von Steuben, a veteran of the Prussian army, was engaged by General Washington for this purpose. When von Steuben took over training at Valley Forge, he put a single standard and methodology into the American army so that its units could work better together. Through his influence and discipline in creating a regular army that matched the British in tactics, the Americans became a match for the British on open ground in every respect and were then able to defeat them on the battlefield. The Americans had been hampered by varying methods and commands of maneuver, with few large-scale training drills. Von Steuben changed that, setting a single standard and training the army to use it, and the Americans proved their ability to use these techniques at the Battle of Monmouth.

Some of the confusion may arise because Generals George Washington and Nathanael Greene successfully used a strategy of harassment, progressively grinding down British forces instead of seeking a decisive battle, in a classic example of asymmetric warfare. Nevertheless, the theater tactics used by most of the American forces were those of conventional warfare. One of the exceptions was in the South, where the brunt of the war fell upon militia forces who fought the British troops and their Loyalist supporters, but used concealment, surprise, and other guerrilla tactics to much advantage. General Francis Marion of South Carolina, who often attacked the British at unexpected places and then faded into the swamps by the time the British were able to organize return fire, was named by them "The Swamp Fox." Even in the South, however, most of the major engagements were battles of conventional warfare. Still, the guerrilla tactics in the South were a key factor in preventing British reinforcement of the North, and that was a decisive factor in the outcome of the war.

Certainly on occasion the Americans used cover, hiding behind trees and rock walls. The start of the war at Lexington and Concord is a prime example, and the New Jersey Militia used it well also. Most battles were fought using some form of linear tactics—armies fired volleys and often stood in lines. Both sides used cover when they could. The slow rate of fire made maneuvering important, so units fought and moved in lines, even in the woods, in order to protect against bayonet charges.

◊ ◊ ◊

Most historians estimate that only about 30-40 percent of the colonists were Patriots. Approximately 30-40 percent were loyal to the British ("Loyalists" or "Tories"), and the rest, about 30-35 percent, did not really care who won ("Neutrals").

Redcoats (British Regulars)
As regimental reputations were built on battlefield gallantry, armies began to develop more colorful uniforms. This was psychological warfare: a distinctive uniform of a well-known regiment would instill fear in its opponents, often causing them to retreat rather than stand and fight. Each of the European nations created its own styles and colors of uniforms. This system remained in place until World War I; since then, some individual regiments have kept a "full dress" or ceremonial uniform in addition to the service or field uniform.

The traditional enemy of the colonists was the Indian. The tactics used to fight the Indians were quite different from those of massed European armies. The Patriots' use of Indian tactics inflicted numerous casualties upon the British, but it did not win battles.

It wasn't until the Continental Army, and to a lesser degree the militia, mastered the art of 18th-century warfare (standing in ranks, trading volleys, and finally taking the battlefield at bayonet point) that the American colonists started winning battles.

Linear tactics remained the rule throughout the 19th century and the first part of the 20th century. The mass carnage inflicted by the machine gun in World War I finally forced these time-honored tactics to change.

SOURCES:

Some of this information was originally published in Myth Information by J. Allen Varasdi, Ballantine Books, 1989.

revolutionarywararchives.org/tactics.html

wikipedia.org/wiki/Guerrilla_warfare#American_Revolutionary_War

doublegv.com/ggv/battles/tactics.html

wiki.answers.com/Q/Were_guerrilla_war_tactics_used_in_the_American_Revolution

 


Thursday, April 16, 2009

Myths of the Old West


The Old West, with little or no government, was a generally peaceful place, not the violent frontier often depicted. There were probably fewer than a dozen bank robberies in the entire period from 1859 through 1900 in all the frontier West.

The frontier West was not the violent "Wild West" depicted by the press and by history teachers who don't know history. Before 1900 there were no successful bank robberies in any of the major towns in Colorado, Wyoming, Montana, the Dakotas, Kansas, Nebraska, Oregon, Washington, Idaho, Nevada, Utah, or New Mexico, and only a pair of robberies in California and Arizona. Lots of people carried concealed weapons, so potential robbers were always vulnerable. Criminals don't want to get hurt committing their crimes, so they are less likely to pick prey that appears willing to fight back.

In 2000 there were about 7,500 bank robberies, burglaries, and larcenies in the United States. Normally, these crimes are pulled off with no injuries or deaths.

What is particularly remarkable about bank robberies today is that there are far more of them than there were a century ago, even after accounting for the increase in population. Was the United States really plagued by bank robberies in the late 1800s and early 1900s? Did gangs of armed men in black hats routinely plunder banks in small western towns? Not quite. Historians Larry Schweikart and Lynne Doti, in their study of banking in the "frontier west," found that western bank robberies were almost nonexistent in the "Wild West" period. Over the four decades from 1859 to 1900, in 15 states (including Nebraska), there were only about half a dozen bank robberies. As Schweikart has noted, bank robberies began to be a serious problem in the western United States only in the 1920s, when the automobile allowed criminals to quickly cross the state line—and when the physical security of banks became less important to their success.

There are more bank robberies in modern-day Dayton, Ohio, in a year than there were in the entire Old West in a decade, perhaps in the entire frontier period.

One of the enduring images of movies and television about the frontier west in America is the bank robbery. In a typical Hollywood scene, several riders, clad in long coats—despite summertime frontier temperatures of up to 125 degrees—slowly enter town, conspicuously scanning the cityscape for lawmen. The riders tie up their horses and enter the bank in broad daylight. Then they move with lightning speed to draw their guns, force the cashier or president to open the safe, throw the money in saddlebags, and hightail it for their horses outside. In a cloud of dust, they scramble out of town, with an occasional gunshot from one of the befuddled sheriffs trailing behind. The townspeople may mount a posse, but this belated action proves ineffective, as the crooks gleefully reach their hideout, the next town, or Mexico, whichever comes first.

There is one thing wrong with this scenario: it almost never happened. In 1991, Lynne Doti and Larry Schweikart published Banking in the American West from the Gold Rush to Deregulation, in which they surveyed primary and secondary sources from all the states of the “frontier west.” This included every state west of the Missouri/Minnesota/Texas line, specifically, Arizona, California, Colorado, the Dakotas, Kansas, Idaho, Nebraska, Nevada, New Mexico, Oklahoma, Oregon, Utah, Washington, and Wyoming. The time frame was 1859-1900, or what most historians would include in the “frontier period.”

The western bank-robbery scene is pure myth. Yes, a handful of robberies occurred. In the roughly 40 years, spread across these 15 states, there were three or four definite bank robberies; and in subsequent correspondence with academics anxious to help “clarify the record,” perhaps two or three others were pointed out.

◊ ◊ ◊

Violence in the Old West

In the real Dodge City of history, there were five killings in 1878, the most homicidal year in the little town's frontier history. In the most violent year in Deadwood, South Dakota, only four people were killed. In the worst year in Tombstone, home of the shoot-out at the OK Corral, only five people were killed. The only reason the OK Corral shoot-out even became famous was that town boosters deliberately overplayed the drama to attract new settlers. They cashed in on the tourist boom by inventing a myth.

The most notorious cow towns in Kansas—Abilene, Dodge City, Ellsworth, Wichita, and Caldwell—did see more violence than similar-sized small towns elsewhere. But not as much as you might think. Records indicate that between 1870 and 1885, there were only 45 murders in those towns.

There is no evidence anyone was ever killed in a frontier shoot-out at high noon.

Billy the Kid was a psychopathic murderer, but he didn't kill 21 people by the time he was 21 years old, as the legend says. Authorities can account for three men he killed for sure, and no more than a total of six or seven.

Wild Bill Hickok claimed to have killed six Kansas outlaws and secessionists in the incident that first made him famous. But he lied. He killed just three—all unarmed.

Bill Cody's reputation as a gunslinger came mostly from his own fiction. He freely admitted that he fabricated all the excessive shooting in those dime novels. But he was a good shot and is said to have proved it repeatedly at the bison-killing contests where he earned the nickname Buffalo Bill. He didn't kill many Indians, however, and when he was old, his estranged wife revealed that he had been wounded in combat with Indians only once, not 137 times as he claimed.

◊ ◊ ◊

An interesting question is why there were so few bank robberies. Certainly people in the Wild West were no less greedy than later generations of criminals. In the 1920s, for example, a spate of western bank robberies plagued the Great Plains states: rewards soared, bank insurance was offered for the first time, and western bankers discussed bank robberies with increasing frequency at their meetings. Career criminals such as Bonnie and Clyde became infamous for their ability to strike quickly and escape. So if the crooks didn’t change, what did?

Equally interesting is the simultaneous rise of government regulation aimed at bank failures—but not robberies. After the 1890s almost every western state began to regulate other types of bank behavior to “protect” the consumer. Why were there so few bank robberies before the government got involved?

Symbolic Building
Besides demonstrating an affinity for business and personal wealth, the banker had to show the community that he meant business by constructing a building that would symbolically reflect stability, permanence, and safety.

The buildings were in the dead center of town, with other stores on each side. This left only two walls “open” to blasting without disturbing residents, who tended to sleep above their establishments. The bank front faced into the town, and smashing through it would be obvious. That left the rear wall the most vulnerable. Even then, however, blasting through a wall was no easy (or quiet) chore. Bankers double-reinforced rear walls, and should the robbers get inside, they still had to deal with an iron safe. Safe storage of money was a key to successful banking: one Oklahoma banker kept his cash in a small grated box with rattlesnakes inside; an Arizona banker had a safe, but put his money in a wastebasket covered by a cloth, hoping thieves would take the safe and ignore the rest. Still others slept, literally, with the bank’s assets under their bed.

Eventually, though, early iron safes appeared. Constructed in the “ball-on-a-box” design, they featured a large metal box on legs that held important documents. Actual gold and silver, plus paper money, was stored on top of the box in a large “ball safe,” which proved daunting to separate from the bottom, or, more important, to haul off. Dynamite could break it off from its base, but what does one do with a huge round iron ball? The absence of plastic explosives made surgical entrance difficult, though certainly not impossible. These safes were later abandoned in favor of more conventional Diebold safes, named after the Cincinnati company that supplied many of them. The rectangular safes sported metal doors several inches thick. Again, one could penetrate them given enough time, but that was a luxury most thieves lacked. In short, penetrating a vault or safe constituted a major, difficult undertaking that most robbers avoided. But for our purposes here, the key is that the vault and safe, along with the building itself, made up the “symbols of safety” that reassured depositors their money was safe.

Indeed, many western banks commonly left the vault open during the day to allow customers a full view of the safe. Customers also saw fine wooden counters, excellent brass finishings (sometimes gold), and in banks in larger cities, beautiful chandeliers and marble floors. Ornate and ostentatious materials and furnishings contributed to the overall message of the owner’s wealth, the bank’s permanence, and the institution’s stability and safety. Once regarded as irrelevant or odd, it turns out that the fine interiors had a definite purpose in maintaining the solvency of frontier banks.

Direct Approach
Given the difficulty of liberating cash from such buildings, it is not surprising that robbers usually chose the more direct approach. Several gunslingers marching headlong into a bank may have seemed like a good idea to some, and certainly Butch Cassidy’s gang pulled off the successful Telluride robbery in such a mode. His gang had the advantage of Cassidy’s brilliant planning: a shrewd evaluator of horse flesh, Cassidy had stationed (Pony Express-style) horses at exactly the points where he knew his own horses would be wearing out, ensuring that his gang had fresh mounts all the way to their hideout. Even so, one has to search extensively to find bank robberies of even this type. There was one in Nogales, one in California, and perhaps a couple in other locations. But like the rear-wall blasting, the front-door robbery is notoriously absent in western records.

So where did the myth of the western bank robbery arise? Some of it can be traced to Missouri, where the James and Quantrill gangs plundered at will during the Civil War era. Their expeditions ranged as far north as Northfield, Minnesota.

But Hollywood is the likely culprit, certainly guilty of misrepresentation.


SOURCES:

The Non-Existent Frontier Bank Robbery, by Larry Schweikart, January 2001. (Larry Schweikart teaches history at the University of Dayton.)

Legends, Lies, and Cherished Myths of American History, by Richard Shenkman, 1988.

Bank Robberies and Rational Crooks

 


Tuesday, April 14, 2009

Pirates and Democracy

There are many popular images and beliefs about pirates that are incorrect. For example, unlike traditional Western societies of the time, many pirate crews operated as limited democracies.

Here are just a few of them.

Pirates were barbaric at times, and even quite cutthroat. And certainly above all, they were thieves. But, despite this ugly business, they did have strict rules of order, and by the seventeenth century, there even existed a pirate government.

Unlike traditional Western societies of the time, many pirate crews operated as limited democracies. Both the captain and the quartermaster were elected by the crew. Captains were elected for their leadership and naval knowledge rather than for their dueling skills, although they were typically fierce fighters as well: someone the crew could trust.

Quartermasters provided for equal disposition of the booty, and pirate courts settled disputes. When not in battle, the quartermaster usually had the real authority on the ship. Pirates injured in battle might be afforded special compensation similar to medical or disability insurance. Prisoners were usually allowed to either join the pirates or sail off on their own ships.

There is little evidence to support the notion of buried treasure. Even though pirates raided many ships, few, if any, buried their treasure. Often the "treasure" that was stolen was food, water, alcohol, weapons, or clothing. Other things they stole were household items like bits of soap, and gear like rope and anchors; sometimes they would keep the ship they captured, either to sell off or because it was better than their own. Such items were likely to be needed immediately rather than saved for future trade, so pirates had little reason to bury them.

Pirates tended to kill few people aboard the ships they captured; oftentimes they would kill no one if the ship surrendered, because if it became known that pirates took no prisoners, their victims would fight to the last and make victory very difficult. Contrary to popular opinion, pirates did not force captives to walk the plank. The standard technique for getting rid of unwanted passengers was simply to heave them overboard.

In reality, many pirates ate poorly, and often lived on bananas and limes; few became fabulously wealthy; and many died young.

In the "golden age of piracy" (1650-1730), the idea of the pirate as the senseless, savage thief that lingers today was created by the British government as propaganda. Many ordinary people believed it was false: pirates were often rescued from the gallows by supportive crowds.

If you became a merchant or Navy sailor then—plucked from the docks of London's East End, young and hungry—you ended up in a floating wooden Hell. You worked all hours on a cramped, half-starved ship, and if you slacked off for a second, the all-powerful captain would whip you with the Cat O' Nine Tails. If you slacked consistently, you could be thrown overboard. And at the end of months or years of this, you were often cheated of your wages.

Pirates rebelled against this world. They mutinied against their tyrannical captains, and created a different way of working on the seas. The pirates showed clearly and subversively that ships did not have to be run in the brutal and oppressive ways of the merchant service and the Royal Navy. This is why they were popular, despite being unproductive thieves.

In the 1700s, a legitimate life on the high seas—on either a merchant ship or in the British Navy—was about as bad as that of most lower-class citizens. Life was tough. While the gentry in England enjoyed their fine goods, ample property, and lives of luxury, the poor spent most of their time in the mines and mills, chained to a life of crushing labor, sadistic beatings, and marginal subsistence.

Many seamen turned to piracy to escape the harsh and unjust discipline on the merchant ships, where they were subject to the whims and ways of sadistic and psychopathic officers who enjoyed using an array of punishments. Naval and merchant seamen were frequently flogged, keel-hauled, hanged from the yardarms, forced to eat cockroaches, towed from the ship’s stern, and more. What’s worse, many of these men were pressed into service against their will.

SOURCES:

Some of this information was originally published in Myth Information by J. Allen Varasdi, Ballantine Books, 1989.

http://en.wikipedia.org/wiki/Pirate

http://www.piratesinfo.com/cpi_piracy_and_america_american_pirates_916.asp

 


Monday, April 13, 2009

MYTH: Natural is good and man-made is bad

There is a misconception that human exposures to carcinogens and other toxins are nearly all due to synthetic chemicals. On the contrary, the amount of synthetic pesticide residues in plant foods is insignificant compared to the amount of natural pesticides produced by the plants themselves. Of all dietary pesticides, 99.99 percent are natural.

TOPICS COVERED:
  • Natural vs. Synthetic

  • Natural Toxins in Food

  • Natural Poisons

  • 99.99% of all the Pesticides we eat are Natural Pesticides

  • DDT and Malaria

One of the craziest myths of all time is forced upon us every day: “Natural is good and man-made is bad.” This is a fallacy. There is no difference whatsoever between a “natural” chemical, such as Vitamin C (ascorbic acid) from a fruit, and a synthetic sample of the same material. It is also not the case that “natural” chemicals, i.e., those produced by plants and animals, are always “good” while “man-made” chemicals are always “bad.”

The idea that there is some fundamental difference between “natural” and “man-made” chemicals is a very common misconception, often fueled by marketing campaigns for “chemical-free” products. Nature isn't good and nature isn't bad. It's just the way things are.

Plenty of natural substances can kill you: natural poisons. What exactly is a poison? Paracelsus made the point in the 16th century that the dose is the important factor. Some years ago a death was attributed to the overconsumption of carrots; the victim turned orange and died.

Dioxin is a dangerous man-made compound, but it is still a million times less toxic than botulinum. One teaspoon of botulinum could kill a quarter of the world’s population, yet some people choose to inject it in the form of Botox®, the popular non-surgical method of temporarily reducing or eliminating frown lines, forehead creases, crow’s feet near the eyes, and thick bands in the neck. The toxin blocks nerve impulses and temporarily paralyzes the muscles that cause wrinkles, giving the skin a smoother appearance.

There really is no one chemical known as "dioxin." It is a catch-all label, fueled by environmentalists and the foolish media, for any of a family of 75 compounds called dibenzo-para-dioxins, composed of benzene rings and oxygen atoms.

Nature’s poisons outrank those synthesized by chemists, both in number and in toxicity. Nature is the world’s best chemist: five of the seven most deadly known compounds occur in nature.

Dangerous chemical compounds: (N=natural, S=synthetic)

N - Botulinum toxin
N - Tetanus toxin
N - Diphtheria toxin
S - Dioxin*
N - Muscarine
N - Bufotoxin
S - Sarin

*Dioxin is very controversial. Widely considered a very deadly chemical, a reputation fueled by media hysteria, it is in fact dangerous only in high doses. According to Encyclopedia Britannica: "Toxicologists mistakenly concluded from studies on laboratory animals that TCDD (dioxin) was one of the most toxic of all man-made substances… Subsequent research, however, discounted most of these inferences, which were based on the effects of very high doses of TCDD on guinea pigs and other peculiarly susceptible animals. Among humans, the only disease definitely found related to TCDD is chloracne, which develops shortly after exposure to the chemical."

The National Institute of Occupational Safety and Health has evaluated the health of industrial workers exposed to dioxin levels 50 times as high as the exposure received by Vietnam veterans (from dioxin in Agent Orange). These workers have shown no increase in cancer risk.

The bottom line on dioxin is that, like alcohol, it can be dangerous in larger quantities (but it is easier to die from alcohol). Fears of cancer, birth defects and such are not being substantiated in the real world. So while dioxins should be treated with care and professional attention, there is no need for the public to panic every time they hear the word "dioxin."

The Next Ten (Most Deadly) Chemicals:

N - Strychnine
S - Soman
S - Tabun
N - Tubocurarine chloride
N - Rotenone
S - Isoflurophate
S - Parathion
N - Aflatoxin
S - Sodium cyanide
N - Solanine

Vitamin C, or ascorbic acid, is one of the most commonly used preservatives and is an essential nutrient for humans. It occurs naturally in many fruits and vegetables but is not very stable and is often destroyed upon cooking. Vitamin C can be synthesized from glucose in the laboratory and the product is EXACTLY the same as the naturally occurring substance.

Some argue that “Synthetic chemicals bioaccumulate in our bodies.” Well, naturally occurring compounds can accumulate in our bodies too. For example, vitamins A and D are fat-soluble vitamins and can accumulate in fatty tissues. Large excesses of these vitamins can cause death just as an accumulation of some synthetic chemicals can. There is no reason why synthetic compounds should accumulate to a greater extent than naturally occurring ones.

If we didn't have chemical preservatives, that would be as great a disaster for our food supply as the loss of all forms of refrigeration.

Natural toxins in food
Natural toxins in food can be just as dangerous as synthetic ones. Garlic, mustard, and horseradish all contain allyl isothiocyanate, which can cause cancer. Barbecued meat contains the carcinogen benzopyrene.

Milk products can be toxic for those who lack the enzyme needed to digest lactose. Parsley, carrots, and celery are good for you, but they nevertheless contain myristicin, which in large quantities can cause hallucinations, liver damage, and even death.

We all consume a wide variety of natural toxins in an average week. Since we only consume very small amounts of each of the different toxic compounds at any one time, our livers can process the toxins and they are broken down by a range of metabolic pathways. We are exquisitely designed to cope with a whole variety of substances in small quantities that would be poisonous in larger amounts. It is possible to overdo it and suffer the negative effects of these toxins, but in most cases this is rather difficult. For example, caffeine is toxic, but you would have to drink 85 cups of coffee at one sitting to die from caffeine poisoning.
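As a rough back-of-the-envelope check (my own sketch, not from the original sources), the "85 cups" figure is about what you get if you assume a lethal oral caffeine dose of roughly 10 grams for an adult and around 120 mg of caffeine per cup; both numbers are illustrative assumptions, not precise toxicology.

    # Rough sketch: cups of coffee needed to reach a lethal caffeine dose in one
    # sitting. Both dose figures below are assumptions for illustration only.
    LETHAL_CAFFEINE_MG = 10_000    # assumed lethal oral dose for an adult (~10 g)
    CAFFEINE_PER_CUP_MG = 120      # assumed caffeine content of one cup of coffee

    cups_needed = LETHAL_CAFFEINE_MG / CAFFEINE_PER_CUP_MG
    print(f"Roughly {cups_needed:.0f} cups of coffee in one sitting")  # ~83 cups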

Examples of this kind illustrate that there is no difference between the negative effects of some synthetic chemicals and those of many of the chemicals that occur naturally in the things we eat. The quantity consumed is a vital factor in the effect produced.

Organic food is NOT better for you. This is a common misconception not borne out by the research evidence. In properly controlled investigations on the same dry weight of organic and non-organic fruit and vegetables, analysis showed the same amounts of vitamins, minerals, and so on. If all the food in the world were organic, so much manure would be needed that we would need roughly three times as many cows.

Poisonous mushrooms are certainly natural: 32 different mushrooms have been associated with fatalities, and an additional 52 have been identified as containing significant toxins. The majority of mushroom poisonings are not fatal, but the majority of fatal poisonings are attributable to the Amanita phalloides mushroom. The most toxic mushrooms contain a clear, odorless, tasteless liquid that looks like water; just a couple of drops in a drink will kill the victim, who dies a miserable death a few weeks later. At one point it was even used for chemical warfare.

All parts of the beautiful oleander plant contain poison—several types of poison.

Nicotine is a potent natural poison, and it is addictive; some say that it kills 1,000 Americans every day. Nicotine is actually a natural pesticide, and not many people realize that it is also sold commercially as one.

Nature also produces natural pesticides like opium, cocaine, THC (in marijuana), caffeine, digitalis, and so on. Pine trees and citrus trees contain natural pesticides, terpenes, that protect them from pests. Citrus peel oil (found in frozen orange juice) is very potent against fire ants and other insects.

Viruses, bacteria, mycoplasmas, and parasites are all natural. Does this mean smallpox is good? Viruses can cause cancer. Natural sunlight (in excess) can cause skin cancer. Other natural chemicals that are toxic and dangerous include the venoms of snakes, spiders, bees, and scorpions.

Cyanides are produced by certain bacteria, fungi, and algae and are found in a number of foods and plants. Cyanide is found, although in small amounts, in lima beans, apple seeds, peaches, mangoes and bitter almonds. There are genuinely hazardous cyanide levels in the kernels of apricots and peaches. Many cyanide-containing compounds are highly toxic, but some are not. When cyanide is in combination with other substances, it is sometimes not toxic.

It is estimated that more than 2,000 plant species contain cyanide, many of which are everyday foods. Cyanide exists in its simplest form in nature as a gas: hydrogen cyanide. Fatal cyanide poisoning from our food is very rare; the amount we ingest with our food is usually minute, and our bodies can handle it with ease. However, unnatural sources of cyanide can be very dangerous, and our bodies may not be able to deal with them. The most dangerous cyanides are hydrogen cyanide and salts derived from it, such as potassium cyanide and sodium cyanide, among others.

Nature lovers should know that Chinese herbal remedies often contain mercury, lead, and arsenic!

Fluoride is a mineral that occurs naturally in most water supplies. Fluoridation is an adjustment of the natural fluoride concentration up to about one part of fluoride per one million parts of water. Fluoride is toxic, but at such low levels it is harmless; a potentially fatal dose would be approximately 5 mg of fluoride per kg of body weight.
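A quick worked example (my own sketch, using the figures above and an assumed 70-kg adult) shows why drinking fluoridated water cannot approach a toxic dose: at about 1 mg of fluoride per liter, it would take hundreds of liters in one sitting.

    # Sketch: liters of fluoridated water needed to reach the ~5 mg/kg fatal dose
    # quoted above. The 70-kg body weight is an assumed example value.
    FATAL_DOSE_MG_PER_KG = 5.0      # fatal dose figure quoted in the text
    FLUORIDE_MG_PER_LITER = 1.0     # ~1 ppm, i.e., about 1 mg of fluoride per liter
    body_weight_kg = 70.0           # assumed adult body weight

    fatal_dose_mg = FATAL_DOSE_MG_PER_KG * body_weight_kg
    liters_needed = fatal_dose_mg / FLUORIDE_MG_PER_LITER
    print(f"About {liters_needed:.0f} liters of fluoridated water at once")  # ~350 liters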

Most scientists agree that pesticide residues pose a smaller threat to our health than do naturally occurring substances found even in organically grown food. The Food and Drug Administration estimates that pesticides account for just 0.01 percent of the cancer risk associated with food.

Naturally occurring pesticides (found even in organically grown food) are present in the human diet in concentrations 10,000 times greater than man-made pesticides. The obsession with synthetic pesticides is absurd when you consider that the natural pesticides plants produce to ward off insects and animals constitute over 99.99 percent of all the pesticides we eat, and they prove carcinogenic in lab-animal tests just as often as their synthetic counterparts.

Current procedures to test whether a chemical causes cancer entail exposing animals, usually rats or mice, to massive doses of the chemical, then killing the animals and checking for tumors. But there are major problems with this procedure.

For one thing, animals aren't necessarily the best stand-ins for humans. In fact, 30 percent of the time, a chemical that causes cancer in mice won't do so in rats, and vice versa, even though these species are much closer to each other than they are to humans. Many chemicals that cause cancer in rats and mice also do NOT cause cancer in hamsters. Chemicals have very different effects on different animals, even on closely related animals like rats, mice, and hamsters.

For another, the dose given the animals is on average almost 400,000 times the dose that the Environmental Protection Agency tries to protect humans against.

The assumption in the testing is that whatever causes cancer in a few rats out of a few dozen at massive doses will, in a population of hundreds of millions of humans, also cause human cancers, even at much smaller doses.

But this is a flawed theory (generally called the "linear," "no-threshold," or "one molecule" theory). It directly contradicts what is known about chemical poisoning: virtually anything at a high enough dose can kill a person, even if at a low dose it is therapeutic or even necessary to life, as with vitamins and salt.
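To make the extrapolation concrete, here is a minimal sketch (my own illustration, with made-up example numbers) of how a linear, no-threshold model scales a high-dose rodent result straight down to a low human dose; this straight-line scaling is exactly the assumption the passage above questions.

    # Minimal sketch of a linear, no-threshold extrapolation: assume cancer risk
    # scales in a straight line with dose, all the way down to zero. The numbers
    # below are made up for illustration only.

    def linear_no_threshold_risk(tumor_fraction_at_test_dose, test_dose, human_dose):
        # Estimated individual risk at the human dose under a straight-line model.
        return tumor_fraction_at_test_dose * (human_dose / test_dose)

    # Say 10% of rodents develop tumors at a massive test dose that is 400,000
    # times the typical human exposure (the multiple quoted above).
    rodent_tumor_fraction = 0.10
    test_dose = 400_000.0   # arbitrary dose units
    human_dose = 1.0

    risk = linear_no_threshold_risk(rodent_tumor_fraction, test_dose, human_dose)
    print(f"Implied individual lifetime risk: {risk:.2e}")               # 2.5e-07
    print(f"Implied cases in 300 million people: {risk * 300e6:,.0f}")   # ~75

    # A threshold model, by contrast, would predict essentially zero risk below
    # some dose -- the "dose makes the poison" objection.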

Vitamin A in small doses is necessary for life, while large doses will kill you. Eating a lot of salt-cured meat has been linked to stomach cancer, but no one can live without some salt.

Fifty percent of all synthetic chemicals tested in massive doses on laboratory animals have caused tumors. But while 50 percent of synthetic chemicals are carcinogenic by this standard, so are 50 percent of the natural chemicals tested.

Many Americans focus on synthetic chemicals that may cause cancer in humans but are blissfully ignorant of the natural carcinogens. There are over 1,000 natural chemicals in a cup of coffee; only 22 have been tested, and of these, 17 are carcinogens. But don't worry about drinking coffee. The problem isn't the coffee, it's the high-dose animal tests. Too many people ignore the fact that rodent tests have shown natural chemicals to be carcinogenic just as often as synthetic chemicals.

By weight, there are more carcinogenic chemicals in a cup of coffee than you are likely to get from synthetic pesticide residues in a whole year. Indeed, the average daily intake of such chemicals in coffee is about 1,000 times the tolerance level the EPA allows for synthetic pesticides.

Organic apple juice often contains up to 137 naturally occurring volatile chemicals, of which five have been tested. Two of these five have been found to be carcinogenic in laboratory animals.

Pesticide residues (that are eaten by consumers) are nothing to worry about. However, with farmhands, it's different since their levels of exposure are far greater, and so it is good that we have strict rules of exposure for them and for chemical workers. But I don't think anyone's ever died of residues.

Eliminating essential chemical pesticides will likely increase cancer rates.
Synthetic pesticides have significantly lowered the cost of plant foods, thus making them more available to consumers. Eating more fruits and vegetables is thought to be the best way to lower the risks of cancer and heart disease (other than giving up smoking), since our vitamins, antioxidants, and fiber come from plants and are important anti-carcinogens. If you eliminate essential synthetic pesticides, you make fruits and vegetables more expensive, which means people will eat less of them, and more will die of cancer. Huge expenditures of money and effort on tiny hypothetical risks do not improve public health. Rather, they divert our resources from real human health hazards, and they hurt the economy.

There is a misconception that human exposures to carcinogens and other toxins are nearly all due to synthetic chemicals. On the contrary, the amount of synthetic pesticide residues in plant foods is insignificant compared to the amount of natural pesticides produced by the plants themselves. Of all dietary pesticides, 99.99 percent are natural: they are toxins produced by plants to defend themselves against fungi and animal predators. Because each plant produces a different array of toxins, we estimate that on average Americans ingest roughly 5,000 to 10,000 different natural pesticides and their breakdown products. Americans eat an estimated 1,500 milligrams of natural pesticides per person per day, which is about 10,000 times more than they consume of synthetic pesticide residues. By contrast, the FDA found that the residues of 200 synthetic chemicals, including the synthetic pesticides thought to be of greatest importance, average only about 0.09 milligram per person per day.
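The "about 10,000 times" figure follows directly from the two intake estimates just quoted; a quick check (my own arithmetic) gives the order of magnitude:

    # Quick check of the ratio implied by the intake figures quoted above.
    natural_pesticides_mg_per_day = 1500.0   # estimated natural pesticide intake
    synthetic_residues_mg_per_day = 0.09     # FDA estimate for synthetic residues

    ratio = natural_pesticides_mg_per_day / synthetic_residues_mg_per_day
    print(f"Natural-to-synthetic intake ratio: about {ratio:,.0f} to 1")  # ~16,667 to 1
    # i.e., on the order of 10,000 times more natural pesticides than synthetic residues.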

Another misconception is that synthetic toxins pose greater carcinogenic hazards than natural toxins. On the contrary, the proportion of natural chemicals that is carcinogenic when tested in both rats and mice is the same as for synthetic chemicals—roughly half. All chemicals are toxic at some dose, and 99.99 percent of the chemicals we ingest are natural.

And yet another misconception is that the toxicology of man-made chemicals is different from that of natural chemicals. Humans have many general natural defenses that make us well buffered against normal exposures to toxins, both natural and synthetic. DDT is often viewed as the typically dangerous synthetic pesticide. However, it saved millions of lives in the tropics and made obsolete the pesticide lead arsenate, which is even more persistent and toxic, although all natural. While DDT was unusual with respect to bioconcentration, natural pesticides also bioconcentrate if they are fat soluble. Potatoes, for example, naturally contain fat soluble neurotoxins detectable in the bloodstream of all potato eaters. High levels of these neurotoxins have been shown to cause birth defects in rodents.

Carrots contain natural chemicals that have been found to cause cancer in rodents in massive doses. This is also true of apples, bananas, broccoli, Brussels sprouts, cabbage, celery, and many other unprocessed foods. Further testing is likely to eventually find natural rodent carcinogens in essentially everything we eat. We are surrounded by a sea of carcinogens, most of which are natural compounds occurring normally in a variety of foods. The body's defense mechanisms are able to resist these carcinogens in small doses, though often not in the massive amounts which laboratory rodents receive.

Many of those naturally occurring chemicals are themselves pesticides, developed not by industrial chemists but by Mother Nature. Plants couldn't survive if they weren't filled with toxic chemicals: they don't have immune systems, teeth, or claws, and they can't run away. So throughout evolution they've been making newer and nastier pesticides. They've been at it a long time, and Monsanto, Dow, and Uniroyal are amateurs compared to Mother Nature's pesticide factory.

Potatoes contain two chemicals, solanine and chaconine, which kill insects in the same way that synthetic organophosphate pesticides do. A single potato contains about 15,000 micrograms of these natural pesticides. And yet you're eating only about 15 micrograms of man-made organophosphate pesticides a day.

The typical newer pesticide uses a tablet about the size of an aspirin to treat an acre and is about as toxic to humans and animals as table salt. What these pesticides attack are enzymes particular to the pest species; they are not toxic to humans because those enzymes are not part of human systems.

Although charges that eating food with DDT residues caused cancer or even killed humans directly—charges made most prominently in the late Rachel Carson's best-selling 1962 book Silent Spring—later proved to be unsubstantiated, DDT prompted concern that it was causing havoc in the ecosystem because it persisted in the body's tissue and could thus be passed along the food chain.

According to the National Agricultural Chemicals Association, however, none of the pesticides still in use on American crops are persistent.

DDT itself was a tremendous improvement over previous non-synthetic chemical pesticides. It was treated as a miracle chemical when it first appeared and given credit for saving millions of lives, according to the World Health Organization.

But DDT was banned in 1972 before other chemicals were ready to take its place, a ban that some scientists claim shows the damage that anti-pesticide extremism can cause.

The government of Sri Lanka halted the spraying of mosquitoes with DDT. Consequently, the incidence of malaria jumped from nearly zero to 2,500,000 cases and 10,000 deaths before the country began spraying again. Malaria kills! Compare this to the known deaths caused by DDT: none. Millions of people were dusted and sprayed with it to kill lice and fleas, and it never did anyone any harm.

One problem with the environmentalists' argument is their claim that the tiny amount of pesticide residue left on food puts us all at risk of cancer. (About 1 percent of fruits and vegetables have residues above the legal limit; most have none at all.)

This stems from assumptions that a human will react the same way to a chemical as a rodent in a laboratory will. But 30 percent of the chemicals that cause cancer in rats at high doses do not harm mice, and vice-versa. With such a discrepancy between closely related species, what does that say about extrapolating from either of them to humans?

Another questionable assumption is that chemicals that cause tumors in rodents when administered in huge doses will cause tumors in humans at a fraction of those doses. It ignores the scientific axiom "only the dose makes the poison." The iron in a tablet that many adults take regularly has killed babies. Eating a lot of salt-cured meat can increase the risk of stomach cancer, but everyone needs some salt or else they'll die.

It bears repeating that the important rule of toxicology is: The dose makes the poison. If exposure to a chemical is extremely low, then the likelihood of being harmed by the chemical is also low. Some substances that are deadly in large doses may be beneficial in small doses.

Some minerals, such as iron and potassium, are vitally important parts of our diet, but they would poison us if consumed in large quantities. Our bodies naturally contain traces of arsenic and other chemicals that are considered potent carcinogens. The most important factor is the amount of these elements, because that is what determines whether they are beneficial or poisonous. Before we can decide whether toxic chemicals endanger our health, we need to know the level of our exposure to them.

Over 99 percent of the cancer risk associated with food comes from naturally occurring substances in food. Food additives account for approximately 0.2 percent of food-related cancer risks. Pesticides make up 0.01 percent and animal drugs, 0.01 percent.

The cancer risk from pesticides is so low as to be indistinguishable from zero, and is thousands of times less than the cancer risk associated with naturally occurring carcinogens in our diets.

Even though animal tests are not reliable guides for determining risks for humans, you can see in the list below that based on animal tests, the risk factors posed by pesticides and food additives are very low compared to the various foods and drinks we ingest.

Source and daily exposure                              Risk factor (relative to water = 1.0)
Wine (1 glass)                                         4,700.0
Beer (12 ounces)                                       2,800.0
Cola (1 serving)                                       2,700.0
Bread (2 slices)                                         400.0
Basil (1 gram)                                           100.0
Cooked bacon (100 grams)                                   9.0
Water (1 liter)                                            1.0
Additives and pesticides in food                           0.5
Additives and pesticides in bread and grain products       0.4
Coffee (1 cup)                                             0.3


SOURCES:

Bruce Ames, "Ranking Possible Carcinogenic Hazards," Science 236 (April 17, 1987), p. 271.

http://www.rsc.org/images/NaturalNotes_tcm18-115179.pdf

http://www.textfiles.com/politics/993frmn.txt

http://www.fumento.com/pests.html

http://www.fumento.com/times.html

http://www.fumento.com/ames.html

http://www.fumento.com/supest.html

http://www.heartland.org/bin/media/publicpdf/23643c.pdf

http://www.omichron.com/eureka!.html

 
