The Pleistocene Overkill Hypothesis: An Optimal Foraging Perspective

by Claire Brandes

The Quaternary Period is infamous for the extinction of some of prehistory’s most charismatic species, including the woolly mammoth and the American mastodon. Two primary, yet conflicting, hypotheses aim to explain this loss of biodiversity, leading many scholars to ask: was climate change or human overhunting responsible for the demise of North American megafauna? Support for the latter theory, the overkill hypothesis, comes from archaeological evidence suggesting that the arrival of the first humans in the Americas, the Paleoindians, and the first megafaunal extinctions occurred roughly in tandem. This piece explores the hypothesis through an economic lens, considering the advantages and disadvantages of hunting large game with regard to tradeoffs in energy expenditure. Viewed through optimal foraging theory, the overkill hypothesis is improbable: the total energy expended in procuring a mammoth likely exceeded the proposed net caloric return from consuming one, suggesting that a variety of factors, and not human hunting alone, ultimately caused the Quaternary extinction event. This research carries serious implications regarding climate change and human overexploitation of natural resources as risk factors for biodiversity loss and should be considered in conversations surrounding modern conservation efforts.

Keywords: Quaternary period, megafauna, extinction, Paleoindians, optimal foraging

The plains of prehistoric North America were once home to some of the largest mammals ever to walk the planet. These large mammals, or megafauna, are defined with respect to their taxonomic group and typically denote species weighing over 1,000 kg (Lupo & Schmitt, 2016). Megafauna went extinct almost systematically during the latter part of the Pleistocene Epoch, the period of Earth’s history from 2.6 million years ago to 11,700 years ago (Johnson, 2018). The Quaternary extinction event, which began around 12,000 years ago, saw the demise over the following 2,000 years of 57 species of megafauna representing 35 genera, including the popularly known woolly mammoth and two other proboscideans, the taxonomic order to which mammoths and elephants belong (Bulte et al., 2005; Surovell & Waguespack, 2009). Against a background extinction rate for these animals of roughly one species per 40,000 years, the loss of life during this period was tremendous (Bulte et al., 2005).

Two leading theories aim to explain the Quaternary extinction event. First, a significant degree of climate change occurred during the Pleistocene, which some scholars blame for megafaunal extinction (Bulte et al., 2005). Changes in vegetation occurred in response to the formation and subsequent movement of ice sheets across the North American landscape. As the theory goes, the foraging efficiency of large herbivores decreased as they struggled to adapt to glacial activity, which proved detrimental to their survival (Bulte et al., 2005). The second theory, referred to as the overkill hypothesis, suggests it was not changes in climate but early humans who hunted large game species to extinction. Archaeological evidence suggests that the arrival of the first humans in the Americas, the Paleoindians, and the first megafaunal extinctions occurred roughly in tandem (Bulte et al., 2005). Known for distinctive spear points thought to have been used to hunt large game, the aptly named Clovis people are among the best known of these early hunter-gatherers (Waguespack & Surovell, 2003). Thus emerged the idea of humans as super-predators who drove these prehistoric beasts to extinction.

Because of the limits of what the archaeological record can reveal, the true cause of the Quaternary extinction remains highly contentious within the scientific community. Like many facets of early human behavior, the subsistence strategies of Paleoindians are open to multiple, often conflicting interpretations that further complicate this debate. Yet behavioral models developed through years of research and ethnographic observation may help elucidate these matters. An analysis grounded in optimal foraging theory predicts a generalized foraging strategy for Paleoindian hunter-gatherers, as opposed to large-game specialization or a reliance on large game as a primary food source, discounting the plausibility of the overkill hypothesis.

As a theoretical framework, optimal foraging theory aims to predict how a forager will behave in pursuit of a food source. Within this theory are several decision sets used to determine how prey will be valued, pursued, and handled (Winterhalder, 1987). Each dimension illustrates a step in the process of obtaining food, from the moment an individual starts searching until the food is processed and ready to be consumed. An assessment of the diet in its entirety must then consider the difficulties a forager experiences while progressing through each set of decisions. This framework will be used to assess the plausibility of the overkill hypothesis by examining the likelihood of megafauna as a staple of the Paleoindian diet. In particular, the inclusion of mammoths will be analyzed in greater depth, as they are the only megafaunal species with a clear association to Paleoindian kill sites (Byers & Ugan, 2005).

Due to their size and consequent caloric value, mammoths and other megafauna such as mastodons and woolly rhinoceroses have long been hypothesized to have been highly ranked prey for early humans. Evidence at Paleoindian kill sites demonstrates that mammoths were successfully hunted by these people. In a 2003 study examining 33 known Clovis archaeological sites, remains of proboscideans were present in 79% of assemblages, making them the most commonly occurring genus among all faunal discoveries (Waguespack & Surovell, 2003). In zooarchaeological theory, body size is considered a strong indicator of prey rank (Lupo & Schmitt, 2016). A woolly mammoth is estimated to provide roughly 7,000 pounds of edible tissue (Byers & Ugan, 2005). Within the framework of optimal foraging theory, this measurement characterizes mammoths as highly desirable prey in the eyes of a hungry Paleoindian.

The diet-breadth model predicts which, and how many, types of food resources a forager will pursue based on what they could encounter. The simple yet fundamental assumption of this model is that a forager’s goal is to maximize their net rate of energy intake (Winterhalder, 1987). Prey are categorized as high value if they yield significant post-encounter return rates, measured in kilocalories of meat obtained per hour of handling time (Lupo & Schmitt, 2016). Handling time is the average time spent, once prey is encountered, pursuing, dispatching, and preparing the animal for consumption (Smith et al., 1983). Byers and Ugan (2005) assess that post-encounter return rates would need to reach upwards of 30,000 kilocalories per hour to maintain the appeal of a specialized diet. As prey handling times increase, significant constraints on the diet-breadth model complicate how large game may be ranked (Winterhalder, 1987). Megafaunal specialization could therefore be maintained successfully only under conditions in which mammoths existed in abundance, were easily killed, and took little time to process (Byers & Ugan, 2005).
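The model’s core logic can be sketched numerically. The following minimal sketch (in Python) uses hypothetical kilocalorie and handling-time values, not figures from the cited studies; it shows only the standard inclusion rule, under which a prey type is pursued on encounter only if its post-encounter return rate exceeds the overall return rate of the diet without it:

```python
def post_encounter_return_rate(kcal_yield: float, handling_hours: float) -> float:
    """Kilocalories obtained per hour of handling (pursuit + processing) time."""
    return kcal_yield / handling_hours

def include_in_diet(prey_rate: float, diet_rate_without_prey: float) -> bool:
    """Diet-breadth inclusion rule: pursue a prey type on encounter only if
    its post-encounter rate beats the overall rate of foraging without it."""
    return prey_rate > diet_rate_without_prey

# Hypothetical illustration: a deer-sized prey yielding 50,000 kcal over
# 2.5 hours of handling, weighed against an overall diet rate of 15,000 kcal/hr.
deer_rate = post_encounter_return_rate(50_000, 2.5)   # 20,000 kcal/hr
print(include_in_diet(deer_rate, 15_000))             # True: worth pursuing
```

Note that search time plays no role in this rule; once prey is encountered, only the post-encounter rate matters, which is why abundance enters the model separately through encounter rates.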

Mammoths likely occurred at low densities across the Pleistocene landscape. A negative correlation exists between animal size and population density, one observable in present-day environments (Byers & Ugan, 2005). Modern elephants serve here as a surrogate for data unobtainable from actual mammoth remains; elephants and mammoths share a common ancestor (a recent relationship in evolutionary terms) and are comparable in size.

African elephants today exist at an average density of 1.3 animals per square kilometer, and Indian elephants at an even lower density of 0.6 animals per square kilometer (Waguespack & Surovell, 2003). Extrapolating from these data predicts an equally low rate of occurrence for mammoths. Larger animals also tend to be distributed patchily, meaning groups of mammoths most likely occurred in clumps on the landscape, as opposed to uniformly distributed species, which are more easily discovered and isolated by predators (Byers & Ugan, 2005). Due to predation by a multitude of carnivores, large-bodied herbivores of the late Pleistocene likely maintained small populations. Several studies theorize that populations of megafauna were receding even before the Clovis people entered the Americas (Byers & Ugan, 2005).

The predicted encounter rate for mammoths likely does not support human specialization in hunting megafauna (Byers & Ugan, 2005). Even granting that mammoths were highly ranked prey, their presumed scarcity would predict a broader and less specialized Paleoindian diet. Most decision sets within optimal foraging theory assume foragers behave in a manner that maximizes their net rate of energy return per unit of foraging time. Byers and Ugan (2005) estimate hunters would need to slay a mammoth once per hour of search time to maintain a sustainable balance of energy expended and gained. Long search times can be particularly detrimental to overall return rates for highly ranked prey (Smith et al., 1983), and opportunity cost increases as search times lengthen. Foraging behavior can be modeled on a marginal value curve, where, beyond a certain point, continuing to search for a specific prey animal loses its value (Winterhalder, 1987). Simply put, the longer a forager spends hunting only one type of prey, the greater the potential gain lost by not pursuing alternative food resources. This results in a conservation effect, in which a higher payoff is achieved by foragers who do not exhaust resources and thereby maintain tolerable search times. Pleistocene foragers would presumably have moved between resource patches at a rate that would not cause local prey extinctions (Smith et al., 1983).
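The effect of long search times can be made concrete with a brief sketch. All numbers below are hypothetical illustrations, not the cited estimates; the point is simply that the overall return rate, kilocalories gained per total hour of searching plus handling, falls steadily as the search time for a scarce prey type grows:

```python
def overall_return_rate(kcal_gained: float, search_hours: float,
                        handling_hours: float) -> float:
    """Net energy intake per unit of total foraging (search + handling) time."""
    return kcal_gained / (search_hours + handling_hours)

# Hypothetical kill worth 1,000,000 kcal after 100 hours of handling.
# As search time stretches from 1 hour toward 100 hours, the overall rate
# drops, and the forgone alternatives (opportunity cost) loom larger.
for search in (1, 10, 100):
    rate = overall_return_rate(1_000_000, search, 100)
    print(f"search {search:>3} h -> {rate:,.0f} kcal/hr")
```

For scarce, patchily distributed prey such as mammoths, search time dominates this denominator, which is why encounter rates, not prey size alone, govern whether specialization pays.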

Even assuming mammoths were abundant enough to support Paleoindian hunting rates, further costs of hunting the animal emerge, beginning with pursuit costs. Pursuit time refers to the time spent stalking, chasing, and dispatching prey, including time lost to failed hunts. Pursuit time, contingent on encounter rates, tends to increase with prey body size (Lupo & Schmitt, 2016). Mammoths are estimated to have weighed over six tons, almost 100 times the weight of an average human, a predator-prey size ratio virtually unheard of in the natural world. Wolves that target adult moose, prey roughly eight times their size, do not approach this ratio. For prey that are disproportionately small or large compared to the hunter, capture is often difficult, decreasing net caloric return rates (Surovell & Waguespack, 2009).

The aforementioned failed hunt, a key element of pursuit costs, occurs when prey is pursued but hunters are unable to capture or kill the animal. Today’s African elephants are known for thick skin that allows them to withstand multiple attacks and escape when hunted (Lupo & Schmitt, 2016). The thick fur coat mammoths once bore provides further evidence of these animals’ natural protection. Modern ethnographic accounts of large-game pursuit document low success rates, a phenomenon observable across continents, from the Indigenous Martu people of Australia to the Hadza hunter-gatherers native to Tanzania. The latter group suffers a 97% failure rate for any individual pursuing an African elephant on a given day (Lupo & Schmitt, 2016). Foraging groups sensitive to these risks are more likely to adopt a wider diet breadth (Surovell & Waguespack, 2009).

Preparation time for consumption of megafauna also increases the cost of hunting these animals. Larger animals take longer to butcher, risking meat spoilage if this is not done in a timely manner. To consume the meat, hunters must either transport the carcass or move their entire camp and community to the kill site, both options exhausting considerable time and energy. Ethnographic accounts of elephant hunts establish an average of 40–52 men required to carry flesh and bones, or around half that number to transport dried meat back to camp (Lupo & Schmitt, 2016). Total handling time for an animal the size of a mammoth has been estimated to range from 75 to 187.5 hours, an extremely taxing time frame (Byers & Ugan, 2005). Byers and Ugan (2005) grant a 105–175 hour window of processing time, depending on fat content, within which post-encounter return rates justify megafaunal specialization. Given an estimated mean handling time of around 131 hours, including mammoths in the diet would in many cases yield fewer kilocalories than were expended. To compensate for these constraints, Paleoindians may have exploited only part of a carcass once killed. While cutting processing costs, this practice nonetheless produces waste, and partial utilization of a kill reflects a drop in overall return rates in kilocalories per unit of pursuit and processing time (Byers & Ugan, 2005). Thus, the processing costs of procuring a mammoth were likely not worth the effort compared to the meat obtainable by exploiting certain mid-sized game, including bovids and deer.
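These figures can be combined into a rough sensitivity check. The sketch below uses the source’s estimates of roughly 7,000 pounds of edible tissue per mammoth, a 75–187.5 hour handling range, and the 30,000 kilocalorie-per-hour specialization threshold; the energy density of 500 kilocalories per pound is a hypothetical stand-in for lean meat, since actual values would vary considerably with fat content:

```python
EDIBLE_LBS = 7_000    # estimated edible tissue per mammoth (Byers & Ugan, 2005)
KCAL_PER_LB = 500     # hypothetical energy density for lean meat (assumption)
THRESHOLD = 30_000    # kcal/hr needed to justify specialization (Byers & Ugan, 2005)

total_kcal = EDIBLE_LBS * KCAL_PER_LB   # 3,500,000 kcal per carcass

# Sweep the low end, mean, and high end of the estimated handling range.
for handling_hours in (75, 131.25, 187.5):
    rate = total_kcal / handling_hours
    verdict = "meets" if rate >= THRESHOLD else "falls short of"
    print(f"{handling_hours:>6} h -> {rate:,.0f} kcal/hr ({verdict} threshold)")
```

Under this assumed energy density, the fast end of the handling range clears the threshold while the mean and slow ends do not, illustrating the sensitivity to processing time and fat content that Byers and Ugan (2005) describe.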

Hunting megafauna such as elephants is rare among present-day African hunter-gatherer groups (Byers & Ugan, 2005). Modern elephant hunters are select individuals with the training and knowledge to successfully dispatch an elephant, and those who attempt it are often motivated more by the social prestige that accompanies a large kill than by sustenance, as is the case with the Hadza elephant hunters known as tûmas. Even then, elephants reportedly were not procured often (Lupo & Schmitt, 2016). Furthermore, there is little evidence of group dynamics of this nature within small, early human societies, so the prestige of a mammoth kill remains entirely speculative.

As for the mammoth remains found in association with Paleoindian kill sites, scholars argue that these sites are subject to a discovery bias that exaggerates how often these animals were actually taken for food. Mammoth-bearing archaeological sites are more easily discovered because of the considerable size of proboscidean bones, and they are also afforded considerable research attention given the implications of such finds (Surovell & Waguespack, 2009). Many archaeological sites dated to the Pleistocene Epoch also include fossil evidence of small and medium-bodied prey such as camel and bison, which is often overlooked in the discourse surrounding the overkill hypothesis (Byers & Ugan, 2005; Waguespack & Surovell, 2003).

Although Paleoindians are known to have hunted mammoths, through the lens of optimal foraging theory the proposed search and handling times for procuring megafauna do not support a specialized diet. Within the diet-breadth model, it is efficient to include prey in the diet only if it produces overall return rates higher than the energy expended obtaining the kill. The proposed search and handling times associated with procuring a mammoth decrease overall return rates to an unsustainable level, with Paleoindians expending more energy than they gained by exploiting megafauna (Lupo & Schmitt, 2016). When assessing the roles of both climate change and human overkill in the Quaternary extinction, it is likely that the late Pleistocene climate exacerbated the effects of Paleoindian hunting practices in a deadly fashion. This combination of fatal circumstances was not limited to North America; climate change is considered to have played a large role in South American megafaunal extinctions as well (Metcalf et al., 2016). That region faced an even greater decline in biodiversity, with an astounding loss of 83% of megafaunal genera. The archaeological record suggests that despite instances of hunting these species, early humans coexisted with megafauna for a minimum of 1,000 years, and palaeoclimatological reconstructions of the Pleistocene show that climate change was prevalent in South America as well, directly contributing to extinctions (Metcalf et al., 2016). Considering the circumstances surrounding the loss of megafauna on both American continents, it is likely that no single factor, such as overhunting, caused the extinction events alone.

Ultimately, optimal foraging theory does not support the hypothesis that human hunting alone caused the extinction of the vast majority of North American megafauna. The likely interaction was between species already in decline due to a variety of factors and a foreign predator, technologically and cognitively advanced in an unprecedented way. Whatever the true cause, the Quaternary extinction event carries serious implications regarding the dangers of climate change as well as human overexploitation of natural resources.


Bulte, E., Horan, R. D., & Shogren, J. F. (2005). Megafauna extinction: A paleoeconomic theory of human overkill in the Pleistocene. Journal of Economic Behavior & Organization, 59(3), 297-323.

Byers, D. A., & Ugan, A. (2005). Should we expect large game specialization in the late Pleistocene? An optimal foraging perspective on early Paleoindian prey choice. Journal of Archaeological Science, 32(11), 1624-1640.

Johnson, W. H. (2018). Pleistocene Epoch. Encyclopedia Britannica.

Lupo, K. D., & Schmitt, D. N. (2016). When bigger is not better: The economics of hunting megafauna and its implications for Plio-Pleistocene hunter-gatherers. Journal of Anthropological Archaeology, 44, 185-197.

Metcalf, J. L., Turney, C., Barnett, R., Martin, F., Bray, S. C., Vilstrup, J. T., … & Cooper, A. (2016). Synergistic roles of climate warming and human occupation in Patagonian megafaunal extinctions during the Last Deglaciation. Science Advances, 2(6), e1501682.

Smith, E. A., Bettinger, R. L., Bishop, C. A., Blundell, V., Cashdan, E., Casimir, M. J., . . . Stini, W. A. (1983). Anthropological Applications of Optimal Foraging Theory: A Critical Review [and Comments and Reply]. Current Anthropology, 24(5), 625-651.

Surovell, T. A., & Waguespack, N. M. (2009). Human Prey Choice in the Late Pleistocene and Its Relation to Megafaunal Extinctions. In G. Haynes (Ed.), American Megafaunal Extinctions at the End of the Pleistocene (pp. 77-105). Dordrecht: Springer.

Waguespack, N. M., & Surovell, T. A. (2003). Clovis hunting Strategies, or How to Make out on Plentiful Resources. American Antiquity, 68(2), 333-352.

Winterhalder, B. (1987). The Analysis of Hunter-Gatherer Diets: Stalking an Optimal Foraging Model. In M. Harris and E. Ross (Eds.), Food and Evolution: Toward a Theory of Human Food Habits (pp. 311-339). Philadelphia: Temple University Press.

Acknowledgements: I thank Dr. Bram Tucker for providing valuable feedback over the course of producing this piece.

Citation Style: APA