During my time working with the South Atlantic Landscape Conservation Cooperative (South Atlantic LCC), a large part of my focus has been helping to synthesize, create, streamline, and revise the spatial models of indicators for integration into Conservation Blueprint 2.0. During the workshops held this past spring, people asked hundreds of questions about individual indicators, but one question in particular stuck in my mind: What makes a model good? From my perspective, the answer is simple: science!

First, let’s explore what models are all about. A model is broadly defined as a simplification of reality. For example, below is a model depicting the density of biscuit-eating people within the South Atlantic LCC at a 10-meter resolution. Clearly, it is a simplification of reality and is therefore a model. When first presented with a distribution model, people often have one of two reactions: they look for where the model is true, or for where it is false, given what they know. For example, “My mom lives in South Carolina and she eats lots of biscuits…This model looks great. Very accurate!” Meanwhile, a more skeptical person will say something like, “It can’t be right. I know those folks in Atlanta aren’t eating that many biscuits!” Now, I’m a believer that anecdotes are incredibly helpful contributions to model development and can guide modeling along, but anecdotes are not to be confused with scientific evidence. For the moment, we can say, yes, this is a model, and perhaps the best available model of the density of biscuit-eating people. No model, including the biscuit model, is perfect, but what makes a model good?

Most notably, we need to explore how models interact with science. One may look at the biscuit model and decide that it looks good (and yummy!). It might have been produced by a renowned and trusted modeler like the Pillsbury Dough Boy, and therefore people may tend to believe it. But it has not yet been scientifically tested. To test it scientifically, Dr. Dough Boy would need to check whether the modeled density matches the eating habits of a sample of people. This testing would provide evidence for, or against, the model and would give it a chance to be wrong. Only then would it be a scientifically tested model. From the perspective of Sir Karl Popper, one of the most influential philosophers of science, science must have hypotheses that are “falsifiable” (Popper 1935, 1959). These hypotheses must be subjected to scientific testing, and they must have a chance to fail what Popper termed “decisive experiments” (Popper 1935, 1959). If there is no testing, we cannot claim it as science. For example, we may have a model of Giant Squid distribution, but if the model never has a chance to fail via scientific testing, it remains an untested hypothesis.

Within the scientific community, statistical testing is the dominant form of testing, and models are usually compared against random (i.e., is the predicted distribution in any particular area more accurate than a flip of a coin?). In the case of a model of biscuit-eating people, scientific testing would compare the distribution model against a randomly derived model (see the random biscuit model below). If our biscuit model performs better than random, we have evidence that the model works (i.e., model validation!). Beyond the test against random, there are other helpful measures of accuracy, but the accuracy required of a model depends on individual circumstances. One key benefit of a scientifically tested model is that its uncertainty is quantified, so we can assess whether the model is accurate enough to meet our specific needs. Without scientific testing and subsequent validation, uncertainty may be difficult, or impossible, to determine.
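To make the test against random concrete, here is a minimal sketch in Python of how such a comparison might look. The survey data and model predictions below are made up purely for illustration, and a real validation would use field observations and an appropriate statistical test rather than this simple permutation approach.

```python
# A minimal sketch of testing a presence/absence model against random.
# All data here are simulated placeholders, not real survey results.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical survey: 1 = biscuit eater observed at a site, 0 = not.
observed = rng.integers(0, 2, size=200)

# Hypothetical model predictions that agree with the survey ~70% of the time.
predicted = np.where(rng.random(200) < 0.7, observed, 1 - observed)

model_accuracy = np.mean(predicted == observed)

# Null distribution: the accuracy of many coin-flip "models".
null_accuracy = np.array([
    np.mean(rng.integers(0, 2, size=200) == observed)
    for _ in range(10_000)
])

# One-sided p-value: the chance a random model does this well or better.
p_value = np.mean(null_accuracy >= model_accuracy)
print(f"Model accuracy: {model_accuracy:.2f}, p-value vs. random: {p_value:.4f}")
```

If the p-value is small, we have evidence that the model beats a flip of a coin, which is exactly the first hurdle described above.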

Developing scientifically tested models for an area the size of the South Atlantic can be a daunting task, particularly for species. I have struggled with this myself, as I helped develop and successfully validate a Bachman’s Sparrow model from survey data limited to North Carolina. The model currently assumes the habitat relationships quantified within North Carolina are exactly the same throughout the South Atlantic LCC. According to input from collaborators and others, this is most likely wrong. Knowing these limitations, I have to remind myself that we are in the business of conserving species, not conserving models…Improvements will need to be made. Fortunately, several of our indicators are based on raw data with known uncertainty and limitations. For example, Red-cockaded Woodpecker clusters have been mapped directly from extensive surveys. Other indicator models are only slightly tweaked from scientifically tested models; for example, several indicators, including estuarine wetland patch size, impervious surface, and riparian buffers, are based on National Land Cover Database models. Still others are based on relatively simplistic transformations of raw data, such as low road density, open water-vegetation edge, and the interpolation of Coastal Condition Index point sampling (see the sketch below). The indicator types listed above probably don’t require additional testing, although more up-to-date data is always helpful. Among the other indicators, there is still plenty of scientific testing yet to do, but I think we are on the right track for conservation in the South Atlantic.
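As an aside, here is what one of those “relatively simplistic transformations” might look like in practice: a short Python sketch of inverse-distance-weighted interpolation of point samples. The coordinates and values are invented for illustration, and the actual Coastal Condition Index interpolation may well use a different method.

```python
# A minimal sketch of inverse-distance-weighted (IDW) interpolation,
# one simple way to turn point samples into a continuous surface.
# The sample locations and values below are hypothetical.
import numpy as np

# Hypothetical sampling points: (x, y) locations and measured values.
points = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
values = np.array([2.5, 4.0, 3.2])

def idw(x, y, power=2.0):
    """Estimate a value at (x, y), weighting each sample by 1/distance**power."""
    dists = np.hypot(points[:, 0] - x, points[:, 1] - y)
    if np.any(dists == 0):  # exactly on a sample point
        return values[np.argmin(dists)]
    weights = 1.0 / dists**power
    return np.sum(weights * values) / np.sum(weights)

print(f"Interpolated value at (4, 3): {idw(4.0, 3.0):.2f}")
```

Even a transformation this simple carries assumptions (here, that nearby points are more informative than distant ones), which is why knowing the uncertainty of the underlying data still matters.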

Within the field of natural resources management, models have become as common as biscuits at Bojangles. I hope this blog will help you distinguish between the best available science and the best available model. This is a critical distinction, and your ability to tell the two apart might just move conservation in one direction or another. In my opinion, evidence that a model is at least better than random is a critical first step in determining whether a model is truly good. As a citizen concerned about our natural resources, I hope that our conservation decisions are based on scientific evidence whenever possible. Yes, collecting on-the-ground data may be difficult or expensive, but even complex 21st-century spatial models require scientific testing and validation if we are to claim them as scientific. Scientific models may not always be available when decisions are necessary, but acknowledging where science is lacking will point us toward knowledge gaps that need to be filled.

Bradley Pickens, bapicken@ncsu.edu

Postdoctoral Research Associate, North Carolina Cooperative Fish and Wildlife Research Unit

Disclaimer: Here, I have addressed models of the current distribution of a species or other characteristic of the landscape. There are many process-based models that project into the future (e.g., sea-level rise projections), but these are trained on large datasets that are scientifically collected and rigorously tested. They also rely on assumptions not relevant to our models of current distributions.

Figure 1. (left) Model of the density of biscuit-eating people in the South Atlantic region. Larger biscuits represent higher densities; (right) a random model of the density of biscuit-eating people. Both are models, but to pass the test of science, evidence needs to show that the model (left) is more accurate than the random model (right).

Literature Cited

Popper, K. R. 1935. Logik der Forschung. Verlag von Julius Springer, Vienna, Austria.

Popper, K. R. 1959. The Logic of Scientific Discovery. Basic Books, New York, USA.