Modern conservation and environmental management rely on data. Unless you can actually show cold, hard evidence of natural deterioration, you open yourself up to criticism from denialists and other eco-skeptics. It is too easy for industry lobbyists to dismiss conservation recommendations as tree-hugger scare-mongering.
So conservationists, being the idealists that we are, set out to gather evidence of downward trends in various aspects of biodiversity. Unfortunately, quantifying biodiversity trends is a major challenge – not because measuring trends in diversity is particularly difficult, but because long-term monitoring is susceptible to sampling artefacts.
I’m a bit late to writing about this, but two influential meta-analyses (Vellend et al. 2013; Dornelas et al. 2014) gathered mountains of existing time-series data and found no evidence of net change in local species richness over time. I can’t criticise the way these scientists analysed the data; in fact, I am in awe of the way they not only analysed trends in diversity, but also compared observed trends to a suite of null models. They did everything they possibly could with the available data, but I am still unconvinced by their conclusions.
My scepticism is due to intrinsic limitations of time-series data.
Any introductory statistics course will teach you that the statistical characteristics of a sub-sample will approach those of the broader population as the number of randomly selected, independent samples increases. The trouble with drawing general conclusions from time-series data is that the individual time series are neither independent nor randomly selected, regardless of sample size.
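To make this concrete, here is a toy simulation of the point. All the numbers are invented for illustration (they do not come from Vellend et al., Dornelas et al., or any real dataset): a random sample's mean converges on the population mean, but a non-randomly selected sample stays biased no matter how large it is.

```python
import random

random.seed(42)

# Hypothetical "population" of site-level richness trends (slopes).
# Assumption: most sites decline slightly; values are illustrative only.
population = [random.gauss(-0.5, 1.0) for _ in range(100_000)]
pop_mean = sum(population) / len(population)

# Random sampling: the sample mean converges on the population mean.
random_sample = random.sample(population, 5000)
random_mean = sum(random_sample) / len(random_sample)

# Biased selection: only sites with mild trends get sampled
# (e.g. monitoring persists only where decline is slow).
biased_sample = [x for x in population if x > -0.5][:5000]
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"population mean:    {pop_mean:.2f}")
print(f"random-sample mean: {random_mean:.2f}")  # close to population mean
print(f"biased-sample mean: {biased_mean:.2f}")  # stays biased upward
```

Increasing the biased sample's size does nothing to remove the bias – more data of the wrong kind is still the wrong kind.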
Obviously, time-series data are only available for geographical localities with long-term monitoring programs. In practice, this means that these data are biased towards temperate regions in the developed world (simply because wealthier countries are more likely to collect data over several decades). This is certainly not ideal, because the greatest loss of diversity is expected in developing tropical countries.
Even if we discount this unequal geographical spread of sampling localities, there is also ‘survivorship bias’. If given a choice, most practical ecologists would prefer relatively secure habitats – like experimental forest plots or wetlands in protected areas – if they have to invest the time and energy collecting long-term data. As such, we only ever hear about the time-series data that were continuously monitored for sufficiently long periods. We’ll never know how many long-term monitoring programs were abandoned before completion because of some form of habitat transformation.
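A toy sketch of this attrition effect may help. The plot counts, transformation rate, and richness changes below are entirely hypothetical assumptions chosen for illustration: plots whose habitat is transformed lose most of their richness, but their time series are abandoned and so never reach the meta-analysis.

```python
import random

random.seed(1)

# Hypothetical landscape of 1000 monitored plots. Assumption (not from
# the post): 30% suffer habitat transformation during the monitoring
# period; their series are abandoned before completion.
n_plots = 1000
plots = []
for _ in range(n_plots):
    transformed = random.random() < 0.30
    if transformed:
        change = random.gauss(-80, 10)  # near-total richness loss (%)
    else:
        change = random.gauss(0, 5)     # no clear local trend (%)
    plots.append((transformed, change))

true_mean = sum(c for _, c in plots) / n_plots
surviving = [c for t, c in plots if not t]  # abandoned series drop out
observed_mean = sum(surviving) / len(surviving)

print(f"true mean richness change:  {true_mean:.1f}%")
print(f"observed (survivors only):  {observed_mean:.1f}%")
```

Under these invented numbers, the surviving series show roughly no net change even though the landscape as a whole lost a large share of its local richness – exactly the 'no net change' pattern that worries me.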
I’ve got personal experience with the phenomenon of survivorship bias. I was once involved in the biomonitoring of an infrastructure development project on a large (> 50 000 ha) property. We set up survey plots on parts of the property that were (a) not immediately earmarked for development and (b) far enough from the main development activity to be safe for civilian access (according to health and safety regulations). After several years of monitoring, we found no clear trend in the state of biodiversity at our sampling localities. This, of course, only implied that the infrastructure development had no clear secondary impacts on the surrounding biodiversity. Our data said nothing about any primary impacts, which we can only assume included the complete destruction of diversity at the activity zone (because the natural habitat is now covered by roads and concrete).
Needless to say, I remain sceptical about the conservation significance of meta-analyses of time-series data. I think they remain cool scientific exercises, but to draw any applied conclusions from these analyses beyond the limited extent of the sampling locality is, in my opinion, very short-sighted¹. I don’t have any solutions to these clear shortcomings, but I’d really appreciate anyone who can convince me otherwise.
¹ I am not implying that the authors of these types of studies are short-sighted. On the contrary, they carefully avoid making any naive statements in their manuscripts because they know better.