The Balance of Nature?

To understand a population’s space use one needs to understand the individual’s space use. To understand the individual’s space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies “strange” fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical attacks. We need to address the challenge by rethinking also the very basic level of how we perceive an ecosystem’s constituents: how we assume individuals, populations and communities to relate to their surroundings in terms of statistical mechanics.

Stuart L. Pimm summarizes the Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding the limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates the need to extend the perspective from short-series local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.

Balancing on a straw – a Eurasian wren Troglodytes troglodytes (photo: AOG).

My own approach towards the Grand Ecological Challenge started with thoughts and concerns similar to those raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems, I found the theoretical toolbox to be void of key instruments for building realistic dynamics. In fact, the current methods were in many respects seriously misleading, due to what I considered some key dissonant model assumptions.

In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis”). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: “MRW and Ecology – Part IV: Metapopulations?”), for the time being culminating with my recent series of posts on “Simulating Populations” (parts I-X).

I guess the main take-home message from the present post is:

  • Without a realistic understanding – i.e., modelling power – of individual dispersion over space, time and scale, it will be futile to build a theoretical framework with deep explanatory and predictive value with respect to population dynamics and population ecology. In other words, some basic aspects of system complexity at the “particle level” need to be resolved.
  • Since we in this respect typically are considering either the accumulation of space use locations during a time interval (e.g., a series of GPS fixes) or a population’s dispersion over space and how it changes over time, we need a proper formulation of the statistical mechanics of these processes. In other words, when simplifying extremely complicated systems into a smaller, manageable set of variables, parameters and key interactions, we have to invoke the hidden layer.
  • With a realistic set of basic assumptions in this respect, the modelling framework will in due course be ready to be applied to issues related to the Grand Ecological Challenge – as so excellently summarized by Pimm in 1991. In other words, before we can have any hope of a detailed prediction of the local or regional fate of a given species or community of species under a given set of circumstances, we need to build models that are void of the classical system assumptions that have cemented the belief in the so-called balance of nature.

NOTES

*) The need to rethink the concept of carrying capacity and the accompanying “balance” (density-dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If “standard” population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the sum equals the parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the conditions at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.
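To make the distinction concrete, here is a minimal numerical sketch in Python. It is not the Zoomer model itself: the logistic growth rule, the parameter values and the neighbourhood weight w are illustrative assumptions only. It shows how two pixels with identical density respond identically under a mean field rule, but may respond differently once a coarser-scale neighbourhood term enters the dynamics.

```python
CC = 100.0  # local carrying capacity at the finest pixel scale (illustrative)

def mean_field_step(n, r=0.1):
    """Density change driven by within-pixel density only (mean field)."""
    return r * n * (1.0 - n / CC)

def multiscale_step(n, neighbourhood, r=0.1, w=0.2):
    """Toy multi-scaled rule (illustrative, not the actual Zoomer rules):
    a fraction w of the effective density is set by the coarser-scale
    neighbourhood rather than by the pixel's own density."""
    n_eff = (1.0 - w) * n + w * neighbourhood
    return r * n_eff * (1.0 - n_eff / CC)

# Two coarse pixels with the same average density (60), but embedded in a
# sparse vs a crowded neighbourhood at the next coarser scale.
print(mean_field_step(60.0), mean_field_step(60.0))               # identical
print(multiscale_step(60.0, 20.0), multiscale_step(60.0, 140.0))  # differ
```

Under the mean field rule the two pixels are interchangeable; under the multi-scaled rule their next-step change diverges, which is exactly why averaging over pixels with similar density no longer predicts the population change.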

**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).

REFERENCES

Gautestad, A. O. 2015. Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Indianapolis, Dog Ear Publishing.

O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.

Pimm, S. L. 1991. The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Strong, D. E., Simberloff, D., Abele, L. G. & Thistle, A. B. (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton, Princeton University Press.

Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.

Simulating Populations VIII: Time Series and Pink Noise

On rare occasions one’s research effort may lead to a Eureka! moment. While exploring and experimenting with the latest refinement of my Zoomer model for population dynamics I stumbled upon a key condition for tuning the model in and out of full coherence between spatial and temporal variability, from the perspective of a self-similar statistical pattern. Fractal-like variability over space and time has been repeatedly reported in the ecological literature, but rarely have such patterns been studied simultaneously as two aspects of the same population. My hope is that the Zoomer model also has a broader potential by casting stronger light on the strange 1/f noise phenomenon (also called pink noise, or flicker noise), which still tends to create paradoxes between empirical patterns and theoretical models in many fields of science.


First, consider again some basic statistical aspects of a Zoomer-based simulation. Above I show the spatial dispersion of individuals within the given arena; see previous posts I-VII in this series for other examples, system conditions and technical details. The new aspect in the present post is a toggle towards temporal variation of the spatial dispersion, and the fractal properties in that respect (called a self-affine pattern in a series of measurements, consistent with 1/f noise*). This phenomenon usually indicates the presence of a broad distribution of time scales in the system.

In particular, observe the rightmost log(M,V) regression. As in previous examples, the variable M in Taylor’s power law model V = aM^b represents – in my application of the law – a combination of the average number of individuals in a set of samples at different locations at a given scale, supplemented by samples of M at different scales. The latter normally contributes the most to the linearity of the log(M,V) plot, due to a better spread of the range of M. During the present scenario, which was run for 5,000 time steps after skipping an initial transient period, the self-similar and self-organized spatial pattern of slope b ≈ 2 and log(a) ≈ 0 dominated, only occasionally interrupted by short intervals with b << 2 and log(a) >> 0. These episodes were caused by simultaneous disruption of a relatively high number of local populations in cells that exceeded their respective carrying capacity level (CC), leading to much simultaneous re-positioning of these individuals to other parts of the arena in accordance with the actual simulation rules. Thus, some temporary “disruptive noise” could appear now and then, occasionally leading to a relatively scrambled population dispersion with b ≈ 1 (Poisson distribution) and log(a) >> 0 for short periods of time.
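For readers who want to reproduce this kind of log(M,V) regression on their own data, here is a minimal sketch in Python. The synthetic lognormal input and all variable names are my own illustrative assumptions, standing in for grouped abundance samples from the Zoomer arena (one group per location/scale combination).

```python
import numpy as np

def taylor_fit(groups):
    """Fit Taylor's power law V = a * M**b by regressing log10(V) on
    log10(M), where each group is a set of abundance samples (e.g.,
    counts in pixels of one size, or counts pooled to a coarser scale)."""
    M = np.array([g.mean() for g in groups])
    V = np.array([g.var(ddof=1) for g in groups])
    b, log_a = np.polyfit(np.log10(M), np.log10(V), 1)
    return b, log_a

# Toy input: lognormal counts with fixed sigma give variance roughly
# proportional to M**2, i.e. b close to 2 (a stand-in for Zoomer output).
rng = np.random.default_rng(0)
groups = [rng.lognormal(mean=np.log(m), sigma=0.7, size=200)
          for m in (2, 5, 10, 30, 80, 200)]
b, log_a = taylor_fit(groups)
print(f"b = {b:.2f}, log10(a) = {log_a:.2f}")
```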

The actual time series for the present example is given above, showing local density at a given location (a cell at unit scale) over a period of 5,000 increments. The scramble events due to many simultaneous population crashes in other cells over the arena can indirectly be seen as the strongest density peaks in the series. The sudden inflow of immigrants to the given locality caused a disruption to the monitored local population, which typically crashed during the next increment due to density > CC.

In many respects the series illustrates the “frustrating” phenomenon seen in long time series in ecology: the longer the observation period, the larger the accumulated magnitude of fluctuations (Pimm and Redfearn 1988, Pimm 1991). However, in the present scenario, where conditions are known, there is no uncertainty about whether the complex fluctuations are caused by extrinsic (environmental) or intrinsic dynamics. Since CC was set to be uniform over space and time and only varied due to some level of stochastic noise (a constant + a smaller random number), the time series above expresses an intrinsically driven, self-organized kind of fluctuation over space, time and scale.

The key lesson from the present analysis is to understand local variability as a function not only of local conditions, but also of coarser-scale events and conditions in the surroundings of the actual population being monitored. The latter is often referred to as long-range dependence. In the Zoomer model this phenomenon is rooted in the emergence of scale-free space use of the population, due to scale-free space use at the individual level (the Multi-scaled random walk model). In the present post I illustrate long-range dependence also in the temporal variability.

To remove the bulk of the serial autocorrelation, the original series above was sampled 1:80. The correlogram to the right shows this frequency to be a feasible compromise between full non-autocorrelation and a reasonably long remaining series length.

After this transformation towards observing the series at a different scale, the sampled series was subjected to log(Mt,V) analysis, where Mt represents – in a sliding window manner – the mean abundance over a few time steps of the temporally coarser-scaled series, and V represents these intervals’ respective variance.
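A sketch of this two-step procedure (1:80 subsampling followed by sliding-window mean and variance) might look as follows; the window length of five steps and the function name are my assumptions, chosen only to mirror the “few time steps” mentioned above.

```python
import numpy as np

def temporal_taylor(series, lag=80, window=5):
    """Subsample a local-density series 1:lag to damp serial
    autocorrelation, then compute the sliding-window mean (Mt) and
    variance (V) of the coarsened series for a log(Mt,V) regression."""
    coarse = np.asarray(series, dtype=float)[::lag]
    Mt, V = [], []
    for i in range(len(coarse) - window + 1):
        w = coarse[i:i + window]
        if w.mean() > 0 and w.var(ddof=1) > 0:
            Mt.append(w.mean())
            V.append(w.var(ddof=1))
    b, log_a = np.polyfit(np.log10(Mt), np.log10(V), 1)
    return b, log_a

# Example usage: b_t, log_a_t = temporal_taylor(local_density_series)
```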

Interestingly, the log(Mt,V) pattern was reasonably similar to the log(M,V) pattern from the spatial transect above; i.e., the marked line with b ≈ 2 and log(a) ≈ 0, provided that the series was analyzed at a sufficiently coarse temporal sampling scale to avoid most of the serial autocorrelation.

The most intriguing result is perhaps the figure below, showing the log-log transformed power spectrogram of the original time series**. Over a temporal scale range up to a frequency of 1,500 (the x-axis), the distribution of power (the y-axis; i.e., amplitudes squared) satisfies 1/f noise!

See my book for an introduction to this statistical “beast”***, where I devote several chapters to it. At higher frequencies, further down towards the “rock bottom” of 4096 (the dimmed area), the power shows inflation. This is probably due to an integer effect at the finest resolutions of the time series, since individuals always come as whole numbers rather than fractions. Thus, the finest-resolved peak in power was probably influenced by this effect.
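For completeness, a power spectrogram of this kind can be produced along the following lines. This is a sketch assuming a plain periodogram; the truncation to a power of 2 follows note **, and in practice one would exclude the inflated high-frequency band from the fit.

```python
import numpy as np

def spectral_exponent(series):
    """Estimate nu in S(f) ~ 1/f**nu from the log-log periodogram of a
    series truncated to a power-of-2 length (see note **)."""
    n = 2 ** int(np.log2(len(series)))           # e.g. 5000 -> 4096 steps
    x = np.asarray(series[:n], dtype=float)
    x = x - x.mean()                             # remove the DC component
    power = np.abs(np.fft.rfft(x)) ** 2          # squared amplitudes
    freq = np.fft.rfftfreq(n, d=1.0)
    keep = freq > 0                              # drop the zero frequency
    slope, _ = np.polyfit(np.log10(freq[keep]), np.log10(power[keep]), 1)
    return -slope                                # nu of the 1/f**nu spectrum
```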

To my knowledge, no other model framework has been able to simultaneously reproduce the often empirically reported fractal-like spatial population abundance (the spatial expression of Taylor’s power law) together with a fractal-like temporal variation (the temporal Taylor’s power law). Among the thousands of papers on this topic over the last 60 years or so, models have previously reproduced only either-or results.

For an introduction to 1/f noise in an ecological context, you may find this link to a review by Halley and Inchausti (2004) helpful. I cite:

1/f-noises share with ecological time series a number of important properties: features on many scales (fractality), variance growth and long-term memory. (…) A key feature of 1/f-noises is their long memory. (…)  Given the undoubted ubiquity and importance of 1/f-noise, it is surprising that so much work still revolves about either observing the 1/f-noise in (yet more) novel situations, or chasing what seems to have become something of a “holy grail” in physics: a universal mechanism for 1/f-noise. Meanwhile, important statistical questions remain poorly understood.

What about ecological methods based on the theory above? By exploring the statistical pattern of animal dispersion over space and its variability over time – and mimicking this pattern in the Zoomer model – a wide range of ecological aspects may be studied within a hopefully more realistic framework than the presently dominating approach using coupled map lattice models or differential equations. Statistical signatures of quite novel kind can be extracted from real data and interpreted in the light of complex space use theory.

In upcoming posts I will on one hand dig into the set of Zoomer model “knobs” that steer the statistical pattern in and out of – for example – the 1/f noise condition, and on the other hand I will exemplify the potential for ecological applications of the model.

REFERENCES

Halley, J. M. and P. Inchausti. 2004. The increasing importance of 1/f noises as models of ecological variability. Fluctuation and Noise Letters 4:R1–R26.

Pimm, S. L. 1991, The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Pimm, S. L., and A. Redfearn. 1988. The variability of animal populations. Nature 334:613-614.

NOTES

*) “1/f-noise” refers to 1/f^ν-noise for which 0 ≤ ν ≤ 2; “near-pink 1/f-noise” refers to cases where 0.5 ≤ ν ≤ 1.5, and “pink noise” refers to the specific case where ν = 1. All 1/f^ν-noises are defined by the shape of their power spectrum S(ω):

S(ω) ∝ 1/ω^ν

Here ω = 2πf is the angular frequency. “White noise” is characterized by ν ≈ 0, and its integral, “red” or “brown” noise, has ν ≈ 2. A classical random walk along a time axis is an example of the latter, and its derivative produces white noise.
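To experiment with these noise categories directly, one can synthesize 1/f^ν noise by shaping the Fourier amplitudes of white noise – a standard spectral method, sketched here (the function name is my own):

```python
import numpy as np

def fnu_noise(n, nu, rng=None):
    """Generate 1/f**nu noise by shaping white-noise Fourier amplitudes:
    S(f) ~ 1/f**nu means amplitude ~ f**(-nu/2). nu = 0 gives white
    noise, nu = 1 pink noise, and nu = 2 brown noise (integrated RW)."""
    rng = rng or np.random.default_rng()
    spec = np.fft.rfft(rng.standard_normal(n))
    freq = np.fft.rfftfreq(n, d=1.0)
    freq[0] = np.inf                  # suppress the zero-frequency term
    spec *= freq ** (-nu / 2.0)
    return np.fft.irfft(spec, n)

pink = fnu_noise(4096, nu=1.0)        # a pink-noise series for comparison
```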

**) Since an FFT analysis requires a series length satisfying a power of 2, only the first 4096 steps were included.

***) 1/f noise is challenging, both mathematically and conceptually:

The non-integrability in the cases ν ≥ 1 is associated with infinite power in low frequency events; this is called the infrared catastrophe. Conversely for ν ≤ 1, which contains infinite power at high frequencies, it is called the ultraviolet catastrophe. Pink noise (ν = 1) is non-integrable at both ends of the spectrum. The upper and lower frequencies of observation are limited by the length of the time series and the resolution of the measurement, respectively.
Halley and Inchausti (2004), p R7.


The Hidden Layer

Focusing on the statistical pattern of space use without acknowledging the biophysical model for the process will create much confusion and unnecessary controversy. Ecologists are now forced to get a better grip on concepts from statistical mechanics than earlier generations were. For example, to understand the transformation from data on actual behaviour to pattern analysis of space use, the concept of the hidden layer represents the first gate to pass.

Research on animal movement and space use has always had a central place in ecology. However, as more field data, better computers and more sophisticated statistical methods have become available, some old dogmas have come under attack. Specific theoretical aspects of this quest for improved model realism have emerged from the rapidly growing cooperation between biologists and physicists in the emerging field of macro-level biophysics. The so-called Lévy flight foraging hypothesis is one example. And, of course, I can’t resist mentioning the MRW theory.

A booted eagle Hieraaetus pennatus is triggering a flock of spotless starlings Sturnus unicolor to show swarming behaviour. Malaga river delta, December 2017. Photo: AOG.

In 1985 Charles Krebs described ecology as the scientific study of the interactions that determine the distribution and abundance of organisms. In an ethological context animal space use is studied on two levels – tactical and strategic. The tactical level regards understanding individual biology and behavioural ecology on a moment-to-moment temporal scale. Strategic space use adds an extra layer of complexity to the tactical behaviour. In a simplistic manner we may refer to this layer as the animal’s state at a given moment in time; for example, whether it is hungry or not (e.g., in hunting mode). Strategy also involves processing of memory-based goals. Strategies are executed at coarser time scales than tactics. Some of the interaction between tactics and strategy may then – under specific conditions (see below) – be transformed into dynamic models at the tactical level; so-called mechanistic models, which consist of a set of executable rules covering the respective cognitive and environmental conditions. Validating the model dynamics and resulting statistical patterns against real animal data then rates the degree of model realism. For example, realistic tactical models have been developed to cast light on the “clumping behaviour” (dense swarming) of flocks of birds that are threatened by a raptor.

The myriad of rules that influence animal movement makes detailed modelling an impossible task, and would anyway only lead to a descriptive picture with no value to ecological hypothesis testing. In fact, the signature of successful modelling is simplification. Thus, only specific aspects of the reference individual’s behaviour can be included and scrutinized.

The present post addresses one particular aspect of system simplification: coarse-graining the temporal scale. This approach implies a qualitative change of how the space use system is observed and analyzed. Actually, temporal coarse-graining is forced upon us when studying animal space use by sampling an individual’s successive displacements as a series of locations (fixes) during a given period of time. During each inter-fix interval the observed displacement regards the resultant vector from a myriad of intermediate and unobserved events. What has happened to the moment-to-moment kind of behavioural ecology? It has become buried below the hidden layer.
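A tiny numerical illustration of this point: the 100 unit steps with uniformly random headings below are a stand-in assumption for whatever fine-scale behaviour actually took place between two observed fixes.

```python
import numpy as np

rng = np.random.default_rng(2)

# 100 "hidden" unit steps between two observed fixes A and B.
headings = rng.uniform(0.0, 2.0 * np.pi, size=100)
steps = np.column_stack([np.cos(headings), np.sin(headings)])

# The observed displacement A -> B is the resultant vector of all the
# intermediate, unobserved steps; their individual causality is buried
# below the hidden layer.
AB = steps.sum(axis=0)
print(AB, np.linalg.norm(AB))  # one coarse "step", ~sqrt(100) = 10 units
```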

At the surface of this hidden layer you lose sight of behavioural details (like raptor response and swarming rules) but you gain access to an alternative perspective of movement and space use. Alternative statistical descriptors are emerging at this temporally coarser scale, following the laws of statistical mechanics. What is analyzed above the hidden layer is the over-all pattern from many displacements events that are aggregated into a spatial scatter of fixes.

For example, you may coarse-grain both the temporal and spatial system dimensions, and study the aggregated distribution of fixes at the spatial scale of virtual grid cells (pixels) and the temporal scale of the fix sampling period. The spatio-temporal variation in intensity of space use within the actual space-time extents then allows for modelling and hypothesis testing, but now using statistical-mechanical descriptors of space use intensity. These descriptors are either not valid below the hidden layer (e.g., the information content of local density of fixes) or they have an alternative interpretation (e.g., movement as a “step” versus movement as a resultant vector for a given interval and location). Both levels of analysis require large sets of input to allow for statistical treatment.
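As a sketch of such spatial coarse-graining (the function name, arena extent and pixel size are my own arbitrary choices):

```python
import numpy as np

def fix_density_grid(fixes, pixel, extent):
    """Coarse-grain a set of (x, y) fixes into virtual grid cells and
    return the count of fixes per pixel -- a space-use intensity
    descriptor that only exists above the hidden layer (a single fix
    carries no 'density')."""
    bins = int(np.ceil(extent / pixel))
    grid, _, _ = np.histogram2d(fixes[:, 0], fixes[:, 1], bins=bins,
                                range=[[0.0, extent], [0.0, extent]])
    return grid

rng = np.random.default_rng(3)
fixes = rng.uniform(0.0, 1000.0, size=(500, 2))      # placeholder fixes
grid = fix_density_grid(fixes, pixel=50.0, extent=1000.0)
print(np.count_nonzero(grid))  # incidence: number of fix-embedding pixels
```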

Why is the hidden layer concept and the statistical-mechanical approach more important to relate to today than in earlier decades? The short answer is the realization – seeded by better and more extensive data – that animal space use involves more than a couple of universality classes of movement (see this post). In fact, in my book, papers and blog posts I have detailed eight classes, most of which may be unfamiliar to you.

To understand space use that is influenced by spatial memory and scale-free movement, statistical-mechanical modelling is a prerequisite for a realistic representation of such complex systems, unless you limit your perspective to a short-term behavioural bout within a very localized arena – in other words, “a single piece of a jigsaw puzzle of space use dynamics”. For example, if you zoom closely into a small segment of a circle you observe an approximately straight line. Take a step outwards, and you are facing a qualitatively different geometry – the mathematics of a curve and finally a full circle. Stubbornly staying within the linear framework when analyzing more extensive objects than what you observe at fine scales will force you into a corner filled with paradoxes.

Fine-grained and coarse-grained analyses of animal space use are complementary approaches to the same system.


MRW and Ecology – Part VII: Testing Habitat Familiarity

Consider having a series of GPS fixes, and you wonder whether the individual was utilizing familiar space during your observation period – or started building site familiarity around the time when you started collecting data. Simulation studies of Multi-scaled random walk (MRW) show how you may cast light on this important ecological aspect of space use.

First, you should of course test for compliance with the MRW assumptions: (a) site fidelity with no “distance penalty” on return events, (b) scale-free space use over the spatial range that is covered by your data, and (c) uniform space utilization on average over this scale range. One single test in the MRW Simulator, the A(N) regression, casts light on all these aspects. First, you seek to optimize the pixel resolution for the analysis (estimating the Characteristic scale of space use, CSSU). Next, if you find “Home range ghost” compliance – i.e., incidence I expands proportionally with the square root of the sample size of fixes, I ∝ √N – your data supports (a) spatial memory utilization with no distance penalty, due to sub-diffusive and non-asymptotic area expansion, (b) scale-free space use, due to linearity of the log[I(N)] scatter plot, and (c) equal inter-scale weight of space use, due to slope ≈ 0.5.
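A sketch of the core of such a test – the incidence regression only, using the continuous-sampling variant and leaving out the pixel-optimization and frequency-sampling steps described here – could look like this (all names are my own, not the MRW Simulator’s):

```python
import numpy as np

def incidence_regression(fixes, pixel, extent, n_values):
    """Regress log10(I) on log10(N): incidence I is the number of
    fix-embedding pixels at the given resolution for the first N fixes.
    Home range ghost expectation: I grows proportionally with sqrt(N),
    i.e. slope ~ 0.5, with the y-intercept relating to CSSU."""
    bins = int(np.ceil(extent / pixel))
    I = []
    for n in n_values:
        sub = fixes[:n]                       # continuous-sampling variant
        grid, _, _ = np.histogram2d(sub[:, 0], sub[:, 1], bins=bins,
                                    range=[[0.0, extent], [0.0, extent]])
        I.append(np.count_nonzero(grid))
    slope, intercept = np.polyfit(np.log10(n_values), np.log10(I), 1)
    return slope, intercept

# Example: slope, intercept = incidence_regression(
#     fixes, pixel=50.0, extent=1000.0, n_values=[100, 300, 1000, 3000, 11000])
```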

Supposing your data confirmed MRW, how to test for time-dependent strength of habitat familiarity? Consider the following simulation example, mimicking space use during a season and under constant environmental conditions.

The red dots show log(N,I) for various sample sizes up to the total set of 11,000 fixes. Each dot represents the average I for the respective N from the two methods, continuous sampling and frequency sampling (counteracting the autocorrelation effect; see a previous post). However, analyzing the first 1,000 fixes separately (black dots) consistently revealed a sloppier space use in terms of aggregated incidence at a given N, relative to the total season. The next 1,000 fixes, however, were compliant with the total series both with respect to slope and y-intercept (CSSU) (green dots).

The reason for the discrepancy in space use during the initial period of fix sampling* was, in the present scenario, the actual simulation condition: site familiarity was set to develop “from scratch” simultaneously with the onset of fix collection. I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to**. At the start of the sampling period, the underlying path is short in comparison to the total path that was traversed during the total season, and – crucially – return steps targeted previous locations from the actual simulation period only, and not locations prior to this start time. In other words, the animal was assumed to settle down in the area at the point in time when the simulation commenced.

To conclude, if your data show CSSU and slope of similar magnitude in the early and later phases of data collection, you sampled an individual with a well-established memory map of its environment during the entire observation period. The implicit assumption for this conclusion is of course that the environmental conditions were constant during the entire sampling period, including the initial phase. Using empirical rather than synthetic data means that additional tests would have to be performed to cast light on this aspect.

NOTES

*) The presentation above reflects the pixel resolution that was optimized for the total series. The first 1,000 fixes showed a more coarse-grained space use, reflected in a 50% larger CSSU scale (not shown: the optimal pixel size was 50% larger for this part of the series), despite constant movement speed and return rate for the entire simulation period. In this scenario a larger CSSU [coarser optimal pixel for the A(N) analysis] signals a less mature habitat utilization in the home range’s early phase. The CSSU was temporarily inflated during the build-up of site familiarity, but – somewhat paradoxically – the accumulated number of fix-embedding grid cells (incidence) for a given N at this scale was smaller. These two effects, reflecting the degree of habitat familiarity during home range establishment, should be considered transient.

**) Two definitions should be specified:

  • I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to.
  • I define strength of site fidelity as proportional to the return frequency.

Both definitions rest on the assumptions of no distance penalty on return targets and no time penalty on returns; i.e., an infinite spatio-temporal memory horizon relative to the actual sampling period.

MRW and Ecology – Part IV: Metapopulations?

In light of the recent insight that individuals of a population generally seem to utilize their environment in a multi-scaled and even scale-free manner, the metapopulation concept needs a critical evaluation. Even more so since many animals, under a broad range of ecological conditions, are simultaneously mixing scale-free space use with memory map-based site fidelity. In fact, both properties – multi-scaled movement and targeted return events to previous locations – undermine key assumptions of the metapopulation concept.

Levins’ (1969) model of “populations of populations” – termed a metapopulation – rattled many corners of theoretical and applied ecology, despite previous knowledge of the concept from the groundbreaking research by Huffaker (1958) and others (Darwin, Gause, etc.). Since then, Ilkka Hanski (1999) and others have produced broad theoretical and empirical research on the metapopulation concept.

The Levins model describes a metapopulation in a spatially implicit manner, where close and more distant sub-populations are assumed to have the same degree of connectivity. Later models (including Hanski’s work) made the dynamics spatially explicit. Hence, in this class of design, neighbouring sub-populations are more closely connected dynamically than more distant ones. Sub-populations (or “local” populations) are demarcated by a large difference between the internal individual mixing during a reproduction cycle and the rate of mixing with neighbouring sub-populations at this temporal scale. As a rule of thumb, the migration rate between neighbour populations during a reproduction cycle should be smaller than 10-15% to classify the system as a metapopulation. Simultaneously, intrinsic mixing during a cycle in a given sub-population is assumed to approximate 100%; i.e., “full spatial mixing” (spatial homogenization when averaging individual locations over a generation period).


According to the prevailing metapopulation concept, a high rate of internal mixing in sub-populations is contrasted by a substantially lower mixing rate between sub-populations. The alternative view – advocated here – is a hierarchical superposition of mixing rates if the individual-level movement is scale-free over a broad scale range. The hierarchy is indicated by three levels, with successively reduced intra- and inter-population mixing rates towards higher levels.

The spatially explicit model of a metapopulation is based on three core assumptions:

  1. The individual movement process underlying the population dynamics should comply with a scale-specific process; i.e., a Brownian motion-like kind of space use in statistical-mechanical terms, both within and between sub-populations. This property allows the intrinsic population dynamics of sub-populations to be modelled as “homogeneous” at this temporal scale. This property is also assumed by the theory of differential and difference equations. It also allows the migration between sub-populations to be described as a classical diffusion process.
  2. Following from Point 1, more distant sub-populations are always less dynamically linked (smaller diffusion rate) than neighbour populations. In fact, dispersal between distant sub-populations may be ignored in spatially explicit models.
  3. Emigration from a given sub-population may be stochastic (random) or deterministic (e.g., density dependent emigration rate), while immigration rate is stochastic only. The latter follows logically from compliance with point 1. In other words, emigrating individuals may occasionally return, but only by chance and thus on equal terms with the other immigrants from neighbour populations. Hence, both the intrinsic and the inter-population mixing processes are assumed to lack spatial memory capacity for targeted returns at the individual level.

In my alternative idea for a spatially (and temporally) structured kind of population dynamics, individual movement is assumed to comply with Multi-scaled random walk (MRW). Contrary to a classical Brownian motion and diffusion-like process, MRW defines both a scale-free kind of movement and a degree of targeted returns to previous locations. Thus, both emigration and immigration may be implicitly deterministic. The two perceptions of a structured population are conceptualized in the present illustrations: “Present idea” regards the prevailing metapopulation concept, and “Alternative idea” regards population dynamics under the MRW assumptions.

The upper part of the illustration to the right shows the two classical metapopulation assumptions in a simplistic manner. Shades of blue regard the strength of inter-population mixing, which basically reaches neighbour populations only (at a rate of less than 10-15%, to satisfy a metapopulation structure) but not more distant ones. For example, the inter-generation dispersal rate between next-closest sub-populations is expected to be less than (10%)*(10%) = 1%, and falls further towards zero at longer distances. The Alternative idea in the lower part describes a more leptokurtic (long-tailed) dispersal kernel – in compliance with a power law (scale-free dispersal) – rather than an exponentially declining kernel (scale-specific dispersal), as in the standard metapopulation representation. Separate arrows for immigration from one sub-population to a neighbour population in the “Present idea” part illustrate the standard diffusion principle, while the single dual-pointing arrow of the “Alternative idea” illustrates that immigration and emigration are not independent processes, due to spatial memory-dependent return events. The emergent property of targeted returns connects even distant sub-populations in a partly deterministic manner.
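The contrast between the two kernels is easy to demonstrate numerically. In this sketch the exponential scale and the power-law exponent (a Pareto tail with a = 1.5) are arbitrary illustrative choices, not fitted Zoomer parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Scale-specific (exponential) kernel vs scale-free (power-law) kernel.
expo = rng.exponential(scale=1.0, size=n)
powr = rng.pareto(a=1.5, size=n) + 1.0   # classical Pareto, min distance 1

for d in (5.0, 10.0, 20.0):
    # Fraction of dispersal distances exceeding d under each kernel.
    print(d, np.mean(expo > d), np.mean(powr > d))
# The exponential tail collapses towards zero beyond the nearest
# neighbours, while the power-law tail keeps connecting distant
# sub-populations at a small but non-negligible rate.
```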

The Alternative idea design is termed the Zoomer model, which is explored both theoretically and by preliminary simulations in my book. A summary was presented in this post. A long-tailed dispersal kernel may embed and connect sub-populations that are separated by a substantial width of matrix habitat. Since the tail is thin (only a small fraction of individual displacements reach these distances), long-distance moves and directed returns happen at a small rate.

The Zoomer model as an alternative to the classical metapopulation concept has far-reaching implications for population dynamical modelling and ecological interpretation. For example, the property that two distant sub-populations may sometimes be more closely connected than to intermediate sub-populations – due to the emergence of a complex network structure at the individual level – was illustrated by my interpretation of the Florida snail kite research (see this post). At the individual level, research on Fowler’s toad (see this post) and the Canadian bison (see this post) shows how distant foraging patches may be more closely connected than some intermediate patches. This, too, complies with the Zoomer concept and opposes the classical metapopulation concept. In many posts I have shown examples of the leptokurtic distribution of an individual’s histogram of displacement lengths, covering very long distances in the tail part – potentially well into the typical scale regime of a metapopulation (for example, the lesser kestrel).

REFERENCES

Hanski, I. 1999. Metapopulation Ecology. Oxford, Oxford University Press.

Huffaker, C.B. 1958. Experimental Studies on Predation: Dispersion factors and predator–prey oscillations. Hilgardia 27:83-

Levins, R. 1969. Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America 15:237–240.

Random Walk Should Not Imply Random Walking

Random walk is one of the stickiest concepts of movement ecology. Unfortunately, this versatile theoretical approach to simplifying complex space use under a small set of movement rules often leads to confusion and unnecessary controversy. As pointed out by any field ecologist, unless an individual is passively shuffled around in a stochastic sequence of multi-directional pull and push events, the behavioural response to local events and conditions is deterministic! An animal behaves rationally. It successively interprets and responds to environmental conditions – within limits given by its perceptive and cognitive capacity – rather than ignoring these cues like a drunken walker. Any alternative strategy would lose in the game of natural selection. Still, from a theoretical perspective an animal path may be realistically represented by a random walk – given that the randomness is based on properly specified biophysical premises and the animal adheres to these premises.

Photo: AOG

Outside our house I can study a magpie Pica pica moving around, apparently randomly, until something catches its attention. An insect larva? A spider or other foraging rewards? After some activity at this patch it restarts its exploratory movement. As an ecologist it is easy to describe the behaviour as ARS (area restricted search). In more general terms, the bird apparently toggles between relatively deterministic behaviour during patch exploration and more random exploratory moves in-between. If I had radio-tagged the magpie with high resolution equipment, I could use a composite random walk model (or, more contemporary, a Brownian bridge formulation) derived from ARS to estimate the movement characteristics for intra- and inter-patch steps respectively, and test ecological hypotheses.

However, what if the assumptions behind the random walk equations are not fulfilled by the magpie’s behaviour? Now and then the magpie flies back in a relatively direct line to a previous spot for further exploration. In other words, the path is self-crossing more frequently than expected by chance. Also, the next day the magpie may return to our lawn in a manner that indicates stronger site fidelity than expected from chance, considering all the other available gardens in the county. The magpie explores, but also returns in a goal-oriented manner, meaning that the home range concept should be invoked. Looking closer, when exploring the garden the magpie also seems to choose each next step carefully, constantly scanning its immediate surroundings, rather than changing direction and movement speed erratically. Occasional returns to a previous spot, in addition to returning repeatedly to our garden, indicate utilization of a memory map. In short, this magpie example may not fit the premises of ARS the way it is normally modeled in movement ecology, namely as toggling between fine- and coarser-scale random walk.

Hence, two challenges have to be addressed.

  1. What are the conditions to treat the movement as random walk when analysing the data?
  2. What are the basic prerequisites for applying the classical random walk theory for the analysis?

Regarding the first question, contemporary ecological modelling of movement typically defines the random parts of an animal’s movement path as truly stochastic (rather than as a model simplification of the multitude of factors that influence true movement), in the sense of expressing real randomness in behavioural terms. The Lévy flight foraging hypothesis is an example of this specification. The remaining parts of the path are then expressing deterministic rules, like pausing and foraging when a resource patch is encountered, or the triggering of a bounce-back response when a sufficiently hostile environment is encountered. In my view this stochastic/deterministic framework is counterproductive with respect to model realism, since it tends to cover up the true source of randomness.

To clarify the concept of randomness in movement models one should be explicit about the model’s biophysical assumptions. Different sets of assumptions lead to different classes of random walk. In my book I summarized these classes as the eight corners of the Scaling cube. Sloppiness with respect to model premises hinders the theory of animal space use from evolving towards stronger realism.

  • Random walk (RW) in the classical sense – i.e., Brownian motion-like – regards a statistical-mechanical simplification of a series of deterministic responses to a continuous series of particle shuffling. A collision between two particles is one example of such shuffling events. In other words, during a small increment of time a passively responding particle performs a given displacement in compliance with environmental factors (“forces”) and physical laws at the given point in space and time. Until new forces act on the particle (e.g., new collisions), it maintains its current speed and direction. In other words, under these physical conditions the process is also Markov-compliant: regardless of which historic events brought the particle to its current position, its next position is determined by the updated set of conditions during this increment. The next step is independent of its past steps.
  • The average distance between change of movement direction of a RW is captured by the mean free path parameter. This implies that RW is a scale-specific process, and the characteristic scale is given by the mean free path during the defined time extent.
  • Since the RW particle is responding passively, its path is truly stochastic even at the spatio-temporal resolution of the mean free path. When sampling a RW path at coarser temporal resolutions, a larger average distance between successive particle locations is observed. Basically, this distance increases proportionally with the square root of the sampling interval. This and other mathematical relationships of a RW (and its complementary diffusion formulation) are predictable and coherent from a well-established statistical-mechanical theory.
  • Stepping from a physical RW particle to a biophysical representation of an individual in the context of movement ecology implies specification and realism of two assumptions: (1) the movement behaviour should be Markov-compliant (i.e., scale-specific), and (2) the path should be sampled at coarser intervals than the characteristic time interval that accompanies the mean free path (formulated as the average “movement speed” at the mean free path scale). At these coarser spatio-temporal resolutions even deterministic movement steps become stochastic by nature, due to lumping together the resultant displacement from a series of inter-independent finer-grained steps (see the numerical sketch below this list).

    An animal is observed at position A and re-located at position B after t time units. The vector AB may be considered a RW-compliant step if – and only if – the intermediate path locations (dotted circles) in totality are sufficiently independent of the respective previous displacement vectors to make the resultant vector AB random. Each of the intermediate steps may be caused by totally deterministic behaviour. Still, the sum of the sequence of more or less inter-independent displacements makes position B unpredictable from the perspective of position A. The criterion for accepting AB as a step in a RW sequence is fulfilled at temporal scale (sampling resolution) t, even if the “hidden layer” steps are more or less deterministic at finer resolutions <<t.

    In my book I refer to such observational coarse-graining as increasing the depth of the hidden layer, from a fine-resolved unit scale – where the local causality of respective displacements is revealed – to a coarser resolution where even deterministic (and Markov-compliant) behaviour requires a statistical-mechanical description.
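The point is easy to verify numerically. The sketch below generates a correlated, locally near-deterministic path (a small random heading change per step – my illustrative stand-in for fine-scale behaviour) and samples it at coarser and coarser intervals t; beyond the correlation scale the mean displacement grows roughly as the square root of t:

```python
import numpy as np

rng = np.random.default_rng(5)

# A fine-grained path with strong directional persistence: each step is
# almost "deterministic" given the previous heading.
n = 200_000
heading = np.cumsum(rng.normal(0.0, 0.3, size=n))
path = np.cumsum(np.column_stack([np.cos(heading), np.sin(heading)]), axis=0)

# Sample the path at coarser intervals t and measure the mean length of
# the resulting displacement "steps".
for t in (10, 100, 1000, 10000):
    d = np.diff(path[::t], axis=0)
    print(t, np.linalg.norm(d, axis=1).mean())
# Beyond the persistence scale the displacement grows ~ sqrt(t): coarse
# sampling turns locally deterministic movement into RW-like statistics.
```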

As for the second question raised above, regarding Markov compliance, see the RW criterion in the Figure to the right [as was also exemplified by “Markov the robot” in Gautestad (2013)].

However, what if the animal violates Markov compliance? In other words, what if it responds in a non-Markovian manner, meaning that path history counts when explaining present movement decisions? Is the magpie kind of non-Markovian movement typical for animal space use, from a parsimonious model perspective, or is multi-scaled site fidelity the exception rather than the rule? These are the core questions any modeller of animal movement should ask him/herself. One should definitely not accept old assumptions just because several generations of ecologists have done so (many with strong reluctance, though).

Instead of accepting classical RW or its trivial variants, correlated RW and biased RW, as a proper representation of basic movement by default – albeit while holding your nose – you should explore a broader application of the other corners of the Scaling cube, each with its respective set of statistical-mechanical assumptions.


REFERENCE

Gautestad, A. O. 2013. Lévy meets Poisson: a statistical artifact may lead to erroneous re-categorization of Lévy walk as Brownian motion. The American Naturalist 181:440-450.