Positive and Negative Feedback Part I: Individual Space Use

The standard theories on animal space use rest on some shaky behavioural assumptions, as elaborated in my papers, in my book and here in my blog. One of these assumptions regards the assumed lack of influence of positive feedback, in particular the self-reinforcing effect that emerges when individuals move around with a cognitive capacity for both temporal and spatial memory utilization. The common ecological methods to study individual habitat use, like the utilization distribution (a kernel density distribution with isopleth demarcations) and use/availability analysis, explicitly build on statistical theory that not only disregards such positive feedback but in fact requires that this emergent property does not influence the system under scrutiny.

Unfortunately, most memory-enhanced numerical models of space use are rigged to comply with negative rather than positive feedback effects. For example, the model animal successively stores its local experience with habitat attributes while traversing the environment, and it uses this insight in the sequential calculation of how long to stay in the current location and when to seek to re-visit some particularly rewarding patch (Börger et al. 2008; van Moorter et al. 2009; Spencer 2012; Fronhofer et al. 2013; Nabe-Nielsen et al. 2013). In other words, the background melody is still to maintain compliance with the marginal value theorem (Charnov 1976) and the ideal free distribution dogma. Both are negative feedback processes, not self-reinforcing processes that would counteract such a balancing tendency.

Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive.
https://en.wikipedia.org/wiki/Negative_feedback
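The distinction can be condensed into a two-line toy iteration (my own illustration, not tied to any particular ecological model): a negative feedback rule damps deviations from a set point, while a positive feedback rule amplifies the state itself.

```python
# Toy contrast between the two feedback types; parameter values are arbitrary.

def step_negative(x, target=10.0, k=0.5):
    """Negative (balancing) feedback: part of the deviation is corrected."""
    return x + k * (target - x)

def step_positive(x, r=0.5):
    """Positive (self-reinforcing) feedback: the state amplifies itself."""
    return x * (1.0 + r)

x_neg = x_pos = 1.0
for _ in range(20):
    x_neg = step_negative(x_neg)
    x_pos = step_positive(x_pos)

print(round(x_neg, 3))   # has settled at the set point, 10.0
print(x_pos > 1000.0)    # has grown without bound: True
```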

A common curlew Numenius arquata foraging on a field within its summer home range. Anecdotally, one may observe that a specific individual tends to revisit not only specific fields while foraging, but also specific parts of these fields. If this site fidelity is influenced by a rising tendency to prefer familiar space, based on previous visits, at the expense of exploring potentially equally rewarding patches, a positive feedback (self-reinforcing space use) has emerged. This effect then interferes with the traditional ecological factors, like selection based on habitat heterogeneity, in a complex manner. Photo: AOG.

The above definitions follow the usual path of explaining negative feedback as “good” and positive feedback as something scary (I will return to this misconception in a later post in this series). It echoes the prevailing “Balance of nature” philosophy of ecology, which I have criticized on several occasions (see, for example, this post).

In a previous post, “Home Range as an Emergent Property“, I described how memory map utilization under specific ecological conditions may lead to a self-reinforcing re-visitation of previously visited locations (Gautestad and Mysterud 2006, 2010); in other words, a positive feedback mechanism*. Contemporary research on animal movement covering a wide range of taxa, scales, and ecological conditions continues to verify site fidelity as a key property of animal space use.

I use a literature search to test an assumption of the ideal models that has become widespread in habitat selection theory: that animals behave without regard for site familiarity. I find little support for such “familiarity blindness” in vertebrates.
Piper 2011, p1329.

Obviously, in the context of spatial memory and site fidelity, it should be an important research theme to explore to what extent and under which conditions negative and positive feedback mechanisms shape animal space use.

Positive feedback from site fidelity will fundamentally influence analysis of space use. For example, two patches with a priori similar ecological properties may end up being utilized with a disproportionate frequency due to initial chance effects regarding which patch happened to gain familiarity first**. Further, if the animal is utilizing the habitat in a multi-scaled manner (which is easy to test using my MRW-based methods), this grand error factor in a standard use/availability analysis cannot be expected to be statistically hidden by just studying the habitat at a coarser spatial resolution within the home range.
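The initial-chance effect can be illustrated with a Pólya urn-like sketch (a hypothetical toy model, not the MRW model itself): two a priori identical patches, where each visit increases the probability of returning to the visited patch.

```python
import random

def reinforced_visits(n_visits=1000, seed=None):
    """Self-reinforcing patch choice: familiarity grows with each visit."""
    rng = random.Random(seed)
    familiarity = [1, 1]  # both patches start equally (un)familiar
    for _ in range(n_visits):
        p0 = familiarity[0] / sum(familiarity)
        patch = 0 if rng.random() < p0 else 1
        familiarity[patch] += 1
    return familiarity

# Identical patches, different chance history: the visit counts typically
# end up asymmetric, and which patch "wins" depends on the random seed.
print(reinforced_visits(seed=1))
print(reinforced_visits(seed=2))
```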

Despite this theoretical-empirical insight, the large majority of wildlife ecologists still tend to use classic methods resting on the negative feedback paradigm to study various aspects of space use. The rationale can be described by two end-points on a continuum: either one ignores the effect of self-reinforcing space use (assuming, or hoping, that it does not significantly influence the result), or one uses these classic methods while holding one’s nose.

The latter category accepts the prevailing methods’ basic shortcomings – either based on field experience or inspired by reading about alternative theories and methods – but the strong force of conformity in the research community hinders bold steps out of the comfort zone. Hence, the paradigm prevails. Again I can’t resist referring to a previous post, “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“.

Within the prevailing modelling paradigm, implementing spatial memory utilization in combination with positive feedback-compliant site fidelity is a mathematical and statistical nightmare – if at all possible. However, as a reader of this blog you are at least fully aware that some numeric models have been developed lately, outside the prevailing paradigm. These approaches not only account for memory map utilization but also embed the process of positive feedback in a scale-free manner (I refer to our papers and to my book for model details; see also Boyer et al. 2012).

 

NOTES

* The paper explores space use under the premise of positive feedback during superabundance of resources, in combination with negative feedback during temporal and local over-exploitation.

** In Gautestad and Mysterud (2010) I described this aspect as the distance-from-centre effect; i.e., the utilization distribution falls off at the periphery of utilized patches independently of a similar degradation of preferred habitat.

 

REFERENCES

Börger, L., B. Dalziel, and J. Fryxell. 2008. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecology Letters 11:637-650.

Boyer, D., M. C. Crofoot, and P. D. Walsh. 2012. Non-random walks in monkeys and humans. Journal of the Royal Society Interface 9:842-847.

Charnov, E. L. 1976. Optimal foraging: the marginal value theorem. Theoretical Population Biology 9:129-136.

Fronhofer, E. A., T. Hovestadt, and H.-J. Poethke. 2013. From random walks to informed movement. Oikos 122:857-866.

Gautestad, A. O., and I. Mysterud. 2006. Complex animal distribution and abundance from memory-dependent kinetics. Ecological Complexity 3:44-55.

Gautestad, A. O., and I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750.

Nabe-Nielsen, J., J. Tougaard, J. Teilmann, K. Lucke, and M. C. Forchhammer. 2013. How a simple adaptive foraging strategy can lead to emergent home ranges and increased food intake. Oikos 122:1307-1316.

Piper, W. H. 2011. Making habitat selection more “familiar”: a review. Behav. Ecol. Sociobiol. 65:1329-1351.

Spencer, W. D. 2012. Home ranges and the value of spatial information. Journal of Mammalogy 93:929-947.

van Moorter, B., D. Visscher, S. Benhamou, L. Börger, M. S. Boyce, and J.-M. Gaillard. 2009. Memory keeps you at home: a mechanistic model for home range emergence. Oikos 118:641-652.

The Balance of Nature?

To understand populations’ space use one needs to understand the individual’s space use. To understand the individual’s space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies “strange” fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical attacks. We need to address the challenge by also rethinking the very basic level of how we perceive an ecosystem’s constituents: how we assume individuals, populations and communities relate to their surroundings in terms of statistical mechanics.

Stuart L. Pimm summarizes this Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates extending the perspective from short-series, local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.

Balancing on a straw – a Eurasian wren Troglodytes troglodytes (photo: AOG).

My own approach towards the Grand Ecological Challenge started with similar thoughts and concerns as raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems I found the theoretical toolbox to be void of key instruments to build realistic dynamics. In fact, the current methods were in many respects even seriously misleading, due to what I considered some key dissonant model assumptions.

In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: “MRW and Ecology – Part IV: Metapopulations?“), for the time being culminating with my recent series of posts on “Simulating Populations” (parts I-X).

I guess the main take-home message from the present post is:

  • Without a realistic understanding (i.e., modelling power) of individual dispersion over space, time and scale, it will be futile to build a theoretical framework with deep explanatory and predictive value with respect to population dynamics and population ecology. In other words, some basic aspects of system complexity at the “particle level” need to be resolved.
  • Since we in this respect typically consider either the accumulation of space use locations during a time interval (e.g., a series of GPS fixes) or a population’s dispersion over space and how it changes over time, we need a proper formulation of the statistical mechanics of these processes. In other words, when simplifying extremely complicated systems into a smaller, manageable set of variables, parameters and key interactions, we have to invoke the hidden layer.
  • With a realistic set of basic assumptions in this respect, the modelling framework will in due course be ready to be applied to issues related to the Grand Ecological Challenge – as so excellently summarized by Pimm in 1991. In other words, before we can have any hope of a detailed prediction of the local or regional fate of a given species or community of species under a given set of circumstances, we need to build models that are void of the classical system assumptions that have cemented the belief in the so-called balance of nature.

NOTES

*) The need to rethink the concept of carrying capacity and the accompanying “balance” (density-dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities, where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If “standard” population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the sum equals the parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the conditions at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.
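As a minimal numerical caricature of this difference (my own toy rule, not the Zoomer model): under a mean-field update a pixel’s change depends only on its own density, while under a multi-scale update it also depends on the neighbourhood density, i.e., on a coarser scale.

```python
def mean_field_change(density, cc=100.0, r=0.1):
    # logistic-style local regulation towards the carrying capacity CC
    return r * density * (1.0 - density / cc)

def multi_scale_change(density, neighbourhood_density, cc=100.0, r=0.1, z=0.05):
    # same local term, plus an inter-scale term driven by the surroundings
    return mean_field_change(density, cc, r) + z * (neighbourhood_density - density)

# Two pixels with identical density (50) but different surroundings:
print(mean_field_change(50.0) == mean_field_change(50.0))   # True: identical response
print(multi_scale_change(50.0, 20.0))                       # sparse neighbourhood
print(multi_scale_change(50.0, 90.0))                       # crowded neighbourhood
```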

**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).

REFERENCES

Gautestad, A. O. 2015. Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Indianapolis, Dog Ear Publishing.

O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.

Pimm, S. L. 1991, The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Strong, D. E., D. Simberloff, L. G. Abele, and A. B. Thistle (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton, Princeton University Press.

Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.

Simulating Populations VIII: Time Series and Pink Noise

On rare occasions one’s research effort may lead to a Eureka! moment. While exploring and experimenting with the latest refinement of my Zoomer model for population dynamics I stumbled over a key condition for tuning the model in and out of full coherence between spatial and temporal variability, from the perspective of a self-similar statistical pattern. Fractal-like variability over space and time has been repeatedly reported in the ecological literature, but rarely have such patterns been studied simultaneously as two aspects of the same population. My hope is that the Zoomer model also has a broader potential by casting stronger light on the strange 1/f noise phenomenon (also called pink noise, or flicker noise), which still tends to create paradoxes between empirical patterns and theoretical models in many fields of science.

  

First, consider again some basic statistical aspects of a Zoomer-based simulation. Above I show the spatial dispersion of individuals within the given arena. See previous posts I-VII in this series for other examples, system conditions and technical details. The new aspect in the present post is a shift of focus towards temporal variation of the spatial dispersion, and fractal properties in that respect (called a self-affine pattern in a series of measurements, consistent with 1/f noise*). This phenomenon usually indicates the presence of a broad distribution of time scales in the system.

In particular, observe the rightmost log(M,V) regression. As in previous examples, the variable M in the Taylor’s power law model V = aM^b represents – in my application of the law – a combination of the average number of individuals in a set of samples at different locations at a given scale, supplemented by samples of M at different scales. The latter normally contributes the most to the linearity of the log(M,V) plot, due to a better spread of the range of M. During the present scenario, which was run for 5,000 time steps after skipping an initial transient period, the self-similar and self-organized spatial pattern with slope b ≈ 2 and log(a) ≈ 0 was dominating, only occasionally interrupted by some short intervals with b << 2 and log(a) >> 0. These episodes were caused by simultaneous disruption of a relatively high number of local populations in cells that exceeded their respective carrying capacity level (CC), leading to a lot of simultaneous re-positioning of these individuals to other parts of the arena in accordance with the actual simulation rules. Thus, some temporary “disruptive noise” could appear now and then, occasionally leading to a relatively scrambled population dispersion with b ≈ 1 (Poisson distribution) and log(a) >> 0 for short periods of time.
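The regression itself is a straight-line fit in log-log space. A minimal sketch with synthetic data (b = 2 and log10(a) = 0, standing in for the simulation output):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.logspace(0, 3, 30)                    # mean abundance across samples and scales
V = M**2 * 10**rng.normal(0, 0.05, M.size)   # variance with small log-normal scatter

# Taylor's power law V = a * M^b becomes linear after log transformation:
# log10(V) = log10(a) + b * log10(M)
b, log_a = np.polyfit(np.log10(M), np.log10(V), 1)
print(b, log_a)   # slope close to 2, intercept close to 0
```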

The actual time series for the present example is given above, showing local density at a given location (a cell at unit scale) over a period of 5,000 increments. The scramble events, due to many simultaneous population crashes in other cells over the arena, can indirectly be seen as the strongest density peaks in the series. The sudden inflow of immigrants to the given locality caused a disruption of the monitored local population, which typically crashed during the next increment due to density > CC.

In many respects the series illustrates the “frustrating” phenomenon seen in long time series in ecology: the longer the observation period, the larger the accumulated magnitude of fluctuations (Pimm and Redfearn 1988, Pimm 1991). However, in the present scenario, where conditions are known, there is no uncertainty related to whether the complex fluctuations are caused by extrinsic (environmental) or intrinsic dynamics. Since CC was set to be uniform over space and time and only varied due to some level of stochastic noise (a constant plus a smaller random number), the time series above expresses an intrinsically driven, self-organized kind of fluctuation over space, time and scale.

The key lesson from the present analysis is to understand local variability as a function not only of local conditions, but also of coarser-scale events and conditions in the surroundings of the actual population being monitored. The latter is often referred to as long-range dependence. In the Zoomer model this phenomenon is rooted in the emergence of scale-free space use of the population, due to scale-free space use at the individual level (the Multi-scaled random walk model). In the present post I illustrate long-range dependence also over temporal variability.

To remove the major magnitude of serial autocorrelation the original series above was sampled 1:80. The correlogram to the right shows this frequency to be a feasible compromise between full non-autocorrelation and a reasonably long remaining series length.
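A sketch of such a thinning step, using an AR(1) series as a stand-in for the autocorrelated density series (the AR(1) choice and the parameter values are my assumptions, not the Zoomer output):

```python
import numpy as np

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)
phi, n = 0.97, 100_000           # strong short-term memory in the series
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = phi * x[t - 1] + noise[t]

thinned = x[::80]                # keep every 80th observation (1:80 sampling)
print(lag1_autocorr(x))          # strongly autocorrelated raw series
print(lag1_autocorr(thinned))    # close to uncorrelated after thinning
```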

After this transformation towards observing the series at a different scale, the sampled series was subject to log(Mt,V) analysis, where Mt represents – in a sliding-window manner – the mean abundance over a few time steps of the temporally coarser-scaled series, and V represents these intervals’ respective variance.

Interestingly, the log(Mt,V) pattern was reasonably similar to the log(M,V) pattern from the spatial transect above; i.e., the marked line with b ≈ 2 and log(a) ≈ 0, under the condition that the series was analyzed at a sufficiently coarse temporal scale from sampling to avoid most of the serial autocorrelation.

The most intriguing result is perhaps the Figure below, showing the log-log transformed power spectrogram of the original time series**. Over a temporal scale range up to a frequency of 1500 (the x-axis), the distribution of power (the y-axis), i.e., amplitudes squared, satisfies 1/f noise!

See my book for an introduction to this statistical “beast”***, where I devote several chapters to it. At higher frequencies, further down towards the “rock bottom” of 4096 (the dimmed area), the power shows inflation. This is probably due to an integer effect at the finest resolutions of the time series, since individuals always come as whole numbers rather than fractions. Thus, the finest-resolved peak in power was probably influenced by this effect.
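For readers who want to reproduce this kind of spectral check, here is a minimal sketch: synthesize a 1/f series directly (spectral synthesis with random phases; my stand-in, since the Zoomer output itself is not included here), then estimate the spectral exponent ν from the log-log slope of the periodogram.

```python
import numpy as np

rng = np.random.default_rng(2)
n, nu_true = 4096, 1.0
freqs = np.fft.rfftfreq(n)[1:]                   # drop the zero frequency

# spectral synthesis: amplitude ~ f^(-nu/2) with random phases gives 1/f^nu
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
spectrum = freqs**(-nu_true / 2.0) * np.exp(1j * phases)
series = np.fft.irfft(np.concatenate(([0.0], spectrum)), n)

# periodogram of the series, and the slope of log(power) vs log(frequency)
power = np.abs(np.fft.rfft(series))[1:]**2
slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
print(-slope)                                    # recovered estimate of nu, close to 1
```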

To my knowledge, no other model framework has been able to simultaneously reproduce the empirically often-reported fractal-like spatial population abundance (the spatial expression of Taylor’s power law) together with a fractal-like temporal variation (the temporal Taylor’s power law). Among the thousands of papers on this topic over the last 60 years or so, models have previously reproduced only one or the other.

For an introduction to 1/f noise in an ecological context, you may find this link to a review by Halley and Inchausti (2004) helpful. I cite:

1/f-noises share with ecological time series a number of important properties: features on many scales (fractality), variance growth and long-term memory. (…) A key feature of 1/f-noises is their long memory. (…)  Given the undoubted ubiquity and importance of 1/f-noise, it is surprising that so much work still revolves about either observing the 1/f-noise in (yet more) novel situations, or chasing what seems to have become something of a “holy grail” in physics: a universal mechanism for 1/f-noise. Meanwhile, important statistical questions remain poorly understood.

What about ecological methods based on the theory above? By exploring the statistical pattern of animal dispersion over space and its variability over time – and mimicking this pattern in the Zoomer model – a wide range of ecological aspects may be studied within a hopefully more realistic framework than the presently dominating approaches using coupled map lattice models or differential equations. Statistical signatures of a quite novel kind can be extracted from real data and interpreted in the light of complex space use theory.

In upcoming posts I will on one hand dig into the set of Zoomer model “knobs” that steer the statistical pattern in and out of – for example – the 1/f noise condition, and on the other hand I will exemplify the potential for ecological applications of the model.

REFERENCES

Halley, J. M. and P. Inchausti. 2004. The increasing importance of 1/f noises as models of ecological variability. Fluctuation and Noise Letters 4:R1–R26.

Pimm, S. L. 1991, The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Pimm, S. L., and A. Redfearn. 1988. The variability of animal populations. Nature 334:613-614.

NOTES

*) “1/f-noise” refers to 1/f^ν-noise for which 0 ≤ ν ≤ 2; “near-pink 1/f-noise” refers to cases where 0.5 ≤ ν ≤ 1.5, and “pink noise” refers to the specific case where ν = 1. All 1/f^ν-noises are defined by the shape of their power spectrum S(ω):

S(ω) ∝ 1/ω^ν

Here ω = 2πf is the angular frequency. “White noise” is characterized by ν ≈ 0, and its integral, “red” or “brown” noise, has ν ≈ 2. A classical random walk along a time axis is an example of the latter, and its derivative produces white noise.
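The white/brown relation in this note can be verified numerically: the cumulative sum of white noise (a random walk) has a spectral exponent near ν = 2, while the increments themselves have ν near 0. A sketch:

```python
import numpy as np

def spectral_exponent(x):
    """Estimate nu from the log-log slope of the periodogram."""
    freqs = np.fft.rfftfreq(x.size)[1:]
    power = np.abs(np.fft.rfft(x))[1:]**2
    slope, _ = np.polyfit(np.log10(freqs), np.log10(power), 1)
    return -slope

rng = np.random.default_rng(3)
white = rng.normal(size=8192)    # white noise: nu close to 0
brown = np.cumsum(white)         # random walk ("brown" noise): nu close to 2

print(spectral_exponent(white))
print(spectral_exponent(brown))
```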

**) Since an FFT analysis requires a series length satisfying a power of 2, only the first 4096 steps were included.

***) 1/f noise is challenging, both mathematically and conceptually:

The non-integrability in the cases ν ≥ 1 is associated with infinite power in low frequency events; this is called the infrared catastrophe. Conversely for ν ≤ 1, which contains infinite power at high frequencies, it is called the ultraviolet catastrophe. Pink noise (ν = 1) is non-integrable at both ends of the spectrum. The upper and lower frequencies of observation are limited by the length of the time series and the resolution of the measurement, respectively.
Halley and Inchausti (2004), p R7.

 

The Hidden Layer

Focusing on the statistical pattern of space use without acknowledging the biophysical model for the process will create much confusion and unnecessary controversy. Ecologists are now forced to get a better grip on concepts from statistical mechanics than earlier generations. For example, to understand the transformation from data on actual behaviour to pattern analysis of space use, the concept of the hidden layer represents the first gate to pass.

Research on animal movement and space use has always had a central place in ecology. However, as more field data, better computers and more sophisticated statistical methods have become available, some old dogmas have come under attack. Specific theoretical aspects of this quest for improved model realism have emerged from the rapidly growing cooperation between biologists and physicists in the young field of macro-level biophysics. The so-called Lévy flight foraging hypothesis is one example. And, of course, I can’t resist mentioning the MRW theory.

A booted eagle Hieraaetus pennatus is triggering a flock of spotless starlings Sturnus unicolor to show swarming behaviour. Malaga river delta, December 2017. Photo: AOG.

In 1985 Charles Krebs described ecology as the scientific study of the interactions that determine the distribution and abundance of organisms. In an ethological context animal space use is studied on two levels – tactical and strategic. The tactical level regards understanding individual biology and behavioural ecology on a moment-to-moment temporal scale. Strategic space use adds an extra layer of complexity to the tactical behaviour. In a simplistic manner we may refer to this layer as the animal’s state at a given moment in time; for example whether or not it is hungry (e.g., in hunting mode). Strategy also involves processing of memory-based goals, and strategies are executed at coarser time scales than tactics. Some of the interaction between tactics and strategy may then – under specific conditions (see below) – be transformed to dynamic models at the tactical level; so-called mechanistic models, which consist of a set of executable rules covering the respective cognitive and environmental conditions. Validating the model dynamics and resulting statistical patterns against real animal data then rates the degree of model realism. For example, realistic tactical models have been developed to cast light on the “clumping behaviour” (dense swarming) of flocks of birds that are threatened by a raptor.

The myriad of rules that influence animal movement makes detailed modelling an impossible task, and it would anyway only lead to a descriptive picture with no value for ecological hypothesis testing. In fact, the signature of successful modelling is simplification. Thus, only specific aspects of the reference individual’s behaviour can be included and scrutinized.

The present post addresses one particular aspect of system simplification: coarse-graining the temporal scale. This approach implies a qualitative change of how the space use system is observed and analyzed. Actually, temporal coarse-graining is forced upon us when studying animal space use by sampling an individual’s successive displacements as a series of locations (fixes) during a given period of time. During each inter-fix interval the observed displacement represents the resultant vector of a myriad of intermediate and unobserved events. What has happened to the moment-to-moment kind of behavioural ecology? It has become buried below the hidden layer.

At the surface of this hidden layer you lose sight of behavioural details (like raptor response and swarming rules), but you gain access to an alternative perspective on movement and space use. Alternative statistical descriptors emerge at this temporally coarser scale, following the laws of statistical mechanics. What is analyzed above the hidden layer is the overall pattern from many displacement events, aggregated into a spatial scatter of fixes.

For example, you may coarse-grain both the temporal and spatial system dimensions, and study the aggregated distribution of fixes at the spatial scale of virtual grid cells (pixels) and temporal scale of the fix sampling period. The spatio-temporal variations in intensity of space use within the actual space-time extents then allows for modelling and hypothesis testing, but now using statistical-mechanical descriptors of space use intensity. These descriptors are either not valid below the hidden layer (e.g., the information content of local density of fixes) or they have an alternative interpretation (e.g., movement as a “step” versus movement as a resultant vector for a given interval and location). Both levels of analysis require large sets of input to allow for statistical treatment.
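The basic coarse-graining step is simple to express: bin the fixes into virtual grid cells at a chosen pixel scale and count fixes per cell. A minimal sketch with invented coordinates and an arbitrary pixel size:

```python
from collections import Counter

fixes = [(1.2, 3.7), (1.4, 3.9), (5.1, 0.2), (1.3, 3.6), (9.9, 9.9)]
pixel = 1.0   # grid-cell (pixel) size, in the same units as the fixes

# map each fix to its grid cell and count the use intensity per cell
cells = Counter((int(x // pixel), int(y // pixel)) for x, y in fixes)
incidence = len(cells)            # number of visited cells (I)

print(cells[(1, 3)], incidence)   # 3 fixes fall in cell (1, 3); I = 3 cells
```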

Why are the hidden layer concept and the statistical-mechanical approach more important to relate to today than in earlier decades? The short answer is the realization – seeded by better and more extensive data – that animal space use involves more than a couple of universality classes of movement (see this post). In fact, in my book, papers and blog posts I have detailed eight classes, most of which will be unfamiliar to you.

To understand space use that is influenced by spatial memory and scale-free movement, statistical-mechanical modelling is a prerequisite for realistic representation of such complex systems, unless you limit your perspective to a short-term behavioural bout within a very localized arena – in other words, “a single piece of a jigsaw puzzle of space use dynamics”. For example, if you zoom closely into a small segment of a circle you observe an approximately straight line. Take a step outwards, and you are facing a qualitatively different geometry – the mathematics of a curve and finally a full circle. Stubbornly staying within the linear framework when analyzing more extensive objects than what you observe at fine scales will force you into a corner filled with paradoxes.

Fine-grained and coarse-grained analyses of animal space use are complementary approaches to the same system.

 

MRW and Ecology – Part VII: Testing Habitat Familiarity

Consider having a series of GPS fixes, and you wonder whether the individual was utilizing familiar space during your observation period – or started building site familiarity around the time when you started collecting data. Simulation studies of Multi-scaled random walk (MRW) show how you may cast light on this important ecological aspect of space use.

First, you should of course test for compliance with the MRW assumptions: (a) site fidelity with no “distance penalty” on return events, (b) scale-free space use over the spatial range that is covered by your data, and (c) uniform space utilization on average over this scale range. A single test in the MRW Simulator, the A(N) regression, casts light on all these aspects. You first seek to optimize the pixel resolution for the analysis (estimating the Characteristic scale of space use, CSSU). Next, if you find “Home range ghost” compliance, i.e., incidence I expands proportionally with the square root of the sample size of fixes, your data support (a) spatial memory utilization with no distance penalty, due to sub-diffusive and non-asymptotic area expansion, (b) scale-free space use, due to linearity of the log[I(N)] scatter plot, and (c) equal inter-scale weight of space use, due to slope ≈ 0.5.
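The Home range ghost regression itself is a one-liner once incidence I has been computed for a range of sample sizes N. A sketch with synthetic I(N) values obeying I ∝ √N (the constant c = 1.5 is an arbitrary stand-in; it relates to the y-intercept, and thereby CSSU):

```python
import numpy as np

rng = np.random.default_rng(4)
N = np.array([50, 100, 200, 400, 800, 1600, 3200, 6400, 11000])
I = 1.5 * np.sqrt(N) * 10**rng.normal(0, 0.02, N.size)   # small log-scatter

# log10(I) = log10(c) + z * log10(N); z close to 0.5 supports MRW compliance
z, log_c = np.polyfit(np.log10(N), np.log10(I), 1)
print(z, log_c)
```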

Supposing your data confirmed MRW, how do you test for time-dependent strength of habitat familiarity? Consider the following simulation example, mimicking space use during a season under constant environmental conditions.

The red dots show log(N, I) for various sample sizes up to the total set of 11,000 fixes. Each dot represents the average I for the respective N from the two methods continuous sampling and frequency sampling (counteracting the autocorrelation effect; see a previous post). However, analyzing the first 1,000 fixes separately (black dots) consistently revealed a sloppier space use in terms of aggregated incidence at a given N, relative to the total season. The next 1,000 fixes, however, were compliant with the total series with respect to both slope and y-intercept (CSSU) (green dots).
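The two sampling methods differ only in which fixes they draw for a given N. A minimal sketch of my reading of the protocol: continuous sampling takes the first N fixes of the series, while frequency sampling takes every (T/N)-th fix across the whole series, which counteracts autocorrelation.

```python
# Two ways of drawing N fixes from a series of T fixes, used when averaging
# I(N) over sampling schemes. The series below is a stand-in index sequence.
import numpy as np

def continuous_sample(series, n):
    # the first n fixes, in temporal order
    return series[:n]

def frequency_sample(series, n):
    # n fixes spread evenly across the whole series
    idx = np.linspace(0, len(series) - 1, n).astype(int)
    return series[idx]

series = np.arange(11000)   # stand-in for 11,000 fix indices
cont = continuous_sample(series, 1000)
freq = frequency_sample(series, 1000)
print(cont[-1], freq[-1])   # 999 vs 10999: same N, very different temporal spread
```

Averaging I over both schemes, as the red dots do, balances the short-interval bias of continuous sampling against the thinned coverage of frequency sampling.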

The reason for the discrepancy in space use during the initial period of fix sampling* was, in the present scenario, the actual simulation condition: site familiarity was set to develop “from scratch” simultaneously with the onset of fix collection. I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to**. At the start of the sampling period the underlying path is short in comparison to the total path traversed during the season, and – crucially – return steps targeted previous locations from the actual simulation period only, not locations prior to this start time. In other words, the animal was assumed to settle down in the area at the point in time when the simulation commenced.

To conclude, if your data show CSSU and slope of similar magnitude in the early and later phases of data collection, you sampled an individual with a well-established memory map of its environment during the entire observation period. The implicit assumption behind this conclusion is of course that the environmental conditions were constant during the entire sampling period, including the initial phase. Using empirical rather than synthetic data means that additional tests would have to be performed to cast light on this aspect.

NOTE

*) The presentation above reflects the pixel resolution that was optimized for the total series. The first 1,000 fixes showed a more coarse-grained space use, reflected in a 50% larger CSSU scale (not shown: the optimal pixel size was 50% larger for this part of the series), despite constant movement speed and return rate throughout the simulation period. In this scenario, a larger CSSU [a coarser optimal pixel for the A(N) analysis] signals a less mature habitat utilization in the home range’s early phase. The CSSU was temporarily inflated during build-up of site familiarity, but – somewhat paradoxically – the accumulated number of fix-embedding grid cells (incidence) for a given N at this scale was smaller. These two effects, reflecting the degree of habitat familiarity during home range establishment, should be considered transient.

**) Two definitions should be specified:

  • I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to.
  • I define strength of site fidelity as proportional to the return frequency.

Both definitions rest on the assumptions of no distance penalty on return targets and no time penalty on returns; i.e., infinite spatio-temporal memory horizon relative to the actual sampling period.

MRW and Ecology – Part IV: Metapopulations?

In light of the recent insight that individuals of a population generally seem to utilize their environment in a multi-scaled and even scale-free manner, the metapopulation concept needs a critical evaluation. Even more so, since many animals under a broad range of ecological conditions are simultaneously mixing scale-free space use with memory map-based site fidelity. In fact, both properties, multi-scaled movement and targeted return events to previous locations, undermine key assumptions of the metapopulation concept.

Levins’ (1969) model of “populations of populations” – termed a metapopulation – rattled many corners of theoretical and applied ecology, despite previous knowledge of the concept from the groundbreaking research by Huffaker (1958) and others (Darwin, Gause, etc.). Since then, Ilkka Hanski (1999) and others have produced a broad body of theoretical and empirical research on the metapopulation concept.

The Levins model describes a metapopulation in a spatially implicit manner, where close and more distant sub-populations are assumed to have the same degree of connectivity. Later models (including Hanski’s work) made the dynamics spatially explicit. Hence, in this class of design, nearby sub-populations are more closely connected dynamically than more distant ones. Sub-populations (or “local” populations) are demarcated by a large difference between the rate of internal individual mixing during a reproduction cycle and the rate of mixing with neighbouring sub-populations at this temporal scale. As a rule of thumb, the migration rate between neighbour populations during a reproduction cycle should be smaller than 10–15% to classify the system as a metapopulation. Simultaneously, intrinsic mixing during a cycle in a given sub-population is assumed to approximate 100%; i.e., “full spatial mixing” (spatial homogenization when averaging individual locations over a generation period).


According to the prevailing metapopulation concept, the high rate of internal mixing in sub-populations is contrasted by a substantially lower mixing rate between sub-populations. The alternative view – advocated here – is a hierarchical superposition of mixing rates if individual-level movement is scale-free over a broad scale range. The hierarchy is indicated by three levels, with successively reduced intra- and inter-population mixing rates towards higher levels.

The spatially explicit model of a metapopulation is based on three core assumptions:

  1. The individual movement process for the population dynamics should comply with a scale-specific process; i.e., a Brownian motion-like kind of space use in statistical-mechanical terms, both within and between sub-populations. This property allows intrinsic population dynamics of sub-populations to be modelled as “homogeneous” at this temporal scale. This property is also assumed by the theory of differential and difference equations. It also allows the migration between sub-populations to be described as a classical diffusion process.
  2. Following from Point 1, more distant sub-populations are always less dynamically linked (smaller diffusion rate) than neighbour populations. In fact, dispersal between distant sub-populations may be ignored in spatially explicit models.
  3. Emigration from a given sub-population may be stochastic (random) or deterministic (e.g., density dependent emigration rate), while immigration rate is stochastic only. The latter follows logically from compliance with point 1. In other words, emigrating individuals may occasionally return, but only by chance and thus on equal terms with the other immigrants from neighbour populations. Hence, both the intrinsic and inter-population mixing process is assumed to lack spatial memory capacity for targeted returns at the individual level.
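The contrast in Point 3 – immigration as a purely stochastic process versus memory-based targeted returns – can be reduced to a toy sketch. This is purely illustrative (the patch names and functions are hypothetical, not a model from the text): a memoryless migrant picks a destination independently of its own history, while a memory-equipped migrant returns to a remembered patch regardless of distance.

```python
# Toy contrast between assumption 3 (memoryless immigration) and the
# memory-based targeted returns that MRW allows. Illustrative sketch only.
import random

random.seed(42)
patches = ["A", "B", "C", "D", "E"]

def memoryless_immigration(current):
    # classical diffusion: the destination is independent of individual history
    return random.choice([p for p in patches if p != current])

def memory_return(current, natal):
    # targeted return: the destination is determined by spatial memory
    return natal

print(memoryless_immigration("C"))    # stochastic destination
print(memory_return("C", natal="A"))  # deterministic destination: "A"
```

Under the memoryless scheme a returning emigrant is statistically indistinguishable from any other immigrant; under the memory scheme, emigration and immigration become coupled processes.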

In my alternative idea for a spatially (and temporally) structured kind of population dynamics, individual movement is assumed to comply with Multi-scaled random walk (MRW). Contrary to a classical Brownian motion and diffusion-like process, MRW defines both a scale-free kind of movement and a degree of targeted returns to previous locations. Thus, both emigration and immigration may be implicitly deterministic. The two perceptions of a structured population are conceptualized in the present illustrations: “Present idea” regards the prevailing metapopulation concept, and “Alternative idea” regards population dynamics under the MRW assumptions.

The upper part of the illustration to the right shows the two classical metapopulation assumptions in a simplistic manner. Shades of blue indicate the strength of inter-population mixing, which basically reaches neighbour populations only (at a rate of less than 10–15%, to satisfy a metapopulation structure) but not more distant ones. For example, the inter-generation dispersal rate between next-closest sub-populations is expected to be less than (10%)·(10%) = 1%, and falls further towards zero at longer distances. The “Alternative idea” at the lower part describes a more leptokurtic (long-tailed) dispersal kernel – in compliance with a power law (scale-free dispersal) – rather than an exponentially declining kernel (scale-specific dispersal), as in the standard metapopulation representation. The separate arrows for immigration from one sub-population to a neighbour population in the “Present idea” part illustrate the standard diffusion principle, while the single dual-pointing arrow of the “Alternative idea” illustrates that immigration and emigration are not independent processes, due to spatial memory-dependent return events. The emergent property of targeted returns connects even distant sub-populations in a partly deterministic manner.
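The compounding argument above can be made concrete by comparing the two kernel shapes numerically. The parameter values below (10% neighbour rate, power-law exponent 2) are illustrative assumptions only, chosen to match the worked example in the text.

```python
# Tail weight of an exponential (scale-specific) versus a power-law
# (scale-free) dispersal kernel, with distance d in units of the
# inter-neighbour spacing. Parameter values are illustrative.
def exponential_rate(d, neighbour_rate=0.10):
    # rate falls by a constant factor per unit distance: 10% -> 1% -> 0.1%
    return neighbour_rate ** d

def power_law_rate(d, neighbour_rate=0.10, beta=2.0):
    # rate falls as d^-beta: 10% -> 2.5% -> ~1.1%
    return neighbour_rate * d ** (-beta)

for d in (1, 2, 3):
    print(d, f"exponential: {exponential_rate(d):.4f}",
          f"power law: {power_law_rate(d):.4f}")
```

At three spacings the power-law kernel retains an order of magnitude more dispersal weight than the exponential one, which is the quantitative core of the “thin but long tail” argument.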

The “Alternative idea” design is termed the Zoomer model, which is explored both theoretically and by preliminary simulations in my book. A summary was presented in this post. A long-tailed dispersal kernel may embed and connect sub-populations that are separated by a substantial width of matrix habitat. Since the tail is thin (only a small fraction of individual displacements reach these distances), long-distance moves and directed returns happen at a low rate.

The Zoomer model as an alternative to the classical metapopulation concept has far-reaching implications for population dynamical modelling and ecological interpretation. For example, the property that two distant sub-populations may sometimes be more closely connected than either is to intermediate sub-populations – due to the emergence of a complex network structure at the individual level – was illustrated by my interpretation of the Florida snail kite research (see this post). At the individual level, research on Fowler’s toad (see this post) and the Canadian bison (see this post) shows how distant foraging patches may be more closely connected than some intermediate patches. This, too, complies with the Zoomer concept and opposes the classical metapopulation concept. In many posts I have shown examples of the leptokurtic distribution of an individual’s histogram of displacement lengths, covering very long distances in the tail part – potentially well into the typical scale regime of a metapopulation (for example, the lesser kestrel).

REFERENCES

Hanski, I. 1999. Metapopulation Ecology. Oxford University Press, Oxford.

Huffaker, C.B. 1958. Experimental Studies on Predation: Dispersion factors and predator–prey oscillations. Hilgardia 27:83-

Levins, R. 1969. Some demographic and genetic consequences of environmental heterogeneity for biological control. Bulletin of the Entomological Society of America 15:237–240