Positive and Negative Feedback Part II: Populations

Examples of positive feedback loops in population dynamics abound. Even though most models focus on negative feedback, like the logistic growth function, non-equilibrium “boom and bust” model designs have also been developed. In this post I elaborate on the particular kind of positive feedback loop that emerges from a cross-scale, dual-direction flow of individuals, as proposed by the parallel processing conjecture.

The image to the right illustrates – in simplistic terms – a spatially extended population model of the standard kind (e.g., a coupled map lattice design), where each virtually demarcated local population j at spatial resolution i and at a given point in time t contains Nij individuals. No borders for local migration are assumed; i.e., the environment is open both internally and externally towards neighbouring sites. Typically, these individuals are assumed to be subject to a local negative feedback loop in accordance with the principles of density-dependent regulation*. The larger the N, the larger the probability of an increased death rate and/or an increased emigration rate from time t to t+1, eventually leading both the local and the overall population towards a steady state. This balancing** condition lasts until some change (external perturbation) forces the system into a renewed round of negative feedback-driven dynamics. In a variant of this design, density regulation may be formulated to be absent until a critical local density is reached, leading to boom and bust (“catastrophic” death and emigration), which may be more or less perturbed by a random immigration rate from asynchronous developments in the surrounding Nij. More sophisticated variants abound, like the inclusion of time-lag responses, interactions with other trophic levels, and so on.
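For readers who prefer code to prose, the following is a minimal sketch of such a standard design – local logistic (density-dependent) regulation on a coupled map lattice with density-independent nearest-neighbour migration. The grid size, growth rate r, carrying capacity K and migration fraction m are illustrative assumptions, not values from any particular study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not taken from any particular study)
L = 32          # lattice side length
r = 0.8         # intrinsic growth rate
K = 100.0       # local carrying capacity
m = 0.1         # fraction emigrating to the four nearest neighbours per time step

N = rng.uniform(10, 50, size=(L, L))    # initial local abundances N_ij

def step(N):
    # Local negative feedback: logistic (density-dependent) regulation
    N = N + r * N * (1.0 - N / K)
    # Density-independent redistribution: each cell sends a fixed fraction m
    # of its individuals, split equally among its four neighbours (periodic borders)
    out = m * N
    N = N - out
    share = out / 4.0
    for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
        N += np.roll(share, shift, axis=axis)
    return N

for t in range(200):
    N = step(N)

print("mean local density:", round(N.mean(), 1), "| variance:", round(N.var(), 3))
```

As expected under this design, the local densities settle towards the carrying capacity, and immigration into a given cell is independent of that cell's own state.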

As previously explained in other posts, this kind of model framework depends on a premise of Markov-compliant processes at the individual level (a mechanistic system), and thus also at the population level (local or global compliance with the mean field principle). In this framework intrinsic dynamics may be density dependent or not, but from the perspective of a given Nij, extrinsic influence – like immigration of individuals – is always stochastic and thus density independent with respect to Nij. In other words, the net immigration rate during a given time increment is not influenced by the state of the population in this location (i,j). You can search my blog or read my book for descriptions and details on all these concepts.

To implement cross-location and dual-direction deterministic dynamics, multi-scaled behaviour and spatial memory need to be introduced. My parallel processing conjecture, which spins off various testable hypotheses, creates turmoil in this standard system design for population dynamics because it explicitly introduces such system complexity. For example, positive feedback loops may emerge. Positive feedback as described below may also effectively counteract the paradoxical Allee effect, which all “standard” population models are confronted with at the border zone of a population in an open environment**.

The dynamic driver of the complexity is the introduction of spatial memory in combination with a scale-free kind of dynamics along both the spatial and the temporal dimensions. In statistical-mechanical terms, parallel processing is incompatible with a mechanistic system. Thus, a kind of extended statistical mechanics is needed. I refer to the post where I outline the scale-extended description of a metapopulation system.

For the most extensive individual-level test of the parallel processing conjecture to date (indirectly also verifying positive feedback in space use), see our papers on statistical analysis of space use by red deer Cervus elaphus (Gautestad et al. 2013; Gautestad and Mysterud 2013). In my blog I have also provided several anecdotal examples of third-party research potentially supporting the parallel processing conjecture. For the sake of system coherence, if parallel processing is verified for individual space use of a given species under given ecological conditions, this behaviour should also be reflected in the complementary population dynamical modelling of that species and those conditions.

Extending the standard population model. As explained in a range of blog posts, my Zoomer model represents a population-level system design that is coherent with the individual-level space use process (in parsimonious terms), as formulated by the Multi-scaled random walk model. In my previous post I described the latter in the context of positive feedback from individual-level site fidelity. Below I illustrate positive feedback also at the population level, where site fidelity gets boosted by conspecific attraction. In other words, conspecifics become part of the individuals’ resource mapping at coarser scales, as allowed for by spatial memory. Consequently, a potential for a dual-direction deterministic flow of individuals is introduced (see above). Conspecific attraction is assumed to develop gradually from individual experience of conspecifics’ whereabouts during exploratory moves.

In the Zoomer model, some percentage of the individuals redistribute themselves over a scale range during each time increment. Emigration (“zooming out”) is marked by dotted arrows, and immigration (“zooming in”) is shown as continuous-line arrows. Numbers refer to the scale level of the neighbourhood of a given locality. This neighbourhood scales logarithmically; i.e., in a scale-free manner, in compliance with exploratory moves in the individual-level Multi-scaled random walk model. Zooming in depends on spatial memory in the individuals, and introduces a potential for the emergence of positive feedback at the population level.

First, consider the zooming process, whereby a given rate, z, of individuals (for example, z=5% on average at a chosen time resolution Δt) at a “unit” reference scale (k=i) redistribute themselves over a scale range beyond this unit scale***. During a given Δt, consider that 100 individuals become zoomers from the specific location marked by the white circle. In parallel with the zooming-out process, the model describes a zooming-in process of similar strength. The latter redistributes the zoomers in accordance with scale-free immigration of individuals under conspecific attraction. Thus, the number of individuals (N) at this location j at scale i, marked as Nij, will at the next time t+1 either contain N−100 individuals, if all the zoomers leave location j and end up somewhere in its neighbourhood, or the new number will be N−100 plus an influx of immigrants, where these immigrants come from the neighbourhood at scale i (those returning home again), scale i+1 (immigration from locations nearby), i+2 (from an even more distant neighbourhood), and so on.

In the ideal model variant of zooming we thus assume a scale-free redistribution of individuals, with zooming to a neighbourhood at scale ki+x taking place with probability 1/ki+x (Gautestad and Mysterud 2005). Under this condition zoomers to successively coarser scales become “diluted” over a proportionally larger neighbourhood area, and the maximum number of immigrants in this example is 100 + N’, where N’ is the average number of zoomers per location at unit scale k=i within the coarsest defined system scale k=i(max) for zooming surrounding location j at scale i.
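To make the bookkeeping concrete, here is a minimal sketch (not the actual Zoomer implementation) of how the 100 zoomers of the example could be spread with equal weight across scale levels, so that the per-cell contribution becomes increasingly “diluted” at coarser neighbourhoods. The number of levels and the assumption that each coarser neighbourhood quadruples in cell count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

n_zoomers = 100       # zoomers leaving location j during one time increment Δt
n_levels = 4          # scale levels i, i+1, i+2, i+3 (illustrative assumption)

# Assume each coarser neighbourhood embeds four times as many unit cells as the
# previous one (an illustrative quadrupling; the real grid geometry may differ).
cells_per_level = [4 ** (x + 1) for x in range(n_levels)]

# Scale-free zooming: each zoomer picks a scale level with equal probability,
# then lands in a uniformly chosen unit cell within that neighbourhood.
levels = rng.integers(0, n_levels, size=n_zoomers)
for x in range(n_levels):
    k = int(np.sum(levels == x))
    print(f"level i+{x}: {k} zoomers over {cells_per_level[x]} cells "
          f"-> {k / cells_per_level[x]:.2f} immigrants per cell on average")
```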

As a consequence of this kind of scale-free emigration of zoomers, the population system demonstrates zooming with equal weight of individual redistribution from scale to scale over the defined scale range (Lévy-like in this respect, with scaling exponent β ≈ 2; see Gautestad and Mysterud 2005). This “equal weight” hypothesis may be tested by studying the distribution of step lengths, when combined with other statistical fingerprints (in particular, verifying memory-dependent site fidelity; see Gautestad and Mysterud 2013).
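As a hedged illustration of such a test, the sketch below estimates the tail exponent of a step-length distribution by maximum likelihood and compares it with the expected β ≈ 2; the synthetic power-law steps and the lower cutoff are assumptions made for the example only.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic step lengths with a power-law tail, p(l) ~ l**(-beta), beta = 2
beta_true, l_min = 2.0, 1.0
u = rng.uniform(size=20000)
steps = l_min * (1.0 - u) ** (-1.0 / (beta_true - 1.0))   # inverse-CDF sampling

# Maximum-likelihood (Hill-type) estimate of the tail exponent above l_min
tail = steps[steps >= l_min]
beta_hat = 1.0 + tail.size / np.sum(np.log(tail / l_min))
print(f"estimated beta = {beta_hat:.2f} (expected about 2 under equal inter-scale weight)")
```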

Putting this parsimonious Zoomer model, with its system variables and parameters, into a specific ecological context opens a huge and basically unexplored potential for ecological inference under conditions of scale-free space use in combination with site fidelity.

Positive feedback in the Zoomer model. As shown in my series of simulations of the Zoomer model a few posts ago, a positive feedback loop emerges because locations with relatively high abundance of individuals have a relatively larger chance of receiving a net influx of zoomers during the next increment, and vice versa for locations with low abundance. The positive feedback emerges from the conspecific attraction process, linking the dynamics at different scales together in a parallel processing manner.
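The loop can be caricatured in a few lines (a toy calculation, not the simulation code from that post): if immigrating zoomers settle with a weight that increases faster than linearly with local abundance, cells that start out dense tend to gain further, and sparse cells tend to lose. The super-proportional exponent gamma below is an illustrative assumption to make the reinforcement visible in a toy setting; in the Zoomer model the feedback emerges from the full multi-scale redistribution under conspecific attraction rather than from an explicit exponent.

```python
import numpy as np

rng = np.random.default_rng(3)

n_cells = 25
z = 0.05               # zoomer rate per time increment (assumption)
gamma = 1.5            # super-proportional attraction weight (illustrative assumption)
N = rng.poisson(100, size=n_cells).astype(float)

cv_start = N.std() / N.mean()
for t in range(100):
    pool = z * N.sum()            # zooming out: z% of all individuals become zoomers
    N *= (1.0 - z)
    w = N ** gamma                # conspecific attraction: dense cells pull harder
    N += pool * w / w.sum()       # zooming in: the pool is allocated by attraction weight

print(f"CV of local abundance: {cv_start:.2f} -> {N.std() / N.mean():.2f}")
```

The growing coefficient of variation illustrates the self-reinforcing clumping tendency described above.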

This positive feedback loop from conspecific attraction also counteracts extinction from a potential Allee effect (see this post and this post), which has traditionally been understood and formulated within the standard population paradigm. The Zoomer model represents an alternative description of a process that effectively counteracts this effect.

NOTES

*) The immigration rate connects the local population to surrounding populations, but is – by necessity from the standard model design – density independent with respect to the dynamics in Nij.

**) Since the process is assumed to obey the Markov and mean field principles (a standard, mechanistic process), the arena and population system must either be assumed to be infinitely large, or the total set of local populations has to be assumed to be demarcated by some kind of physical border. Otherwise, net emigration will tend to drive N towards zero (extinction from standard diffusion in open environments). Individuals will “leak” from an open border zone to the surroundings.

***) The unit temporal scale for a population system should be considered coarser than the unit scale at the individual level, since the actual scale range under scrutiny typically is larger for population systems. In particular, to find the temporal scale at which, for example, 5% of the local population can be expected to move past the inter-cell borders at a given unit spatial grid resolution ki=1, one should expect Δt to be substantially larger than Δt at the individual level.

Consider that the difference in Δt is a function of the relative areas of short-range versus long-range displacements under the step-length curve for individual displacements, where the ∼5% long-step tail of this area represents the relative unit time in comparison to the rest of the distribution (thereby defined as intra-cell moves). Since this tail is only a fraction of the area covered by the remaining 95% of the displacements, the difference in Δt should scale accordingly.
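Stated as a back-of-the-envelope relation (my reading of the note above, with the 5% tail fraction taken as the example value rather than a general constant):

```latex
% If a fraction p of the individual displacements are long enough to cross
% unit-cell borders (here p is assumed to be about 0.05), then
\[
\Delta t_{\mathrm{pop}} \;\approx\; \frac{1-p}{p}\,\Delta t_{\mathrm{ind}}
\;\approx\; \frac{0.95}{0.05}\,\Delta t_{\mathrm{ind}} \;\approx\; 19\,\Delta t_{\mathrm{ind}}
\]
```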

REFERENCES

Gautestad, A. O., and I. Mysterud. 2005. Intrinsic scaling complexity in animal dispersion and abundance. The American Naturalist 165:44-55.

Gautestad, A. O., and A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18.

Gautestad, A. O., L. E. Loe, and A. Mysterud. 2013. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models. Journal of Animal Ecology 82:572-586.

 

Positive and Negative Feedback Mechanisms Part I: Individual Space use

The standard theories on animal space use rest on some shaky behavioural assumptions, as elaborated on in my papers, in my book and here in my blog. One of these assumptions regards the assumed lack of influence of positive feedback, in particular the self-reinforcing effect that emerges when individuals are moving around with a cognitive capacity for both temporal and spatial memory utilization. The common ecological methods to study individual habitat use, like the utilization distribution (a kernel density distribution with isopleth demarcations), use/availability analysis, and so on, explicitly build on statistical theory that not only disregards such positive feedback, but in fact requires that this emergent property does not influence the system under scrutiny.

Unfortunately, most memory-enhanced numerical models that simulate space use are rigged to comply with negative rather than positive feedback effects. For example, the model animal successively stores its local experience with habitat attributes while traversing the environment, and it uses this insight in the sequential calculation of how long to stay in the current location and when to seek to re-visit some particularly rewarding patches (Börger et al. 2008; van Moorter et al. 2009; Spencer 2012; Fronhofer et al. 2013; Nabe-Nielsen et al. 2013). In other words, the background melody is still to maintain compliance with the marginal value theorem (Charnov 1976) and the ideal free distribution dogma, both of which are negative feedback processes rather than self-reinforcing processes that would counteract this balancing tendency.
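To make the contrast with positive feedback concrete, here is a minimal caricature of such a negative feedback rule – not a reimplementation of any of the cited models: the forager depletes its current patch and leaves when the instantaneous intake drops below its running long-term average, which pulls exploitation back towards an even spread over patches. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

patches = rng.uniform(5.0, 10.0, size=8)   # initial patch quality (assumed values)
regrow = 0.05                              # per-step regrowth towards a ceiling of 10
long_term_rate = 0.1 * patches.mean()      # running estimate of average intake
current = 0
time_spent = np.zeros(patches.size)

for step in range(2000):
    intake = 0.1 * patches[current]        # intake proportional to remaining quality
    patches[current] -= intake             # local depletion: the negative feedback
    patches += regrow * (10.0 - patches)   # all patches recover towards the ceiling
    long_term_rate = 0.99 * long_term_rate + 0.01 * intake
    time_spent[current] += 1
    if intake < long_term_rate:            # marginal-value-type departure rule
        current = rng.integers(patches.size)

print("time steps spent per patch:", time_spent.astype(int))
```

Over time the exploitation spreads roughly evenly across the identical patches; there is no self-reinforcing preference for familiar ones.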

Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive.
https://en.wikipedia.org/wiki/Negative_feedback

A common curlew Numenius arquata foraging on a field within its summer home range. Anecdotally, one may observe that a specific individual tends to revisit not only specific fields while foraging, but also specific parts of these fields. If this site fidelity is influenced by a rising tendency to prefer familiar space at the expense of revisiting potentially equally rewarding patches, based on previous visits, a positive feedback (self-reinforcing space use) has emerged. This effect then interferes with the traditional ecological factors, like selection based on habitat heterogeneity, in a complex manner. Photo: AOG.

The above definitions follow the usual path of explaining negative feedback as “good” and positive feedback as something scary (I will return to this misconception in a later post in this series). It echoes the prevailing “Balance of nature” philosophy of ecology, which I have criticized on several occasions (see, for example, this post).

In a previous post, “Home Range as an Emergent Property“, I described how memory map utilization under specific ecological conditions may lead to a self-reinforcing re-visitation of previously visited locations (Gautestad and Mysterud 2006, 2010); in other words, a positive feedback mechanism*. Contemporary research on animal movement covering a wide range of taxa, scales, and ecological conditions continues to verify site fidelity as a key property of animal space use.

I use a literature search to test an assumption of the ideal models that has become widespread in habitat selection theory: that animals behave without regard for site familiarity. I find little support for such “familiarity blindness” in vertebrates.
Piper 2011, p1329.

Obviously, in the context of spatial memory and site fidelity it should be an important theme for research to explore to what extent and under which conditions negative and positive feedback mechanisms are shaping animal space use.

Positive feedback from site fidelity will fundamentally influence analysis of space use. For example, two patches with a priori similar ecological properties may end up being utilized with a disproportionate frequency due to initial chance effects regarding which patch happened to gain familiarity first**. Further, if the animal is utilizing the habitat in a multi-scaled manner (which is easy to test using my MRW-based methods), this major error factor in a standard use/availability analysis cannot be expected to be statistically hidden by simply studying the habitat at a coarser spatial resolution within the home range.
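A minimal sketch of this chance effect (an illustrative reinforcement rule, not the MRW algorithm): two a priori identical patches, with the probability of the next visit weighted by accumulated familiarity, diverge in use depending on which patch happens to gain an early lead.

```python
import numpy as np

rng = np.random.default_rng(5)

def run(n_visits=1000):
    familiarity = np.ones(2)            # two a priori identical patches
    for _ in range(n_visits):
        p = familiarity / familiarity.sum()
        choice = rng.choice(2, p=p)     # revisit probability grows with familiarity
        familiarity[choice] += 1.0      # self-reinforcing site fidelity
    return familiarity / familiarity.sum()

# The final share of use of patch 0 varies widely between replicate runs,
# driven purely by which patch happened to gain familiarity first.
print([f"{run()[0]:.2f}" for _ in range(5)])
```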

Despite this theoretical-empirical insight, the large majority of wildlife ecologists still tend to use classic methods resting on the negative feedback paradigm to study various aspects of space use. The rationale can be described by two end-points on a continuum: either one ignores the effect of self-reinforcing space use (assuming, or hoping, that the effect does not significantly influence the result), or one uses these classic methods while holding one’s nose.

The latter category accepts the prevailing methods’ basic shortcomings – either based on field experience or inspired by reading about alternative theories and methods – but the strong force of conformity in the research community hinders bold steps out of the comfort zone. Hence, the paradigm prevails. Again I can’t resist referring to a previous post, “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“.

Within the prevailing modelling paradigm, implementing spatial memory utilization in combination with positive feedback-compliant site fidelity is a mathematical and statistical nightmare – if at all possible. However, as a reader of this blog you are at least fully aware of the fact that some numerical models have been developed lately, outside the prevailing paradigm. These approaches not only account for memory map utilization but also embed the process of positive feedback in a scale-free manner (I refer to our papers and to my book for model details; see also Boyer et al. 2012).

 

NOTES

* The paper explores space use under the premise of positive feedback during superabundance of resources, in combination with negative feedback during temporal and local over-exploitation.

** In Gautestad and Mysterud (2010) I described this aspect as the distance-from-centre effect; i.e., the utilization distribution falls off at the periphery of utilized patches independently of any similar degradation of preferred habitat.

 

REFERENCES

Börger, L., B. Dalziel, and J. Fryxell. 2008. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecology Letters 11:637-650.

Boyer, D., M. C. Crofoot, and P. D. Walsh. 2012. Non-random walks in monkeys and humans. Journal of the Royal Society Interface 9:842-847.

Charnov, E. L. 1976. Optimal foraging: the marginal value theorem. Theor. Popul. Biol. 9:129-136.

Fronhofer, E. A., T. Hovestadt, and H.-J. Poethke. 2013. From random walks to informed movement. Oikos 122:857-866.

Gautestad, A. O., and I. Mysterud. 2006. Complex animal distribution and abundance from memory-dependent kinetics. Ecological Complexity 3:44-55.

Gautestad, A. O., and I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750.

Nabe-Nielsen, J., J. Tougaard, J. Teilmann, K. Lucke, and M. C. Forchhammer. 2013. How a simple adaptive foraging strategy can lead to emergent home ranges and increased food intake. Oikos 122:1307-1316.

Piper, W. H. 2011. Making habitat selection more “familiar”: a review. Behav. Ecol. Sociobiol. 65:1329-1351.

Spencer, W. D. 2012. Home ranges and the value of spatial information. Journal of Mammalogy 93:929-947.

van Moorter, B., D. Visscher, S. Benhamou, L. Börger, M. S. Boyce, and J.-M. Gaillard. 2009. Memory keeps you at home: a mechanistic model for home range emergence. Oikos 118:641-652.

The Balance of Nature?

To understand populations’ space use one needs to understand the individual’s space use. To understand the individuals’ space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies “strange” fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical attacks. We need to address the challenge by also rethinking the very basic level of how we perceive an ecosystem’s constituents: how we assume individuals, populations and communities to relate to their surroundings in terms of statistical mechanics.

Stuart L. Pimm summarizes this Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates the need to extend the perspective from short-series, local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.

Balancing on a straw – a Eurasian wren Troglodytes troglodytes (photo: AOG).

My own approach towards the Grand Ecological Challenge started with similar thoughts and concerns as raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems I found the theoretical toolbox to be void of key instruments to build realistic dynamics. In fact, the current methods were in many respects even seriously misleading, due to what I considered some key dissonant model assumptions.

In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: “MRW and Ecology – Part IV: Metapopulations?“), for the time being culminating with my recent series of posts on “Simulating Populations” (parts I-X).

I guess the main take-home message from the present post is:

  • Without a realistic understanding – i.e., modelling power – of individual dispersion over space, time and scale, it will be futile to build a theoretical framework with deep explanatory and predictive value with respect to population dynamics and population ecology. In other words, some basic aspects of system complexity at the “particle level” need to be resolved.
  • Since we in this respect typically are considering either the accumulation of space use locations during a time interval (e.g., a series of GPS fixes) or a population’s dispersion over space and how it changes over time, we need a proper formulation of the statistical mechanics of these processes. In other words, when simplifying extremely complicated systems into a manageable, smaller set of variables, parameters and key interactions, we have to invoke the hidden layer.
  • With a realistic set of basic assumptions in this respect, the modelling framework will in due course be ready to be applied to issues related to the Grand Ecological Challenge – as so excellently summarized by Pimm in 1991. In other words, before we can have any hope of a detailed prediction of the local or regional fate of a given species or community of species under a given set of circumstances, we need to build models that are void of the classical system assumptions that have cemented the belief in the so-called balance of nature.

NOTES

*) The need to rethink the concept of carrying capacity and the accompanying “balance” (density dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If “standard” population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the whole equals the sum of its parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the condition at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.

**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).

REFERENCES

Gautestad, A. O. 2015. Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Indianapolis, Dog Ear Publishing.

O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.

Pimm, S. L. 1991. The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Strong, D. E., Simberloff, D., Abele, L. G. & Thistle, A. B. (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton, Princeton University Press.

Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.

Simulating Populations VIII: Time Series and Pink Noise

On rare occasions one’s research effort may lead to a Eureka! moment. While exploring and experimenting with the latest refinement of my Zoomer model for population dynamics, I stumbled over a key condition for tuning the model in and out of full coherence between spatial and temporal variability from the perspective of a self-similar statistical pattern. Fractal-like variability over space and time has been repeatedly reported in the ecological literature, but rarely has such a pattern been studied simultaneously as two aspects of the same population. My hope is that the Zoomer model also has a broader potential by casting stronger light on the strange 1/f noise phenomenon (also called pink noise, or flicker noise), which still tends to create paradoxes between empirical patterns and theoretical models in many fields of science.

  

First, consider again some basic statistical aspects of a Zoomer-based simulation. Above I show the spatial dispersion of individuals within the given arena. See previous posts I-VII in this series for other examples, system conditions and technical details. The new aspect in the present post is a shift of focus towards temporal variation of the spatial dispersion, and fractal properties in that respect (called a self-affine pattern in a series of measurements, consistent with 1/f noise*). This phenomenon usually indicates the presence of a broad distribution of time scales in the system.

In particular, observe the rightmost log(M,V) regression. As in previous examples, the variable M in Taylor’s power law model V = aM^b represents – in my application of the law – a combination of the average number of individuals in a set of samples at different locations at a given scale, supplemented by samples of M at different scales. The latter normally contributes the most to the linearity of the log(M,V) plot, due to a wider spread of the range of M. During the present scenario, which was run for 5,000 time steps after skipping an initial transient period, the self-similar and self-organized spatial pattern of slope b ≈ 2 and log(a) ≈ 0 dominated, only occasionally interrupted by some short intervals with b << 2 and log(a) >> 0. These episodes were caused by simultaneous disruption of a relatively high number of local populations in cells that exceeded their respective carrying capacity level (CC), leading to a lot of simultaneous re-positioning of these individuals to other parts of the arena in accordance with the actual simulation rules. Thus, some temporary “disruptive noise” could appear now and then, occasionally leading to a relatively scrambled population dispersion with b ≈ 1 (Poisson distribution) and log(a) >> 0 for short periods of time.
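For readers who want to reproduce the log(M,V) diagnostics on their own output, the regression step itself is simple; the sketch below uses synthetic (M,V) pairs as a stand-in for the simulation samples.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic (M, V) pairs obeying Taylor's power law V = a * M**b with b = 2
a, b = 1.0, 2.0
M = np.logspace(0.5, 3.0, 30)
V = a * M ** b * np.exp(rng.normal(0.0, 0.2, M.size))     # multiplicative noise

# Ordinary least squares on the log-log transformed variables
slope, intercept = np.polyfit(np.log10(M), np.log10(V), 1)
print(f"estimated b = {slope:.2f}, log10(a) = {intercept:.2f}")
```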

The actual time series for the present example is given above, showing local density at a given location (a cell at unit scale) over a period of 5,000 increments. The scramble events due to many simultaneous population crashes in other cells over the arena can indirectly be seen as the strongest density peaks in the series. The sudden inflow of immigrants to the given locality caused a disruption to the monitored local population, which typically crashed during the next increment due to density > CC.

In many respects the series illustrates the “frustrating” phenomenon seen in long time series in ecology: the longer the observation period, the larger the accumulated magnitude of fluctuations (Pimm and Redfearn 1988, Pimm 1991). However, in the present scenario, where conditions are known, there is no uncertainty about whether the complex fluctuations are caused by extrinsic (environmental) or intrinsic dynamics. Since CC was set to be uniform over space and time and only varied due to some level of stochastic noise (a constant plus a smaller random number), the time series above expresses intrinsically driven, self-organized fluctuations over space, time and scale.

The key lesson from the present analysis is to understand local variability as a function not only of local conditions, but also of coarser-scale events and conditions in the surroundings of the actual population being monitored. The latter is often referred to as long-range dependence. In the Zoomer model this phenomenon is rooted in the emergence of scale-free space use by the population, due to scale-free space use at the individual level (the Multi-scaled random walk model). In the present post I illustrate long-range dependence also in the temporal variability.

To remove most of the serial autocorrelation, the original series above was subsampled 1:80. The correlogram to the right shows this frequency to be a feasible compromise between full non-autocorrelation and a reasonably long remaining series length.

After this transformation towards observing the series at a different scale, the sampled series was subject to log(Mt,V) analysis, where Mt represents – in a sliding window manner – the mean abundance over a few time steps of the temporally coarser-scaled series, and V represents these intervals’ respective variance.
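A sketch of this two-step procedure is given below; the subsampling rate, the window width and the placeholder series are assumptions for the sake of illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

series = rng.lognormal(mean=3.0, sigma=0.5, size=5000)  # placeholder local-density series
sampled = series[::80]                                  # 1:80 subsampling against autocorrelation

# Sliding window: mean (Mt) and variance (V) over a few consecutive coarse steps
w = 5
Mt = np.array([sampled[i:i + w].mean() for i in range(sampled.size - w)])
V = np.array([sampled[i:i + w].var() for i in range(sampled.size - w)])

slope, intercept = np.polyfit(np.log10(Mt), np.log10(V), 1)
print(f"temporal Taylor exponent b = {slope:.2f}, log10(a) = {intercept:.2f}")
```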

Interestingly, the log(Mt,V) pattern was reasonably similar to the log(M,V) pattern from the spatial transect above, i.e., the marked line with b ≈ 2 and log(a) ≈ 0, provided that the series was analyzed at a sufficiently coarse temporal sampling scale to avoid most of the serial autocorrelation.

The most intriguing result is perhaps the figure below, showing the log-log transformed power spectrogram of the original time series**. Over a temporal scale range up to a frequency of 1500 (the x-axis), the distribution of power (the y-axis), i.e., amplitudes squared, satisfies 1/f noise!

See my book for an introduction to this statistical “beast” ***, where I devote several chapters to it. At higher frequencies further down towards the “rock bottom” of 4096 (the dimmed area), the power shows inflation. This is probably due to an integer effect at the finest resolutions of the time series, since individuals always come as whole numbers rather than fractions. Thus, the finest-resolved peak in power was probably influenced by this effect.

To my knowledge, no other model framework has been able to simultaneously reproduce the often empirically reported fractal-like spatial population abundance (the spatial expression of Taylor’s power law) together with a fractal-like temporal variation (the temporal Taylor’s power law). Only either-or results have previously been reproduced by models among the thousands of papers on this topic over the last 60 years or so.

For an introduction to 1/f noise in an ecological context, you may find this link to a review by Halley and Inchausti (2004) helpful. I cite:

1/f-noises share with ecological time series a number of important properties: features on many scales (fractality), variance growth and long-term memory. (…) A key feature of 1/f-noises is their long memory. (…)  Given the undoubted ubiquity and importance of 1/f-noise, it is surprising that so much work still revolves about either observing the 1/f-noise in (yet more) novel situations, or chasing what seems to have become something of a “holy grail” in physics: a universal mechanism for 1/f-noise. Meanwhile, important statistical questions remain poorly understood.

What about ecological methods based on the theory above? By exploring the statistical pattern of animal dispersion over space and its variability over time – and mimicking this pattern in the Zoomer model – a wide range of ecological aspects may be studied within a hopefully more realistic framework than the presently dominating approach using coupled map lattice models or differential equations. Statistical signatures of a quite novel kind can be extracted from real data and interpreted in the light of complex space use theory.

In upcoming posts I will on one hand dig into the set of Zoomer model “knobs” that steer the statistical pattern in and out of – for example – the 1/f noise condition, and on the other hand I will exemplify the potential for ecological applications of the model.

REFERENCES

Halley, J. M. and P. Inchausti. 2004. The increasing importance of 1/f noises as models of ecological variability. Fluctuation and Noise Letters 4:R1–R26.

Pimm, S. L. 1991. The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Pimm, S. L., and A. Redfearn. 1988. The variability of animal populations. Nature 334:613-614.

NOTES

*) “1/f-noise” refers to 1/f^ν-noise for which 0 ≤ ν ≤ 2; “near-pink 1/f-noise” refers to cases where 0.5 ≤ ν ≤ 1.5, and “pink noise” refers to the specific case where ν = 1. All 1/f^ν-noises are defined by the shape of their power spectrum S(ω):

S ∝ 1/ω^ν

Here ω = 2πf is the angular frequency. “White noise” is characterized by ν ≈ 0, and its integral, “red” or “brown” noise, has ν ≈ 2. A classical random walk along a time axis is an example of the latter, and its derivative produces white noise.
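As a practical illustration of these definitions, the sketch below generates an approximately pink series by frequency-domain shaping (one common recipe, and an assumption on my part – it is not the Zoomer model output) and then estimates ν from the log-log slope of the power spectrum.

```python
import numpy as np

rng = np.random.default_rng(8)

# Generate approximate 1/f (pink) noise by shaping white noise in the frequency domain
n = 4096
white_spectrum = np.fft.rfft(rng.normal(size=n))
f = np.fft.rfftfreq(n, d=1.0)
shaping = np.ones_like(f)
shaping[1:] = 1.0 / np.sqrt(f[1:])            # amplitude ~ f^(-1/2)  =>  power ~ 1/f
pink = np.fft.irfft(white_spectrum * shaping, n)

# Estimate the power spectrum and fit S ~ 1/f^nu on log-log axes
power = np.abs(np.fft.rfft(pink)) ** 2
mask = f > 0
nu = -np.polyfit(np.log10(f[mask]), np.log10(power[mask]), 1)[0]
print(f"estimated spectral exponent nu = {nu:.2f} (pink noise: nu = 1)")
```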

**) Since an FFT analysis requires a series length equal to a power of 2, only the first 4096 steps were included.

***) 1/f noise is challenging, both mathematically and conceptually:

The non-integrability in the cases ν ≥ 1 is associated with infinite power in low frequency events; this is called the infrared catastrophe. Conversely for ν ≤ 1, which contains infinite power at high frequencies, it is called the ultraviolet catastrophe. Pink noise (ν = 1) is non-integrable at both ends of the spectrum. The upper and lower frequencies of observation are limited by the length of the time series and the resolution of the measurement, respectively.
Halley and Inchausti (2004), p R7.

 

The Hidden Layer

Focusing on the statistical pattern of space use without acknowledging the biophysical model for the process will create much confusion and unnecessary controversy. Ecologists are now forced to get a better grip on concepts from statistical mechanics than earlier generations had to. For example, to understand the transformation from data on actual behaviour to pattern analysis of space use, the concept of the hidden layer represents the first gate to pass.

Research on animal movement and space use has always had a central place in ecology. However, as more field data, better computers and more sophisticated statistical methods have become available, some old dogmas have come under attack. Specific theoretical aspects of this quest for improved model realism have emerged from the rapidly growing cooperation between biologists and physicists in the emerging field of macro-level biophysics. The so-called Lévy flight foraging hypothesis is one example. And, of course, I can’t resist mentioning the MRW theory.

A booted eagle Hieraaetus pennatus is triggering a flock of spotless starlings Sturnus unicolor to show swarming behaviour. Malaga river delta, December 2017. Photo: AOG.

In 1985 Charles Krebs described ecology as the scientific study of the interactions that determine the distribution and abundance of organisms. In an ethological context animal space use is studied on two levels – tactical and strategic. The tactical level regards understanding individual biology and behavioural ecology on a moment-to-moment temporal scale. Strategic space use adds an extra layer of complexity to the tactical behaviour. In a simplistic manner we may refer to this layer as the animal’s state at a given moment in time; for example, whether it is hungry or not (e.g., in hunting mode). Strategy also involves the processing of memory-based goals. Strategies are executed at coarser time scales than tactics. Some of the interaction between tactics and strategy may then – under specific conditions (see below) – be transformed into dynamic models at the tactical level; so-called mechanistic models, which consist of a set of executable rules covering the respective cognitive and environmental conditions. Validating the model dynamics and resulting statistical patterns against real animal data then rates the degree of model realism. For example, realistic tactical models have been developed to cast light on the “clumping behaviour” (dense swarming) of flocks of birds that are threatened by a raptor.

The myriad of rules that influence animal movement makes detailed modelling an impossible task, and such modelling would anyway only lead to a descriptive picture with no value for ecological hypothesis testing. In fact, the signature of successful modelling is simplification. Thus, only specific aspects of the reference individual’s behaviour can be included and scrutinized.

The present post addresses one particular aspect of system simplification: coarse-graining the temporal scale. This approach implies a qualitative change in how the space use system is observed and analyzed. Actually, temporal coarse-graining is forced upon us when studying animal space use by sampling an individual’s successive displacements as a series of locations (fixes) during a given period of time. During each inter-fix interval the observed displacement regards the resultant vector of a myriad of intermediate and unobserved events. What has happened to the moment-to-moment kind of behavioural ecology? It has become buried below the hidden layer.

At the surface of this hidden layer you lose sight of behavioural details (like raptor response and swarming rules), but you gain access to an alternative perspective on movement and space use. Alternative statistical descriptors emerge at this temporally coarser scale, following the laws of statistical mechanics. What is analyzed above the hidden layer is the overall pattern from many displacement events that are aggregated into a spatial scatter of fixes.

For example, you may coarse-grain both the temporal and the spatial system dimensions, and study the aggregated distribution of fixes at the spatial scale of virtual grid cells (pixels) and the temporal scale of the fix sampling period. The spatio-temporal variations in intensity of space use within the actual space-time extents then allow for modelling and hypothesis testing, but now using statistical-mechanical descriptors of space use intensity. These descriptors are either not valid below the hidden layer (e.g., the information content of local density of fixes) or they have an alternative interpretation (e.g., movement as a “step” versus movement as a resultant vector for a given interval and location). Both levels of analysis require large sets of input to allow for statistical treatment.
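In practice, this coarse-graining step can be as simple as binning the fixes into virtual grid cells and working with the resulting counts; the sketch below uses arbitrary synthetic fix coordinates and an arbitrary pixel size.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic series of fixes (x, y) inside a 1000 x 1000 m arena (arbitrary example)
fixes = rng.uniform(0, 1000, size=(5000, 2))
pixel = 50.0                                    # virtual grid resolution in metres

rows = np.floor(fixes[:, 1] / pixel).astype(int)
cols = np.floor(fixes[:, 0] / pixel).astype(int)
n_side = int(1000 / pixel)

# Statistical-mechanical descriptors above the hidden layer:
counts = np.zeros((n_side, n_side), dtype=int)
np.add.at(counts, (rows, cols), 1)              # local density of fixes per pixel
incidence = int((counts > 0).sum())             # number of fix-embedding pixels

print("occupied pixels (incidence):", incidence, "| max local fix density:", counts.max())
```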

Why is the hidden layer concept and the statistical-mechanical approach more important to relate to today than in earlier decades? The short answer is the realization – seeded by better and more extensive data – that animal space use involves more than a couple of universality classes of movement (see this post). In fact, in my book, papers and blog posts I have detailed eight classes, most of which are unfamiliar to you.

To understand space use that is influenced by spatial memory and scale-free movement, statistical-mechanical modelling is a prerequisite for a realistic representation of such complex systems, unless you limit your perspective to a short-term behavioural bout within a very localized arena – in other words, “a single piece of a jigsaw puzzle of space use dynamics”. For example, if you zoom closely into a small segment of a circle you observe an approximately straight line. Take a step outwards, and you are facing a qualitatively different geometry – the mathematics of a curve and finally a full circle. Stubbornly staying within the linear framework when analyzing more extensive objects than what you observe at fine scales will force you into a corner filled with paradoxes.

Fine-grained and coarse-grained analyses of animal space use are complementary approaches to the same system.

 

MRW and Ecology – Part VII: Testing Habitat Familiarity

Consider having a series of GPS fixes, and you wonder whether the individual was utilizing familiar space during your observation period – or started building site familiarity around the time when you started collecting data. Simulation studies of the Multi-scaled random walk (MRW) show how you may cast light on this important ecological aspect of space use.

First, you should of course test for compliance with the MRW assumptions: (a) site fidelity with no “distance penalty” on return events, (b) scale-free space use over the spatial range that is covered by your data, and (c) uniform space utilization on average over this scale range. One single test in the MRW Simulator, the A(N) regression, casts light on all these aspects. First, you seek to optimize the pixel resolution for the analysis (estimating the Characteristic scale of space use, CSSU). Next, if you find “Home range ghost” compliance, i.e., incidence I expands proportionally with the square root of the sample size of fixes, your data supports (a) spatial memory utilization with no distance penalty, due to sub-diffusive and non-asymptotic area expansion, (b) scale-free space use, due to linearity of the log[I(N)] scatter plot, and (c) equal inter-scale weight of space use, due to a slope ≈ 0.5.
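A sketch of the I(N) part of this procedure is shown below. The uniform random fixes and the fixed pixel size are placeholders (such data will not reproduce the Home range ghost); with real, MRW-compliant fix series and a CSSU-optimized pixel one would expect a linear log[I(N)] plot with slope close to 0.5.

```python
import numpy as np

rng = np.random.default_rng(10)

# Placeholder fix series: uniform random points in a 1000 x 1000 m arena.
# NOTE: such data will NOT show the Home range ghost; this only illustrates
# the I(N) bookkeeping that would be applied to a real fix series.
fixes = rng.uniform(0, 1000, size=(11000, 2))
pixel = 25.0                               # stand-in for the CSSU-optimized pixel size

def incidence(sample):
    """Number of fix-embedding grid cells (pixels) for a sample of fixes."""
    cells = set(zip((sample[:, 0] // pixel).astype(int),
                    (sample[:, 1] // pixel).astype(int)))
    return len(cells)

N_values = np.array([100, 200, 400, 800, 1600, 3200, 6400, 11000])
I_values = np.array([incidence(fixes[rng.choice(len(fixes), n, replace=False)])
                     for n in N_values])

slope, intercept = np.polyfit(np.log10(N_values), np.log10(I_values), 1)
print(f"log-log slope = {slope:.2f} (MRW expectation with real data: about 0.5)")
```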

Supposing your data confirmed MRW, how to test for time-dependent strength of habitat familiarity? Consider the following simulation example, mimicking space use during a season and under constant environmental conditions.

The red dots show log(N,I) for various sample sizes up to the total set of 11,000 fixes. Each dot represents the average I for the respective N under the two methods continuous sampling and frequency sampling (counteracting the autocorrelation effect; see a previous post). However, analyzing the first 1,000 fixes separately (black dots) consistently revealed a more sloppy space use in terms of aggregated incidence at a given N, relative to the total season. The next 1,000 fixes, however, were compliant with the total series both with respect to slope and y-intercept (CSSU) (green dots).

The reason for the discrepancy in space use during the initial period of fix sampling* was, in the present scenario, the actual simulation condition: site familiarity was set to develop “from scratch” simultaneously with the onset of fix collection. I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to**. At the start of the sampling period, the underlying path is short in comparison to the total path that was traversed during the entire season, and – crucially – return steps targeted previous locations from the actual simulation period only, and not locations prior to this start time. In other words, the animal was assumed to settle down in the area at the point in time when the simulation commenced.

To conclude, if your data show a CSSU and slope of similar magnitude in the early and later phases of data collection, you sampled an individual with a well-established memory map of its environment during the entire observation period. The implicit assumption for this conclusion is of course that the environmental conditions were constant during the entire sampling period, including the initial phase. Using empirical rather than synthetic data means that additional tests would have to be performed to cast light on this aspect.

NOTES

*) The presentation above reflects the pixel resolution that was optimized for the total series. The first 1,000 fixes showed a more coarse-grained space use, reflected in a 50% larger CSSU scale (not shown: the optimal pixel size was 50% larger for this part of the series), despite constant movement speed and return rate for the entire simulation period. In this scenario a larger CSSU [a coarser optimal pixel for the A(N) analysis] signals a less mature habitat utilization in the home range’s early phase. The CSSU was temporarily inflated during the build-up of site familiarity, but – somewhat paradoxically – the accumulated number of fix-embedding grid cells (incidence) for a given N at this scale was smaller. These two effects, reflecting the degree of habitat familiarity during home range establishment, should be considered transient.

**) Two definitions should be specified:

  • I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to.
  • I define strength of site fidelity as proportional to the return frequency.

Both definitions rest on the assumptions of no distance penalty on return targets and no time penalty on returns; i.e., an infinite spatio-temporal memory horizon relative to the actual sampling period.