The Inner Workings of Parallel Processing

The concept of scale-free animal space use becomes increasingly difficult to avoid in modeling and statistical analysis of data. The empirical support for power law distributions continues to pile up, whether the pattern appears in GPS fixes of black bear movement or in the spatial dispersion of a population of sycamore aphids. What is the general class of mechanism, if any? In my approach to this challenging and often frustrating field of research on complex systems, one particular conjecture – parallel processing (PP) – percolates through the model architecture. PP requires a non-mechanistic kind of dynamics. Does that sound like a contradiction in terms? To illustrate PP in a simple graph, let's roll dice!

Please note: the following description represents novel details of the PP concept, still awaiting journal publication. Thus, if you are inspired by this extended theory of statistical mechanics to the extent that it percolates into your own work, please give credit by referring to this blog post (or my book). Thank you.

The basic challenge regards how to model a process that consists of a mixture of short-term tactics and longer-term (coarser-scale) strategic goals. Consider that the concept of "now" for a tactical response regards a temporally finer-grained event than the time scale for executing a more strategic event, which consequently takes place within a more "stretched" time frame relative to the tactical scale.

“Strategy” is defined in a hierarchy-theoretical manner; a coarser-scale strategy consequently invokes a constraint on finer-scaled events (references in my book). For example, while an individual executes a strategic change of state, like starting a relatively large-distance displacement (towards a goal), finer-scaled events during this execution (consider shorter-term goals) are processed freely, but within the top-down constraint that they should not hinder the execution of the coarser goals. Hence, the degrees of process freedom increase with the scale distance between a given fine-scaled goal and a coarser-scaled goal.

To illustrate such a PP-compliant scale range from tactics to strategy within an extended statistical-mechanical system, consider the two-dimensional graph to the right. The x-axis represents a sequence of unidirectional classic time, and the y-axis represents a log2-scaled* expression of time's orthogonal axis, "elacs" (ε), along this sequence.

The continuous x-y plane has been discretized for simpler conceptualization, and each (x,y) pair shows a die. This die represents a potential change of state of the given process at the given point in time and at the given temporal scale. An actual change of state at a given (t,ε) location is marked by a yellow die, while a white die describes an event still in process at this scale. The respective number of eyes on each die could represent a set of available states for a given system variable at this scale. To illustrate complex dynamics (over-)simplistically in terms of concepts from quantum mechanics, consider each magnitude of ε on the y-axis to represent a wavelength in a kind of "complex system" wave function, and each yellow die to represent a "collapse" of this probability wave into a specific execution of the given event at a given point in time at this time scale.

As the system is viewed towards coarser time scales (larger ε), the average frequency of change of state decreases in proportion to 1/ε = 1/b^z, where b is the logarithmic base and increasing z describes an increasing scale level b^z. In other words, the larger the z, the more "strategic" a given event at this scale. In short, consider that each die on scale level 1 [ε = b^0 = 1] is rolled at each time increment t=1, t=2, …, t=8; each die at level 2 [ε = b^1, i.e., ε = 2 for b = 2] is on average rolled at every second time increment, and so on.
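
To make the rolling scheme concrete, here is a minimal Python sketch (toy code for this illustration only, not part of any published implementation), assuming base b = 2 and the nine scale levels of the graph:

```python
# Toy illustration of the dice scheme: at each time increment the die at scale
# level z is "rolled" with probability 1/b**z, so coarser levels change state
# correspondingly more rarely, and several levels may fire at the same increment.
import numpy as np

rng = np.random.default_rng(seed=1)
b = 2                    # logarithmic base of the elacs (epsilon) axis
n_levels = 9             # z = 0..8, i.e. epsilon = 1, 2, 4, ..., 256
n_steps = 8              # the eight time increments of the illustration

events = np.zeros((n_levels, n_steps), dtype=bool)
for t in range(n_steps):
    for z in range(n_levels):
        events[z, t] = rng.random() < 1.0 / b**z   # a "yellow die" at (t, epsilon)

for z in reversed(range(n_levels)):
    print(f"epsilon={b**z:4d}  " + "".join("X" if e else "." for e in events[z]))
# Columns containing more than one X show events executed in parallel at
# different time scales.
```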

In the illustrative example above, no events have taken place during the eight time increments at the two coarsest scales b^z, where z=7 (ε=128) and z=8 (ε=256). A substantial increase of the observation period would be needed to increase the probability of actually observing such a coarse-scaled change of system state.

More strategic events are executed more rarely. Strategic events at a given scale b^z are initiated in a stochastic manner when observed from a finer time scale (smaller z), but appear increasingly deterministic when observed from coarser time scales. At finer scales such a strategic event may be inexplicable (thus appearing unexpectedly at a given point in time), while the causal relationship of the given process becomes established (visible) when the process is observed at the properly coarsened time scale. However, at each time scale there is an element of surprise, due to influence from even coarser-scale constraints and the even lower-frequency changes of system state at these coarser scales.

The unit time scale, ε = b^0 = 1, captures the standard time axis, which is one-dimensional as long as the system can be described as non-complex. In other words, the y-axis' dynamics do not occur, and – consequently – it makes no sense to talk about a parallel process in progress**. In this standard scale-specific framework, time is one-dimensional and describes scale-specific processes realistically. This includes the vast theories of low-order Markovian processes ("mechanistic" modeling), the mathematical theory of differential equations (calculus), and standard statistical mechanics.

For a deeper argument for why a PP kind of fundamental system expansion seems necessary for a realistic description of system complexity, read my book and my previous blog posts. By the way, these ideas should of course be considered pieces of a theoretical framework in progress.

The ε-concept was introduced in my book to allow for complex dynamics within a non-Markovian physical architecture. In other words, to allow for a proper description of parallel processing, the concept of time as we know it from standard modeling needs in my view to be heuristically expanded into a two-dimensional description of dynamics.

The bottom line: it works! In particular, it seems to survive the acid tests when applied to empirical data, both with respect to individual space use and population dispersion.

The environment is hereby expanded with a two-dimensional representation of dynamical time. This implies that an individual's environment consists not only of its three-dimensional surroundings at a given point in time but also of its temporal "surroundings", due to the log-compliant (scale-free) stretching of time. In this manner an implementation of parallel processing turns the common Markovian, mechanistically modeled framework into a special case. In this special case of standard mechanistic dynamics, a given process may be realistically represented either by a scale-specific process at a given (unit) scale or by a trivial linear superposition of such processes (e.g., a composite random walk toggling between different magnitudes of the diffusion parameter for each "layer"). On the other hand, complexity arises when such a description based on one-dimensional time is not sufficient to reproduce the system realistically.

Observe that in a PP system several events (changes of system state) may be executed in parallel! In the illustration above, see for example the situation at t=5, where events at three different time scales (as defined by ε) happen to be initiated simultaneously. This kind of dynamics represents a paradox within the constraints of a Markovian (mechanistic) system.

An earlier illustration of the PP framework was given here. For other examples, search this blog for “parallel processing” or read my book.

Various aspects of scaling in animal space use – from power law scaling of displacement lengths (Lévy-like distributions), through fractal dispersion of GPS fixes (the Home range ghost model), to scale-free distribution of populations (Taylor's power law and the Zoomer model) – may be natural outcomes of systems that obey the PP conjecture.

NOTE

*) The base, b, of the logarithm does not matter. Any integer base larger than 1 introduces scaling of the ε-axis.

**) In a standard, mechanistic process an event describes a change of system state at a given point in space at a given point in time. No "time stretching" takes place.

 

Temporally Constrained Space Use, Part III: Critique of Common Models

There is no doubt among field ecologists that animals from a broad range of taxa and over a wide range of ecological conditions utilize their environment in a spatial memory-influenced manner. Spatial map utilization has now been verified well beyond vertebrates, for example in dragonflies and some solitary wasps. To me at least it is thus a mystery why theoretical models that are void of influence from a memory map – for example ARS, Lévy walk and CTRW (see Parts I and II) – still dominate ecological research, with mostly no critical questions asked about their feasibility.

It is a fact that the memory-less mainstream models all share the premise that the data should not be influenced by map-dependent site fidelity. In other words, applying ARS, Lévy walk and CTRW models as stochastic representations of space use also implies accepting that the animal's path is self-crossing by chance only, and not influenced by targeted returns. Such returns can be expected to seriously disrupt results on – for example – habitat selection, since self-reinforcing patch utilization (positive feedback) obviously becomes a serious issue for methods that are based on memory-less space use, where revisits are statistically independent events.

Despite performing hypothesis tests on data that obviously contradict this hidden assumption about lack of spatial memory influence – for example movement in a home range context (where the home range is an emergent property of such returns) – memory-less models are applied by cultural instinct or from a misconception that alternatives do not exist. "Everybody else is using these standard models, so why not me?"

This attitude obviously hinders space use-related ecological research on its path towards becoming hard science at the level we are used to finding in physics, chemistry and geology; i.e., models with strong predictive power. The laid-back excuse that animal ecology is not only more complicated but also basically more complex does not hold anymore. Biophysical research, for example based on inspiration from – or developed in compliance with – my parsimonious MRW model (Song et al. 2010; Boyer et al. 2012; Boyer and Solis-Salas 2014; Mercado-Vásquez and Boyer 2018), shows how even complex space use systems may now be treated analytically with success.

So far, there still exists only one book (Gautestad 2015) that is dedicated to criticizing the sloppy culture of model selection in ecological research. The statistical errors that follow from ignoring the frequently violated assumption of memory-less space use percolate through both my book and my blog*.

MRW implements a combination of scale-free space use with memory-dependent, occasional returns to previous sites, in accordance with the parallel processing conjecture. The ratio of the average return interval to a previously visited location, t_ret, to the sampling interval, t_obs – that is, ρ = t_ret/t_obs – will lead to different analytical results as a function of ρ.

This important ratio defines how the observed distribution of step lengths is a function of  memory-influenced movement that complies with the MRW formulation: a mixture of scale-free exploratory steps and occasional returns to a previous location. I cite from Part II:

If the animal in question is utilizing spatial memory a lot of confusion, paradoxes and controversy may thus appear if the same data are analyzed on the basis of erroneously applying memory-less models within different regimes of ρ!

For example, a decreasing t_ret for a given t_obs implies stronger site fidelity. The variable observer effect that is expressed by t_obs becomes apparent within a quite wide transition range around t_obs ≈ t_ret. For example, a Brownian motion-like form of the step length distribution may erroneously be found if ρ << 1, and a power law form can be expected when ρ >> 1, with a truncated power law to be observed in-between. However, power law compliance may arise both from scale-free but spatially memory-less behaviour (Lévy walk) and from MRW when ρ >> 1. Recall that MRW implies a combination of spatially memory-influenced and Lévy walk-like movement in statistical terms.

The step length distributions to the right (Gautestad and I. Mysterud 2005) illustrate from MRW-simulated data how changing the ratio from ρ >> 1 towards ρ < 1 apparently makes the step length distribution shape-shift from a power law (apparently Lévy) to a negative exponential (apparently Brownian). This paradoxical pattern appears simply from changing the sampling frequency of a given series of successive relocations. As the observation frequency becomes larger than the return frequency, the paradox appears from comparing the expectations of models erroneously based on the memory-less space use assumption; i.e., Brownian motion vs. Lévy walk.
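
For readers who want to reproduce this effect qualitatively, the following Python sketch (my own simplified toy version of an MRW-like process, not the simulation code behind the published figures) generates a path of scale-free exploratory steps mixed with occasional returns, and then re-samples it at different observation intervals t_obs; the tail of the observed step length distribution thins out as ρ = t_ret/t_obs shrinks:

```python
import numpy as np

def simulate_mrw(n_steps=100_000, t_ret=100, levy_exponent=2.0, rng=None):
    """Toy MRW-like path: scale-free steps plus occasional targeted returns."""
    rng = rng or np.random.default_rng(0)
    path = np.zeros((n_steps, 2))
    for i in range(1, n_steps):
        if rng.random() < 1.0 / t_ret:
            path[i] = path[rng.integers(i)]   # return to a previously visited location
        else:
            # Pareto step length (density exponent ~levy_exponent), uniform direction
            length = (1.0 - rng.random()) ** (-1.0 / (levy_exponent - 1.0))
            angle = rng.uniform(0.0, 2.0 * np.pi)
            path[i] = path[i - 1] + length * np.array([np.cos(angle), np.sin(angle)])
    return path

def observed_step_lengths(path, t_obs):
    fixes = path[::t_obs]                     # sub-sample the path at interval t_obs
    return np.linalg.norm(np.diff(fixes, axis=0), axis=1)

path = simulate_mrw()
for t_obs in (10, 100, 1000):                 # rho = 10, 1, 0.1 when t_ret = 100
    steps = observed_step_lengths(path, t_obs)
    # crude tail diagnostic: a heavy (power law-like) tail gives a much larger
    # p99/p50 ratio than a thin (exponential-like) tail
    print(f"t_obs={t_obs:5d}  rho={100 / t_obs:5.1f}  "
          f"p99/p50 = {np.percentile(steps, 99) / np.percentile(steps, 50):8.1f}")
```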

The Figure to the right (Gautestad and A. Mysterud 2013) illustrates the same transition more graphically. The hump (blue colour) that is observed for ρ = 10 towards the extreme tail of the distribution, leading to a hump-like "hockey stick" pattern, becomes almost invisible at ρ = 100 (Appendix 1 in Gautestad and A. Mysterud 2013; see also Gautestad 2012). This gradual appearance/disappearance of the hockey stick as a function of ρ >> 1 illustrates the pseudo-LW aspect of MRW. By the way, such a "hump" on the tail part of a power law distribution has in fact been found and commented on in several analyses of empirical data. Citing from Gautestad and A. Mysterud (2013):

It is interesting that one of the main issues raised in this  respect regards the “problematic” occasional over-representation of very long step lengths even relative to an ideal Lévy walk distribution, invoking the term “Lévy walk-like” search (Sims and Humphries 2012; Sims et al. 2012). This “hump” in the long tail part of the distribution has been hypothesized to emerge from some kind of environmental forcing (Sims and Humphries 2012). However, here we have shown (Figure 3) that a similar hump – called a hockey stick – is in fact expected by default if MRW-compliant data are analysed within a specific range of the ratio between return events and observation interval.
Gautestad and Mysterud 2013, p14.

The take-home message from these two examples is to stress the importance of testing for spatial memory before choosing which statistical model(s) to apply for a specific analysis.
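
How might such a test look in practice? Below is a deliberately crude screening sketch (toy code, not a published protocol; the simulated fixes array is only a placeholder for a real series of relocations): compare the observed rate of revisits to grid cells with the rate found in surrogate series where the displacement vectors have been shuffled, which preserves the step length distribution but destroys targeted returns.

```python
import numpy as np

def revisit_rate(fixes, cell_size):
    """Fraction of fixes falling in a grid cell that was already visited earlier."""
    visited, revisits = set(), 0
    for c in map(tuple, np.floor(fixes / cell_size).astype(int)):
        revisits += c in visited
        visited.add(c)
    return revisits / len(fixes)

def shuffled_surrogate(fixes, rng):
    """Memory-less surrogate: same displacement vectors, shuffled order."""
    steps = np.diff(fixes, axis=0)
    rng.shuffle(steps)
    return np.vstack([fixes[:1], fixes[0] + np.cumsum(steps, axis=0)])

rng = np.random.default_rng(2)
# Placeholder for real data: a toy path that jumps back to the starting area
# every 50th step, purely to mimic targeted returns.
fixes = np.cumsum(rng.standard_normal((2000, 2)) * 10.0, axis=0)
fixes[::50] = rng.standard_normal((40, 2)) * 10.0

obs = revisit_rate(fixes, cell_size=25.0)
null = [revisit_rate(shuffled_surrogate(fixes, rng), 25.0) for _ in range(99)]
print(f"observed revisit rate: {obs:.3f}")
print(f"memory-less surrogates: {np.mean(null):.3f} ± {np.std(null):.3f}")
```

An observed revisit rate well above the surrogate range suggests that memory-less models are a questionable starting point for the analysis.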

NOTE

*) In my research I also criticize memory-implementing models where spatial utilization beyond the individual's current perceptual field builds on a mechanistic (Markov-compliant) kind of information processing. See, for example, this post. Consequently, in the Scaling cube, these Markov models are located in the lower right corner (MemRW), in contrast to the "parallel processing"-based MRW, which you find in the upper right corner. In Gautestad et al. (2013) we tested these alternative model classes on red deer Cervus elaphus and found strong support for the MRW framework. The red deer moved in compliance with scale-free space utilization, in parallel with site fidelity from targeted returns, in a manner which supported parallel processing. Additional research has also given support to MRW lately; for example, see Merkle et al. (2014), who tested a set of contemporary hypotheses on memory-influenced movement in free-ranging bison Bison bison and found support for a central premise of MRW in the summer ranges of this species.

REFERENCES

Boyer, D., M. C. Crofoot, and P. D. Walsh. 2012. Non-random walks in monkeys and humans. Journal of the Royal Society Interface 9:842-847.

Boyer, D. and C. Solis-Salas. 2014. Random walks with preferential relocations to places visited in the past and their application to biology. arXiv 1403.6069v1:1-5.

Gautestad, A. O. 2012. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion. Journal of the Royal Society Interface 9:2332-2340.

Gautestad, A. O. and A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18.

Gautestad, A. O., L. E. Loe and I. Mysterud. 2013. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models. Journal of Animal Ecology 82:572-586.

Mercado-Vásquez, G. and D. Boyer. 2018. Lotka-Volterra systems with stochastic resetting. arXiv:1809.03975 [cond-mat.stat-mech].

Merkle, J. A., D. Fortin, and J. M. Morales. 2014. A memory-based foraging tactic reveals an adaptive mechanism for restricted space use. Ecology Letters, doi: 10.1111/ele.12294.

Sims, D. W. and N. E. Humphries. 2012. Lévy flight search patterns of marine predators not questioned: a reply to Edwards et al. arXiv:1210.2288 [q-bio.PE].

Sims D. W, N. E. Humphries, R. W. Bradford and B. D. Bruce. 2012. Lévy flight and Brownian search patterns of a free-ranging predator reflect different prey field characteristics. Journal of Animal Ecology 81:432-442.

Song, C., T. Koren, P. Wang, and A.-L. Barabási. 2010. Modelling the scaling properties of human mobility. Nature Physics 6:818-823.

Temporally Constrained Space Use, Part II: Approaching the Memory Challenge

In Part I three models for temporally constrained space use were summarized. Here in Part II I put them more explicitly into the context of ecology with focus on some key assumptions for the respective models. Area restricted search (ARS), Lévy walk (LW) and Continuous time random walk (CTRW) are statistical representations of disparate classes of temporally constrained space use without explicit consideration of spatial memory effects. Hence, below I reflect on a fourth model, Multi-scaled Random Walk (MRW), where site fidelity gets a different definition relative to its spatially memory-less counterparts.

A cattle egret Bubulcus ibis is foraging within a wide perimeter surrounding its breeding site. Spatial memory is utilized not only to be able to return to the nest but also to revisit favored foraging locations during a bout, based on a memory map of past experience. Photo AOG.

First, ARS is typically formulated as a composite random walk-like behaviour in statistical terms, which could be suitable for situations where a Markov-compliant ("mechanistic") behaviour is either verified or can be reasonably assumed (memory-less and scale-specific movement in both time and space). In this scenario the diffusion exponent can be estimated for movement bouts in different habitats and time intervals, and the result can be interpreted behavioural-ecologically. For example, the diffusion rate can be expected to be smaller in optimal patches than elsewhere. In other words, the local staying time increases due to a more jagged path.

Second, Lévy walk is a special kind of random walk. Most steps are relatively short, but others may be extremely long. Sequences of short steps in-between the long ones make the overall space use appear locally constrained during these periods*). Lévy walk is characterized by a spatially memory-less statistical representation of scale-free ("hierarchical") movement within a given spatial scale range. Beyond this range the distribution of step lengths will show increased compliance with a non-scaling, truncated Lévy walk; i.e., a composite model with an exponential tail rather than a power law for the extreme part of the step length distribution. By analyzing the step length distribution within the scale-free (power law) regime using different sampling intervals, one should be able to verify model compliance from a stationary power exponent. A Lévy walk is statistically self-similar in space, and thus the power exponent is expected to be relatively unaffected by the sampling scheme; see Reynolds (2008). Calculating the difference in the median step length for a given sampling interval when studying subsets of the movement data under different environmental conditions brings the model into the realm of ecology [see a practical method in, for example, Gautestad (2012)].
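
As a rough sketch of such a consistency check (my own toy code, using the standard maximum-likelihood estimator for the exponent of a continuous power law tail, and a simulated Lévy-like path as a stand-in for real tracking data), one may estimate the exponent at several sampling intervals and check that it stays approximately stationary:

```python
import numpy as np

def power_law_exponent(step_lengths, x_min):
    """Continuous power-law MLE: alpha = 1 + n / sum(ln(x / x_min)) for x >= x_min."""
    x = step_lengths[step_lengths >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

def resampled_steps(path, interval):
    fixes = path[::interval]
    return np.linalg.norm(np.diff(fixes, axis=0), axis=1)

# Toy stand-in for a real track: a Levy-like walk with step-length density ~ l^-2.
rng = np.random.default_rng(3)
lengths = (1.0 - rng.random(20_000)) ** (-1.0)
angles = rng.uniform(0.0, 2.0 * np.pi, 20_000)
path = np.cumsum(np.column_stack([lengths * np.cos(angles),
                                  lengths * np.sin(angles)]), axis=0)

for interval in (1, 2, 4, 8):
    steps = resampled_steps(path, interval)
    alpha = power_law_exponent(steps, x_min=np.median(steps))
    print(f"sampling interval {interval}: estimated tail exponent {alpha:.2f}")
```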

Third, Continuous Time Random Walk (CTRW) is suitable where the animal is found to occasionally stop moving. The temporal distribution of the duration of such resting episodes can then be fitted to statistical models; for example, a power law, a negative exponential, or a mixture as in a truncated power law. The spatial distribution of step lengths is in CTRW fitted independently of the temporal distribution. Bartumeus et al. (2010) applied the CTRW framework to study "intensive versus extensive searching" (scale-free sub-diffusive versus super-diffusive search) in foraging of Balearic shearwaters Puffinus mauretanicus and Cory's shearwaters Calonectris diomedea along the coast of Spain. The authors interpreted the results ecologically, with weight on the difference between presence and absence of local trawling activity. See Part I, where I gave a brief summary.

However, is CTRW a proper framework for these seabirds? At the end of each foraging bout they obviously utilized spatial memory to successfully return to their breeding location. CTRW assumes consistently random crossing of the movement path, due to the model's lack of a spatial memory description. To me it seems illogical to assume that these birds should toggle between memory-dependent, goal-oriented returns at the end (and possibly at the start) of each trip and memory-less Brownian motion (ARS-like?) during foraging when moving in the proximity of trawlers. The same argument about conditional memory switch-off may be raised for scale-free (Lévy-like) search in the absence of trawlers.

In the context of memory-less statistical modelling of movement (the three models above), site fidelity is defined by the strength of “slow motion”, and how the distribution of local staying times is expected to vary with ecological conditions. Compare this with the alternative model Multi-scaled Random Walk, where site fidelity is defined as the strength (frequency) of targeted returns to a previous location on a path. This return frequency may be interpreted as a function of ecological conditions. Hence, MRW explicitly invokes both spatial memory and its relative strength:

Three time scales are defined: the implicit interval between successive displacements in simulations (t), the average return interval to a previous location (t_ret), and the observation interval on the movement path (t_obs). The latter represents GPS locations in real data, and is applied to study the effect from varying ρ = t_ret/t_obs (relative strength of site fidelity for a given t_obs).
Gautestad and Mysterud (2013), p4

Note that an increasing t_ret for a given t_obs implies weakened site fidelity, and the functional form of the step length distribution is influenced by the ratio ρ = t_ret/t_obs. For example, a Brownian motion-like form may be found if ρ << 1, and a power law form can be expected when ρ >> 1, with a truncated power law (Lévy-like) to be observed in-between. See Figure 3 in Gautestad and I. Mysterud (2005) and Figure 3 in Gautestad and A. Mysterud (2013).

If the animal in question is utilizing spatial memory a lot of confusion, paradoxes and controversy may thus appear if the same data are analyzed on the basis of erroneously applying memory-less models within different regimes of ρ!

The MRW model may thus offer interesting aspects, with a potential for alternative interpretation of the results of space use analyses when put into the context of – for example – foraging shearwaters. Thanks to the three time scales of MRW as above – where the third variable, t, represents the unit (t≡1) spatiotemporal scale for exploratory moves – it should be possible to test, for example, Lévy walk or CTRW against MRW using real movement data.

More on this in Part III.

NOTE

*) While temporally constrained space use in ARS reflects differences in environmental forcing, the randomly occurring intervals of short steps in a Lévy walk are by default due to intrinsic behaviour.

REFERENCES

Bartumeus, F., L. Giuggioli, M. Louzao, V. Bretagnolle, D. Oro, and S. A. Levin. 2010. Fishery discards impact on seabird movement patterns at regional scales. Current Biology 20:215-222.

Gautestad, A. O. 2012. “Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion.” Journal of the Royal Society Interface 9: 2332-2340.

Gautestad, A. O., and I. Mysterud. 2005. Intrinsic scaling complexity in animal dispersion and abundance. The American Naturalist 165:44-55.

Gautestad, A. O. and A. Mysterud 2013. “The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion.” Movement Ecology 1: 1-18.

Reynolds, A. 2008. How many animals really do the Lévy walk? Comment. Ecology 89:2347-2351.

Temporally Constrained Space Use, Part I: Three Models

Temporally constrained space use is a key property of animal movement. With respect to vertebrates three main statistical representations are particularly popular among modelers, based on disparate theoretical foundations. Which one should one use for analysis of a particular data set? As always in ecological research, one needs some simple protocol to distinguish between alternative model assumptions.

Animal paths are neither straight lines nor a dense dot of positions from juggling back and forth at the same spot. Typically we see a complicated combination of these two extreme patterns: some quite straightforward moves occasionally interrupted by more jagged movement. In order to infer behavioural and ecological results from space use one needs to study the data in the context of a realistic theoretical framework.

Outside the realm of temporal site fidelity – e.g., a drifting home range (Doncaster and Macdonald 1991) – ecological textbooks typically explain the mixture of straight and convoluted movement bouts as Area Restricted Search (ARS).

Area-restricted search. A foraging pattern in which a consumer responds to an intake of food by slowing down its movement and remaining longer in the vicinity of the most recently located food item. This behaviour causes consumers to remain longer in areas where the density of food items is high than in areas where it is low.
A Dictionary of Ecology. Encyclopedia.com. 29 Jul. 2018.

In terms of statistical models, this rather qualitative description of behaviour may be formulated in many ways. A popular one is to combine classic or correlated random walk with two distinct parameter values for the intrinsic step length distribution (the λ value) in F(r) ∝ e^(-λr). In this manner, movement varies with the jaggedness of the path (number of turns pr. period of time) rather than with the movement speed. A larger λ implies smaller step lengths on average, which tends to increase local staying time during intervals when this movement mode is active. By fine-tuning the respective λ1 and λ2 in such a superposition of two "randomly toggling modes", this so-called composite random walk can even be made to mimic the second main approach to modelling complicated paths, the Lévy flight model (Benhamou 2007; but see Gautestad 2013):

Lévy flights are, by construction, Markov processes. For general distributions of the step-size, satisfying the power-like condition, the distance from the origin of the random walk tends, after a large number of steps, to a stable distribution due to the generalized central limit theorem, enabling many processes to be modeled using Lévy flights.
https://en.wikipedia.org/wiki/Lévy_flight.

Lévy flights (and walks) are typically thought of as producing "fat-tailed" step length distributions. However, within an often observed parameter range of the distribution [Pr(U > u) = O(u^-k) with 1 < k < 2] in real animals, one should not forget that half of the displacements in the distribution are in fact relatively short! In fact, the dominating step length bin is that of ultra-short moves, leading to a path that is conceptually (albeit not statistical-mechanically) similar to the slow-down effect of more jagged moves during a composite random walk.
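
A quick numerical illustration of this point (my own toy code; the two λ values of the composite walk and the Lévy exponent are arbitrary choices for the example) is to draw a large sample from each model and compare the median with the upper tail:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k, u_min = 100_000, 1.5, 1.0

# Levy-type step lengths: survival function Pr(U > u) = (u / u_min)**-k
levy = u_min * (1.0 - rng.random(n)) ** (-1.0 / k)

# Composite random walk: toggle between two exponential step-length modes
lam1, lam2 = 1.0, 0.05            # hypothetical "intra-patch" and "travel" modes
intra_patch = rng.random(n) < 0.8
composite = np.where(intra_patch,
                     rng.exponential(1 / lam1, n),
                     rng.exponential(1 / lam2, n))

for name, steps in (("Levy (k=1.5)", levy), ("composite RW", composite)):
    print(f"{name:>14}: median={np.median(steps):6.2f}  "
          f"95th pct={np.percentile(steps, 95):8.2f}  max={steps.max():10.1f}")
# The Levy sample combines a short median with an open-ended tail, whereas the
# composite walk's tail decays exponentially at the scale of its "travel" mode.
```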

Such "knots" of ultra-short moves along a Lévy path bring us to the third class of movement models, Continuous Time Random Walk (CTRW). In this case the movement may be arrested for a shorter or longer period:

The step length distribution and a waiting time distribution (“resting” between steps) describe mutually independent random variables. This independence between jump lengths and waiting time to perform the next step makes the difference between CTRW and ordinary random walk (including Brownian motion).
Page 91 in: Gautestad, A. O. 2015, Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence Indianapolis, Dog Ear Publishing.
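
A bare-bones numerical sketch of this definition (toy code only; the specific waiting time exponent and the Gaussian jump kernel are arbitrary choices for illustration) looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)
n_jumps = 10_000

# Mutually independent draws: heavy-tailed waiting times, Gaussian 2-D jumps
waits = (1.0 - rng.random(n_jumps)) ** (-1.0 / 0.7)   # power-law waiting times
jumps = rng.normal(0.0, 1.0, size=(n_jumps, 2))       # jump lengths, independent of waits

times = np.cumsum(waits)                              # when each jump happens
positions = np.cumsum(jumps, axis=0)                  # where the walker is after each jump
print(f"total elapsed time {times[-1]:.1f}, "
      f"net displacement {np.linalg.norm(positions[-1]):.1f}")
```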

Again, by fine-tuning the intrinsic model formulation and parameter conditions of CTRW one may in fact produce a Lévy flight-like movement pattern (Reynolds 2010; Schulz et al. 2013). Towards another extreme, one may achieve a classic random walk look-alike pattern. Conceptually, CTRW thus belongs on the left wall of the Scaling cube (x=0; y and z vary in accordance with model variant and parameter settings).

CTRW (which is closely connected to the concept of fractional diffusion) is typically invoked to explain a kind of switching – "intensive versus extensive searching" (scale-free sub-diffusive versus super-diffusive search) – that is now being found in real animal data, like the foraging of Balearic shearwaters Puffinus mauretanicus and Cory's shearwaters Calonectris diomedea along the coast of Spain (Bartumeus et al. 2010).

Since I lack good pictures of shearwaters I decorate this post with a snapshot of another seabird, the sandwich tern Thalasseus sandvicensis. At least the picture was taken in the same general area of the Mediterranean. Photo: AOG.

In particular, they found that the birds were more "restless" when taking advantage of fishery discards than in the absence of trawlers, implying a higher probability of leaving a localized area pr. unit time during trawling activity. Specifically, during periods of fishery discard utilization the birds tended to show a smaller temporal scaling exponent for staying time (the temporal aspect of CTRW; a larger β in their site fidelity function S(t) ∝ t^-β, implying fewer events with a particularly prolonged staying time). In the spatial aspect of flight length distributions, when fishery discards were present the birds tended to show good compliance with a negative exponential function*.

On the other hand, in the absence of trawlers a compliance with Lévy walk (truncated power law) was found. Bringing space and time together, they found that the birds tended towards sub-diffusive foraging in the presence of fishery discards (despite – somewhat counter-intuitively – a smaller local staying time within a given patch at a given spatial resolution), and super-diffusive foraging under natural conditions.

Despite the mathematical and numerical attractiveness of composite random walk, Lévy walk and CTRW and their well-explored statistical properties, they unfortunately all lack what may be a crucial component of foraging behaviour: spatial memory; i.e., the condition x>0 of the scaling cube.

Without spatial memory, self-crossing of an individual’s path happens by chance only, not intentionally by returns to a previous location (site fidelity, whether we consider short term or long term time scales).

  • May a model that implements spatial memory offer an alternative interpretation of the results presented by Bartumeus et al. 2010?
  • May this alternative hypothesis even offer a logical explanation for the apparent paradox that the birds were more restless locally when the movement simultaneously was more spatially constrained in overall terms?

The memory aspect of temporally constrained space use will be explored in Part II.

NOTE

*) Somewhat confusingly relative to common practice, Bartumeus et al. (2010) use the exponential formula variant F(r) ∝ e^(-r/λ), which makes the average step length proportional to λ rather than to 1/λ.

REFERENCES

Bartumeus, F., L. Giuggioli, M. Louzao, V. Bretagnolle, D. Oro, and S. A. Levin. 2010. Fishery discards impact on seabird movement patterns at regional scales. Current Biology 20:215-222.

Benhamou, S. 2007. How many animals really do the Lévy walk? Ecology 88:1962-1969.

Doncaster, C. P., and D. W. Macdonald. 1991. Drifting territoriality in the red fox Vulpes vulpes. Journal of Animal Ecology 60:423-439.

Gautestad, A. O. 2013. Animal space use: Distinguishing a two-level superposition of scale-specific walks from scale-free Lévy walk. Oikos 122:612-620.

Reynolds, A. 2010. Bridging the gulf between correlated random walks and Lévy walks: autocorrelation as a source of Lévy walk movement patterns. J. R. Soc. Interface 7: 1753–1758.

Schulz, J. H. P., A. V. Chechkin, and R. Metzler. 2013. Correlated continuous time random walks: combining scale-invariance with long-range memory for spatial and temporal dynamics. J. Phys. A: Math. Theor. 46:1-22.

Analytical Sensitivity to Fuzzy Fix Coordinates

In empirical data GPS fixes are never exact positions. A "fuzziness field" will always be introduced due to uncertain geolocation. When analyzing a set of fixes in the context of multi-scaled space use, are the parameter estimates sensitive to this kind of statistical error? Simultaneously, I also explore the effect of constraining the potential home range by disallowing sallies to the outermost range of the available area.

To explore the triangulation error effect on space use analysis I have simulated Multi-scaled Random Walk in a homogeneous environment with N=10,000 fixes (of which the first 1,000 fixes were discarded) under two scenarios: a "sharp" location set (no uncertainty polygons), and strong fuzziness. The latter introduced a random displacement to each x-y coordinate with a standard deviation (SD) of magnitude approximately equal to the system condition's Characteristic Scale of Space Use (CSSU). Displacements to the outermost parts of the given arena were disallowed, to study how this may influence the analyses. I then ran the following three algorithms in the MRW Simulator: (a) analysis of A(N) at the home range scale, (b) analysis of A(N) at a local scale (splitting the home range into four quadrants), and (c) analysis of the fix scatter's fractal properties.

The following image shows the sharp fix set and the strongest fuzziness condition.

By visual inspection it is easy to spot the effect of the spatial error (SD = 1182 length units, upper row to the right). However, the respective A(N) analyses at the home range scale generated a CSSU estimate that was only 10% larger (linear scale) for the fuzzy set. When superimposing cells of CSSU magnitude onto each fix, the home ranges appear quite similar in overall appearance and size. This was to be expected, since fuzziness influences fine-resolution space use only.

Visually, both home range areas appear somewhat constrained with respect to range, due to the condition to disallow displacements to the peripheral parts of the defined arena (influencing less than 1% of the displacements).

A(N) analysis (the Home range ghost). The two conditions appeared quite similar in plots of log[I(N)], where I is the number of fix-embedding pixels at optimized pixel resolution, as described in previous posts.

However, for the fuzzy series there is a more pronounced break-point, with a transition towards exponent ∼0.5 at sample sizes larger than log2(N) ≈ 3. This break-point "lifted" the regression line somewhat for the fuzzy series, leading to a slightly larger intercept with the y-axis when interpolating towards log(N) = 0. This difference between the two conditions with respect to the y-intercept, Δlog(c) from the home range ghost formula log[I(N)] = log(c) + z*log(N), also defines the difference in CSSU when comparing the sharp and fuzzy data sets. Recall that CSSU ≡ c.
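
For readers who want to experiment with this kind of regression themselves, the following Python sketch (schematic toy code, not the MRW Simulator; the path generator at the end is only a placeholder for a real fix series) counts fix-embedding pixels I as a function of sample size N and reads off z and c from the log-log fit:

```python
import numpy as np

def incidence(fixes, pixel_size):
    """Number of fix-embedding pixels (I) at the given pixel resolution."""
    cells = np.floor(fixes / pixel_size).astype(int)
    return len({tuple(c) for c in cells})

def home_range_ghost_fit(fixes, pixel_size, n_values):
    """Regress log2(I) on log2(N): slope = z, intercept = log2(c), CSSU = c."""
    logN = [np.log2(n) for n in n_values]
    logI = [np.log2(incidence(fixes[:n], pixel_size)) for n in n_values]
    z, log_c = np.polyfit(logN, logI, 1)
    return z, 2.0 ** log_c

# Toy placeholder path; substitute a real N x 2 array of serially collected fixes
# and the (optimized) pixel resolution for an actual analysis.
rng = np.random.default_rng(6)
fixes = np.cumsum(rng.standard_normal((9000, 2)) * 50.0, axis=0)
z, c = home_range_ghost_fit(fixes, pixel_size=100.0, n_values=2 ** np.arange(5, 13))
print(f"slope z = {z:.2f}, intercept c = {c:.1f} (c defines the CSSU estimate)")
```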

The spatial constraint on extreme displacements relative to the respective home ranges’ centres apparently did not influence these results.

I have also superimposed local CSSU analyses for the respective four quadrants of the two home ranges. When the area extent for analysis is constrained in this manner – i.e., the spatial extent is reduced to 1:4 in area terms (1:2 linearly) for each local sub-set of fixes – the respective (N,I) plots need to be adjusted by a factor which compensates for the difference in scale range.

Since the present MRW conditions were run under fractal dimension D=1, each local log2[(N,I)] plot is rescaled to log2[(N,I)] + log2[Δ^D] = log2[(N,I)] + 1, where Δ is the relative change of scale (grain/extent) and D=1. After this rescaling the overall CSSU and the local CSSU overlap, as shown by the regression lines in the A(N) analyses above. Overlapping CSSU implies that the four quadrants had similar space use conditions, which is true in this simplified case.

Fractal analysis. The Figure below shows the magnitude of incidence, I, as a function of relative spatial resolution k (the "box counting method"), spanning the scale range from k=1 (the entire arena, linear scale 40,000 units) down to k = 1:2^12, i.e., 40,000/2^12 ≈ 9.8 units, linear scale*.

Starting from the coarsest resolution, k=1, the log-log regression obviously shows I = 1. At the resolutions k=1:2 and k=1:4 (4 and 16 grid cells, respectively), I = 4 in both cases. In other words, all boxes contain fixes at k=1:2, apparently indicating a two-dimensional object, while at k=1:4 some empty cells (12 of 16) are peeled away as empty space from the periphery of the fix scatter at this finer resolution.

This coarse-scale pattern is a consequence of the defined space use constraint. Disallowing "occasional sallies" outside the core home range obviously influences the number of non-empty boxes relative to all available boxes at the coarse resolutions from k=1 down to k=1:4.

However, at progressively finer resolutions – below the "space fill effect" range, transcending down to ca k=1:32 – the true fractal nature of the fix scatter begins to appear as log-log linearity, confirming a statistical fractal with a stable dimension D≈1.1 over the resolution range 2^-9 < k < 2^-5 (showing a log-log slope of -1.1). At finer resolutions, the dilution effect flattens further expansion of I. The D=1.1 scale range is close to the expectation from the simulation condition D_true=1, while the deviations above and below this range are trivial statistical artifacts from space filling and dilution.
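
The box counting itself is straightforward to sketch (again toy code with a placeholder path, not the actual MRW Simulator algorithm): count non-empty cells at successively finer resolutions and estimate D from the log-log slope over an intermediate scale range, outside the space-fill and dilution ranges:

```python
import numpy as np

def box_count(fixes, extent, levels):
    """Number of non-empty grid cells at resolution k = 1:2**level."""
    counts = []
    for lvl in levels:
        cell = extent / 2 ** lvl
        counts.append(len({tuple(c) for c in np.floor(fixes / cell).astype(int)}))
    return counts

# Toy placeholder path; substitute a real N x 2 array of fixes and the arena extent.
rng = np.random.default_rng(7)
fixes = np.cumsum(rng.standard_normal((8000, 2)) * 10.0, axis=0)
fixes -= fixes.min(axis=0)
extent = fixes.max()

levels = range(10)                            # k = 1, 1:2, ..., 1:512
I = box_count(fixes, extent, levels)
for lvl, count in zip(levels, I):
    print(f"k = 1:{2 ** lvl:<4d}  I = {count}")

mid = range(4, 9)                             # presumed fractal (intermediate) range
x = [-lvl for lvl in mid]                     # log2(k) = -level
y = [np.log2(I[lvl]) for lvl in mid]
D = -np.polyfit(x, y, 1)[0]                   # I ~ k**-D, so D = -slope
print(f"estimated D over the intermediate range: {D:.2f}")
```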

The most interesting pattern regards the finest resolution range, where the fuzzy set of fixes somewhat unexpectedly follows a similar log(k,I) pattern as the non-fuzzy set. However, the slight difference in D, which increases to D =1.17, may be caused by the fuzziness (proportionally stronger space-fill effect as resolution is increased).

To conclude, as long as the magnitude of position uncertainty does not supersede the individual's CSSU scale under the actual conditions, the A(N) analysis of MRW-compliant space use does not seem to be seriously influenced by location fuzziness. The "fix scrambling" error is for the most part subdued at scales finer than the CSSU.

However, the story doesn't end here. I have superimposed a dotted red line onto the Figure above. Overlapping with the D=1 section of grid resolutions, the line is extrapolated towards the intersection with log(I) ≈ 0 at a scale that is 2^1.5 ≈ 2.8 times larger (linearly) than the actual arena for the present analyses. In other words, in the absence of area constraint and step length constraint (and disregarding the step length constraint due to the limited movement speed of the individual) one should expect the actual set of fixes to "fill up" the missing incidence over the coarse scale range, leading to D≈1 for the entire range from the dilution effect range towards coarser scales.

I have also marked the CSSU scale as the midpoint of the red line. A resolution of log2(k)=-5.5 is in fact very close to the CSSU estimate from the A(N) method (k=1:50 of actual arena using the A(N) method, versus 1:35 of the arena according to the fractal analysis). This alternative method to estimate CSSU was first published in this blog post.

A preliminary development towards this approach was explored both theoretically and empirically in Gautestad and Mysterud (2012).

NOTE

*) This analysis of N = 8,000 fixes, spanning box counting from 1 up to 16.8 million grid cells at the respective scales, took ca 10 hours pr. series in the MRW Simulator, using a laptop computer. I suppose this enormous amount of number crunching would be outside the practical range of a similar algorithm in R.

REFERENCES

Gautestad, A. O., and I. Mysterud. 2012. The Dilution Effect and the Space Fill Effect: Seeking to Offset Statistical Artifacts When Analyzing Animal Space Use from Telemetry Fixes. Ecological Complexity 9:33-42.

 

The Balance of Nature?

To understand populations' space use one needs to understand the individual's space use. To understand the individual's space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies "strange" fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical attacks. We need to address the challenge by rethinking also the very basic level of how we perceive an ecosystem's constituents: how we assume individuals, populations and communities to relate to their surroundings in terms of statistical mechanics.

Stuart L. Pimm summarizes this Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding the limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates the need to extend the perspective from short-series, local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.

Balancing on a straw – a Eurasian wren Troglodytes troglodytes (photo: AOG).

My own approach towards the Grand Ecological Challenge started with similar thoughts and concerns as raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems I found the theoretical toolbox to be void of key instruments to build realistic dynamics. In fact, the current methods were in many respects even seriously misleading, due to what I considered some key dissonant model assumptions.

In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: "Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis"). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: "MRW and Ecology – Part IV: Metapopulations?"), for the time being culminating with my recent series of posts on "Simulating Populations" (parts I-X).

I guess the main take-home message from the present post is:

  • Without a realistic understanding – i.e., modelling power – of individual dispersion over space, time and scale, it will be futile to build a theoretical framework with deep explanatory and predictive value with respect to population dynamics and population ecology. In other words, some basic aspects of system complexity at the "particle level" need to be resolved.
  • Since we in this respect typically are considering either the accumulation of space use locations during a time interval (e.g., a series of GPS fixes) or a population's dispersion over space and how it changes over time, we need a proper formulation of the statistical mechanics of these processes. In other words, when simplifying extremely complicated systems into a smaller, manageable set of variables, parameters and key interactions, we have to invoke the hidden layer.
  • With a realistic set of basic assumptions in this respect, the modelling framework will in due course be ready to be applied to issues related to the Grand Ecological Challenge – as so excellently summarized by Pimm in 1991. In other words, before we can have any hope of a detailed prediction of the local or regional fate of a given species or community of species under a given set of circumstances, we need to build models that are void of the classical system assumptions that have cemented the belief in the so-called balance of nature.

NOTES

*) The need to rethink the concept of carrying capacity and the accompanying "balance" (density-dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities, where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If "standard" population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the sum equals the parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the condition at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.
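
A toy calculation may clarify the heterogeneity part of this point (my own sketch; ordinary logistic growth is used purely as a stand-in for "standard" local dynamics, and all numbers are arbitrary). Two coarse pixels with the same average density but different intra-pixel distributions do not show the same aggregate response, and neither matches the mean-field estimate computed from the coarse-pixel average; the Zoomer model's inter-scale (neighbourhood-dependent) influence would add a further deviation that this sketch does not attempt to capture:

```python
import numpy as np

def coarse_pixel_growth(local_densities, r=0.5, cc=100.0):
    """Summed logistic change over the fine-grained sub-pixels of one coarse pixel."""
    n = np.asarray(local_densities, dtype=float)
    return np.sum(r * n * (1.0 - n / cc))

homogeneous = [50.0, 50.0, 50.0, 50.0]          # coarse-pixel mean density 50
heterogeneous = [95.0, 95.0, 5.0, 5.0]          # same mean density 50
mean_field = coarse_pixel_growth([50.0] * 4)    # response computed from the coarse mean

print("homogeneous pixel  :", coarse_pixel_growth(homogeneous))    # 50.0
print("heterogeneous pixel:", coarse_pixel_growth(heterogeneous))  # 9.5
print("mean-field estimate:", mean_field)                          # 50.0
```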

**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).

REFERENCES

Gautestad, A. O. 2015. Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Indianapolis: Dog Ear Publishing.

O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.

Pimm, S. L. 1991, The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Strong, D. E., D. Simberloff, L. G. Abele, and A. B. Thistle (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton, Princeton University Press.

Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.