Homogeneous or Heterogeneous Environment?

When simulating animal space use – whether at the level of individuals or populations – some of the first conditions to define regard the attributes of the arena. The environment may be simple (homogeneous) or complicated (heterogeneous). As always, the choice is a matter of weighing model simplicity against realism. However, another dimension of this choice is often overlooked. Animal space use is always a mixture of intrinsic and extrinsic driving forces. Thus, by simulating in a homogeneous environment one may to some extent disentangle these aspects: study the intrinsic dynamics first, and then – in follow-up simulations – add extrinsic variability.

This is an important point to stress, since simple models in homogeneous environment have occasionally been criticized for lack of realism.

As an example of how simplicity may contribute to important insight, consider traditional home range simulations (the convection model), observed at a statistical-mechanical temporal scale with a sufficiently deep hidden layer (see the definition in the post on the scaling cube). Convection denotes a mixture of diffusion and advection. In a homogeneous environment the former will tend to disperse the animal homogeneously. Advection, on the other hand, will tend to force the spatial scatter of fixes into a smooth utilization distribution function, with a higher density of fixes close to the defined centre of attraction.
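For readers who want to experiment, this intrinsic tug-of-war can be sketched in a few lines of code. The following is a minimal toy version of a convection-type walk – a Gaussian diffusion term plus a centre-pointing advection term. It is not meant to reproduce any specific published formulation; the parameter names and magnitudes are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(1)

def convection_walk(n_steps=20000, diffusion=1.0, advection=0.05):
    """Toy biased random walk: Gaussian diffusion plus a pull towards the origin.

    `diffusion` and `advection` are illustrative parameter names, not taken
    from any particular published convection model.
    """
    pos = np.zeros(2)
    path = np.empty((n_steps, 2))
    for t in range(n_steps):
        step = rng.normal(0.0, diffusion, size=2)  # diffusive component
        step -= advection * pos                    # centre-pointing advection
        pos = pos + step
        path[t] = pos
    return path

path = convection_walk()
# With advection present, the walker stays near the centre:
# the density of fixes falls off with distance from the origin.
radii = np.linalg.norm(path, axis=1)
```

Setting `advection = 0` recovers unconstrained diffusion, so the relative strength of the two intrinsic terms can be explored directly.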

In a homogeneous environment one has the opportunity to study the relative effects of these two intrinsic driving forces in isolation, by running scenarios that vary the parameters for diffusion versus advection strength (and the actual formulation of each).

The results may look trivially simplistic, but they represent the important foundation for the next step: adding realism by introducing a heterogeneous habitat. In the convection model the output – a multi-modal and complicated utilization distribution in a heterogeneous environment – will be substantially easier to comprehend given the background from the homogeneous condition. Start with the simple, and move towards the more complicated. Albert Einstein said something like “Always simplify a model as far as possible – but not further”. Hence, to study intrinsic aspects of space use behaviour, use a homogeneous environment. Then move to heterogeneous conditions and find the optimal level of simplicity for this – more realistic – setting, with respect to exploring model architecture for ecological inference in the given context of interest.

With respect to the convection model for home range behaviour, simulation in a homogeneous environment has little left to offer. This condition has already been explored in such detail, over so many years and in so many contexts, that there are no more nuggets to find. Both the theory (diffusion, advection and traditional statistical mechanics) and specific model variants are solidly developed from the “intrinsic behaviour” perspective. Only added complicatedness (terms for behavioural responses to specific environmental conditions) may bring new insight.

On the other hand, when it comes to complex space use, where intrinsic movement implies multi-scaled (and even scale-free) exploratory bouts of movement in parallel with spatial memory utilization (site fidelity), the theory is still in such an early phase of development that there is new insight to find around every corner. For example, the present illustration shows a simulated set of fixes from the MRW model, where the environment is conceptualized to be homogeneous in the centre and left part, and of lower quality in the rightmost part. The two left-hand sub-sections of the home range show how the individual’s characteristic scale of space use (CSSU) is of similar magnitude despite a large difference in local density of fixes. CSSU is derived from the y-axis intercept of the log-transformed incidence regression, log I(N).

These two localities have similar environmental conditions in the simulation; i.e., this part of the home range covers a homogeneous environment. The CSSU reveals this condition with great statistical clarity, while the classic density proxy N/A does not, given the present premise of complex space use.

On the other hand, the right-hand locality is in a lower-quality environment. The space use reflects this through a larger y-intercept (larger CSSU, and thus weaker habitat selection strength in this locality), despite equal N in the leftmost and rightmost examples. In other words, the local density of fixes, N/A, is the same in the leftmost and rightmost analysis, but in the latter case the N fixes are less “clumped” – more spatially randomized – at the finer spatial resolution (smaller pixel size within the grid cell). Thus, incidence is larger for a given N.
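The clumping argument can be illustrated with a quick toy calculation: for the same N, a spatially randomized point pattern fills more grid cells (larger incidence) than a clumped one. The cluster count, cluster spread and grid resolution below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def incidence(points, n_cells=20):
    """Number of non-empty cells on an n_cells x n_cells grid over [0,1)^2."""
    idx = np.floor(points * n_cells).astype(int)
    idx = np.clip(idx, 0, n_cells - 1)
    return len({(i, j) for i, j in idx})

N = 400
# Spatially randomized fixes versus the same N fixes packed into 8 tight clusters.
uniform = rng.random((N, 2))
centres = rng.random((8, 2))
clumped = np.clip(centres[rng.integers(0, 8, N)]
                  + rng.normal(0.0, 0.01, (N, 2)), 0.0, 0.999)

I_uniform = incidence(uniform)   # many cells touched
I_clumped = incidence(clumped)   # few cells touched, same N
```

Same N, very different incidence – which is exactly why incidence carries information that the raw density N/A misses.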

This conceptual illustration exemplifies why my simulations of the MRW model typically have been executed in a homogeneous environment. For example, the CSSU concept was derived from this condition, and several theoretical predictions based on this condition were verified in follow-up simulations in a heterogeneous environment. Several steps towards more realistic and heterogeneous scenarios have been performed, and more are to come.

Towards an Alternative Proxy for Space Use Intensity

The MRW model – scale-free exploratory moves combined with memory-map dependent return events – is here applied as representative of complex space use. Since the spatial dispersion of fixes in the home range’s interior in this scenario self-organizes towards a statistical fractal with dimension D≈1, the utilization distribution (UD) becomes error-prone as a proxy variable for space use intensity. The UD depends on space use that satisfies the classic condition D≈2 (“a smooth, differentiable density surface” at fine resolutions). Consequently, from theoretical arguments, under the condition of complex space use the local density of GPS fixes (the UD) is expected to be a poor indicator of true space use intensity, and thus unfit as an indicator of habitat selection. The obvious question then arises: what is the alternative to estimators like the popular kernel density estimation? Here I summarize one such alternative – the individual’s characteristic scale of space use (CSSU).

To make a long story short (the longer one is presented in my book), the alternative to the UD under the condition of complex space use is 1/c, where c is a parameter in the Home range ghost formula, A = c√N (area expanding proportionally with the square root of sample size). After a simple transformation of this equation, c equals “area per square root of N” and thus cancels out the Home range ghost effect on A. In other words, c is expected to be of similar magnitude whether a large or a small sample size of GPS fixes is applied for its estimate (but a large sample is obviously preferred, for statistical reasons).
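A toy calculation may clarify why c cancels the ghost. Below, home range areas are generated to follow A = c√N (with some multiplicative noise), and c is recovered as A/√N at each sample size; the “true” value c0 = 2.5 is of course hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

c0 = 2.5                             # hypothetical "true" parameter value
N = np.array([50, 100, 400, 1600])   # sample sizes of fixes
# Areas follow the Home range ghost formula A = c * sqrt(N), plus noise.
A = c0 * np.sqrt(N) * rng.lognormal(0.0, 0.05, size=N.size)

# c = A / sqrt(N) cancels the ghost: similar value at every sample size.
c_hat = A / np.sqrt(N)
```

Note how `c_hat` stays near c0 across a 32-fold range of N, while the classic density N/A drifts systematically with sample size – the Home range ghost in a nutshell.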

Its inverse, 1/c, provides a convenient expression for complex space use intensity. A smaller c leads to a proportionally larger 1/c; i.e., stronger space use intensity. The intensity may – like the classic density variable – be calculated at the scale of the total home range, or of a spatial sub-section of this range. In the latter case, local 1/c provides an alternative to the local density within the UD.

Like the traditional density-based utilization distribution (for example, GPS fixes per area unit), 1/c also expresses such area-referring intensity: 1/c = 1/(A/√N) = √N/A, rather than the classic N/A.

The “characteristic” scale then refers to the grid cell size for which c = 1; i.e., log(c) = 0.

This condition is best illustrated graphically, via the log-transformed regression log(I) = log(c) + z·log(N). Area is represented by incidence, I, and the power exponent is z = 0.5 under the default MRW condition (β = 2; see the scaling cube).
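In practice the regression is fitted on log-transformed data. Here is a minimal sketch, using synthetic incidence values that follow I(N) = cN^z with z = 0.5 (the parameter values are illustrative, not empirical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic incidence data following I(N) = c * N^z with z = 0.5.
c_true, z_true = 4.0, 0.5
N = np.array([32, 64, 128, 256, 512, 1024])
I = c_true * N**z_true * rng.lognormal(0.0, 0.03, size=N.size)

# Ordinary least squares on the log-log transformed regression:
# log(I) = log(c) + z * log(N).
z_hat, log_c_hat = np.polyfit(np.log(N), np.log(I), 1)
c_hat = np.exp(log_c_hat)
```

The slope estimates z and the intercept estimates log(c), exactly as in the graphical illustration above.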


In an extensive analysis of movement by red deer we found that space use in these animals showed best compliance with MRW with β≈2.2 (fix dispersion’s fractal dimension D≈1.2 and z≈0.4) (Gautestad, A. O., L. E. Loe & A. Mysterud. 2013. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models. Journal of Animal Ecology 82:572-586). By applying the incidence approach (count of non-empty grid cells) to estimate home range area as a function of sample size of fixes, and varying grid resolution to optimize towards c=1 for N=1 – i.e., [log(N),log(c)] = (0,0) – we found the respective individuals’ characteristic scale of space use (CSSU). Simply stated, the characteristic scale equals the grid resolution (spatial scale) for which log(c) = 0.
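The resolution search itself can be sketched as a simple root-finding problem: refit the incidence regression at candidate grid resolutions g, and locate the scale where the fitted intercept log(c) crosses zero. In the sketch below the refitting step is replaced by a hypothetical stand-in function (the real analysis requires an actual fix data set); only the search logic is meant to be illustrative.

```python
import numpy as np

def fitted_log_c(g, g_cssu=0.3):
    """Hypothetical stand-in for refitting log(I) = log(c) + z*log(N) at
    grid resolution g and returning the intercept log(c). Here it is faked
    with a monotone function crossing zero at g = 0.3 (the toy 'true' CSSU)."""
    return 2.0 * np.log(g_cssu / g)

# Bisection (on the log scale) for the resolution where log(c) = 0, i.e. c = 1.
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = np.sqrt(lo * hi)          # geometric midpoint
    if fitted_log_c(mid) > 0:       # intercept still positive: grid too fine
        lo = mid
    else:                           # intercept negative: grid too coarse
        hi = mid
cssu = np.sqrt(lo * hi)             # converges to the scale where log(c) = 0
```

In a real analysis, `fitted_log_c` would recompute incidence counts from the fixes at each trial resolution before fitting the regression.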

In a follow-up analysis we estimated c at the home range scale for 18 individuals, as shown by the inset panel in the illustration to the left (Figure 5B in Gautestad, A. O. & A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18). We have so far not followed up on this analysis by comparing 1/c with the UD as a proxy for local space use intensity – and thereby an indicator of the strength of habitat selection – under respective local habitat conditions. Perhaps readers of this blog want to give it a try on their own data? Additional details on the 1/c method may be found in Chapter 11 of my book, Implications and applications.

A Statistical-Mechanical Perspective on Site Fidelity – Part I

From the perspective of statistical mechanics, surprising and fascinating process details continue to pop up during my simulation studies on complex movement. One such aspect regards the counter-intuitive emergence of an uneven distribution of entropy over a scale range of space use. In this post I introduce basic concepts like entropy and ergodicity, and I contrast these properties under classic (“scale-specific”) and complex (“scale-free”) conditions. In the follow-up Part II of this post I present the novel, theory-extending property of the home range ghost.

Entropy is commonly understood as a measure of micro-scale particle disorder within a macroscopic system. This disorder – e.g., uncertainty with respect to a given particle’s exact location within a given time resolution – is quantified at a coarse scale of space and time. In other words, “the hidden layer” has to be sufficiently deep to allow for a simplified statistical-mechanical representation of the space use (see the definition of the hidden layer in the post on the scaling cube). When the disorder is maximized, the system is in a state of equilibrium.
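As a concrete illustration of this coarse-grained disorder, one may compute the Shannon entropy of a fix set after binning it onto a grid. The grid resolution and the two point patterns below are arbitrary choices; the point is only that a spread-out (more “disordered”) set scores higher entropy than a spatially confined one.

```python
import numpy as np

rng = np.random.default_rng(11)

def shannon_entropy(points, n_cells=10):
    """Shannon entropy (nats) of a fix set, coarse-grained onto a grid
    over the unit square."""
    H, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                             bins=n_cells, range=[[0, 1], [0, 1]])
    p = H.ravel() / H.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

N = 5000
spread_out = rng.random((N, 2))                         # near-uniform occupancy
confined = 0.5 + 0.05 * rng.standard_normal((N, 2))     # tight cluster
H_spread = shannon_entropy(spread_out)
H_confined = shannon_entropy(np.clip(confined, 0.0, 0.999))
```

The uniform pattern approaches the maximum entropy log(100) ≈ 4.6 nats for a 10×10 grid, while the confined pattern scores far lower – disorder maximization corresponds to equilibrium.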

In our context, the degree of “particle disorder” may be represented by a series of GPS relocations of an individual, and the “macroscopic” (or meso-scopic) aspect regards the statistical properties that emerge from the movement, which typically express some kind of area-constrained space use; i.e., a home range. In traditional home range models, which come closest to classic statistical-mechanical system descriptions, the concept of entropy generally requires that…

  • the macroscopic system is closed (i.e., spatially constrained movement), and
  • the evolution of entropy regards how this property becomes maximized as the space use drifts towards equilibrium – either as a function of time extent (increasing sampling period) or a function of time grain (increasing inter-fix interval).

In the context of a large set of GPS fixes under classically defined home range conditions, one might then assume the maximized entropy condition to be represented by serially non-autocorrelated fixes, as a result of applying a sufficiently large sampling interval. This condition supports system ergodicity.

In mathematics, the term ergodic is used to describe a dynamical system which, broadly speaking, has the same behavior averaged over time as averaged over the space of all the system’s states (phase space). In physics the term is used to imply that a system satisfies the ergodic hypothesis of thermodynamics. (Wikipedia)

In short, an ergodic system implies that a sample of N GPS fixes is expected to show a similar dispersion in statistical terms (i.e., the utilization distribution) as a snapshot dispersion of N independently moving individuals with identical behaviour within the same area and time interval. This ergodic condition depends on the other condition – a closed system, which could be imagined to be satisfied by an animal that bounces back towards the central parts of the home range when it strays too far from this centre. In the ergodic scenario, maximum lack of information about the animal’s exact whereabouts – and thus maximized entropy – is achieved.
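The ergodic condition can be demonstrated numerically: for a walker in a closed (reflecting) arena, the time average over one long series should match the ensemble average over many independent walkers. This sketch assumes a simple reflecting random walk in one dimension, not any particular home range model.

```python
import numpy as np

rng = np.random.default_rng(2)

def bounded_walk(n_steps, start):
    """Random walk on [0, 1] with reflecting borders (a 'closed system')."""
    x = np.empty(n_steps)
    pos = start
    for t in range(n_steps):
        pos += rng.normal(0.0, 0.05)
        if pos < 0.0:               # reflect at the lower border
            pos = -pos
        elif pos > 1.0:             # reflect at the upper border
            pos = 2.0 - pos
        x[t] = pos
    return x

# Time average: one individual observed for a long period.
time_avg = bounded_walk(100_000, 0.5).mean()
# Ensemble average: a snapshot of many independent individuals after mixing.
ensemble = np.array([bounded_walk(1_500, rng.random())[-1] for _ in range(300)])
ensemble_avg = ensemble.mean()
```

For this closed, well-mixed system the two averages agree – the ergodic scenario. Under complex space use, as argued below, this agreement breaks down at realistic observation scales.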

One particular advantage of this statistical-mechanical equilibrium-like condition is that traditional estimates of habitat selection (use/availability analysis) may be expected to perform with great predictive power (i.e., large model realism), given that the classic assumptions for space use behaviour are satisfied. However, as already summarized in a previous post this month, these model conditions may not be representative for a typical space use process.

The illustration to the right (Figure 34 in my book) indicates the conceptual difference between classic and complex space use in terms of ergodicity. For the sake of model simplicity, consider that a population of 85 individuals (called an ensemble in statistical-mechanical jargon) is moving either “scale-specifically” (classic condition; upper panel) or “scale-free” (lower panel). The system is supposed to be closed; i.e., the individuals bounce back when reaching the borders of the available space. Consider that the respective individuals are located twice, and line segments connect the respective displacements. On one hand, the sampling interval is chosen large enough to ensure that the “hidden layer” of unresolved path details is sufficiently deep to embed the necessary degrees of freedom (a large number of unobserved micro-moves) to allow for a statistical-mechanical representation of the process. On the other hand, the interval is sufficiently small to make the probability of bounce-back from the area borders negligible during this interval of data collection. In other words, the system is closed, non-ergodic, and far from statistical-mechanical equilibrium at this time scale.

In the scale-specific case (upper panel), displacement vectors do not vary much; in total they comply with a negative exponential distribution of displacement lengths. This follows from the assumption of a scale-specific process and a sufficiently deep hidden layer (see the book for details). Under this condition, even totally deterministic movement below the hidden layer will appear stochastic and Brownian motion-like at our chosen temporal resolution for fix sampling. In contrast, consider the scale-free space-use variant in the lower panel, where some individuals have performed larger displacements during the interval. Specifically, in this scenario the distribution of displacement lengths complies statistically with a power law (and may thus look like a Lévy walk) rather than an exponential at the given time resolution. The behavioural basis for the qualitative difference between the two scenarios is elaborated on in the book.
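The statistical contrast between the two panels can be reproduced by drawing step lengths from the two distributions. The sketch below uses an exponential with unit mean for the scale-specific case, and inverse-CDF sampling of a power law with β = 2 for the scale-free case; note how the typical (median) step is of similar magnitude in both, while the tails differ wildly.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100_000
# Scale-specific: exponentially distributed displacement lengths.
brownian_steps = rng.exponential(1.0, n)

# Scale-free: power-law (Pareto-like) lengths with pdf ~ l^(-beta), beta = 2.
# Inverse-CDF sampling from P(L > l) = (l_min / l)^(beta - 1):
l_min, beta = 1.0, 2.0
levy_steps = l_min * rng.random(n) ** (-1.0 / (beta - 1.0))
```

Both distributions have a median of order one, but the exponential tail dies out within a couple of dozen units while the power-law sample contains displacements thousands of units long – the occasional long-distance moves visible in the lower panel.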

In the present context, just consider that the non-ergodic situation in the upper panel may be brought towards an ergodic observation level simply by coarsening the time resolution, allowing for a high probability of intermediate border bounce-backs at this coarser time scale. However, the lower panel, representing complex space use, will require substantially larger time intervals to allow for ergodicity (technically, the Lévy walk-like process must reach the temporal limit for a transition towards Brownian motion; see Gautestad, A. O. & A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18).

With reference to the Scaling cube, the scenario in the upper panel captures the Brownian motion class of movement (lower left corner), under the special condition of environmentally imposed constraint on area expansion (the convection model; the classic home range concept). The lower panel regards the upper left corner in the back of the cube, Lévy walk, with a similar extrinsic constraint on free dispersal as in the first scenario. In the follow-up Part II of this post I elaborate on how an additional, complexity-extended system description of these two scenarios may reveal properties that are counter-intuitive – in fact, paradoxical – from the classic perspective described above. The extra condition regards movement that is combined with site fidelity (occasional targeted returns to a previous location). In other words, we consider movement classes at the right-hand wall of the cube rather than the left-hand wall, where a home range emerges from intrinsic behaviour rather than extrinsic space use constraint.

These paradoxical properties, to be elaborated on in Part II, are in fact quite easy to verify in a set of GPS fixes, and under the extended theory the paradoxes are even resolved. However, a verification of complex space use makes traditional methods to calculate local habitat selection dubious. Hence, the theoretical focus of this series of posts on statistical mechanics also points towards practical consequences for ecological inference based on space use data.

Home Range as an Emergent Property

An animal generally constrains its various activities to a limited range of the habitat, relative to what is potentially available to it during the actual period. However, the object we call a home range (HR) may be described by models that represent qualitatively different processes at a very fundamental level. These differences have consequences for statistical analysis and ecological inference. They also highlight an inherent paradox of the classic model, and make one wonder how this approach has survived as a cornerstone of animal space use theory for so long.

A HR is generally and traditionally assumed to be the result of the animal’s tendency to turn back towards more central parts of the range when the distance from centre(s) of activity becomes too large. More specifically, a directional (centre-pointing) bias on direction of movement is assumed to become stronger the more distant the animal moves from the centre. This advection effect, in combination with the independent effect from a variable staying time in various intra-HR localities, will then generate a utilization distribution function that reflects (a) a generally lower intensity of space use towards the periphery of the home range and (b) locally varying space use intensity within the HR.

The HR object is thus traditionally described by an area demarcation and a utilization distribution, albeit acknowledging that the range has fuzzy borders and a heterogeneous internal “structure”. However, an increasing sample of relocations of the animal is on one hand assumed to allow for a better statistical approximation of the utilization distribution, and on the other hand a better estimate of the HR asymptote – the object’s area representation under the defined protocol.

In short, the classic HR model describes a convection process of space use. Convection is the combination of advective transport (the tendency to drift towards the HR centre) and diffusive transport (mechanistically modelled as a random walk). An important assumption of this classic model is that successive re-visits to a given intra-HR location are independent events. This assumption is implicit in – for example – commonly applied statistical methods for habitat selection (e.g., use vs. availability), the assumption of a home range asymptote (stationary space use, unless the HR is drifting), methods to test for serial non-autocorrelation (e.g., Schoener’s index), and preferred methods to estimate the utilization distribution (for example, the kernel density estimate). Details are provided in my book.

The classic model’s assumption that an animal’s path self-crosses by chance is a stronger conjecture than mere statistical independence of successive fixes in a series (serially non-autocorrelated relocations), since independent revisits to a given location regard the process that generates constrained space use. However, this assumption also brings up an inherent paradox of the classic HR model: how can area constraint appear in an open environment if revisits happen by chance?

I’m confident that many ecologists studying animal space use will agree that the classic convection model is not realistic. As an alternative model framework for the HR process (covering the right-hand wall of the scaling cube, with four movement classes), consider that a HR is not a direct consequence of an external “field” of directional bias towards the home range’s centre. Instead, in these four universality classes the HR is conjectured to appear as a side-effect of a tendency for occasional targeted revisits to some previously visited locations. In other words, strategic returns lead to area constraint as a by-product. A given revisit event may – according to these four classes of movement – be initiated relatively independently of the animal’s current location. Consequently, the emergence of area constraint cannot be categorized as driven by a fixed bias (vector) field, with a given strength and direction at a given point in space and time.

In this alternative scenario the HR becomes an emergent property from a tendency to remember previously visited locations (a memory map) and goal-driven occasional returns towards some of these locations.

In two of these alternative classes of HR-related movement – the MRW model – there may even be a positive correlation between the revisit frequency to a given site and the probability of future revisits to this site. In other words, the process is – under specific ecological conditions – driven by positive feedback (self-reinforcing space use). The individual advantage from site familiarity may be one such driver of positive feedback. We have previously called this space use property “auto-facilitation” (Gautestad, A. O. & I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750).
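A bare-bones sketch of such memory-driven, self-reinforcing space use may look like the following. To be clear, this is not the published MRW algorithm; the return interval, the power-law exponent and the preferential-return rule (return probability proportional to earlier visit count, mimicking auto-facilitation) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

def mrw(n_steps=5000, return_interval=10, beta=2.0):
    """Toy MRW-like walk: scale-free exploratory steps, interrupted by
    occasional memory-driven returns to previously visited locations."""
    pos = np.zeros(2)
    visited = [pos.copy()]      # memory map of locations available for return
    weights = [1.0]             # visit counts: drives preferential return
    for t in range(1, n_steps):
        if t % return_interval == 0:
            # Targeted return: pick a remembered site, weighted by visit count.
            w = np.array(weights)
            k = rng.choice(len(visited), p=w / w.sum())
            pos = visited[k].copy()
            weights[k] += 1.0   # self-reinforcing ("auto-facilitation")
        else:
            # Scale-free exploratory step: power-law length, random direction.
            length = rng.random() ** (-1.0 / (beta - 1.0))
            angle = rng.uniform(0.0, 2.0 * np.pi)
            pos = pos + length * np.array([np.cos(angle), np.sin(angle)])
            visited.append(pos.copy())
            weights.append(1.0)
    return np.array(visited)

fixes = mrw()
```

Despite the open arena (no border, no centre-pointing bias), the returns keep most fixes clustered, while the scale-free steps occasionally carry the walker far afield – area constraint as a by-product of site fidelity.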

In the illustration above – from Appendix A.3 in the book – 1,000 relocations from one specific model formulation – the Multi-scaled Random Walk (MRW) – are simulated and sampled at three temporal scales, from high-frequency sampling (left) towards lower-frequency sampling. Successive fixes along the sampled series are connected by lines. Despite the very different visual appearance of the three example ranges, they share a set of specific statistical signatures – they belong to the same universality class in statistical-mechanical terms. The three home ranges – illustrated by independently run series – here show the transition from serially autocorrelated towards non-autocorrelated relocations under this class.

Contrary to the classic framework, a non-autocorrelated series does not appear at a sampling lag larger than the average bounce-back-from-the-periphery interval, but at a lag larger than the average interval between targeted return events.

In my book I describe how this and other classes of HR-related movement may be differentiated by simple statistical analyses of the series of relocations. These tests will reveal which kind of space use – for example the classic HR model, the MRW model, or another statistical-mechanical class – provides the most realistic representation of the process behind your GPS data series.


A Coincidence, or Yet Another Confirmation of the Home Range Ghost?

There are now many hundreds of third-party papers that refer to previously published work on my book’s main model – the Multi-Scaled Random Walk (MRW). This week I read with interest one of the most recent additions to this set of references.

The paper is published in Spanish; its title translates to English as “Home range and habitat use of two giant anteaters (Myrmecophaga tridactyla) in Pore, Casanare, Colombia”. Authors: Cesar Rojano Bolaño, María Elena López Giraldo, Laura Miranda-Cortés and Renzo Ávila Avilán. Edentata 16 (2015): 37–45.

Their Figure 4 caught my attention, since it apparently shows a very familiar pattern: the so-called Home Range Ghost. I allow myself to reproduce a sandwich of their Figure 4 and Figure 1. I have also – with kind permission from the authors – added the Home Range Ghost expectation onto the graphs (red lines). They show how the two giant anteaters’ home ranges apparently expanded non-asymptotically with the number of telemetry fixes (in compliance with the authors’ conclusion). Here I add to that conclusion that the expansion specifically seems to satisfy the generic MRW formula A = cN^0.5 (a power law: area expands proportionally with the square root of the sample size of fixes).

Are the data really compliant with this power law, or is the area accumulation asymptotic? Here is a simple rule-of-thumb: log-transform both axes! If the plot describes a straight line, you have a power law, with the slope expressing the exponent.
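The rule-of-thumb is easy to automate: fit a straight line in log-log space and inspect the fit. Below, a perfect power law A = cN^0.5 is compared with a saturating (asymptotic) curve; the specific functional forms and parameter values are arbitrary examples.

```python
import numpy as np

N = np.array([10, 20, 40, 80, 160, 320])
power_law = 3.0 * N**0.5            # A = c * N^0.5 (Home Range Ghost pattern)
asymptotic = 100.0 * N / (N + 50)   # saturating area-accumulation curve

def loglog_linearity(A, N):
    """R^2 of a straight-line fit in log-log space; ~1 indicates a power law,
    with the slope estimating the exponent."""
    coeffs = np.polyfit(np.log(N), np.log(A), 1)
    resid = np.log(A) - np.polyval(coeffs, np.log(N))
    ss_res = np.sum(resid**2)
    ss_tot = np.sum((np.log(A) - np.log(A).mean())**2)
    return 1.0 - ss_res / ss_tot, coeffs[0]

r2_pl, slope_pl = loglog_linearity(power_law, N)    # straight line, slope 0.5
r2_as, slope_as = loglog_linearity(asymptotic, N)   # visibly curved
```

The power law yields a perfectly straight log-log line with slope 0.5, while the asymptotic curve bends away from linearity – the signature that separates the two hypotheses.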

Obviously, two anteaters with only ca 70 fixes in each sample (collected at a 3-hour lag) do not amount to model verification. However, the pattern is at least compliant with extensive empirical tests of the Home Range Ghost aspect of MRW (free-ranging sheep, black bear, red deer, etc.; see Figures 3, 4 and 58 in the book).

As explained in detail in the book, the Home Range Ghost is no ghost at all, given that one assumes that the home range is generated by an animal that utilizes its habitat in accordance with the biophysics of the MRW model. The red lines are thus model predictions, based on the underlying theory for parallel processing in combination with spatial memory (upper right edge of The Scaling Cube).

In other words, in a MRW scenario the emerging home range from the sample of fixes describes a statistical fractal – “rugged at all resolutions” – rather than a classic area object (whether the border is fuzzy or not). Further, its interior cannot be properly represented by a utilization distribution. The true “density surface” for a fractal object is non-differentiable (non-smooth). “Forcing” a density distribution function upon such a pattern will be both misleading and produce paradoxical patterns, as exemplified here by the Home Range Ghost.