Three Bold Steps for Movement Ecology

Linking the statistical pattern of space use to general models of movement behaviour, both at the population and the individual level (diffusion vs. path analyses), has always been a cornerstone of animal ecology. However, over the last 10-20 years we have seen a rapidly growing interest in studying these processes more explicitly from a biophysical perspective. Biologists and physicists have come together in a common arena – movement ecology – seeking to resolve some key theoretical challenges. There is now a consensus that space use is more complex (in the physical sense of the word) than the traditional textbook models have accounted for. In particular, individuals generally utilize their environment in a spatio-temporally multi-scaled manner, and species within a broad range of taxa also show a capacity for spatially explicit memory utilization (e.g., a memory map). However, despite the emergence of very sophisticated models, movement ecology still has a long way to go to fully embrace these concepts and embed them in a coherent theoretical framework.

Somewhat provocatively, in my book I advocate that researchers in the wide-ranging and multi-dimensional field of animal space use need to be more willing to broaden their standard approaches with respect to statistical and dynamic models. Too much traditional thinking is hampering real progress in respective camps!

When all men think alike, no one thinks very much.
Walter Lippmann

In movement ecology, “thinking alike” primarily concerns three unfortunate facts:

  • Researchers in the field of “Lagrangian” path analysis of complex movement (for example, Lévy walk behaviour) generally ignore spatial memory in their models; i.e., the cognitive capacity that allows homing to familiar space and patches.
  • Researchers in the field of “Eulerian” space use analysis (for example, spatially explicit home range behaviour and use/availability analyses with respect to foraging pattern) are now including the dynamic effect of memory map utilization in their models, but generally not in a multi-scaled manner in the sense of Lévy walk-compliant movement.
  • The need to distinguish clearly between two levels of system abstraction: (a) animal behaviour and (b) the statistical pattern of space use emerging from this behaviour (e.g., GPS fixes). The latter is not properly recognized as statistical mechanics by either of the two camps, Lagrangian or Eulerian research. This coarser-grained representation of space use is generally referred to as “stochastic modelling” or “statistical pattern analysis”, rather than being confronted by its true nature: linking behaviour at the biological level to statistical mechanics at the biophysical level as two sides of the same coin of system processes.

Regarding this third issue, the statistical-mechanical framework for movement and space use has to be extended to reflect the broader framework of complexity, in a biophysical sense. In this post I have already described the Scaling cube, which is elaborated on in detail in my book.

In this cube I unite eight universality classes of biophysical representation of space use, where the standard textbook models belong to the lower left corner (Brownian motion and classic random walk/diffusion). This corner reflects dynamics and pattern from memory-less movement, both with respect to time and space (no memory effects). Currently, there is rapid theoretical development in some parts of the “Markov floor” (M) away from the textbook corner, in particular towards the MemRW corner where model animals utilize a memory map in a mechanistic manner. There is also much activity going on around the upper left edge of the cube representing true Lévy walk (LW).

Referring to the Scaling cube, it should be obvious that movement ecology is now ripe for a stronger integrative approach. At present, research in the MemRW corner (whether theoretical modelling or empirical validation of these models) does not implement the temporal memory axis in its model constructions. In short, the animal’s next move is re-defined in a sequential manner, from one moment to the next. Similarly, the CompRW (composite random walk) camp – where the model animal’s next step is either temporally “fine-scaled” and tactical or more coarse-scaled and strategic (leading to larger displacements per unit time, on average) – is primarily occupied with methods to differentiate this kind of sequential toggling between scale-specific movement modes from real scale-free movement like Lévy walk (LW). I define the latter to belong to the parallel processing “ceiling” of the cube (PP), which consequently extends the Markov floor of mechanistic modelling to a three-dimensional space-time-scale system. Researchers in the LW camp are focusing on statistical methods to distinguish LW from specific variants of CompRW, without considering the statistical-mechanically qualitative difference between the two processes. The PP dimension (hierarchical scaling) may contribute to resolving paradoxes and open paths towards novel statistical methods.
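The contrast between scale-free LW steps and sequential mode-toggling in a CompRW can be illustrated with a small simulation of step-length distributions. This is only a conceptual sketch; the exponent, mode scales and switching probability below are illustrative picks of mine, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(42)

def levy_steps(n, mu=2.0, l_min=1.0):
    """Scale-free step lengths, P(l) ~ l^(-mu) for l >= l_min (inverse transform)."""
    u = rng.uniform(size=n)
    return l_min * u ** (-1.0 / (mu - 1.0))

def composite_steps(n, l_tactical=1.0, l_strategic=10.0, p_strategic=0.1):
    """Two-mode composite walk: mostly short tactical steps, occasional strategic ones."""
    strategic = rng.uniform(size=n) < p_strategic
    scale = np.where(strategic, l_strategic, l_tactical)
    return rng.exponential(scale)

lw = levy_steps(100_000)
crw = composite_steps(100_000)

# The power-law tail of the Levy walk dominates at large step lengths, while the
# composite walk's exponential tails fall off quickly beyond the strategic scale.
print("fraction of steps > 100:  LW = %.4f   CompRW = %.6f" %
      ((lw > 100).mean(), (crw > 100).mean()))
```

Statistical tests that only compare the bulk of the two distributions will struggle exactly because the qualitative difference sits in the tail behaviour across scales.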

At present, my book and this blog are the only sources describing and advocating such an integrative approach as expressed by the Scaling cube, and my Multi-scale random walk model (MRW) is the only approach that fully embraces all eight corners of the cube. The standard MRW is found in the upper right corner. By “tuning the model knobs”, MRW can be slid toward either of the other seven corners. In this manner, the MRW approach should offer a potential for resolving the three challenges for movement ecology, as outlined above.

One novel challenge, which emerges from studying the Scaling cube geometry, is the need to explore theoretically and empirically the qualitative difference between standard Lévy walk (no spatial memory, full temporal memory, hierarchically scale-free process) and the Lévy-like MRW process (full spatial memory, full temporal memory, hierarchically scale-free process). MRW is LW-like, but not LW. The first paper to test this qualitative difference on real animals’ space use, using my proposed protocols, is Gautestad et al. (2013). In that paper you also find links to the theoretical development of the respective methods. Follow-up elaborations on MRW, PP and other aspects of space use complexity are given in my book.

Such an integrative approach as outlined in the Scaling cube will require some bold steps by researchers in the respective camps. I unfortunately ruined my career in ecology by stubbornly exploring the statistical-mechanical links between the eight corners of the Scaling cube over a period of more than 25 years. Younger and equally ambitious, challenge-seeking researchers will hopefully cope better, helped by the broadened comfort zone of accepted research directions now emerging from the present foundation of movement ecology. Good luck.


Gautestad, A. O., L. E. Loe, and A. Mysterud. 2013. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models. Journal of Animal Ecology 82:572-586.

CSSU – the Alternative Approach

In my book and in several blog posts I have described in detail how a spatial scatter of GPS fixes can be analyzed to estimate the animal’s characteristic scale of space use (CSSU). The method is based on zooming the resolution of a virtual grid until the intercept log(c) ≈ 0 and the slope z ≈ 0.5 in the log-transformed home range ghost formula log[I(N)] = log(c) + z*log(N). Incidence, I, refers to “box counting” of grid cells that embed at least one fix at the given grid resolution. In this post I present for the first time an alternative method to estimate CSSU.
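The zooming procedure above can be sketched numerically. The snippet below uses a synthetic scatter in place of real fixes, and the function names, sample sizes and cell sizes are my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)

def incidence(fixes, cell_size):
    """I: number of virtual-grid cells embedding at least one fix."""
    cells = np.floor(fixes / cell_size).astype(int)
    return len(set(map(tuple, cells)))

def ghost_fit(fixes, cell_size, sample_sizes):
    """Fit log[I(N)] = log(c) + z*log(N) at a given grid resolution."""
    logN = np.log10(sample_sizes)
    logI = np.log10([incidence(fixes[rng.choice(len(fixes), n, replace=False)],
                               cell_size) for n in sample_sizes])
    z, logc = np.polyfit(logN, logI, 1)
    return z, logc

# Toy scatter standing in for a series of GPS fixes.
fixes = rng.normal(size=(5000, 2)) * 100.0
for cell in (5.0, 20.0, 80.0):
    z, logc = ghost_fit(fixes, cell, [100, 300, 1000, 3000])
    print(f"cell size {cell:5.1f}: z = {z:.2f}, log(c) = {logc:.2f}")
# CSSU is found at the cell size where z is approximately 0.5 and log(c) is
# approximately 0; one zooms the grid resolution until both conditions hold.
```

Note how z falls from near 1 at fine resolutions (almost every fix in its own cell) towards 0 at coarse resolutions (incidence saturating), with the CSSU condition in between.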

Space use that is influenced by multi-scaled, spatial memory utilization tends to generate a GPS fix pattern from path sampling that is self-similar; i.e., the scatter is compliant with a statistical fractal with dimension D ≈ 1. Since D << 2, the dispersion is statistically non-stationary under change of the sample size used for its estimation (N). This property explains the home range ghost formula with its “paradoxical” N-dependency of observed space use, even for very large N. In a previous post I explained how this kind of multi-scaled space use, in statistical-mechanical terms, reflects a combination of “inwards” and “outwards” space use in scaling terms. Thus, implicit in this property is a hypothesis of a balancing point between these two tendencies: the CSSU. Since the process is scale-free and CSSU is independent of N in the domain of serially non-autocorrelated fixes, it should be possible to find CSSU from the following simple method:

  1. Find the fixes’ fractal property by applying the box counting method to find I(k) at different grid resolutions, k.
  2. In a regression of I(k) with log-transformed axes, D equals the slope multiplied by -1, given that the slope is linear over at least two decades of k (a smaller range does not verify a consistent D over the given scale range, but the slope may still be sufficiently indicative to be useful for the present analysis).
  3. Based on the deduction from the hypothesis of a balanced space use surrounding CSSU along the scale axis, one should expect to find log(CSSU) as the midpoint between the smallest log(k) that embeds all fixes and the magnitude of log(k) where the given N becomes insufficient to resolve the fractal space use. The latter effect should be seen in the left part of the log(I(k)) regression line, where the dotted line by necessity flattens out towards a constant magnitude of I for smaller k than this transition scale. In this range D = 0, since the slope is close to zero. At this fine resolution, the home range is simply a set of zero-dimensional dots. Also observe a widening of the D = 1 range of log(k) in larger samples of fixes, in compliance with the conjecture that inward expansion and outward expansion are equally strong when the animal puts equal effort into utilizing space at different spatial scales.
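The three steps above can be sketched as follows. This is a minimal numerical illustration on a synthetic scatter; the grid anchoring, resolutions, saturation threshold and seed are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(7)

def box_count(fixes, k):
    """Incidence I(k): number of grid cells of linear size k holding >= 1 fix."""
    cells = np.floor(fixes / k).astype(int)
    return len(set(map(tuple, cells)))

# Toy scatter standing in for serially non-autocorrelated fixes.
fixes = rng.normal(size=(9000, 2)) * 100.0
fixes -= fixes.min(axis=0)                 # anchor the grid at the scatter's corner
k_values = np.logspace(-1, 3, 17)          # grid resolutions, fine to coarse
logk = np.log10(k_values)
I = np.array([box_count(fixes, k) for k in k_values])

# Step 2: D = -slope of log I(k) vs log k (here fitted over the full range;
# a real analysis would restrict the fit to the approximately linear part).
slope, _ = np.polyfit(logk, np.log10(I), 1)
D = -slope
print(f"estimated D over the full k range: {D:.2f}")

# Step 3 (midpoint sketch): the fine-scale transition is where I(k) leaves its
# saturation towards N (each fix in its own cell, D -> 0), and the coarse-scale
# transition is the smallest k embedding all fixes in one cell. The 5% threshold
# is an arbitrary pick for locating the saturation transition.
N = len(fixes)
fine = logk[I < 0.95 * N][0]
coarse = logk[I == 1][0]
log_cssu = 0.5 * (fine + coarse)
print(f"log10(CSSU) estimate: {log_cssu:.2f}")
```

For real fixes one would additionally verify that the slope is stable over at least two decades of k before trusting D, as stated in step 2.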

The following illustration, which is based on the serially non-autocorrelated set of simulated fixes in this post, shows the main points:

[Figure: CSSU estimation from the D route.]

When applying the box counting method to estimate D one normally uses the maximum sample size, in order to maximize the range of k for which the slope (and thus D and its scale range of stability) may be estimated. Above I have shown this procedure (using N = 9,000), but also added alternative regressions for smaller N (uniform subsampling of the total series). In this manner, the respective scores of I for a given k from varying N indirectly reflect the previously described I(N) method, the “home range ghost” formula I(N) = c·N^z (zooming to the grid resolution where z ≈ 0.5 and log(c) ≈ 0, we find CSSU from this “unit” scale, c ≈ 1).

The colour-filled symbols indicate the respective ranges of k where the regression slope for I(k) for the actual N is quite stable. Larger sample sizes show a slope closer to -1; i.e., D ≈ 1. The smallest sample, N = 90, was – as expected – not able to resolve any fractal structure of the scatter of fixes. Observe how the range of k for which D is stable widens with increasing N. In other words, the “dilution effect” (Gautestad and Mysterud, 1994) kicks in at finer resolutions (smaller k). Towards the other end of the range we see the “space fill effect”; i.e., all grid cells at coarse resolutions contain at least one fix (D = 2).

These two properties, a log-linear stable slope over a given range of k and the different magnitudes of I for a given k as sample size is changed, provide the link between the two approaches to estimate CSSU. First, the previously estimated CSSU for this series, indirectly based on the I(N) method, is shown by the blue vertical line (k = 1:40 → log(k) = -5.32 scale units, including adjustment to the present arena size).

Next, the presently estimated CSSU using the new “midpoint of stable D” approach is shown by the vertical red line. Clearly, the two approaches lead to a similar magnitude of CSSU.


Gautestad, A. O., and I. Mysterud. 1994. Fractal analysis of population ranges: methodological problems and challenges. Oikos 69:154-157.

The KDE Smoothing Parameter: Approaching the Core Issue

When calculating individual space use by kernel density estimation (KDE), the smoothing parameter h must be specified. The choice of method to calculate h has a dramatic effect on the resulting estimate. Here I argue that looking for the optimal algorithm for h is probably a blind alley, for reasons other than those generally acknowledged.

Two methods are used extensively for KDE home-range analysis: the least squares cross-validation method (LSCV) and the reference method, which determines the optimal h for a standard multivariate normal distribution (Href). In short, both methods have been found to have serious drawbacks. In particular, LSCV generally under-smooths the home range representation, leading to a utilization distribution (UD) that tends to be fragmented with many local “peaks”. On the other hand, Href tends to over-smooth the UD. Thus, relative to LSCV the resulting UD suppresses the local peaks in density of fixes and tends to show a larger home range for a given isopleth. The literature on these issues, including proposals for alternative methods, is huge. Since most ecologists working on animal space use are aware of this methodological minefield, I limit myself to referring to Horne and Garton (2006).
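The under- vs over-smoothing contrast can be demonstrated with a bare-bones one-dimensional KDE. The two bandwidths below are not LSCV or Href themselves, only hand-picked stand-ins playing their roles on toy data:

```python
import numpy as np

rng = np.random.default_rng(3)

def gaussian_kde_1d(x_eval, fixes, h):
    """Plain one-dimensional Gaussian kernel density estimate with bandwidth h."""
    z = (x_eval[:, None] - fixes[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(fixes) * h * np.sqrt(2.0 * np.pi))

def count_modes(density):
    """Count interior local maxima of the estimated UD (a proxy for fragmentation)."""
    interior = density[1:-1]
    return int(((interior > density[:-2]) & (interior > density[2:])).sum())

# Toy 'fixes': an animal revisiting two well-separated patches.
fixes = np.concatenate([rng.normal(-5.0, 1.0, 100), rng.normal(5.0, 1.0, 100)])
grid = np.linspace(-12.0, 12.0, 500)

h_small, h_large = 0.15, 6.0   # stand-ins for an under- vs over-smoothing choice
print("local peaks with small h:", count_modes(gaussian_kde_1d(grid, fixes, h_small)))
print("local peaks with large h:", count_modes(gaussian_kde_1d(grid, fixes, h_large)))
```

The small bandwidth fragments the UD into many spurious peaks, while the large one merges the two genuine patches into a single smooth bump, mirroring the LSCV vs Href trade-off described above.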


This cormorant regularly revisited a given bay and a given part of its shoreline, offering a good opportunity for a patient photographer. The bird’s fishing success at this particular location was substantial, which illustrates nicely how spatial memory – and in particular the concept of subjective habitat autofacilitation (Gautestad and Mysterud, 2010) – plays an important role in vertebrates’ space use activities. However, such self-reinforcing revisits of patches undermine the theoretical foundation for KDE as a descriptor of habitat selection. Photo by Arild.

The core problem with the KDE approach is in my view not how to optimize between over- and under-smoothing of the UD. All KDE variants are based on a common assumption: that the actual animal has utilized its habitat in a Markov-compliant manner. This is more serious than the h issue. I refer to my book and a series of previous blog posts for explanation of a Markov process, with numerous examples provided. In short, Markov compliance is in the present context a mathematical form of a process that is statistically “compatible” with the statistical kernel functions, which represent the backbone of any KDE. Such a process in the context of a home range implies a temporally scale-specific (“mechanistic”) habitat utilization. In the limit of large samples of relocations of the animal, such a mechanistic process leads to convergence towards a “smooth” UD in statistical terms. That is, even a multi-modal UD may be assumed to be locally “flat” upon zooming sufficiently into the UD’s functional surface.

So far, absolutely all theoretical developments within the KDE arena rest on this “smooth UD surface” statistical-mechanical assumption.

I thus conclude that the KDE is not an appropriate approach for data collected from an animal that has utilized its habitat in a complex manner. In this post I provided support for this view, using data from free-ranging sheep. In other words, if an animal has utilized spatial memory, it generates a home range as an emergent property of the movement process. Further, if the animal has integrated its spatial and historic information to allow for multi-scaled space use, the UD will no longer be smooth (as shown by the sheep data, and – for example – data on red deer; Gautestad et al., 2013). The UD will describe a statistical fractal; i.e., mathematically rugged at all resolutions, in a self-similar manner.

The KDE will thus never be able to describe a multi-scaled home range pattern realistically, from the perspective of local intensity of habitat use. Other approaches are needed. For example, as an alternative to KDE’s isopleths I generally advocate using incidence, I: studying the number and spatial distribution of non-empty grid cells after superimposing a virtual grid onto the spatial scatter of fixes. In other posts I have described a method to find the optimal grid resolution in this regard, leading to a formula that can be applied to estimate the animal’s characteristic scale of space use (CSSU) under the given conditions.

Applying CSSU analysis will reveal that two sections of a home range may show similar density of fixes but different local magnitude of CSSU. Relatively small CSSU for a given density implies a higher degree of intra-section “clumping” of fixes (thus, 1/CSSU is expressing intensity of habitat utilization, independent of fix density per se).

Similarly, two sections may show strong difference in density but a similar magnitude of CSSU and thus a similar intensity of habitat utilization despite the density difference. For example, 1/CSSU may be large (CSSU small) within specific sections of the periphery of the home range. In this case, despite low fix density the animal has shown more “surgical” space use inside this section during its visits. I refer to previous posts for more details.


Gautestad, A. O., and I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750.

Gautestad, A. O., L. E. Loe, and A. Mysterud. 2013. Inferring spatial memory and spatiotemporal scaling from GPS data: comparing red deer Cervus elaphus movements with simulation models. Journal of Animal Ecology 82:572-586.

Horne, J. S., and E. O. Garton. 2006. Likelihood cross-validation versus least squares cross-validation for choosing the smoothing parameter in kernel home-range analysis. Journal of Wildlife Management 70:641-648.