The MRW Simulator – Finally Available!

Back in 1997 I started programming the foundation for a personal simulation environment for Multi-scaled Random Walk, the MRW Simulator. Through countless updates over these 20 years the program has gradually matured into a version that is finally ready for limited distribution to peers in the field of animal space use research.

The MRW Simulator is a Windows©-compliant tool to generate various classes of animal movement (self-produced data series) or to import existing data series. The generated or imported data – a sequence of (x,y) coordinates – may then be subjected to various kinds of statistical protocols through simple menu clicks. The resulting text files are then typically exported for detailed analyses and presentation of results in other applications, like R or Excel©.

While R is based on an interpreted language, the MRW Simulator is a fully compiled program. Thus, movement paths of length up to 20 million steps may be simulated within minutes of execution time, rather than many hours or days. A multi-scaled analysis of data over a substantial scale range is almost prohibitive in an interpreted system due to the algorithm’s long execution period. In the MRW Simulator such analyses are performed in a fraction of this time. Thus, R and the MRW Simulator may supplement each other: R is strong on statistics and algorithmic freedom; the MRW Simulator is strong on time-effective execution of a small set of basic but typically time-consuming algorithms.

The opening screen contains menus (1), a window where the simulated or imported set of fixes is displayed (2), and various command buttons, check boxes and information fields (3–14).

To get your first experience with the system, try out the most basic setting for a simulation. First, choose among the classes of movement: Levy walk/MRW, Correlated random walk, and Composite random walk (superposition of two correlated random walks) (3). The difference between LW and MRW is explained below.

For your first test, choose Levy walk / MRW (3), with default settings for fractal dimension (D=1) and maximum displacement length between successive steps (truncation = 1,000,000 length units). D=1 simulates the condition where the animal on average utilizes its environment with similar scale-free weight at each intermediate scale from unit step length to maximum step (setting 1&lt;D&lt;=2 skews space use towards finer-scale space use at the expense of coarser scales, again in average terms).

In a column of text fields (4) you may define conditions like series length, properties for the simulated path, size of the arena and grid resolution for the subsequent analysis. For example, the difference between Levy walk and MRW is given by setting a return frequency >0 for MRW (implying targeted return events to previous locations at the chosen average frequency). For this first run, just keep the default values.

Later you will learn how to additionally modify the conditions by including a pre-defined series of coordinates (in a file called seed*.txt, where * regards an incremental number) (5). At this stage, just keep default settings.

By default the simulation runs in a homogeneous environment. The set of “Habitat heterogeneity” fields (6) allows defining the corners of a rectangle where the model animal behaves in a more “fine-grained” manner by reducing average movement speed. Other ecological aspects may also be defined, like a method to account for temporal and local resource exhaustion. As a start, just keep defaults.

Now, click the “Single-series” command button (7). You should see a number of fixes appearing as dots in the arena window.

The number of fixes reflects the ratio of total series length to the observation interval on this series; i.e., the “Number of fixes” (Norig = 1,000,000) multiplied by the average “Observation frequency” (p = 0.001). This leads to an observed series length – a path sample – of ca 1,000 fixes, which are displayed in the observation window.
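This arithmetic, and the frequency-based sub-sampling behind it, can be sketched in a few lines (illustrative Python, not the simulator’s own code; `n_orig` and `p` mirror the default settings):

```python
import numpy as np

# With Norig = 1,000,000 steps and observation frequency p = 0.001,
# every 1/p-th step along the path is kept as an "observed" fix.
n_orig, p = 1_000_000, 0.001
interval = int(round(1 / p))             # observe every 1000th step
fix_indices = np.arange(0, n_orig, interval)
n_fixes = len(fix_indices)               # ca 1,000 fixes displayed
```

The `fix_indices` array would then be used to pick the observed (x,y) coordinates out of the full simulated path.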

Before moving on to your first data analysis, observe that the simulation’s default settings are defined by “schemes”, which can be pre-loaded from a dropdown menu (8). You may also run a number of replicate simulations in an automated sequence (9). The arena may be copied to the clipboard (10) for subsequent pasting into other applications like a Word document, an Excel sheet, etc.

The “Data path” field (11) displays the folder where the system saves and retrieves data. By default, the data resides in a subfolder, “\mov”, under the location of the MRW simulator’s EXE file. This location is set during program setup.

The field “Fractal resolution range” (12) defines the scale range over which a subsequent analysis of the scatter of fixes – selected from the Analysis menu – will be performed by the so-called box counting method.
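As a hedged illustration of the box counting idea – counting the number of occupied grid cells over a range of cell sizes – here is a minimal Python sketch (`box_count` is a hypothetical helper, not the simulator’s implementation):

```python
import numpy as np

def box_count(xy, scales):
    """Count occupied grid cells ("boxes") covering the fixes at each scale."""
    counts = []
    for s in scales:
        # Assign each fix to a grid cell of side length s, then count
        # the distinct occupied cells.
        occupied = np.unique(np.floor(np.asarray(xy) / s), axis=0)
        counts.append(len(occupied))
    return counts

# 1001 fixes along a straight line: halving the box size roughly doubles
# the count, consistent with a set of fractal dimension D = 1.
line = np.column_stack([np.linspace(0.0, 1.0, 1001), np.zeros(1001)])
box_count(line, [0.1, 0.01])  # [11, 101]
```

The fractal dimension is then estimated from the log-log slope of count versus 1/scale over the chosen resolution range.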

The field “A(N)” (13) shows the progress of another analysis, total area (incidence) as a function of sample size, N.

The counter (14) is automatically incremented each time you click the “Single-series” button (7). TIP: To repeat (and overwrite) an existing series, edit the counter number (14) to one decrement below the actual series. For example, to re-execute data series number 5, edit the counter field to “4” before clicking the button (7). To re-execute series 1, edit the field to “-1” (the number zero is reserved as the initial setting number).

The data file containing “observed” fixes resides in the \mov folder (see above), with name “levy*.txt” (* = 1, 2, 3, …). It contains three columns of data: x-coordinate, y-coordinate, and inter-step distance.

The MRW Simulator 2.0 will now be made available as a free add-on tool for all buyers of my book. If you purchase it through my shopping cart, you will get the program and its user guide bundled with the book. Existing book owners: contact me and I’ll send you a personal download link – free of charge. You may purchase by invoice – see top of this page!

In the next blog post I’ll show some of the menu procedures of the MRW Simulator, including how to import your own GPS space use series for analysis on the fly.

MRW and Ecology – Part VI: The Statistical Property of Return Events

Animals that combine scale-free space use with targeted returns to previous locations generate a self-organized kind of home range. In short, the home range becomes an emergent property of such self-reinforcing revisits. Obviously, any space use pattern from complex processes outside the domain of Markov (mechanistic) theory needs to be analyzed using methods that are coherent with this kind of behaviour. Below I further exemplify the versatility of the MRW approach to adjust for serial auto-correlation (see Part III). I also show the quite surprising model property that the sub-set of inter-fix displacement lengths for return events seems to have a statistical distribution similar to the over-all pattern of exploratory step lengths. This additional emergent property of space use may lead to methods to test a wide range of behaviour-ecological hypotheses, for example to what extent an animal weighs an energy cost with respect to distance to potential target locations for returns.

In ecological research it is traditionally considered logical that an animal prefers revisiting nearby familiar locations over returning to distant ones. On the other hand, by default (a priori) the MRW model does not include such a distance penalty on long-distance returns. Recently, the realism of this model premise has gained empirical support from studies on bison and toads (Merkle et al. 2014, 2017; Marchand et al. 2017). In the MRW model’s standard version, a given return step targets any previous location with equal probability, except for the additive effect of the number of previous visits to a given site, which increases the statistical probability of future revisits (self-reinforcing site fidelity). The implicit assumption is that the added energetic cost of long-distance returns either is negligible relative to other parts of the energy budget, or the fitness value of keeping in touch with familiar locations regardless of current distance far exceeds the energy consideration. While this property regards a homogeneous environment, it is trivial to adjust to a heterogeneous scenario without loss of the general principle. In this post I present more details on the return step property of MRW from a theoretical angle, as a starting point to test the model’s default condition on real data.

First, consider that the robustness of the MRW-based method to estimate an individual’s characteristic scale of space use (CSSU) within a given time and space extent is key to understanding the energy aspect of return events as outlined above. The property of return events imposes a characteristic scale – the CSSU – on space use, despite the scale-free nature of exploratory steps. For a given period, CSSU is a combined function of average movement speed and average return frequency. In a previous post I proposed how CSSU may be estimated even in auto-correlated (“over-sampled”) data series of location fixes. In this post I present a pilot analysis which strengthens this approach.

Consider the two simulated Home range ghost results to the right: incidence, I, as a function of number of fixes, N. The first set of fixes (circles) represents a weakly auto-correlated series from return rate 1:10 and fix sampling at 1:100, while the second series (squares) resulted from strongly autocorrelated path sampling (return rate 1:100 and sampling at 1:10). As was shown in Part III of this set of blog posts, by performing the “averaging trick” on log(I,N) from frequency and continuous sampling (open symbols for the respective sets), the average log-log slope remains close to z=0.5 (area expanding proportionally with the square root of sample size) even for the strongly auto-correlated series. The slight deviance from z=0.5 in the two series should be considered normal variability to be expected from one simulated series to the next (averaging over large sets of series would bring z closer to 0.5).

Critically, the present result also shows compliance with the expected change of the characteristic scale of space use (CSSU, represented by the parameter c in the Home range ghost formula I = c√N) as a function of the ratio between the frequency of return events and exploratory moves (assuming constant average movement speed). In other words, observation frequency, which defines a sub-set of all displacements along a path (sampling of fixes), should not influence the CSSU estimate despite influencing the degree of auto-correlation. According to MRW theory, fewer returns during a constant average movement speed lead to larger CSSU*. In the analysis of the present two series, a ten times smaller return rate led to an optimized unit pixel size (I≡1) of magnitude √10 ≈ 3.2 times larger than for the weakly autocorrelated series with higher return frequency.

In the Figure above, the two CSSU scales have both been rescaled to c=1 (log(c)=0), but each series’ unit scale (I=1) is de facto correctly found to be very different in absolute terms. In the present examples, CSSU was estimated to c1 = 125² area units for the high-frequency return scenario and c2 = 400² area units for the second series with fewer returns (and stronger degree of autocorrelation).
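A quick back-of-envelope check of this prediction (plain Python; reading the two unit scales as 125 and 400 linear length units, as above):

```python
import math

# Ten times lower return rate should inflate the linear unit-pixel
# scale by a factor sqrt(10), per the CSSU parameter prediction.
c1_scale, c2_scale = 125.0, 400.0        # linear scales behind c1 = 125^2, c2 = 400^2
observed = c2_scale / c1_scale           # 3.2
predicted = math.sqrt(10)                # ~3.16
```

The observed ratio of 3.2 sits close to the predicted √10 ≈ 3.16, within the normal series-to-series variability noted above.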

To conclude, after optimizing pixel size in the respective series by analyzing I(N) over a range of pixel resolutions, as previously described in my book and other posts, this preliminary analysis verifies a strong coherence between return step frequency and the magnitude of CSSU in accordance with the theoretical parameter prediction, despite a strong difference in degree of serial autocorrelation in the samples of relocations. In other words, the CSSU estimate is quite resilient to the researcher’s choice of fix sampling scheme.

However, another aspect of the return step component of MRW may turn out to be valuable to test the opposing energy hypotheses with respect to distance penalty, as outlined above.

Quite surprisingly I must admit, even considering the implicit “no distance penalty” model design, the tail part of the step length distribution of returns is quite similar to the tail of observed step lengths (fixes) that are sampled from the total series of steps!

The example series above with the weakest degree of auto-correlation (circles in the top Figure) shows similar functional form between the over-all distribution of binned step lengths [log(L); red circles below] and return distances (open symbols)**.

As expected from the weakly autocorrelated series, the fit to the power law function with Levy exponent β=2 for the exploratory steps shows a clear “hump” in the extreme part of the log(L) distribution of fixes, due to influence from intermediate return events.

For the more strongly autocorrelated series with N=10,000 fixes from a total series of 100,000 steps and a lower return frequency 1:100 (Figure below) we see – as theoretically expected – a more subdued hump for the fixes, due to less influence from return events***. The hump would be even less pronounced if the fix sampling frequency had been even larger (Gautestad and Mysterud 2013; in particular Figure A2 in Supplementary material).


Again the tail distribution of return lengths – where the total set of 1,000 events is shown as triangles – is similar to the over-all distribution of fixes (the 1,000 first and 1,000 last of the N=10,000 fixes, shown as red and green circles). The median length for return steps is larger under this scenario (740 length units, versus 262) due to a ten times lower return frequency in relative terms. On the other hand, the median length for the actual set of fixes is strongly reduced as a consequence of the ten times larger fix sampling frequency.

To summarize, while the estimate of CSSU is quite resilient to fix sampling frequency, the (observed) median step length of fixes and (unobserved) length of return steps are influenced by fix sampling rate and return rate, respectively. Despite independence between the median length for observed series and hidden return lengths, both aspects of movement show a similar distribution of lengths.

Finally, what if the return step targets had not been set a priori to be independent of distance; i.e., by invoking a distance penalty on return events? I have not tested this aspect yet in a modified MRW simulation model, but intuitively I predict the distribution of return steps to morph towards a negative exponential function rather than a power law, as in the exploratory kind of moves. As a consequence, the “hump” effect in the distribution of fixes should also be more subdued. Hence, by testing the difference in functional form between return steps and step lengths of observed fixes, one may have a method to test empirically the energy hypothesis that was outlined above.

The challenge, of course, is to develop a method to distinguish between exploratory moves and return events in empirical data. In simulation data it is simple to filter out the returns; in true space use data it is necessary to distinguish returns from path crossing by chance. More on this methodology in an upcoming post.


*) Thus, the ratio of returns to exploratory moves has a similar influence on CSSU as a change in average movement speed, where the speed is expressed as the average staying time in a given grid cell. In Gautestad and Mysterud (2010), Eq. 4, we defined the expected length of step x, Lx, as a function of a scaling parameter for movement speed, δ, and the fractal dimension of the path, d:

Lx = (δ[1 − Rnd])^(−1/d)         (Eq. 4)

where Rnd is a random number 0 ≤ Rnd &lt; 1 and δ is a scaling parameter. In some sense δ may be interpreted as a parameter for expected staying time in a given patch, since larger δ implies smaller Lx and thus increased local fix contagion.
Gautestad and Mysterud 2010, p2744

Thus, by defining the space use’s fractal dimension D as D≡d, we have the relationship with CSSU’s Home range ghost parameter, c, and movement speed:

c ∝ 1/√δ  |  D = 1          (Eq 5).
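For illustration, Eq. 4 translates directly into an inverse-transform step sampler (a Python sketch under the definitions above; `step_length` is a hypothetical helper, not the simulator’s code):

```python
import random

def step_length(delta=1.0, d=1.0):
    """Draw one step length via Eq. 4: L = (delta*(1 - Rnd))**(-1/d).

    Inverse-transform sampling of a power law with survival function
    P(L > l) = l**(-d) / delta; larger delta gives shorter steps
    (longer expected staying time in a patch).
    """
    rnd = random.random()  # random number with 0 <= Rnd < 1
    return (delta * (1.0 - rnd)) ** (-1.0 / d)
```

With δ = 1 and d = 1 the minimum step length is 1 and P(L > l) = 1/l, i.e. a power law with Levy exponent β = d + 1 = 2, matching the exploratory-step distribution discussed in the main text.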

**) Due to a return step frequency of 1:10 and an actual fix sampling frequency of 1:100, the total set of return events exceeds the fix sample by a factor of 10. Thus, I have compared the distribution of return lengths from the early part of the simulated path (open squares) with return lengths towards the end of the path (open triangles), keeping both samples at the same size as the set of observed fixes. Red circles in the Figure above represent 10,000 fixes from a total series of 1 million steps. When studying the first and the last part of the 100,000 hidden return steps specifically, their distribution looks indistinguishable from the series of “observed” fixes. Triangles show the result for the first 10,000 return events, and squares show the result for the last 10,000 returns during the total of 1 million steps.

***) In this example where observation frequency exceeds the intrinsic return frequency by a factor of 10, the first and last part of the set of fixes (red and green circles, respectively) was used for comparison with the total set of return steps (open triangles).


Gautestad, A. O., and I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750.

Gautestad, A. O., and A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18.

Marchand, P, M. Boenke and D. M. Green. 2017. A stochastic movement model reproduces patterns of site fidelity and long-distance dispersal in a population of Fowler’s toads (Anaxyrus fowleri). Ecological Modelling 360:63–69.

Merkle, J. A., D. Fortin and J. M. Morales. 2014. A memory-based foraging tactic reveals an adaptive mechanism for restricted space use. Ecology Letters 17:924–931.

Merkle, J. A., J. R. Potts and D. Fortin. 2017. Energy benefits and emergent space use patterns of an empirically parameterized model of memory-based patch selection. Oikos 126:185–195.

Statistical-mechanical Details on Space Use Intensity

While stronger intensity of space use in the standard (Markovian/mechanistic) biophysical model framework is equated with the proxy variable fix density, density = N/area, the complex system analogue is 1/c. This alternative expression for intensity is derived from the Home range ghost formula I = c√N (i.e., cN^0.5). Below I illustrate the biophysical difference between the two intensity concepts by a simple Figure and some basic mathematics of the respective processes. The extended statistical mechanics of complex space use underscores the importance of estimating and applying a realistic spatial resolution, close to the magnitude of CSSU, when analyzing individual habitat utilization within various habitat classes. The traditional density variable for space use intensity will invoke a large noise term and even spurious results in ecological use/availability analyses of home range data.

A spatial dispersion of a small and a large sample of fixes is shown in the upper and lower row, respectively. Two resolutions (spatial scales) are shown; the spatial extent (large squares) and a virtual grid scale (dotted lines, shown in the upper right square only). For interpretation of low and high intensity of complex space use, 1/c, see the main text.

In statistical-mechanical terms, one of the main discrepancies between the traditional space use models (mechanistic modelling) and complex movement (MRW) regards the representation of locally varying intensity of space use.

Classical space use intensity may be calculated from a single scale, and trivially extrapolated to a coarser resolution up to the full area extent.

Why is this “freedom to zoom” feasible and mathematically allowed? Consider an example where the system extent is represented by the demarcation of a specific habitat type within a home range, simplified by a square under four conditions in the Figure to the right. Due to assumed compliance with standard statistical mechanics under classical space use analysis, we are specifically assuming finite system variance within the given spatial extent,

Var(X₁) + Var(X₂) + … + Var(Xₙ) = σ²

where [X] is the set of spatial elements from sectioning a system’s extent into sub-sets 1, 2, 3, …, n; and sub-sets into sub-sub-sets to find the respective sub-set variances. Thus, Var(Xᵢ) is the i‘th element’s second-moment variability (variance). For example, σ² could be the intrinsic variance of the inter-cell number of fixes in the virtual grid cells in the upper right scenario of the Figure above (sub-sub grid cells not shown).

The variance also changes proportionally with density. In other words, variance is stationary upon scaling and can thus be assumed to change proportionally with grid scale and density. This implies compliance with the central limit theorem. Even if intra-cell variance is not constant between grid cells at a given resolution within the given extent, as is expected in a heterogeneous habitat where local density varies, the sum of the variances of these local parts is independent of this finer-scale variability between sub-components. Once again I underscore that this enormously simplifying system property regards scenarios under the standard statistical-mechanical framework!

On the other hand, the local variability of fix density from complex space use does not comply with the central limit theorem. Intensity of use needs to be calculated over a scale range – from “grain” to extent – rather than at any single scale, and the grain scale must be chosen with care.

Traditionally, space use may be quantified by the magnitude of “free space” (area/N) in a sample of N relocations (fixes) of an individual, due to compliance with the central limit theorem, as explained above. On the other hand we have complex space use; i.e., scale-free movement under the influence of spatial memory and in compliance with the parallel processing postulate. Under this biophysical framework free space is expressed by the ratio area/√N rather than area/N, and quantified by the characteristic scale of space use (CSSU). CSSU is a function of average movement speed and average return rate to previous locations. The system complexity behind CSSU implies that the sum of the system parts’ standard deviations – rather than variances – is stationary upon re-scaling; i.e.,

s.d.(X₁) + s.d.(X₂) + … + s.d.(Xₙ) = √σ²

In other words, by default the spatial statistics follow a Cauchy distribution with scale parameter γ=1, rather than the classical Gaussian distribution. CSSU is proportional to the parameter c in the Home range ghost formula I = c√N, where I is the number of fix-embedding virtual grid cells at spatial scale c ≈ CSSU.
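The practical contrast between the two frameworks can be illustrated numerically (a NumPy sketch, not tied to the simulator): averaging n Gaussian draws narrows the spread by √n, in compliance with the central limit theorem, while the mean of n Cauchy draws is just as dispersed as a single draw.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 100, 20000
iqr = lambda x: np.subtract(*np.percentile(x, [75, 25]))  # inter-quartile range

# Gaussian: the mean of n = 100 draws is sqrt(100) = 10 times narrower
g = rng.normal(size=(reps, n))
gauss_ratio = iqr(g[:, 0]) / iqr(g.mean(axis=1))    # ~10

# Cauchy: the mean of n draws has the same spread as one draw
c = rng.standard_cauchy(size=(reps, n))
cauchy_ratio = iqr(c[:, 0]) / iqr(c.mean(axis=1))   # ~1
```

This is why averaging over grid cells is “mathematically allowed” in the Gaussian case but buys nothing under Cauchy-like statistics: the sample mean of a Cauchy ensemble has the same distribution as a single observation.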

What if we “lose focus” by studying the system (applying the grain scale) at coarser or finer resolutions than the CSSU? In the illustration above it is assumed that the superimposed virtual grid in the upper right corner reflects a spatial resolution that is close to this system’s true CSSU. If the system’s CSSU had been higher (1/c implicitly lower, as in the upper left-hand scenario), applying the same observer-defined grid resolution as in the upper right scenario would show deviance from the Cauchy distribution. The Cauchy scale parameter and the Home range ghost exponent are both inflated*) due to this “out of focus” situation; i.e., γ >> 1 and z >> 0.5 (approaching 1).

In short, by superimposing a virtual grid at scale &lt;&lt;CSSU, we will observe I ≈ cN^z with z ≈ 1 rather than z ≈ 0.5. The parameter c, and thus the true CSSU, has been erroneously estimated. The power exponent z → 1 as grid scale is successively decreased by using cells that are smaller than the true CSSU scale under this condition. However, compliance with z=0.5 may be regained under a “Low 1/c” scenario (upper left) by sufficiently increasing the grid scale relative to cell sizes in the “large 1/c” scenario shown in the upper right example. We can then re-estimate CSSU by such scale zooming towards a coarser resolution and find that z → 0.5 as the coarse-graining approaches the true CSSU. By comparing scenarios with low and high CSSU; i.e., high and low intensity of space use (1/CSSU), we can raise behavioural-ecological hypotheses about these differences. One obvious example regards the strength of intra-home range habitat selection, but where intensity of space use is expressed by habitat-specific 1/c rather than density of fixes.

On the other hand, starting with a too coarse grid cell scale to estimate CSSU will lead to 0 &lt; z &lt;&lt; 0.5. Setting the observation scale for I substantially larger than the true CSSU scale means that I will be seen to increase extremely slowly or not at all with increasing N. In Cauchy terms, the scale parameter 0 &lt; γ &lt; 1. Hence, the chance of needing an extra grid cell to cover all fixes when increasing sample size to N+1 is very small, but not negligible! Occasional sallies of surprising magnitude happen – surprising from the standard statistical-mechanical framework, but just part of the picture in a space use system that obeys parallel processing principles.

To summarize, while a “Gauss-compliant” (non-complex) kind of space use allows the average intensity of space use to be considered trivially constant upon zooming and linear rescaling over a scale range within the system extent, “Cauchy-compliant” space use requires a search for the correct grain scale to find the system’s average CSSU at this scale within the given extent. 

More details on the statistical-mechanical system description of complex space use are found in my book.


*) Apparently but erroneously, the variability under a too fine-grained pixel resolution (grid cell scale), leading to z ≈ 1 and Cauchy scale parameter γ ≈ 2, may be interpreted as Gauss-compliant statistics. However, the Cauchy distribution does not have finite moments of any order. Thus, in strict terms, the reference to √σ² under the γ=1 scenario is not correct, since variance is a term under standard statistical mechanics, but it represents a commonly applied approximation (Mandelbrot 1983, Schroeder 1991).


Mandelbrot, B. B. 1983. The Fractal Geometry of Nature. New York, W. H. Freeman and Company.

Schroeder, M. 1991. Fractals, Chaos, Power Laws – Minutes from an Infinite Paradise. New York, W. H. Freeman and Company.

MRW and Ecology – Part III: Autocorrelation

Ideally, when studying ecological aspects of an individual’s whereabouts based on (for example) series of GPS fixes, N should not only be large; the series of fixes should also be non-autocorrelated to ensure statistically independent samples of space use. Since these two goals are difficult to fulfill simultaneously (the latter tends to undermine the former), two workarounds are common: either the autocorrelation issue is ignored albeit recognized, or space use is analyzed by path analytical methods rather than the more classical use-availability approach. Both workarounds have drawbacks. In this post I show for the first time a surprisingly simple method to compensate for the oversampling effect that leads to autocorrelated series of fixes.

Again, as in Part II of this series, I focus on how to improve realism and reduce the statistical error term when studying ecological aspects of habitat selection, given that data compliance with the MRW framework has been verified (see, for example, this post regarding red deer) or can be feasibly assumed. Hence, the individual’s characteristic scale of space use (CSSU) is the primary response variable we are looking for. In Part II the proper proxy for local intensity of space use was described as the inverse of CSSU (actually, the inverse of the parameter c).

However, by default the basic version of the Home range ghost equation I = c√N, where I is the total area of fix-embedding virtual grid boxes at the CSSU scale, assumes a data set of N serially non-autocorrelated fixes. This is difficult to achieve, due to the simultaneous goal of having a large N available for the analysis. Splitting the data into sub-sets of N from several habitat classes makes the autocorrelation issue even more challenging. Thus, over-sampling of the animal’s movement seems unavoidable. In the following example I illustrate how such an oversampling effect on local and temporal CSSU estimates may be accounted for.

As a reference scenario, consider the default MRW condition of non-autocorrelated fix sampling of an animal moving in a homogeneous environment. Non-autocorrelation is achieved by sampling at larger intervals than the average interval between successive return events. In the illustration above the spatial scatter of 10,000 fixes (grey dots) shows a relatively stationary space use when comparing N=100 fixes from early, middle and late part of the sampling period (blue, red and yellow dots, respectively). However, return events that took place during the last part of the series have a more spread-out set of historic locations to return to, and this explains why the 100 yellow fixes cover a somewhat larger range than a similar sample size from the series’ early part.

When sampling a series of fixes from the actual path for a given time period*, two methods may be applied: continuous sampling, taking a contiguous section of the series, varying in length N; and frequency-based sampling, where N fixes are uniformly spread over the entire time interval of the total series (higher sampling frequency implies larger N). With reference to the Home range ghost formula above, I shows compliance with a non-asymptotic power law with exponent z ≈ 0.5 (log-log slope close to 0.5). Grid resolution (pixel size) has been optimized in accordance with the previously described method. The well-behaved pattern in this scenario is due to lack of strong auto-correlation under both sampling regimes. In other words, the animal’s path has not been over-sampled. Still, the difference between continuous sampling (open triangles) and frequency-based sampling (open squares) shows that the former is more prone to short-term random effects, in this example seen as the “plateau” of I(N) in the range N = 2³ to 2⁷. The characteristic average scale, log(c), is given by the I(N) intercept with the y-axis, where log₂(N) = 0.
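The two sampling regimes can be sketched as follows (illustrative Python; `path` stands for the full series of fixes, and the helper names are hypothetical):

```python
import numpy as np

def continuous_sample(path, n):
    """Continuous sampling: one contiguous section of n fixes."""
    return np.asarray(path)[:n]

def frequency_sample(path, n):
    """Frequency-based sampling: n fixes spread uniformly over the series."""
    path = np.asarray(path)
    idx = np.linspace(0, len(path) - 1, n).astype(int)
    return path[idx]
```

For a given N, the continuous sample covers a short time window of the series, while the frequency sample spans the whole observation period at a coarser time interval; this difference drives the opposite biases discussed below.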

Observe the set of black circles, which represent the average log[I(N)] from the two sampling methods covering the same sampling period at Ntotal.

Next, consider an example with strongly autocorrelated fixes. The ecological condition will be described below as “semi-punctuated site fidelity”**. Again, the colour codes in the spatial scatter of fixes (above) describe subsets of 100 fixes from the early, middle and late part of the total sampling period.

What is important under this condition is the behaviour of log[I(N)] under the two sampling methods, continuous and frequency-based. As expected with autocorrelated series, sub-sampling the total series by the frequency method – relative to continuous sampling – will tend to show a larger I for a given N over the middle range of log(N). Similarly, continuous sampling tends to show a smaller area for a given N, relative to the expectation from the Home range ghost equation.

However, when averaging the respective log(N,I) points, the compliance with I ∝ √N is restored! Thus, CSSU may be properly estimated also from over-sampled paths. Despite the substantial under-detection of true space use based on N autocorrelated fixes, the statistical-mechanical theory of MRW in fact predicts the true I(N) – and hence also the CSSU – by performing the averaging trick above.

Why does the average of continuous and frequency-sampled estimates represent the true I(N)? Consider the vertical distance between the respective pairs of log(N,I) points to represent un-observed “ghost area” resulting from over-sampling. The stronger the over-sampling, the larger the ghost area. If the sampling regime had regarded non-autocorrelated series, the ghost area would have been small (as in the first example above), due to a weak degree of over-sampling. Stronger auto-correlation leads to a larger ghost area. Why does the ghost area split the difference between frequency sampling and continuous sampling by 50% in log-log terms? This theoretical question requires a deeper statistical-mechanical explanation, which is still in progress. However, the answer is linked to the 50%/50% inward/outward expansion property of MRW (see this post).
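The averaging trick itself is easy to express (a Python sketch with synthetic numbers, assuming for illustration that the two methods deviate by opposite multiplicative factors around the true I = c√N; `averaged_ghost_fit` is a hypothetical helper):

```python
import numpy as np

def averaged_ghost_fit(N, I_cont, I_freq):
    """Average log(I) from continuous and frequency sampling, then fit
    log2(I) = log2(c) + z*log2(N) by least squares."""
    logN = np.log2(N)
    logI = (np.log2(I_cont) + np.log2(I_freq)) / 2.0  # the averaging trick
    z, logc = np.polyfit(logN, logI, 1)
    return z, 2.0 ** logc

# Synthetic series: continuous sampling under-detects area, frequency
# sampling over-detects it, around a true relation I = c*sqrt(N), c = 3
N = 2.0 ** np.arange(5, 14)
I_cont = 0.5 * 3.0 * np.sqrt(N)
I_freq = 2.0 * 3.0 * np.sqrt(N)
z, c = averaged_ghost_fit(N, I_cont, I_freq)   # z = 0.5, c = 3.0
```

Because the averaging is done in log space, opposite multiplicative biases cancel exactly, recovering both the exponent z = 0.5 and the intercept c.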


*) If the total sampling period is not kept constant (same time period for Ntotal), CSSU will be influenced by the fact that late return events target a more spread-out scatter of previous locations. Despite this, CSSU will tend to contract somewhat with total observation period (temporal extent). This transient effect will be explored in an upcoming blog post.

**) An extreme form of temporal space-use heterogeneity is achieved by “punctuated site fidelity”. For example, after every 1/50th part of the total series length the animal erases its affinity to previous locations and begins developing affinity to newer locations only. Return events during the third section of such a path, for example, do not target the initial two parts of the series. The first location in each of the 50 successive parts (time sections) is chosen randomly within the total arena; hence a “punctuated” kind of site fidelity. In model-simplistic terms this scenario could illustrate GPS sampling of an animal that occasionally changes its space use in accordance with changing food distribution during the season. It could also illustrate an intrinsic predator-avoidance strategy, whereby fitness may improve through occasional abrupt changes of patch use; under specific conditions this gain may outweigh the cost of occasionally giving up utilization of familiar patches. The scenario could also illustrate patch deterioration with respect to a critical resource; energy profit in utilized patches may deteriorate owing to foraging, and thus trigger a “reset” of overall patch use in conceptual compliance with a variant of the marginal value theorem.
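For illustration, the punctuated scenario can be sketched as a toy simulation in Python. All parameters here – section count, return interval, step-length exponent, arena size – are hypothetical placeholders, not the MRW Simulator’s actual settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def punctuated_mrw(n_steps=5000, n_sections=50, return_interval=10,
                   arena=1000.0, beta=2.0):
    """Toy MRW with punctuated site fidelity: the memory of previous
    locations is erased at the start of each of the n_sections parts,
    and each part restarts at a random position within the arena.
    All parameter names and values are illustrative placeholders."""
    section_len = n_steps // n_sections
    path = []
    for _ in range(n_sections):
        pos = rng.uniform(0.0, arena, size=2)   # punctuation: random restart
        memory = [pos.copy()]                   # affinity to new locations only
        for t in range(1, section_len):
            if t % return_interval == 0:
                # return event: revisit a memorized location from this part
                pos = memory[rng.integers(len(memory))].copy()
            else:
                # scale-free exploratory step (Pareto-distributed length)
                step = rng.pareto(beta - 1.0) + 1.0
                angle = rng.uniform(0.0, 2.0 * np.pi)
                pos = pos + step * np.array([np.cos(angle), np.sin(angle)])
            memory.append(pos.copy())
        path.extend(memory)
    return np.asarray(path)

fixes = punctuated_mrw()   # (5000, 2) array of simulated fixes
```

The “partially punctuated” variant described below corresponds to carrying over a small tail of the previous part’s `memory` list into the next part instead of discarding it entirely.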

A less dramatic and more realistic variant of temporal heterogeneity, “partially punctuated site affinity”, is simulated by keeping – for example – the last 10% or 2% of the locations from the foregoing part of the path as potential return targets on equal footing with the successively emerging locations in the present part. This condition leads to a tendency for a “drifting home range” (Doncaster and Macdonald 1991), with some degree of locking towards previous patch use, similar to the condition that was numerically explored in Gautestad and Mysterud (2006).


Gautestad, A. O. and I. Mysterud. 2006. Complex animal distribution and abundance from memory-dependent kinetics. Ecological Complexity 3:44-55.

Doncaster, C. P. and D. W. Macdonald. 1991. Drifting territoriality in the red fox Vulpes vulpes. Journal of Animal Ecology 60:423-439.

Random Walk Should Not Imply Random Walking

Random walk is one of the stickiest concepts of movement ecology. Unfortunately, this versatile theoretical approach – simplifying complex space use into a small set of movement rules – often leads to confusion and unnecessary controversy. As any field ecologist will point out, unless an individual is passively shuffled around in a stochastic sequence of multi-directional pull and push events, the behavioural response to local events and conditions is deterministic! An animal behaves rationally. It successively interprets and responds to environmental conditions – within limits given by its perceptive and cognitive capacity – rather than ignoring these cues like a drunken walker. Any alternative strategy would lose in the game of natural selection. Still, from a theoretical perspective an animal path may be realistically represented by random walk – given that the randomness is based on properly specified biophysical premises and the animal adheres to these premises.

Photo: AOG

Outside our house I can study a magpie Pica pica moving around, apparently randomly, until something catches its attention. An insect larva? A spider or other foraging reward? After some activity at this patch it resumes its exploratory movement. As an ecologist it is easy to describe the behaviour as ARS (area-restricted search). In more general terms, the bird apparently toggles between relatively deterministic behaviour during patch exploration and more random exploratory moves in-between. If I had radio-tagged the magpie with high-resolution equipment, I could use a composite random walk model (or, more contemporary, a Brownian bridge formulation) derived from ARS to estimate the movement characteristics for intra- and inter-patch steps respectively, and test ecological hypotheses.

However, what if the assumptions behind the random walk equations are not fulfilled by the magpie’s behaviour? Now and then the magpie flies back in a relatively direct line to a previous spot for further exploration. In other words, the path is self-crossing more frequently than expected by chance. Also, the next day the magpie may return to our lawn in a manner that indicates stronger site fidelity than expected from chance, considering all the other available gardens in the county. The magpie explores, but also returns in a goal-oriented manner, meaning that the home range concept should be invoked. Looking closer, when exploring the garden the magpie also seems to choose each next step carefully, constantly scanning its immediate surroundings, rather than changing direction and movement speed erratically. Occasional returns to a previous spot, in addition to returning repeatedly to our garden, indicate utilization of a memory map. In short, this magpie example may not fit the premises of ARS the way it is normally modelled in movement ecology, namely as a toggling between fine- and coarser-scale random walk.

Hence, two challenges have to be addressed.

  1. What are the conditions to treat the movement as random walk when analysing the data?
  2. What are the basic prerequisites for applying the classical random walk theory for the analysis?

Regarding the first question, contemporary ecological modelling of movement typically defines the random parts of an animal’s movement path as truly stochastic (rather than as a model simplification of the multitude of factors that influence true movement), in the sense of expressing real randomness in behavioural terms. The Lévy flight foraging hypothesis is an example of this specification. The remaining parts of the path then express deterministic rules, like pausing and foraging when a resource patch is encountered, or triggering a bounce-back response when a sufficiently hostile environment is encountered. In my view this stochastic/deterministic framework is counterproductive with respect to model realism, since it tends to cover up the true source of randomness.

To clarify the concept of randomness in movement models one should be explicit about the model’s biophysical assumptions. Different sets of assumptions lead to different classes of random walk. In my book I summarized these classes as the eight corners of the Scaling cube. Sloppiness with respect to model premises hinders the theory of animal space use from evolving towards stronger realism.

  • Random walk (RW) in the classical sense – i.e., Brownian motion-like – regards a statistical-mechanical simplification of a series of deterministic responses to a continuous sequence of particle-shuffling events. Collision between two particles is one example of such a shuffling event. In other words, during a small increment of time a passively responding particle performs a given displacement in compliance with environmental factors (“forces”) and physical laws at the given point in space and time. Until new forces act on the particle (e.g., new collisions), it maintains its current speed and direction. In other words, under these physical conditions the process is also Markov-compliant: regardless of which historic events brought the particle to its current position, its next position is determined by the updated set of conditions during this increment. The next step is independent of its past steps.
  • The average distance between change of movement direction of a RW is captured by the mean free path parameter. This implies that RW is a scale-specific process, and the characteristic scale is given by the mean free path during the defined time extent.
  • Since the RW particle is responding passively, its path is truly stochastic even at the spatio-temporal resolution of the mean free path. When sampling a RW path at coarser temporal resolutions, a larger average distance between successive particle locations is observed. Basically, this distance increases proportionally with the square root of the sampling interval. This and other mathematical relationships of a RW (and its complementary diffusion formulation) are predictable and coherent from a well-established statistical-mechanical theory.
  • Stepping from a physical RW particle to a biophysical representation of an individual in the context of movement ecology implies specification and realism of two assumptions: (1) the movement behaviour should be Markov-compliant (i.e., scale-specific), and (2) the path should be sampled at coarser intervals than the characteristic time interval that accompanies the mean free path (formulated in the average “movement speed” at the mean free path scale). At these coarser spatio-temporal resolutions even deterministic movement steps become stochastic by nature, due to lumping together the resultant displacement from a series of inter-independent finer-grained steps.

    An animal is observed at position A and re-located at position B after t time units. The vector AB may be considered a RW-compliant step if – and only if – the intermediate path locations (dotted circles) in totality are sufficiently independent of the respective previous displacement vectors to make the resultant vector AB random. Each of the intermediate steps may be caused by totally deterministic behaviour. Still, the sum of the sequence of more or less inter-independent displacements makes position B unpredictable from the perspective of position A. The criterion for accepting AB as a step in a RW sequence is fulfilled at temporal scale (sampling resolution) t, even if the “hidden layer” steps are more or less deterministic at finer resolutions <<t.

    In my book I refer to such observational coarse-graining as increasing the depth of the hidden layer, from a fine-resolved unit scale – where the local causality of respective displacements is revealed – to a coarser resolution where even deterministic (and Markov-compliant) behaviour requires a statistical-mechanical description.
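The square-root relationship between sampling interval and observed displacement, mentioned above, is straightforward to verify numerically. A minimal sketch with a plain unit-step random walk (illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit-step random walk in the plane: each step has length 1 and a
# uniformly random direction. Sub-sampling the path at every k-th fix
# should give a root-mean-square displacement that grows in proportion
# to sqrt(k) - a quick numerical check of the scaling claim above.
n = 200_000
angles = rng.uniform(0.0, 2.0 * np.pi, n)
path = np.cumsum(np.column_stack([np.cos(angles), np.sin(angles)]), axis=0)

ratios = []
for k in (1, 4, 16, 64):
    d = np.linalg.norm(np.diff(path[::k], axis=0), axis=1)  # sampled steps
    rms = np.sqrt(np.mean(d ** 2))
    ratios.append(rms / np.sqrt(k))   # ~ 1 for all k if rms grows as sqrt(k)
print(ratios)
```

The ratios stay close to 1 across a 64-fold range of sampling intervals, which is exactly the scale-specific (Markov-compliant) signature discussed above; memory-influenced movement like MRW breaks this relationship.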

Regarding the second question raised above regarding Markov compliance, see the RW criterion in the Figure to the right [as was also exemplified by “Markov the robot” in Gautestad (2013)].

However, what if the animal violates Markov compliance? In other words, what if it responds in a non-Markovian manner, meaning that path history counts when explaining present movement decisions? Is the magpie kind of non-Markovian movement typical for animal space use, from a parsimonious model perspective, or is multi-scaled site fidelity the exception rather than the rule? These are core questions any modeller of animal movement should ask him/herself. One should definitely not accept old assumptions just because several generations of ecologists have done so (many with strong reluctance, though).

Instead of accepting classical RW or its trivial variants – correlated RW and biased RW – as a proper representation of basic movement by default, while holding your nose, you should explore a broader application of the other corners of the Scaling cube, each with its respective set of statistical-mechanical assumptions.



Gautestad, A. O. 2013. Lévy meets Poisson: a statistical artifact may lead to erroneous re-categorization of Lévy walk as Brownian motion. The American Naturalist 181:440-450.

The Lesser Kestrel: Natal Dispersal In Compliance With The MRW Model

The Multi-scaled random walk (MRW) model defines a specific dispersal kernel for animal movement: a power law, which is qualitatively different from standard theory (a negative exponential function). Alcaide et al. (2009) analyzed long-term ringing programmes of the lesser kestrel Falco naumanni in Western Europe, and showed results from re-encounters of 1308 marked individuals in Spain. They found that most first-time breeders settled within 10 km of their natal colony (i.e., a strong philopatric tendency), with dispersal frequency declining with geographical distance. While Alcaide et al. (2009) were mainly concerned with gene flow and population effects, here I take a deeper look at their natal dispersal data and find strong support for MRW-compliant behaviour. Indirectly, this pattern at the individual level also supports the MRW analogue at the population level, the Zoomer model (Gautestad 2015).

I allow myself to copy their Figure 1, showing the natal dispersal distances:

Fig. 1. Frequency distribution of natal dispersal distances of lesser kestrels in the Guadalquivir Valley (SW Spain, N = 321 individuals, black bars; Negro et al. 1997) and in the Ebro Valley (NE Spain, N = 961, white bars; Serrano et al. 2003).


To visualize the difference between the expected dispersal kernel from MRW and from standard theory I here present the data above with log-scaled axes:


Under this transformation, compliance with a power law should resemble a straight regression line, with a slope defined by the power exponent. Such log-log linearity of a power law contrasts with a log-log transformed negative exponential function, which becomes convex. Interestingly, the two subsets of natal dispersal distances show strong compliance with a power law (R² = 0.90 and R² = 0.96, respectively), while the best-fitting negative exponential does not match the pattern as well (R² = 0.60; dotted line).

Quite remarkably, even the power exponents (β = -2.02 and β = -2.00) come out very close to the standard MRW expectancy of β = -2 (Footnote 1). This particular magnitude of β is – according to MRW theory – expected from scale-free space use where the individual, on average during the sampling period, has put equal effort into utilizing its environment over the given scale range (in this case, from a spatial grain resolution of 10 km to an extent resolution of 440 km).
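The log-log regression procedure behind such estimates can be sketched as follows. The bin frequencies below are hypothetical numbers generated near F ∝ L⁻² – they are not the Alcaide et al. (2009) data:

```python
import numpy as np

# Hypothetical binned dispersal frequencies (NOT the Alcaide et al. data),
# constructed close to F proportional to L^-2, to illustrate the
# log-log fitting procedure described in the text.
L = np.array([10.0, 20.0, 40.0, 80.0, 160.0, 320.0])   # distance bins, km
F = np.array([0.55, 0.14, 0.034, 0.0082, 0.0021, 0.00052])

log_L, log_F = np.log10(L), np.log10(F)
beta, log_c = np.polyfit(log_L, log_F, 1)   # power-law fit: F = c * L**beta

# Coefficient of determination (R^2) of the log-log linear fit
resid = log_F - (beta * log_L + log_c)
r2 = 1.0 - resid.var() / log_F.var()
print(beta, r2)   # beta close to -2 for these illustrative numbers
```

Fitting the negative exponential alternative amounts to regressing log(F) against L itself (rather than log L); for power-law-like data that fit is visibly convex in the log-log plot, which is what the dotted line above illustrates.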

The natal dispersal data summarized by Alcaide et al. (2009) allow me – for the first time – to study empirical model compliance in a species at relatively coarse temporal scales; i.e., over the interval from birth to first breeding the following year or two. Previous MRW tests have typically regarded a temporal resolution of a few hours (GPS relocation data). Simultaneously, the good fit to power exponent β = -2, even at this coarse temporal scale, translates to β’ = -1 in area terms rather than distance terms (Gautestad and Mysterud 2005). I recycle an illustration of this population kinetic aspect, which was also shown in this post and in my book:


The grey-shaded inset represents the classic dispersal kernel, expected from standard random walk at the individual level and diffusion at the population level; i.e., a negative exponential. The other elements in the illustration regard MRW (a scale-free power law; see also Footnote 2).

In particular, observe for the F(L) movement kernel that the coloured rectangle area of each log-scaled interval (bin) of squared distance L² – representing the “effort” by the individual to relate to the respective spatial resolutions of its environment – is of similar magnitude when F = (L²)⁻¹ = 1/L². The area of each of the rectangles is the same. In other words, in a two-dimensional arena an individual is then utilizing a k times larger landscape resolution 1/k times as frequently. In a population context (the Zoomer model, switching from a Lagrangian to the complementary Eulerian system perspective) – since a k times larger arena is expected to embed k times more individuals in average terms – when β = -2 the population is utilizing the landscape with equal intensity over the given scale range (Gautestad 2015, p. 122-132).
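The equal-area property is easy to verify numerically: for a kernel F(x) = 1/x (with x = L²), the integral over any log-spaced bin is the same constant. A quick check with the trapezoid rule:

```python
import numpy as np

# Numerical check of the "equal effort per log-scaled bin" claim:
# for F(x) = 1/x (with x = L^2), the integral over every decade
# [a, 10a] equals ln(10), so each log-spaced rectangle in the
# kernel illustration carries the same area.
edges = 10.0 ** np.arange(0, 6)          # log-spaced bin edges in x = L^2
areas = []
for a, b in zip(edges[:-1], edges[1:]):
    x = np.linspace(a, b, 100_001)
    y = 1.0 / x
    areas.append(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))  # trapezoid rule
print(areas)   # each value close to ln(10) = 2.3026...
```

Any steeper kernel (e.g., the negative exponential of the classic model) concentrates nearly all of this area in the finest-scale bins, which is the geometric content of the contrast drawn in the illustration.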

Footnote 1: What about the Lévy flight/walk model, which also predicts a scale-free and thus log-log linear dispersal kernel? With respect to the lesser kestrel, as for all other bird species, spatial memory is part of its cognitive capacity. A home range, which requires directed returns to previous locations, exemplifies this capacity. MRW regards a combination of scale-free space use and site fidelity; Lévy flight regards only the former.

Footnote 2: With respect to the lesser kestrel’s natal dispersal, the data represent the displacement distribution of many individuals (called an ensemble in statistical mechanics) rather than the distribution of a set of displacements for a given individual. Thus, the power law curve reflects these individuals’ pooled tendency for scale-free space use during natal dispersal. Once the birds have established their respective home ranges, with centre of activity at the chosen breeding site, it would be interesting to see whether the median displacement length (and β) for the following 1-2 year period deviated from natal dispersal at the same temporal resolution.


Alcaide, M., D. Serrano, J. L. Tella and J. J. Negro. 2009. Strong philopatry derived from capture–recapture records does not lead to fine-scale genetic differentiation in lesser kestrels. Journal of Animal Ecology 78:468–475.

Gautestad, A. O. 2015. Modelling parallel processing. pp. 114-148 in Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Dog Ear Publishing, Indianapolis. 298 pp.

Gautestad, A. O. and I. Mysterud. 2005. Intrinsic scaling complexity in animal dispersion and abundance. The American Naturalist 165:44-55.