Analytical Sensitivity to Fuzzy Fix Coordinates

In empirical data, GPS fixes are never exact positions: uncertain geolocation always introduces a “fuzziness field” around each recorded location. When analyzing a set of fixes in the context of multi-scaled space use, are the parameter estimates sensitive to this kind of statistical error? I also explore the effect of constraining the potential home range by disallowing sallies into the outermost parts of the available area.

To explore the effect of triangulation error on space use analysis, I simulated Multi-scaled random walk in a homogeneous environment with N = 10,000 fixes (of which the first 1,000 were discarded) under two scenarios: a “sharp” location series (no uncertainty polygons) and strong fuzziness. The latter added a random displacement to each x-y coordinate with a standard deviation (SD) of magnitude approximately equal to the system condition’s Characteristic scale of space use (CSSU). Displacements into the outermost parts of the given arena were disallowed, to study how this constraint may influence the analyses. I then ran the following three algorithms in the MRW Simulator: (a) analysis of A(N) at the home range scale, (b) analysis of A(N) at a local scale (splitting the home range into four quadrants), and (c) analysis of the fix scatters’ fractal properties.

The following image shows the sharp fix set and the strongest fuzziness condition.

By visual inspection it is easy to spot the effect of the spatial error (SD = 1,182 length units, upper row to the right). However, the respective A(N) analyses at the home range scale generated a CSSU estimate that was only 10% larger in the fuzzy set (linear scale). When superimposing cells of CSSU magnitude onto each fix, the home ranges appear quite similar in overall appearance and size. This was to be expected, since fuzziness influences fine-resolution space use only.

Visually, both home range areas appear somewhat constrained with respect to range, due to the condition disallowing displacements into the peripheral parts of the defined arena (influencing less than 1% of the displacements).

A(N) analysis (the Home range ghost). The two conditions appeared quite similar in plots of log[I(N)], where I is the number of fix-embedding pixels at optimized pixel resolution, as described in previous posts.

However, the fuzzy series shows a more pronounced break-point, with a transition towards exponent ∼0.5 at sample sizes beyond log2(N) ≈ 3. This break-point “lifted” the regression line somewhat for the fuzzy series, leading to a slightly larger intercept with the y-axis when interpolating towards log(N) = 0. This difference between the two conditions with respect to the y-intercept, Δlog(c) from the home range ghost formula log[I(N)] = log(c) + z*log(N), also defines the difference in CSSU between the sharp and fuzzy data sets. Recall that CSSU ≡ c.
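As an illustration of how Δlog(c) can be read off from two such series, here is a minimal sketch; the function name and the synthetic (N, I) values are mine, not output from the MRW Simulator:

```python
import numpy as np

def log_intercept(N, I, slope=0.5):
    """Fit log2(I) = log2(c) + slope*log2(N) with the slope held fixed,
    returning the y-intercept log2(c); c estimates the CSSU."""
    N, I = np.asarray(N, float), np.asarray(I, float)
    return np.mean(np.log2(I) - slope * np.log2(N))

# Synthetic example: a "sharp" series with c = 1 and a "fuzzy" series
# whose intercept is lifted by a factor 2 (i.e., Delta log2(c) = 1).
N = np.array([4, 16, 64, 256, 1024])
I_sharp = 1.0 * np.sqrt(N)
I_fuzzy = 2.0 * np.sqrt(N)

d = log_intercept(N, I_fuzzy) - log_intercept(N, I_sharp)
print(d)  # Delta log2(c) = 1.0: the fuzzy CSSU is twice the sharp one
```

In a real analysis the two (N, I) tables would come from the A(N) procedure at optimized pixel resolution.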

The spatial constraint on extreme displacements relative to the respective home ranges’ centres apparently did not influence these results.

I have also superimposed local CSSU analysis for the respective four quadrants of the two home ranges. When the area extent for analysis is constrained in this manner; i.e., spatial extent is reduced to 1:4 in area terms (1:2 linearly) for each local sub-set of fixes, each (N,I) plot needs to be adjusted by a factor that compensates for the difference in scale range.

Since the present MRW conditions were run under fractal dimension D=1, each local log2[I(N)] plot is rescaled by adding D*log2(Δ), where Δ is the relative change of linear scale (extent ratio between home range and quadrant); i.e., log2[I(N)] + 1 under Δ=2 and D=1. After this rescaling the over-all CSSU and the local CSSU are overlapping, as shown by the regression lines in the A(N) analyses above. Overlapping CSSU implies that the four quadrants had similar space use conditions, which is true in this simplified case.
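A hedged sketch of this rescaling step; the names are illustrative, and Δ is the linear extent ratio between the whole home range and a quadrant:

```python
import numpy as np

def rescale_local(logI_local, D=1.0, linear_ratio=2.0):
    """Shift a quadrant's log2(I) values so they can be compared with
    the whole-home-range plot: add D * log2(linear extent ratio)."""
    return np.asarray(logI_local, float) + D * np.log2(linear_ratio)

# A quadrant analysed at half the linear extent (1:4 in area) under D=1
# is lifted by exactly +1 on the log2 axis.
logI = np.array([1.0, 2.0, 3.0])
print(rescale_local(logI))  # [2. 3. 4.]
```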

Fractal analysis. The Figure below shows the magnitude of incidence I as a function of relative spatial resolution k (the “box counting” method), spanning the scale range from k=1 (the entire arena, linear scale 40,000 units) down to k = 1:2^12; i.e., 40,000/2^12 ≈ 9.8 length units*.

Starting from the coarsest resolution, k=1, the log-log regression obviously shows I = 1. At resolutions k=1:2 and k=1:4 (4 and 16 grid cells, respectively), I = 4 in both cases. In other words, all boxes contain fixes at k=1:2, apparently satisfying a two-dimensional object, while at k=1:4 some cells (12 of 16) are peeled away as empty space from the periphery of the fix scatter at this finer resolution.

This coarse-scale pattern is a consequence of the defined space use constraint. Disallowing “occasional sallies” outside the core home range obviously influences the number of non-empty boxes relative to all boxes available at the coarse resolutions from k=1 down to k=1:4.

However, at progressively finer resolutions – below the “space fill effect” range, which extends down to ca. k=1:32 – the true fractal nature of the fix scatter begins to appear as log-log linearity, confirming a statistical fractal with a stable dimension D≈1.1 over the resolution range 2^-9 < k < 2^-5 (showing a log-log slope of -1.1). At finer resolutions, the dilution effect flattens further expansion of I. The D=1.1 scale range is close to the expectation from the simulation condition Dtrue=1, while the deviations above and below this range are trivial statistical artifacts from space filling and dilution.
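The box counting regression itself is straightforward to sketch. The following toy implementation (my own naming, not the MRW Simulator’s algorithm) recovers D ≈ 1 for a line-like scatter:

```python
import numpy as np

def box_count_dimension(xy, n_levels=8):
    """Estimate the fractal dimension D of a fix scatter by box counting:
    count fix-embedding boxes I at dyadic resolutions k = 1:2, 1:4, ...
    and regress log2(I) on log2(boxes per side); the slope estimates D."""
    xy = np.asarray(xy, float)
    mn = xy.min(axis=0)
    span = (xy.max(axis=0) - mn).max()        # normalize to the unit square
    u = (xy - mn) / span
    sides, counts = [], []
    for level in range(1, n_levels + 1):
        side = 2 ** level                     # boxes per linear dimension
        idx = np.minimum((u * side).astype(int), side - 1)
        counts.append(len({(i, j) for i, j in idx}))
        sides.append(side)
    return np.polyfit(np.log2(sides), np.log2(counts), 1)[0]

# A scatter lying on a straight line is one-dimensional:
t = np.linspace(0.0, 1.0, 5000)
D = box_count_dimension(np.column_stack([t, t]))
print(round(D, 2))  # 1.0
```

Note that this sketch omits the space-fill and dilution corrections discussed above; in practice the regression should be restricted to the intermediate, log-log linear resolution range.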

The most interesting pattern regards the finest resolution range, where the fuzzy set of fixes somewhat unexpectedly follows a similar log(k,I) pattern to the non-fuzzy set. However, the slight difference in D, which increases to D = 1.17, may be caused by the fuzziness (a proportionally stronger space-fill effect as resolution is increased).

To conclude, if the magnitude of position uncertainty does not exceed the individual’s CSSU scale under the actual conditions, the A(N) analyses of MRW-compliant space use do not seem to be seriously influenced by location fuzziness. The “fix scrambling” error is for the most part subdued at finer scales than the CSSU.

However, the story doesn’t end here. I have superimposed a dotted red line onto the Figure above. Overlapping with the D=1 section of grid resolutions, the line is extrapolated towards the intersection with log(I) ≈ 0 at a scale that is 2^1.5 ≈ 2.8 times larger (linear scale) than the actual arena for the present analyses. In other words, in the absence of an area constraint (and disregarding the step length constraint due to the individual’s limited movement speed) one should expect the actual set of fixes to “fill up” the missing incidence over the coarse scale range, leading to D≈1 for the entire range from the dilution effect range towards coarser scales.

I have also marked the CSSU scale as the midpoint of the red line. A resolution of log2(k)=-5.5 is in fact very close to the CSSU estimate from the A(N) method (k=1:50 of actual arena using the A(N) method, versus 1:35 of the arena according to the fractal analysis). This alternative method to estimate CSSU was first published in this blog post.

A preliminary development towards this approach was explored both theoretically and empirically in Gautestad and Mysterud (2012).


*) This analysis of N = 8,000 fixes, spanning box counting of 1, 4, 16, 64, … 16.8 million grid cells at the respective scales, took ca. 10 hours per series in the MRW Simulator, using a laptop computer. I suppose this enormous number crunching would be outside the practical range of a similar algorithm in R.


Gautestad, A. O., and I. Mysterud. 2012. The Dilution Effect and the Space Fill Effect: Seeking to Offset Statistical Artifacts When Analyzing Animal Space Use from Telemetry Fixes. Ecological Complexity 9:33-42.


The Balance of Nature?

To understand populations’ space use one needs to understand the individual’s space use. To understand the individual’s space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies “strange” fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical attacks. We need to address the challenge by rethinking also the very basic level of how we perceive an ecosystem’s constituents: how we assume individuals, populations and communities to relate to their surroundings in terms of statistical mechanics.

Stuart L. Pimm summarizes the Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates the need to extend the perspective from short-series local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.

Balancing on a straw – an Eurasian wren Troglodytes troglodytes (photo: AOG).

My own approach towards the Grand Ecological Challenge started with similar thoughts and concerns as those raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems, I found the theoretical toolbox void of key instruments for building realistic dynamics. In fact, the current methods were in many respects even seriously misleading, due to what I considered some key dissonant model assumptions.

In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: “MRW and Ecology – Part IV: Metapopulations?“), for the time being culminating with my recent series of posts on “Simulating Populations” (parts I–X).

I guess the main take-home message from the present post is:

  • Without a realistic understanding; i.e., modelling power, of individual dispersion over space, time and scale, it will be futile to build a theoretical framework with deep explanatory and predictive value with respect to population dynamics and population ecology. In other words, some basic aspects of system complexity at the “particle level” need to be resolved.
  • Since we in this respect typically are considering either the accumulation of space use locations during a time interval (e.g., a series of GPS fixes) or a population’s dispersion over space and how it changes over time, we need a proper formulation of the statistical mechanics of these processes. In other words, when simplifying extremely complicated systems into a smaller, manageable set of variables, parameters and key interactions, we have to invoke the hidden layer.
  • With a realistic set of basic assumptions in this respect, the modelling framework will in due course be ready to be applied to issues related to the Grand Ecological Challenge – as so excellently summarized by Pimm in 1991. In other words, before we can have any hope of a detailed prediction of the local or regional fate of a given species or community of species under a given set of circumstances, we need to build models that are void of the classical system assumptions that have cemented the belief in the so-called balance of nature.


*) The need to rethink the concept of carrying capacity and the accompanying “balance” (density dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If “standard” population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the sum equals the parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the condition at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.
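A minimal numeric sketch of why the mean field principle fails under local (fine-scale) density regulation; the logistic growth rule and the parameter values are illustrative only, not the Zoomer model itself:

```python
import numpy as np

def coarse_growth(subpixels, r=0.5, CC=100.0):
    """Apply logistic (density-dependent) growth within each fine-scale
    sub-pixel, then report the coarse-pixel mean density afterwards."""
    n = np.asarray(subpixels, float)
    return np.mean(n + r * n * (1 - n / CC))

# Two coarse pixels with the same mean density (50) but different
# intra-pixel heterogeneity:
homogeneous = [50, 50, 50, 50]
heterogeneous = [10, 30, 70, 90]

print(coarse_growth(homogeneous))    # 62.5
print(coarse_growth(heterogeneous))  # 57.5: the same coarse density grows less
```

Because the growth rule is nonlinear in local density, the coarse-pixel outcome depends on the hidden intra-pixel distribution, so “the sum equals the parts” no longer holds.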

**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).


Gautestad, A. O. 2015, Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence Indianapolis, Dog Ear Publishing.

O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.

Pimm, S. L. 1991, The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.

Strong, D.E., Simberloff, D., Abele, L.G. & Thistle, A.B. (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton,Princeton University Press.

Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.

The Hidden Layer

Focusing on the statistical pattern of space use without acknowledging the biophysical model for the process will create much confusion and unnecessary controversy. Ecologists are now forced to get a better grip on concepts from statistical mechanics than earlier generations. For example, to understand the transformation from data on actual behaviour to pattern analysis of space use, the concept of the hidden layer represents the first gate to pass.

Research on animal movement and space use has always had a central place in ecology. However, as more field data, better computers and more sophisticated statistical methods have become available, some old dogma have come under attack. Specific theoretical aspects of this quest for improved model realism have emerged from the rapidly growing cooperation between biologists and physicists in the emerging field of macro-level biophysics. The so-called Lévy flight foraging hypothesis is one example. And, of course, I can’t resist mentioning the MRW theory.

A booted eagle Hieraaetus pennatus is triggering a flock of spotless starlings Sturnus unicolor to show swarming behaviour. Malaga river delta, December 2017. Photo: AOG.

In 1985 Charles Krebs described ecology as the scientific study of the interactions that determine the distribution and abundance of organisms. In an ethological context animal space use is studied on two levels – tactical and strategic. The tactical level regards understanding individual biology and behavioural ecology on a moment-to-moment temporal scale. Strategic space use adds an extra layer of complexity to the tactical behaviour. In a simplistic manner we may refer to this layer as the animal’s state at a given moment in time; for example, whether it is hungry or not (e.g., in hunting mode). Strategy also involves processing of memory-based goals. Strategies are executed at coarser time scales than tactics. Some of the interaction between tactics and strategy may then – under specific conditions (see below) – be transformed into dynamic models at the tactical level; so-called mechanistic models, which consist of a set of executable rules covering the respective cognitive and environmental conditions. Validating the model dynamics and resulting statistical patterns against real animal data then rates the degree of model realism. For example, realistic tactical models have been developed to cast light on the “clumping behaviour” (dense swarming) of flocks of birds threatened by a raptor.

The myriad of rules that influence animal movement makes detailed modelling an impossible task, and would anyway only lead to a descriptive picture with no value to ecological hypothesis testing. In fact, the signature of successful modelling is simplification. Thus, only specific aspects of the reference individual’s behaviour can be included and scrutinized.

The present post addresses one particular aspect of system simplification: coarse-graining the temporal scale. This approach implies a qualitative change in how the space use system is observed and analyzed. Actually, temporal coarse-graining is forced upon us when studying animal space use from sampling an individual’s successive displacements as a series of locations (fixes) during a given period of time. During each inter-fix interval the observed displacement is the resultant vector of a myriad of intermediate and unobserved events. What has happened to the moment-to-moment kind of behavioural ecology? It has become buried below the hidden layer.
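This aggregation step can be illustrated with a toy path; the Gaussian micro-steps and all names are my own illustration, not a model of real movement:

```python
import numpy as np

rng = np.random.default_rng(1)
micro = rng.normal(size=(1000, 2))   # 1,000 unobserved micro-displacements
path = np.cumsum(micro, axis=0)      # the animal's actual fine-scale path

fixes = path[::100]                  # sample one fix per 100 micro-steps
observed = np.diff(fixes, axis=0)    # displacement vectors "above" the layer

# Each observed displacement is exactly the vector sum of the 100
# intermediate micro-steps that stay hidden below the sampling interval:
print(np.allclose(observed[0], micro[1:101].sum(axis=0)))  # True
```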

At the surface of this hidden layer you lose sight of behavioural details (like raptor response and swarming rules) but you gain access to an alternative perspective of movement and space use. Alternative statistical descriptors are emerging at this temporally coarser scale, following the laws of statistical mechanics. What is analyzed above the hidden layer is the over-all pattern from many displacements events that are aggregated into a spatial scatter of fixes.

For example, you may coarse-grain both the temporal and spatial system dimensions, and study the aggregated distribution of fixes at the spatial scale of virtual grid cells (pixels) and the temporal scale of the fix sampling period. The spatio-temporal variations in intensity of space use within the actual space-time extents then allow for modelling and hypothesis testing, but now using statistical-mechanical descriptors of space use intensity. These descriptors are either not valid below the hidden layer (e.g., the information content of local density of fixes) or they have an alternative interpretation (e.g., movement as a “step” versus movement as a resultant vector for a given interval and location). Both levels of analysis require large sets of input to allow for statistical treatment.
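A minimal sketch of such spatial coarse-graining; the function name and the toy coordinates are hypothetical:

```python
import numpy as np
from collections import Counter

def fix_density_per_pixel(fixes, pixel_size):
    """Coarse-grain a fix series to a virtual grid: map each (x, y)
    location to its embedding pixel and count fixes per pixel. The
    counts describe local space-use intensity above the hidden layer;
    the number of non-empty pixels is the incidence I."""
    fixes = np.asarray(fixes, float)
    cells = np.floor(fixes / pixel_size).astype(int)
    return Counter(map(tuple, cells))

counts = fix_density_per_pixel([(1.0, 1.0), (1.5, 1.2), (9.0, 9.0)], pixel_size=2.0)
print(counts[(0, 0)])  # 2 fixes share the pixel at the origin
print(len(counts))     # 2 non-empty pixels (incidence I)
```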

Why is the hidden layer concept and the statistical-mechanical approach more important to relate to today than in earlier decades? The short answer is the realization – seeded by better and more extensive data – that animal space use involves more than a couple of universality classes of movement (see this post). In fact, in my book, papers and blog posts I have detailed eight classes, most of which will be unfamiliar to most readers.

To understand space use that is influenced by spatial memory and scale-free movement, statistical-mechanical modelling is a prerequisite for realistic representation of such complex systems, unless you limit your perspective to a short-term behavioural bout within a very localized arena; in other words, “a single piece of a jigsaw-puzzle of space use dynamics”. For example, if you zoom closely into a small segment of a circle you observe an approximately straight line. Take a step outwards, and you are facing a qualitatively different geometry – the mathematics of a curve and finally a full circle. Stubbornly staying within the linear framework when analyzing more extensive objects than what you observe at fine scales will force you into a corner filled with paradoxes.

Fine-grained and coarse-grained analyses of animal space use are complementary approaches to the same system.


MRW and Ecology – Part VII: Testing Habitat Familiarity

Consider having a series of GPS fixes, and you wonder whether the individual was utilizing familiar space during your observation period – or started building site familiarity around the time when you started collecting data. Simulation studies of Multi-scaled random walk (MRW) show how you may cast light on this important ecological aspect of space use.

First, you should of course test for compliance with the MRW assumptions: (a) site fidelity with no “distance penalty” on return events, (b) scale-free space use over the spatial range covered by your data, and (c) uniform space utilization on average over this scale range. A single test in the MRW Simulator, the A(N) regression, casts light on all these aspects. First, you seek to optimize pixel resolution for the analysis (estimating the Characteristic scale of space use, CSSU). Next, if you find “Home range ghost” compliance; i.e., incidence I expands proportionally with the square root of the sample size of fixes, your data support (a) spatial memory utilization with no distance penalty, due to sub-diffusive and non-asymptotic area expansion, (b) scale-free space use, due to linearity of the log[I(N)] scatter plot, and (c) equal inter-scale weight of space use, due to slope ≈ 0.5.
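A sketch of the regression behind points (b) and (c), assuming the (N, I) table has already been produced; the function name and the idealized data are illustrative:

```python
import numpy as np

def ghost_exponent(N, I):
    """Fit the Home range ghost relation log(I) = log(c) + z*log(N)
    and return (z, c); z close to 0.5 supports MRW-compliant,
    memory-influenced space use."""
    z, logc = np.polyfit(np.log2(N), np.log2(np.asarray(I, float)), 1)
    return z, 2 ** logc

N = np.array([4, 16, 64, 256, 1024])
I = 3.0 * np.sqrt(N)             # an idealized MRW-compliant series
z, c = ghost_exponent(N, I)
print(round(z, 3), round(c, 3))  # 0.5 3.0
```

With empirical data the scatter will of course not fit exactly, and the fit should be inspected for log-log linearity, not only for the slope value.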

Supposing your data confirmed MRW, how to test for time-dependent strength of habitat familiarity? Consider the following simulation example, mimicking space use during a season and under constant environmental conditions.

The red dots show log(N,I) for various sample sizes up to the total set of 11,000 fixes. Each dot represents the average I for the respective N from the two methods, continuous sampling and frequency sampling (counteracting the autocorrelation effect; see a previous post). However, analyzing the first 1,000 fixes separately (black dots) consistently revealed a sloppier space use in terms of aggregated incidence at a given N, relative to the total season. The next 1,000 fixes, however, were compliant with the total series both with respect to slope and y-intercept (CSSU) (green dots).

The reason for the discrepancy in space use during the initial period of fix sampling* was, in the present scenario, the actual simulation condition: site familiarity was set to develop “from scratch” simultaneously with the onset of fix collection. I define the strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to**. At the start of the sampling period, the underlying path is short in comparison to the total path traversed during the whole season, and – crucially – return steps targeted previous locations from the actual simulation period only, not locations prior to this start time. In other words, the animal was assumed to settle down in the area at the point in time when the simulation commenced.

To conclude, if your data show CSSU and slope of similar magnitude in the early and later phases of data collection, you sampled an individual with a well-established memory map of its environment during the entire observation period. The implicit assumption for this conclusion is of course that the environmental conditions were constant during the entire sampling period, including the initial phase. Using empirical rather than synthetic data means that additional tests would have to be performed to cast light on this aspect.


*) The presentation above reflects the pixel resolution that was optimized for the total series. The first 1,000 fixes showed a more coarse-grained space use, reflected in a 50% larger CSSU scale (not shown: the optimal pixel size was 50% larger for this part of the series) despite constant movement speed and return rate for the entire simulation period. In this scenario a larger CSSU [a coarser optimal pixel for the A(N) analysis] signals a less mature habitat utilization in the home range’s early phase. The CSSU was temporarily inflated during build-up of site familiarity, but – somewhat paradoxically – the accumulated number of fix-embedding grid cells (incidence) for a given N at this scale was smaller. These two effects, reflecting the degree of habitat familiarity during home range establishment, should be considered transient.

**) Two definitions should be specified:

  • I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to.
  • I define strength of site fidelity as proportional to the return frequency.

Both definitions rest on the assumptions of no distance penalty on return targets and no time penalty on returns; i.e., an infinite spatio-temporal memory horizon relative to the actual sampling period.

The MRW Simulator: Importing Your Own GPS Data

You have a large database of GPS fixes, and you wonder whether your animals have utilized their habitat in accordance with standard theory of mechanistic movement (the null hypothesis) or in compliance with the MRW theory (the alternative hypothesis). The MRW Simulator is tailor-made for this kind of test. If MRW is verified you may proceed with various analyses of behavioural ecology under the alternative statistical-mechanical theory. The initial test procedure is simple: (1) import your data, (2) prepare for a test of model compliance by applying one or more built-in algorithms, and (3) import the generated data tables into third-party packages (R, Excel, etc.) for statistical testing.

You can import data into the MRW Simulator by preparing a two-column text file, using comma or TAB as delimiter between the two coordinate values of successive locations.
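For example, such a file might be generated as follows (the coordinates are made-up examples):

```python
# A minimal sketch of preparing the two-column import file, one fix
# per line with TAB as delimiter between x and y.
fixes = [(1234.5, 987.6), (1250.1, 995.2), (1199.8, 1003.4)]

with open("import.txt", "w") as f:
    for x, y in fixes:
        f.write(f"{x}\t{y}\n")
```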

By default you should use the file name import.txt, but other names are also allowed (given the correct data structure). Place the file in the data folder (…/mov) and choose the menu “File | Import data from txt or csv”.

You are asked to define the file name for the imported data. By default, the name is set to “seed1.txt”. During import the original series is centred on coordinate (0,0), the middle of the arena window. The arena size for the analysis is automatically adjusted to twice the space needed to display the set of imported fixes.

After import, you set a couple of check boxes on the MRW Simulator’s user interface in accordance with the User guide before clicking the run button (the MRW Simulator re-formats your imported data to its own format and saves the result in the text file levy1.txt). In particular, setting simulation series length to zero and choosing “use seed1.txt” as the first part of the simulated series ensures that only your own data are reformatted. Within a fraction of a second the procedure exits without adding simulated data to the series, and you are ready to perform various analytical tasks on the levy1.txt file (see menu “Analyze”).

The procedure “A(N) regression” is typically applied to analyze space use at the home range scale. It is a convenient choice to test for MRW compliance of your data.

You are asked which of the Levy*.txt files to analyze for fix-filling area as a function of sample size N (the number of fixes in the Levy*.txt file). Next, the analysis is executed in accordance with the scales set in “Arena extent for analysis”, “Arena grain for analysis” and “Pixel (intra-grain resolution)” in the MRW Simulator’s user interface.

In this procedure, set extent = grain. Pixel regards a ratio: the relative resolution grain/pixel. For example, setting pixel = 10 performs the analysis at a virtual grid scale of 1/10 of the arena scale; i.e., 10×10 grid cells. See the User guide for details.

The progress is shown below the arena window. The algorithm is counting incidence over a range of sample sizes N at the given pixel resolution; first by sequential (continuous) sampling up to Ntotal and then by frequency (uniform) sampling over the total series. Search my blog or read my book for these concepts.
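The two sampling schemes can be sketched as follows; this is a simplified stand-in for the simulator’s internal algorithm, with illustrative names:

```python
import numpy as np

def sample_fixes(series, N, method="continuous"):
    """Draw N fixes from a series: 'continuous' takes the first N in
    sequence; 'frequency' takes every len(series)//N-th fix, spreading
    the sample uniformly over the whole observation period."""
    series = np.asarray(series)
    if method == "continuous":
        return series[:N]
    step = len(series) // N
    return series[::step][:N]

series = np.arange(1000)  # stand-in for 1,000 fix indices
print(sample_fixes(series, 5, "continuous").tolist())  # [0, 1, 2, 3, 4]
print(sample_fixes(series, 5, "frequency").tolist())   # [0, 200, 400, 600, 800]
```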

The result is saved in a text file containing a table of incidence (non-empty grid cells at the given pixel resolution) as a function of sample size N under the two sampling conditions. These data may then be imported into, for example, Excel for graphical presentation and statistical analysis; for example, a regression.

If you find a discrepancy between the scatters from the two sampling methods (you normally do!), your data is probably serially autocorrelated. To remove the autocorrelation effect, take the average incidence for the respective magnitudes of N, as explained in this post. Conveniently, the MRW Simulator does this task for you (you find the averaging table below the tables for continuous and frequency sampling). This averaging procedure also adjusts for a “drifting home range” scenario, which also produces autocorrelation.
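The averaging step can be sketched like this; the incidence values are toy numbers, not simulator output:

```python
import numpy as np

def average_incidence(I_continuous, I_frequency):
    """Average the incidence from continuous (sequential) sampling and
    frequency (uniform) sampling at matching N, to counteract serial
    autocorrelation in the fix series."""
    return (np.asarray(I_continuous, float) + np.asarray(I_frequency, float)) / 2

# Continuous sampling typically under-represents incidence at small N
# when fixes are autocorrelated; frequency sampling over-represents it.
I_cont = [2, 6, 14, 30]
I_freq = [4, 8, 18, 34]
print(average_incidence(I_cont, I_freq).tolist())  # [3.0, 7.0, 16.0, 32.0]
```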

Does the result support MRW? First, you must verify presence of a characteristic scale of space use (CSSU), which is a property of scale-free movement under influence of spatial memory under the “parallel processing” postulate.

To test for CSSU you should experiment with various pixel resolutions and see if the log[I(N)] pattern converges to a slope ∼0.5 at a given scale. If so, CSSU ≈ (pixel scale)^2 = c.

If you don’t find reasonably good compliance with linearity of log[I(N)] = log(c) + 0.5*log(N) or the slope exceeds 0.5, try a coarser pixel resolution. If the slope is smaller than 0.5, try a finer pixel resolution.
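This trial-and-error search can be sketched as a simple scan over candidate pixel resolutions; all names and the synthetic (N, I) series below are illustrative:

```python
import numpy as np

def best_pixel_scale(series_by_pixel):
    """Given {pixel: (N, I)} results from A(N) runs at several pixel
    resolutions, pick the pixel whose fitted log-log slope is closest
    to 0.5; CSSU is then approximately that pixel scale squared."""
    def slope(N, I):
        return np.polyfit(np.log2(N), np.log2(I), 1)[0]
    return min(series_by_pixel,
               key=lambda p: abs(slope(*series_by_pixel[p]) - 0.5))

N = np.array([4, 16, 64, 256])
runs = {
    5:  (N, N ** 0.8),   # too fine a pixel: slope inflated above 0.5
    10: (N, N ** 0.5),   # converges on slope 0.5
    20: (N, N ** 0.3),   # too coarse a pixel: slope deflated below 0.5
}
print(best_pixel_scale(runs))  # 10
```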

If this test of convergence to log-linearity with slope ≈ 0.5 fails, you have probably supported either one of the null models; i.e., Brownian motion-like or Lévy-like space use void of spatial memory influence (slope ≈ 1, which is quite insensitive to a change of pixel scale), or the classic paradigm: home range movement under the influence of a constraining border zone [I(N) showing an area asymptote rather than a power law expansion with exponent close to 0.5].

The MRW Simulator 2.0 will now be made available as a free add-on tool for all buyers of my book. If you purchase it through my shopping cart at, you will get the program and its user guide bundled with the book. Existing book owners: contact me at and I’ll fix you a personal download link – free of charge. You may purchase by invoice – see top of this page!

The MRW Simulator – Finally Available!

Back in 1997 I started programming the foundation for a personal simulation environment for Multi-scaled random walk, the MRW Simulator. Through countless updates over these 20 years the program has gradually matured into a version that is finally ready for limited distribution to peers in the field of animal space use research.

The MRW Simulator is a Windows©-compliant tool to generate various classes of animal movement (self-produced data series) or to import existing data series. The generated or imported data – consisting of a sequence of (x,y) coordinates – may then be subjected to various kinds of statistical protocols through simple menu clicks. The generated text files are then typically exported for detailed analyses and presentation of results in other applications, like the R package or Excel©.

While R is based on an interpreted language, the MRW Simulator is a fully compiled program. Thus, movement paths of up to 20 million steps may be simulated within minutes of execution time, rather than hours or days. A multi-scaled analysis of data over a substantial scale range is almost prohibitive in an interpreted system due to the algorithm’s long execution time. In the MRW Simulator such analyses are performed in a fraction of this time. Thus, R and the MRW Simulator may supplement each other: R is strong on statistics and algorithmic freedom; the MRW Simulator is strong on time-effective execution of a small set of basic but typically time-consuming algorithms.

The opening screen contains menus (1), a window where the simulated or imported set of fixes is displayed (2) and various command buttons, check boxes and information fields (3–14).

To get your first experience with the system, try out the most basic settings for a simulation. First, choose among the classes of movement: Lévy walk/MRW, Correlated random walk, and Composite random walk (a superposition of two correlated random walks) (3). The difference between LW and MRW is explained below.

For your first test, choose Lévy walk / MRW (3), with the default settings for fractal dimension (D=1) and maximum displacement length between successive steps (truncation = 1,000,000 length units). D=1 simulates the condition where the animal on average utilizes its environment with similar scale-free weight at each intermediate scale from unit step length to maximum step length (setting 1<D<=2 skews space use towards finer-scale space use at the expense of coarser scales, again in average terms).
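For intuition, a truncated scale-free step generator can be sketched as follows. This is my own illustration, not the simulator's internal code; I assume steps drawn from a Pareto-like distribution whose tail exponent equals the fractal-dimension setting D, hard-truncated at the maximum displacement length:

```python
import numpy as np

def truncated_pareto_steps(n, D=1.0, l_min=1.0, l_max=1_000_000.0, rng=None):
    """Draw n step lengths from a Pareto (power-law) distribution
    truncated to [l_min, l_max], via inverse-transform sampling.
    Hypothetical stand-in for the simulator's step generator; the tail
    exponent is assumed equal to the fractal-dimension setting D."""
    rng = rng or np.random.default_rng()
    u = rng.uniform(size=n)
    a, b = l_min ** -D, l_max ** -D
    return (a - u * (a - b)) ** (-1.0 / D)

def levy_path(n, **kw):
    """Assemble a 2D path: scale-free step lengths, uniform directions."""
    rng = np.random.default_rng(kw.pop("seed", None))
    steps = truncated_pareto_steps(n, rng=rng, **kw)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.cumsum(np.c_[steps * np.cos(theta), steps * np.sin(theta)], axis=0)
```

Raising D above 1 makes long steps rarer, which corresponds to the description above of skewing space use towards finer scales.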

In a column of text fields (4) you may define conditions like series length, properties for the simulated path, size of the arena and grid resolution for the subsequent analysis. For example, the difference between Lévy walk and MRW is defined by setting a return frequency >0 for MRW (implying targeted return events to previous locations at the chosen average frequency). For this first run, just keep the default values.
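The return-frequency mechanic can be illustrated outside the simulator. Below is a minimal Python sketch (my own illustration, not the simulator's code; `mrw_path` is a hypothetical name): an ordinary random walk in which, at the chosen average frequency, the walker makes a targeted return to a uniformly chosen earlier location.

```python
import numpy as np

def mrw_path(n, return_freq=0.01, seed=None):
    """Minimal MRW-flavoured walk: with probability return_freq per step,
    jump back to a uniformly chosen previous location (spatial memory);
    otherwise take an ordinary Gaussian step. return_freq=0 degenerates
    to memory-less movement, as in a plain random walk."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n, 2))
    for t in range(1, n):
        if t > 1 and rng.random() < return_freq:
            pos[t] = pos[rng.integers(t)]             # targeted return
        else:
            pos[t] = pos[t - 1] + rng.normal(size=2)  # ordinary step
    return pos
```

Even a small return frequency is enough to build up self-reinforcing site fidelity over a long series, which is the key contrast with a memory-free Lévy walk.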

Later you will learn how to additionally modify the conditions by including a pre-defined series of coordinates (in a file called seed*.txt, where * denotes an incremental number) (5). At this stage, just keep the default settings.

By default the simulation runs in a homogeneous environment. The set of “Habitat heterogeneity” fields (6) allows defining the corners of a rectangle where the model animal behaves in a more “fine-grained” manner by reducing average movement speed. Other ecological aspects may also be defined, like a method to account for temporal and local resource exhaustion. As a start, just keep defaults.

Now, click the “Single-series” command button (7). You should see a number of fixes appearing as dots in the arena window.

The number of fixes reflects the ratio of the total series length to the observation interval; i.e., “Number of fixes” (Norig = 1,000,000) multiplied by an average “Observation frequency” (p = 0.001). This leads to an observed series – a path sample – of ca. 1,000 fixes, which are displayed in the observation window.
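The sampling arithmetic can be mimicked with a simple thinning function (a hypothetical helper of my own, not part of the simulator): keeping each of Norig steps with probability p yields on average Norig × p fixes, e.g. 1,000,000 × 0.001 ≈ 1,000.

```python
import numpy as np

def observe(path, p=0.001, seed=None):
    """Thin a full path down to a 'fix' sample by keeping each step
    with probability p (the average observation frequency)."""
    rng = np.random.default_rng(seed)
    return path[rng.random(len(path)) < p]
```

A GPS sampling regime with a fixed interval would instead correspond to deterministic slicing, `path[::int(1 / p)]`; both give the same expected number of fixes.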

Before moving on to your first data analysis, observe that the simulation’s default settings are defined by “schemes”, which can be pre-loaded from a dropdown menu (8). You may also run a number of replicate simulations in an automated sequence (9). The arena may be copied to the clipboard (10) for subsequent pasting into other applications like a Word document, an Excel sheet, etc.

The “Data path” field (11) displays the folder where the system saves and retrieves data. By default, the data resides in a subfolder, “\mov”, under the location of the MRW simulator’s EXE file. This location is set during program setup.

The field “Fractal resolution range” (12) defines the scale range over which a subsequent analysis of the scatter of fixes – selected from the Analysis menu – will be performed by the so-called box counting method.
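The box counting method itself is straightforward to sketch. The following is my own Python illustration of standard box counting over a point scatter (`box_count` and `fractal_dimension` are illustrative names, not the simulator's routines): count occupied boxes at each scale, and estimate the fractal dimension as the negative slope of log(count) against log(scale).

```python
import numpy as np

def box_count(fixes, scales):
    """For each box size in 'scales', count the grid boxes that contain
    at least one fix."""
    counts = []
    for s in scales:
        boxes = {(int(x // s), int(y // s)) for x, y in fixes}
        counts.append(len(boxes))
    return np.array(counts)

def fractal_dimension(fixes, scales):
    """Estimate D from a log-log regression of box counts on box size."""
    counts = box_count(fixes, scales)
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return -slope
```

As a sanity check, points scattered densely along a straight line should return D ≈ 1, while a space-filling scatter approaches D ≈ 2.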

The field “A(N)” (13) shows the progress of another analysis, total area (incidence) as a function of sample size, N.

The counter (14) is automatically incremented each time you click the “Single-series” button (7). TIP: To repeat (and overwrite) an existing series, edit the counter number (14) to one decrement below the actual series. For example, to re-execute data series number 5, edit the counter field to “4” before clicking the button (7). To re-execute series 1, edit the field to “-1” (the number zero is reserved as the initial setting number).

The data file containing “observed” fixes resides in the \mov folder (see above), named “levy*.txt” (* = 1, 2, 3, …). It contains three columns of data: x-coordinate, y-coordinate, and inter-step distance.
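Such a three-column text file is easy to pull into other applications for further analysis. A minimal Python sketch, assuming whitespace-separated columns (adjust the delimiter if the export turns out to be tab- or comma-separated):

```python
import numpy as np

def load_fixes(path):
    """Read a simulator export with columns: x, y, inter-step distance."""
    data = np.loadtxt(path)
    x, y, step_dist = data[:, 0], data[:, 1], data[:, 2]
    return x, y, step_dist
```

The equivalent one-liner in R would be `read.table(path)`, matching the export-to-R workflow mentioned earlier.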


In the next blog post I’ll show some of the menu procedures of the MRW Simulator, including how to import your own GPS space use series for analysis on-the-fly.