Positive and Negative Feedback Part I: Individual Space Use

The standard theories on animal space use rest on some shaky behavioural assumptions, as elaborated on in my papers, in my book and here in my blog. One of these assumptions regards the assumed lack of influence of positive feedback; in particular, the self-reinforcing effect that emerges when individuals move around with a cognitive capacity for both temporal and spatial memory utilization. The common ecological methods to study individual habitat use, like the utilization distribution (a kernel density distribution with isopleth demarcations) and use/availability analysis, explicitly build on statistical theory that not only disregards such positive feedback, but in fact requires that this emergent property does not influence the system under scrutiny.

Unfortunately, most memory-enhanced numerical models to simulate space use are rigged to comply with negative rather than positive feedback effects. For example, the model animal successively stores its local experience with habitat attributes while traversing the environment, and it uses this insight in the sequential calculation of how long to stay in the current location and when to seek to re-visit some particularly rewarding patches (Börger et al. 2008; van Moorter et al. 2009; Spencer 2012; Fronhofer et al. 2013; Nabe-Nielsen et al. 2013). In other words, the background melody is still to maintain compliance with the marginal value theorem (Charnov 1976) and the ideal free distribution dogma, both of which are negative feedback processes rather than self-reinforcing processes that would counteract such a tendency.

Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive.
https://en.wikipedia.org/wiki/Negative_feedback

A common curlew Numenius arquata foraging on a field within its summer home range. Anecdotally, one may observe that a specific individual tends to revisit not only specific fields while foraging, but also specific parts of these fields. If this site fidelity is influenced by a growing tendency to prefer familiar space at the expense of potentially equally rewarding, but less familiar, patches, a positive feedback (self-reinforcing space use) has emerged. This effect then interferes with the traditional ecological factors, like selection based on habitat heterogeneity, in a complex manner. Photo: AOG.

The above definitions follow the usual path of explaining negative feedback as “good” and positive feedback as something scary (I will return to this misconception in a later post in this series). It echoes the prevailing “Balance of nature” philosophy of ecology, which I’ve criticized on several occasions (see, for example, this post).

In a previous post, “Home Range as an Emergent Property“, I described how memory map utilization under specific ecological conditions may lead to a self-reinforcing re-visitation of previously visited locations (Gautestad and Mysterud 2006, 2010); in other words, a positive feedback mechanism*. Contemporary research on animal movement covering a wide range of taxa, scales, and ecological conditions continues to verify site fidelity as a key property of animal space use.

I use a literature search to test an assumption of the ideal models that has become widespread in habitat selection theory: that animals behave without regard for site familiarity. I find little support for such “familiarity blindness” in vertebrates.
Piper 2011, p1329.

Obviously, in the context of spatial memory and site fidelity it should be an important research theme to explore to what extent, and under which conditions, negative and positive feedback mechanisms shape animal space use.

Positive feedback from site fidelity will fundamentally influence the analysis of space use. For example, two patches with a priori similar ecological properties may end up being utilized with disproportionate frequency due to initial chance effects regarding which patch happened to gain familiarity first**. Further, if the animal is utilizing the habitat in a multi-scaled manner (which is easy to test using my MRW-based methods), this grand error factor in a standard use/availability analysis cannot be expected to be statistically hidden simply by studying the habitat at a coarser spatial resolution within the home range.
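To make the role of initial chance effects concrete, here is a minimal toy sketch (my own illustration, not the MRW model itself), assuming two a priori identical patches and a rule where the probability of the next visit going to a patch is proportional to the familiarity already accumulated there (a Pólya-urn-like reinforcement):

```python
import random

def familiarity_run(n_visits=1000, seed=None):
    """Toy positive feedback: two identical patches, where each visit to a
    patch increases the chance of future visits to that same patch."""
    rng = random.Random(seed)
    familiarity = [1.0, 1.0]              # equal starting weights, patch A and B
    for _ in range(n_visits):
        p_a = familiarity[0] / sum(familiarity)
        patch = 0 if rng.random() < p_a else 1
        familiarity[patch] += 1.0         # the visit reinforces that patch
    return familiarity

# A few replicates: the final split between the two patches differs purely
# by chance, and an early random surplus tends to persist (self-reinforcement).
for s in range(5):
    a, b = familiarity_run(seed=s)
    print(f"replicate {s}: patch A {a:.0f}, patch B {b:.0f}")
```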

Despite this theoretical-empirical insight, the large majority of wildlife ecologists still tend to use classic methods resting on the negative feedback paradigm to study various aspects of space use. The rationale can be described by two end-points on a continuum: either one ignores the effect of self-reinforcing space use (assuming, or hoping, that it does not significantly influence the result), or one uses these classic methods while holding one’s nose.

The latter category accepts the prevailing methods’ basic shortcomings – either based on field experience or inspired by reading about alternative theories and methods – but the strong force of conformity in the research community hinders bold steps out of the comfort zone. Hence, the paradigm prevails. Again I can’t resist referring to a previous post, “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“.

Within the prevailing modelling paradigm, implementing spatial memory utilization in combination with positive feedback-compliant site fidelity is a mathematical and statistical nightmare – if at all possible. However, as a reader of this blog you are at least aware that some numeric models have been developed lately, outside the prevailing paradigm. These approaches not only account for memory map utilization but also embed the process of positive feedback in a scale-free manner (I refer to our papers and to my book for model details; see also Boyer et al. 2012).

 

NOTES

* The paper explores space use under the premise of positive feedback during superabundance of resources, in combination with negative feedback during temporal and local over-exploitation.

** In Gautestad and Mysterud (2010) I described this aspect as the distance-from-centre effect; i.e., the utilization distribution falls off at the periphery of utilized patches independently of a similar degradation of preferred habitat.

 

REFERENCES

Börger, L., B. Dalziel, and J. Fryxell. 2008. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecology Letters 11:637-650.

Boyer, D., M. C. Crofoot, and P. D. Walsh. 2012. Non-random walks in monkeys and humans. Journal of the Royal Society Interface 9:842-847.

Charnov, E. L. 1976. Optimal foraging: the marginal value theorem. Theoretical Population Biology 9:129-136.

Fronhofer, E. A., T. Hovestadt, and H.-J. Poethke. 2013. From random walks to informed movement. Oikos 122:857-866.

Gautestad, A. O., and I. Mysterud. 2006. Complex animal distribution and abundance from memory-dependent kinetics. Ecological Complexity 3:44-55.

Gautestad, A. O., and I. Mysterud. 2010. Spatial memory, habitat auto-facilitation and the emergence of fractal home range patterns. Ecological Modelling 221:2741-2750.

Nabe-Nielsen, J., J. Tougaard, J. Teilmann, K. Lucke, and M. C. Forchhammer. 2013. How a simple adaptive foraging strategy can lead to emergent home ranges and increased food intake. Oikos 122:1307-1316.

Piper, W. H. 2011. Making habitat selection more “familiar”: a review. Behavioral Ecology and Sociobiology 65:1329-1351.

Spencer, W. D. 2012. Home ranges and the value of spatial information. Journal of Mammalogy 93:929-947.

van Moorter, B., D. Visscher, S. Benhamou, L. Börger, M. S. Boyce, and J.-M. Gaillard. 2009. Memory keeps you at home: a mechanistic model for home range emergence. Oikos 118:641-652.

Analytical Sensitivity to Fuzzy Fix Coordinates

In empirical data, GPS fixes are never exact positions. A “fuzziness field” will always be introduced due to uncertain geolocation. When analyzing a set of fixes in the context of multi-scaled space use, are the parameter estimates sensitive to this kind of statistical error? Simultaneously, I also explore the effect of constraining the potential home range by disallowing sallies to the outermost range of the available area.

To explore the triangulation error effect on space use analysis I have simulated Multi-scaled random walk in a homogeneous environment with N = 10,000 fixes (of which the first 1,000 fixes were discarded) under two scenarios: a “sharp” location set (no uncertainty polygons), and strong fuzziness. The latter introduced a random displacement to each x-y coordinate with a standard deviation (SD) of magnitude approximately equal to the system condition’s Characteristic scale of space use (CSSU). Displacements to the outermost parts of the given arena were disallowed, to study how this constraint may influence the analyses. I then ran the following three algorithms in the MRW Simulator: (a) analysis of A(N) at the home range scale, (b) analysis of A(N) at a local scale (splitting the home range into four quadrants), and (c) analysis of the fix scatters’ fractal properties.
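For readers who want to emulate this kind of location error on their own fix series, a minimal sketch (my own illustration; the function and variable names are not taken from the MRW Simulator) could look like this, assuming the fixes are stored as an (N, 2) array:

```python
import numpy as np

def add_fix_fuzziness(fixes, sd, seed=None):
    """Add independent Gaussian location error to each coordinate.

    fixes : (N, 2) array of x-y coordinates ("sharp" positions)
    sd    : standard deviation of the error per coordinate; in the scenario
            above the SD was chosen to be of roughly CSSU magnitude."""
    rng = np.random.default_rng(seed)
    return fixes + rng.normal(0.0, sd, size=fixes.shape)

def clip_to_core(fixes, centre, half_width):
    """Crude stand-in for the 'no displacements to the outermost arena'
    condition: clip coordinates to a square core around the centre."""
    return np.clip(fixes, centre - half_width, centre + half_width)

# Example with synthetic stand-in data (not an actual MRW series):
sharp = np.random.default_rng(1).normal(0.0, 5000.0, size=(9000, 2))
fuzzy = add_fix_fuzziness(sharp, sd=1182.0, seed=2)
fuzzy = clip_to_core(fuzzy, centre=0.0, half_width=20000.0)
```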

The following image shows the sharp fix set and the strongest fuzziness condition.

By visual inspection it is easy to spot the effect of the spatial error (SD = 1182 length units, upper row to the right). However, the respective A(N) analyses at the home range scale generated a CSSU estimate that was only 10% larger in the fuzzy set (linear scale). When superimposing cells of CSSU magnitude onto each fix, the home ranges appear quite similar in overall appearance and size. This was to be expected, since fuzziness influences fine-resolution space use only.

Visually, both home range areas appear somewhat constrained with respect to range, due to the condition of disallowing displacements to the peripheral parts of the defined arena (influencing less than 1% of the displacements).

A(N) analysis (the Home range ghost). The two conditions appeared quite similar in plots of log[I(N)], where I is the number of fix-embedding pixels at optimized pixel resolution, as described in previous posts.

However, for the fuzzy series there is a more pronounced break-point, with a transition towards exponent ∼0.5 at sample sizes beyond log2(N) ≈ 3. This break-point “lifted” the regression line somewhat for the fuzzy series, leading to a slightly larger intercept with the y-axis when interpolating towards log(N) = 0. This difference between the two conditions with respect to the y-intercept, Δlog(c) from the home range ghost formula log[I(N)] = log(c) + z*log(N), also defines the difference in CSSU when comparing sharp and fuzzy data sets. Recall that CSSU ≡ c.
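A minimal sketch of the regression behind this comparison, assuming you already have incidence I tabulated against sample size N at the optimized pixel resolution (the function name and the example numbers below are mine, for illustration only):

```python
import numpy as np

def home_range_ghost_fit(N, I):
    """Fit log[I(N)] = log(c) + z*log(N) and return (z, c); under the MRW
    framework c is interpreted as the CSSU and z is expected to be ~0.5."""
    z, log_c = np.polyfit(np.log10(N), np.log10(I), 1)
    return z, 10.0 ** log_c

# Made-up numbers, roughly following I ~ 2*sqrt(N):
N = np.array([16, 32, 64, 128, 256, 512, 1024])
I = np.array([8, 11, 16, 23, 32, 45, 64])
z, c = home_range_ghost_fit(N, I)
print(f"slope z = {z:.2f}, intercept c = {c:.1f}")
```

Comparing c between the sharp and fuzzy series then gives the Δlog(c) referred to above.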

The spatial constraint on extreme displacements relative to the respective home ranges’ centres apparently did not influence these results.

I have also superimposed local CSSU analyses for the respective four quadrants of the two home ranges. When the area extent for analysis is constrained in this manner, i.e., spatial extent is reduced to 1:4 in area terms (1:2 linearly) for each local sub-set of fixes, the respective (N,I) plots need to be adjusted by a factor that compensates for the difference in scale range.

Since the present MRW conditions were run under fractal dimension D=1, each local log2[(N,I)] plot is rescaled to log2[(N,I)] + log2[Δ^D] = log2[(N,I)] + 1, where Δ = 2 is the relative (linear) change of the grain/extent scale and D = 1. After this rescaling the overall CSSU and the local CSSU overlap, as shown by the regression lines in the A(N) analyses above. Overlapping CSSU implies that the four quadrants had similar space use conditions, which is true in this simplified case.

Fractal analysis. The Figure below shows the magnitude of incidence I as a function of relative spatial resolution k (the “box counting method”), spanning the scale range from k = 1 (the entire arena, linear scale 40,000 units) down to k = 1:2^12, i.e., a grid cell size of 40,000/4,096 ≈ 9.8 units, linear scale*.
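A minimal box-counting sketch (my own illustration of the general method, not the MRW Simulator’s algorithm), assuming the fixes are given as an (N, 2) array of coordinates within a square arena:

```python
import numpy as np

def box_count(fixes, arena_size, max_level=12):
    """For dyadic resolutions k = 1, 1:2, 1:4, ..., count the number of
    non-empty grid cells (incidence I) embedding the fixes.
    Returns a list of (k, I) pairs."""
    out = []
    for level in range(max_level + 1):
        n = 2 ** level                             # cells per linear dimension
        cell = arena_size / n
        ij = np.clip(np.floor(fixes / cell).astype(int), 0, n - 1)
        incidence = len(np.unique(ij[:, 0] * n + ij[:, 1]))
        out.append((2.0 ** -level, incidence))
    return out

def fractal_dimension(pairs, k_min, k_max):
    """Estimate D as the negative log-log slope of I(k) over a chosen
    resolution range (avoiding the space-fill and dilution ranges)."""
    k, I = zip(*[(k, I) for k, I in pairs if k_min <= k <= k_max])
    slope, _ = np.polyfit(np.log2(k), np.log2(I), 1)
    return -slope
```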

Starting from the coarsest resolution, k = 1, the log-log regression obviously shows I = 1. At resolutions k = 1:2 and k = 1:4 (4 and 16 grid cells, respectively), I = 4 in both cases. In other words, all boxes contain fixes at k = 1:2, apparently consistent with a two-dimensional object, while at k = 1:4 some empty cells (12 of 16) are peeled away as empty space from the periphery of the fix scatter at this finer resolution.

This coarse-scale pattern is a consequence of the defined space use constraint. Disallowing “occasional sallies” outside the core home range obviously influences the number of non-empty boxes relative to all available boxes at the coarse resolutions 4^-1 ≤ k ≤ 1.

However, at progressively finer resolutions – below the “space fill effect” range, which extends down to ca. k = 1:32 – the true fractal nature of the fix scatter begins to appear as log-log linearity, confirming a statistical fractal with a stable dimension D ≈ 1.1 over the resolution range 2^-9 < k < 2^-5 (log-log slope of -1.1). At finer resolutions, the dilution effect flattens further expansion of I. The D = 1.1 scale range is close to the expectation from the simulation condition Dtrue = 1, while the deviations above and below this range are trivial statistical artifacts from space filling and dilution.

The most interesting pattern regards the finest resolution range, where the fuzzy set of fixes somewhat unexpectedly follows a log(k,I) pattern similar to the non-fuzzy set. However, the slight difference in D, which increases to D = 1.17, may be caused by the fuzziness (a proportionally stronger space-fill effect as resolution is increased).

To conclude, if the magnitude of position uncertainty does not exceed the individual’s CSSU scale under the actual conditions, A(N) analyses of MRW-compliant space use do not seem to be seriously influenced by location fuzziness. The “fix scrambling” error is for the most part subdued at scales finer than the CSSU.

However, the story doesn’t end here. I have superimposed a dotted red line onto the Figure above. Overlapping with the D = 1 section of grid resolutions, the line is extrapolated towards the intersection with log(I) ≈ 0 at a scale that is 2^1.5 ≈ 2.8 times larger (linearly) than the actual arena for the present analyses. In other words, in the absence of the area constraint, and disregarding the step length constraint due to the limited movement speed of the individual, one should expect the actual set of fixes to “fill up” the missing incidence over the coarse scale range, leading to D ≈ 1 for the entire range from the dilution effect range towards coarser scales.

I have also marked the CSSU scale as the midpoint of the red line. A resolution of log2(k)=-5.5 is in fact very close to the CSSU estimate from the A(N) method (k=1:50 of actual arena using the A(N) method, versus 1:35 of the arena according to the fractal analysis). This alternative method to estimate CSSU was first published in this blog post.

A preliminary development towards this approach was explored both theoretically and empirically in Gautestad and Mysterud (2012).

NOTE

*) This analysis of N = 8,000 fixes, spanning box counting over 1, 4, 16, 64, … 16.8 million grid cells at the respective scales, took ca. 10 hours per series in the MRW Simulator, using a laptop computer. I suppose this enormous number crunching would be outside the practical range of a similar algorithm in R.

REFERENCES

Gautestad, A. O., and I. Mysterud. 2012. The Dilution Effect and the Space Fill Effect: Seeking to Offset Statistical Artifacts When Analyzing Animal Space Use from Telemetry Fixes. Ecological Complexity 9:33-42.

 

The Hidden Layer

Focusing on the statistical pattern of space use without acknowledging the biophysical model for the process will create much confusion and unnecessary controversy. Ecologists are now forced to get a better grip on concepts from statistical mechanics than earlier generations needed to. For example, to understand the transformation from data on actual behaviour to pattern analysis of space use, the concept of the hidden layer represents the first gate to pass.

Research on animal movement and space use has always had a central place in ecology. However, as more field data, better computers and more sophisticated statistical methods have become available, some old dogmas have come under attack. Specific theoretical aspects of this quest for improved model realism have emerged from the rapidly growing cooperation between biologists and physicists in the field of macro-level biophysics. The so-called Lévy flight foraging hypothesis is one example. And, of course, I can’t resist mentioning the MRW theory.

A booted eagle Hieraaetus pennatus is triggering a flock of spotless starlings Sturnus unicolor to show swarming behaviour. Malaga river delta, December 2017. Photo: AOG.

In 1985 Charles Krebs described ecology as the scientific study of the interactions that determine the distribution and abundance of organisms. In an ethological context animal space use is studied on two levels – tactical and strategic. The tactical level regards understanding individual biology and behavioural ecology on a moment-to-moment temporal scale. Strategic space use adds an extra layer of complexity to the tactical behaviour. In a simplistic manner we may refer to this layer as the animal’s state at a given moment in time; for example, whether it is hungry or not (e.g., in hunting mode). Strategy also involves processing of memory-based goals. Strategies are executed at coarser time scales than tactics. Some of the interaction between tactics and strategy may then – under specific conditions (see below) – be transformed to dynamic models at the tactical level; so-called mechanistic models, which consist of a set of executable rules covering the respective cognitive and environmental conditions. Validating the model dynamics and resulting statistical patterns against real animal data then rates the degree of model realism. For example, realistic tactical models have been developed to cast light on the “clumping behaviour” (dense swarming) of flocks of birds that are threatened by a raptor.

The myriad of rules that influence animal movement makes detailed modelling an impossible task, and such modelling would anyway only lead to a descriptive picture with no value for ecological hypothesis testing. In fact, the signature of successful modelling is simplification. Thus, only specific aspects of the reference individual’s behaviour can be included and scrutinized.

The present post addresses one particular aspect of system simplification: coarse-graining the temporal scale. This approach implies a qualitative change in how the space use system is observed and analyzed. Actually, temporal coarse-graining is forced upon us when studying animal space use by sampling an individual’s successive displacements as a series of locations (fixes) during a given period of time. During each inter-fix interval the observed displacement represents the resultant vector of a myriad of intermediate and unobserved events. What has happened to the moment-to-moment kind of behavioural ecology? It has become buried below the hidden layer.

At the surface of this hidden layer you lose sight of behavioural details (like raptor response and swarming rules), but you gain access to an alternative perspective on movement and space use. Alternative statistical descriptors emerge at this temporally coarser scale, following the laws of statistical mechanics. What is analyzed above the hidden layer is the overall pattern of many displacement events, aggregated into a spatial scatter of fixes.

For example, you may coarse-grain both the temporal and spatial system dimensions, and study the aggregated distribution of fixes at the spatial scale of virtual grid cells (pixels) and the temporal scale of the fix sampling period. The spatio-temporal variation in intensity of space use within the actual space-time extents then allows for modelling and hypothesis testing, but now using statistical-mechanical descriptors of space use intensity. These descriptors are either not valid below the hidden layer (e.g., the information content of local density of fixes) or they have an alternative interpretation (e.g., movement as a “step” versus movement as a resultant vector for a given interval and location). Both levels of analysis require large sets of input data to allow for statistical treatment.
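As a simple illustration of this kind of coarse-graining, the sketch below (my own, using hypothetical stand-in data) bins a scatter of fixes into virtual grid cells and returns the per-cell fix counts, i.e., a crude map of space-use intensity over the sampling period:

```python
import numpy as np

def fix_density_grid(fixes, extent_min, extent_max, n_cells):
    """Coarse-grain fixes into an n_cells x n_cells grid of counts.
    Above the hidden layer these counts are read as a statistical-mechanical
    descriptor of space-use intensity, not as individual behavioural events."""
    edges = np.linspace(extent_min, extent_max, n_cells + 1)
    counts, _, _ = np.histogram2d(fixes[:, 0], fixes[:, 1], bins=[edges, edges])
    return counts

# Stand-in data (not a real GPS series):
fixes = np.random.default_rng(0).normal(0.0, 1000.0, size=(5000, 2))
grid = fix_density_grid(fixes, -4000.0, 4000.0, n_cells=16)
print("occupied cells:", int((grid > 0).sum()), "of", grid.size)
```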

Why are the hidden layer concept and the statistical-mechanical approach more important to relate to today than in earlier decades? The short answer is the realization – seeded by better and more extensive data – that animal space use involves more than a couple of universality classes of movement (see this post). In fact, in my book, papers and blog posts I have detailed eight classes, most of which are unfamiliar to you.

To understand space use that is influenced by spatial memory and scale-free movement, statistical-mechanical modelling is a prerequisite for a realistic representation of such complex systems, unless you limit your perspective to a short-term behavioural bout within a very localized arena; in other words, to “a single piece of a jigsaw puzzle of space use dynamics”. For example, if you zoom closely into a small segment of a circle you observe an approximately straight line. Take a step outwards, and you are facing a qualitatively different geometry – the mathematics of a curve and finally a full circle. Stubbornly staying within the linear framework when analyzing more extensive objects than what you observe at fine scales will force you into a corner filled with paradoxes.

Fine-grained and coarse-grained analyses of animal space use are complementary approaches to the same system.

 

MRW and Ecology Part VII: Testing Habitat Familiarity

Consider having a series of GPS fixes, and wondering whether the individual was utilizing familiar space during your observation period – or started building site familiarity around the time when you started collecting data. Simulation studies of Multi-scaled random walk (MRW) show how you may cast light on this important ecological aspect of space use.

First, you should of course test for compliance with the MRW assumptions: (a) site fidelity with no “distance penalty” on return events, (b) scale-free space use over the spatial range that is covered by your data, and (c) uniform space utilization on average over this scale range. One single test in the MRW Simulator, the A(N) regression, casts light on all these aspects. First, you seek to optimize the pixel resolution for the analysis (estimating the Characteristic scale of space use, CSSU). Next, if you find “Home range ghost” compliance, i.e., incidence I expands proportionally with the square root of the sample size of fixes, your data support (a) spatial memory utilization with no distance penalty, due to sub-diffusive and non-asymptotic area expansion, (b) scale-free space use, due to linearity of the log[I(N)] scatter plot, and (c) equal inter-scale weight of space use, due to slope ≈ 0.5.

Supposing your data confirmed MRW, how to test for time-dependent strength of habitat familiarity? Consider the following simulation example, mimicking space use during a season and under constant environmental conditions.

The red dots show log(N,I) for various sample sizes up to the total set of 11,000 fixes. Each dot represents the average I for the respective N of the two methods, continuous sampling and frequency sampling (counteracting the autocorrelation effect; see a previous post). However, analyzing the first 1,000 fixes separately (black dots) consistently revealed a more sloppy space use, in terms of aggregated incidence at a given N, relative to the total season. The next 1,000 fixes, however, were compliant with the total series both with respect to slope and y-intercept (CSSU) (green dots).
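A rough sketch of this kind of comparison (my own illustration, with continuous sampling only; the averaging with frequency sampling is left out for brevity, and the pixel scale is assumed to be pre-optimized):

```python
import numpy as np

def incidence_curve(fixes, pixel, n_samples=12):
    """Incidence I as a function of sample size N (continuous sampling),
    counted at a fixed pixel resolution."""
    Ns = np.unique(np.geomspace(50, len(fixes), n_samples).astype(int))
    Is = [len({tuple(np.floor(xy / pixel).astype(int)) for xy in fixes[:N]})
          for N in Ns]
    return Ns, np.array(Is)

def ghost_fit(N, I):
    """Return (slope z, intercept c) of log I = log c + z log N."""
    z, log_c = np.polyfit(np.log10(N), np.log10(I), 1)
    return z, 10.0 ** log_c

def compare_early_vs_total(fixes, pixel, n_early=1000):
    """Contrast the fit for the first n_early fixes with the full series;
    similar z and c in both windows is consistent with a well-established
    memory map throughout the sampling period."""
    return {"early": ghost_fit(*incidence_curve(fixes[:n_early], pixel)),
            "total": ghost_fit(*incidence_curve(fixes, pixel))}
```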

The reason for the discrepancy in space use during the initial period of fix sampling* was, in the present scenario, the actual simulation condition: site familiarity was set to develop “from scratch” simultaneously with the onset of fix collection. I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to**. At the start of the sampling period, the underlying path is short in comparison to the total path that was traversed during the total season, and – crucially – return steps targeted previous locations from the actual simulation period only, and not locations prior to this start time. In other words, the animal was assumed to settle down in the area at the point in time when the simulation commenced.

To conclude, if your data show CSSU and slope of similar magnitude in the early and later phases of data collection, you sampled an individual with a well-established memory map of its environment during the entire observation period. The implicit assumption for this conclusion is of course that the environmental conditions were constant during the entire sampling period, including the initial phase. Using empirical rather than synthetic data means that additional tests would have to be performed to cast light on this aspect.

NOTE

*) The presentation above reflects the pixel resolution that was optimized for the total series. The first 1,000 fixes showed a more coarse-grained space use, reflected in a 50% larger CSSU scale (not shown: optimal pixel size was 50% larger for this part of the series) despite constant movement speed and return rate for the entire simulation period. In this scenario a larger CSSU [coarser optimal pixel for the A(N) analysis] signals a less mature habitat utilization in the home range’s early phase. The CSSU was temporarily inflated during the build-up of site familiarity, but – somewhat paradoxically – the accumulated number of fix-embedding grid cells (incidence) for a given N at this scale was smaller. These two effects, reflecting the degree of habitat familiarity during home range establishment, should be considered transient.

**) Two definitions should be specified:

  • I define strength of site familiarity as proportional to the total path length from which the model animal collects a previous location to return to.
  • I define strength of site fidelity as proportional to the return frequency.

Both definitions rest on the assumptions of no distance penalty on return targets and no time penalty on returns; i.e., infinite spatio-temporal memory horizon relative to the actual sampling period.

The MRW Simulator: Importing Your Own GPS Data

You have a large database of GPS fixes, and you wonder if your animals have utilized their habitat in accordance with standard theory of mechanistic movement (the null hypothesis) or in compliance with the MRW theory (the alternative hypothesis). The MRW Simulator is tailor-made for this kind of test. If MRW is verified you may proceed with various analyses of behavioural ecology under the alternative, statistical-mechanical theory. The initial test procedure is simple: (1) import your data, (2) prepare for a test of model compliance by applying one or more built-in algorithms, and (3) import the generated data tables into third-party packages (R, Excel, etc.) for statistical testing.

You can import data to the MRW Simulator by preparing a two-column text file, using comma or TAB as delimiter between the two coordinate values for successive locations.
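For example, a few lines along these lines (hypothetical coordinates; adapt to your own data source) will produce a TAB-delimited import.txt:

```python
# Write successive (x, y) locations as a two-column, TAB-delimited text file.
fixes = [(1234.5, 987.1), (1250.2, 1001.8), (1248.9, 995.4)]  # your GPS series

with open("import.txt", "w") as f:
    for x, y in fixes:
        f.write(f"{x}\t{y}\n")
```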

By default you should use the file name import.txt, but other names are also allowed (given the correct data structure). Place the file in the data folder (…/mov) and choose the menu “File | Import data from txt or csv”.

You are asked to define the file name for the imported data. By default, the name is set to “seed1.txt”. During import the original series is centred on coordinate (0,0), the middle of the arena window. The arena size for the analysis is automatically adjusted to twice the space needed to display the set of imported fixes.

After import, you set a couple of check boxes on the MRW Simulator’s user interface in accordance with the User guide before clicking the run button (the MRW Simulator re-formats your imported data to its own format and saves the result in the text file levy1.txt). In particular, setting the simulation series length to zero and choosing “use seed1.txt” as the first part of the simulated series ensures that only your own data are reformatted. Within a fraction of a second the procedure exits without adding simulated data to the series, and you are ready to perform various analytical tasks on the levy1.txt file (see menu “Analyze”).

The procedure “A(N) regression” is typically applied to analyze space use at the home range scale. It is a convenient choice to test for MRW compliance of your data.

You are asked which of the Levy*.txt files to analyze for fix-filling area as a function of sample size N (number of fixes in the Levy*.txt file). Next, the analysis is executed in accordance with the scales set in “Arena extent for analysis”, “Arena grain for analysis” and “Pixel (intra-grain resolution)” in the MRW Simulator’s user interface.

In this procedure, set extent = grain. Pixel regards a ratio: the relative resolution grain/pixel. For example, setting pixel = 10 performs the analysis at a virtual grid scale of 1/10 of the arena scale; i.e., 10×10 grid cells. See the User guide for details.

The progress is shown below the arena window. The algorithm is counting incidence over a range of sample sizes N at the given pixel resolution; first by sequential (continuous) sampling up to Ntotal and then by frequency (uniform) sampling over the total series. Search my blog or read my book for these concepts.

The result is saved in a text file containing a table of incidence (non-empty grid cells at the given pixel resolution) as a function of sample size N under the two sampling conditions. These data may then be imported into, for example, Excel for graphical presentation and statistical analysis; for example, a regression.

If you find a discrepancy between the scatters from the two sampling methods (you normally do!), your data are probably serially autocorrelated. To remove the autocorrelation effect, take the average incidence for the respective magnitudes of N, as was explained in this post. Conveniently, the MRW Simulator does this task for you (you find the averaging table below the tables for continuous and frequency sampling). This averaging procedure also adjusts for a “drifting home range” scenario, which also produces autocorrelation.
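If you prefer to do the averaging outside the Simulator, the operation is simply the mean of the two incidence columns for each N; a minimal sketch with made-up numbers:

```python
import numpy as np

def average_incidence(I_continuous, I_frequency):
    """Average incidence from continuous and frequency sampling at each N,
    as a simple way to subdue the autocorrelation effect before regression."""
    return (np.asarray(I_continuous) + np.asarray(I_frequency)) / 2.0

N = [50, 100, 200, 400]            # sample sizes (made-up example)
I_cont = [10, 13, 17, 23]          # incidence, continuous sampling
I_freq = [12, 17, 24, 33]          # incidence, frequency sampling
print(dict(zip(N, average_incidence(I_cont, I_freq).tolist())))
```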

Does the result support MRW? First, you must verify the presence of a characteristic scale of space use (CSSU), which is a property of scale-free movement influenced by spatial memory under the “parallel processing” postulate.

To test for CSSU you should experiment with various pixel resolutions and see if the log[(N, incidence)] pattern converges to a slope of ∼0.5 at a given scale. If so, CSSU ≈ (pixel scale)^2 = c.

If you don’t find reasonably good compliance with linearity of log[I(N)] = log(c) + 0.5*log(N) or the slope exceeds 0.5, try a coarser pixel resolution. If the slope is smaller than 0.5, try a finer pixel resolution.
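A sketch of such a resolution scan (my own illustration, using continuous sampling only and function names of my own choosing), returning the candidate pixel scale whose fitted slope comes closest to 0.5:

```python
import numpy as np

def incidence(fixes, pixel):
    """Number of non-empty virtual grid cells (pixels) embedding the fixes."""
    return len({tuple(np.floor(xy / pixel).astype(int)) for xy in fixes})

def find_cssu(fixes, candidate_pixels, n_samples=10):
    """Scan candidate pixel scales; the scale where the log[I(N)] slope is
    closest to 0.5 approximates the CSSU (as an area, pixel scale squared)."""
    Ns = np.unique(np.geomspace(50, len(fixes), n_samples).astype(int))
    best = None
    for pixel in candidate_pixels:
        Is = [incidence(fixes[:N], pixel) for N in Ns]
        z, _ = np.polyfit(np.log10(Ns), np.log10(Is), 1)
        if best is None or abs(z - 0.5) < abs(best[1] - 0.5):
            best = (pixel, z)
    pixel, z = best
    return {"pixel scale": pixel, "slope": z, "CSSU (area)": pixel ** 2}
```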

If this test of convergence to log-linearity with slope ≈ 0.5 fails, you have probably either supported one of the null models, i.e., Brownian motion-like or Lévy-like space use void of spatial memory influence (slope ≈ 1, which is quite insensitive to a change of pixel scale), or the classic paradigm: home range movement under the influence of a constraining border zone [I(N) showing an area asymptote rather than a power law expansion with exponent close to 0.5].


The MRW Simulator 2.0 will now be made available as a free add-on tool for all buyers of my book. If you purchase it through my shopping cart at www.thescalingcube.com, you will get the program and its user guide bundled with the book. Existing book owners: contact me at arild@gautestad.com and I’ll fix you a personal download link – free of charge. You may purchase by invoice – see top of this page!


The MRW Simulator – Finally Available!

Back in 1997 I started programming the foundation for a personal simulation environment for Multi-scaled random walk, the MRW Simulator. Through countless updates over these 20 years the program has gradually matured into a version which finally is ready for limited distribution to peers in the field of animal space use research.

The MRW Simulator is a Windows©-compliant tool to generate various classes of animal movement (self-produced data series) or to import existing data series. The generated or imported data – consisting of a sequence of (x,y) coordinates – may then be subjected to various kinds of statistical protocols through simple menu clicks. The generated text files are then typically exported for detailed analyses and presentation of results in other applications, like the R package or Excel©.

While R is based on an interpreted language, the MRW Simulator is a fully compiled program. Thus, movement paths of length up to 20 million steps may be simulated within minutes of execution time, rather than many hours or days. A multi-scaled analysis of data over a substantial scale range is almost prohibitive in an interpreted system due to the algorithm’s long execution time. In the MRW Simulator such analyses are performed in a fraction of this time. Thus, R and the MRW Simulator may supplement each other: R is strong on statistics and algorithmic freedom; the MRW Simulator is strong on time-effective execution of a small set of basic but typically time-consuming algorithms.

The opening screen contains menus (1), a window where the simulated or imported set of fixes are displayed (2) and various command buttons, check boxes and information fields (3-14).

To get your first experience with the system, try out the most basic setting for a simulation. First, choose among classes of movement: Levy walk/MRW, Correlated random walk, and Composite random walk (superposition of two correlated random walks) (3). The difference between LW and MRW is explained below.

For your first test, choose Levy walk / MRW (3), with the default setting for fractal dimension (D=1) and maximum displacement length between successive steps (truncation = 1,000,000 length units). D=1 simulates the condition where the animal on average utilizes its environment with similar scale-free weight at each intermediate scale from unit step length to the maximum step (setting 1 < D ≤ 2 skews space use towards finer-scale space use at the expense of coarser scales, again in average terms).

In a column of text fields (4) you may define conditions like series length, properties of the simulated path, size of the arena, and grid resolution for the subsequent analysis. For example, the difference between Levy walk and MRW is given by setting a return frequency > 0 for MRW (implying targeted return events to previous locations at the chosen average frequency). For this first run, just keep the default values.
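For readers who want a feel for this difference outside the Simulator, here is a conceptual sketch of an MRW-like series (my own simplification, not the Simulator’s algorithm), assuming step lengths drawn from a truncated power law with tail exponent 1+D (so D=1 gives the classic inverse-square step distribution) and targeted returns to earlier positions at a chosen frequency; with return_freq = 0 the same code produces a plain Lévy walk:

```python
import numpy as np

def mrw_like_path(n_steps=10000, D=1.0, l_min=1.0, l_max=1e6,
                  return_freq=0.01, seed=None):
    """Scale-free displacements mixed with occasional targeted returns to a
    previously visited location (site fidelity, no distance penalty)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n_steps, 2))
    for t in range(1, n_steps):
        if t > 1 and rng.random() < return_freq:
            pos[t] = pos[rng.integers(0, t)]        # return to an earlier site
            continue
        u = rng.random()                            # inverse-transform sampling
        l = (l_min**-D - u * (l_min**-D - l_max**-D)) ** (-1.0 / D)
        angle = rng.uniform(0.0, 2.0 * np.pi)
        pos[t] = pos[t - 1] + l * np.array([np.cos(angle), np.sin(angle)])
    return pos

path = mrw_like_path(n_steps=5000, seed=42)
```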

Later you will learn how to additionally modify the conditions by including a pre-defined series of coordinates (in a file called seed*.txt, where * regards an incremental number) (5). At this stage, just keep default settings.

By default the simulation runs in a homogeneous environment. The set of “Habitat heterogeneity” fields (6) allows defining the corners of a rectangle where the model animal behaves in a more “fine-grained” manner by reducing average movement speed. Other ecological aspects may also be defined, like a method to account for temporal and local resource exhaustion. As a start, just keep defaults.

Now, click the “Single-series” command button (7). You should see a number of fixes appearing as dots in the arena window.

The number of fixes reflects the ratio between the total series length and the observation interval on this series; i.e., “Number of fixes” (Norig = 1,000,000) multiplied by an average “Observation frequency” (p = 0.001). This leads to an observed series length – a path sample – of ca. 1,000 fixes, which are displayed in the observation window.
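In other words, the observation process simply thins the underlying path; a short sketch of the arithmetic (illustrative only, not the Simulator’s internal procedure):

```python
import numpy as np

N_orig, p = 1_000_000, 0.001                      # path length and observation frequency
rng = np.random.default_rng(0)
observed = np.nonzero(rng.random(N_orig) < p)[0]  # indices of "observed" fixes
print("expected fixes:", int(N_orig * p), "| drawn:", observed.size)
```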

Before moving on to your first data analysis, observe that the simulation’s default settings are defined by “schemes”, which can be pre-loaded from a dropdown menu (8). You may also run a number of replicate simulations in an automated sequence (9). The arena may be copied to the clipboard (10) for subsequent pasting into other applications like a Word document, an Excel sheet, etc.

The “Data path” field (11) displays the folder where the system saves and retrieves data. By default, the data resides in a subfolder, “\mov”, under the location of the MRW simulator’s EXE file. This location is set during program setup.

The field “Fractal resolution range” (12) defines the scale range over which a subsequent analysis of the scatter of fixes – selected from the Analysis menu – will be performed by the so-called box counting method.

The field “A(N)” (13) shows the progress of another analysis, total area (incidence) as a function of sample size, N.

The counter (14) is automatically incremented each time you click the “Single-series” button (7). TIP: To repeat (and overwrite) an existing series, edit the counter number (14) to one decrement below the actual series. For example, to re-execute data series number 5, edit the counter field to “4” before clicking the button (7). To re-execute series 1, edit the field to “-1” (the number zero is reserved as the initial setting number).

The data file containing “observed” fixes resides in the \mov folder (see above), with the name “levy*.txt” (* = 1, 2, 3, …). It contains three columns of data: x-coordinate, y-coordinate, and inter-step distance.




In the next blog post I’ll show some of the menu procedures of the MRW Simulator, including how to import your own GPS space use series for analysis on-the-fly.