Stepping Away From the Markov Lamppost

From a theoretical angle MRW (the Multi-scaled Random Walk) represents the backbone of my proposed framework for an extended statistical-mechanical interpretation of individual movement, building on the parallel processing postulate. What is the conceptual realism of the MRW model relative to the historically and currently dominant approach – mechanistic models of animal space use?

Regarding the core question – “what is the conceptual realism of the MRW model” – in this post I am tempted to answer it by turning the question on its head:

What is the conceptual realism of the classic framework for movement ecology?
  • The standard approach in space use modelling is to apply the mathematics and statistics from the domain of a mechanistic system representation. 
  • A mechanistic system implies that the process is Markov compliant. 
  • Moving away from this Markov/mechanistic approach requires taking bold steps into the darkness, away from the lamppost*).
Why the need to leave the comfort zone? Let us take the steps in succession. First step: what is Markov compliance?

“A Markov process can be thought of as ‘memoryless’: loosely speaking, a process satisfies the Markov property if one can make predictions for the future of the process based solely on its present state just as well as one could knowing the process’s full history. i.e., conditional on the present state of the system, its future and past are independent.” (Wikipedia).

A typical example of a Markov process is a randomly moving particle. To satisfy “random” in the apparently contradictory context of totally deterministic movement, consider that the particle’s path is sampled at a sufficiently large interval to allow for a transition from deterministic “micro-scale” behaviour (Newtonian mechanics in the present example) to a stochastic meso-scale path. At this coarser scale we have invoked a deep “hidden layer“; i.e., a temporal-scale distance from successive movement-influencing collisions with other particles or obstacles along the fine-grained (true) path of the particle. Consequently, at this coarser temporal scale we readily get a Brownian motion (or standard correlated random walk) representation of the path – with properties that satisfy classic diffusion.
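This micro-to-meso transition is easy to demonstrate numerically. The sketch below uses illustrative parameters throughout; the small heading perturbations merely stand in for deterministic micro-scale detail. It generates a strongly correlated micro-scale path, subsamples it coarsely, and checks the diffusion signature that mean squared displacement grows linearly with lag:

```python
import numpy as np

rng = np.random.default_rng(42)

# Micro-scale path: unit steps with frequent small "collision" turns.
# These micro rules stand in for deterministic Newtonian detail; at this
# resolution the path is strongly correlated (near-ballistic locally).
n_micro = 200_000
turns = rng.normal(0.0, 0.2, n_micro)      # small heading change per micro step
heading = np.cumsum(turns)
path = np.column_stack((np.cumsum(np.cos(heading)),
                        np.cumsum(np.sin(heading))))

# Meso-scale: sample the same path at a coarse interval, far beyond the
# correlation length of the micro behaviour.
coarse = path[::500]

def msd(xy, lag):
    """Mean squared displacement at a given lag."""
    d = xy[lag:] - xy[:-lag]
    return np.mean(np.sum(d**2, axis=1))

# Brownian-motion signature: MSD grows ~linearly with lag, so doubling
# the lag should roughly double the MSD.
ratio = msd(coarse, 20) / msd(coarse, 10)
print(f"MSD ratio (expect ~2 for diffusion): {ratio:.2f}")
```

With the coarse interval set far beyond the micro-scale correlation length, the ratio settles near 2, the classic diffusion value; sampling closer to the correlation length would instead reveal the correlated, near-ballistic regime.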

Recall from above the present premise that the behaviour at micro-scale needs to be Markov-compliant even when modelling the process at coarser temporal scales. Thus, a typical mechanistic process produces a typical diffusion process at meso-scale whatever rules are applied at the micro-scale, whether deterministic or stochastic, simple or complicated.

Observe the dual mode of process description: a deterministic-mechanistic execution of movement rules and physical laws at micro-scales, and a stochastic representation of the same process at a coarser temporal resolution (you may call the latter statistical-mechanical, from standard assumptions and principles). The Markov/mechanistic principle applies at both levels; only the observer’s temporal focus has changed and – consequently – the model formulation has flipped from the detailed behaviour-oriented description to a statistical-mechanical one (the Brownian motion/diffusion theory).

Regardless of temporal resolution, under what biological/ecological circumstances are Markov compliance and the mechanistic principle not applicable? Consider any animal that constrains its space use in a manner which we can all agree represents execution of home range behaviour. In other words, some kind of spatial memory has to be involved. One may thus ask: is this memory utilization mechanistic (Markov-compliant) or non-mechanistic?

“Animals typically are expected to move in a more complex manner than obeying a simple diffusion law. Verification of constrained space use (non-random self-crossing of the path) indicates site fidelity at the individual level and thus utilization of long-term spatial memory.” (My book, page 16).

This citation in fact covers memory influence both under a special formulation of Markov-compliant behaviour and under non-Markovian behaviour. First, consider the Markov approach. A rapidly increasing number of mechanistic models that include site fidelity have indeed been developed. The “trick” has been to move away from the traditional centre-biased kind of movement (the Markov-compliant and memory-less convection model for home range dynamics), either by...
  1. assuming central-place foraging (spatial memory, with returns to a specific location), or
  2. letting the centre of attraction change dynamically; i.e., in a step-by-step manner (for example, Börger et al. 2008, van Moorter et al. 2009).
Operationally, to satisfy the property of a mechanistic/Markovian process, the model animal is under both approaches “equipped” with an increased cognitive capacity, allowing it to “scan” its environment beyond its present perceptual field and to store past experiences in a spatially explicit manner (building a memory map in the process). In this manner the animal may successively adjust its direction and movement speed to account for (a) its internal state (e.g., hungry or not), (b) its local conditions (what it perceives through its senses at this point in time), and (c) the added information from its memory map.

Since this adjustment process involves recalculation of the next-step directional vector at every time increment, and this recalculation at a given point gives the same output whether or not the complete database of previous step decisions is included, the process is indeed mechanistic and Markov-compliant – “paradoxically” so, since it involves spatio-temporal memory. However, the strictly sequential resolution of movement decisions in this model design resolves the paradox. Another confirmatory sign of a Markov process is the fact that the respective influences from the terms under (a), (b) and (c) are summed (superimposed) to produce the updated next-step vector. The superposition principle applies only to a mechanistic/Markovian process.
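As a minimal sketch of this superposition, the toy update below computes the next-step vector as a weighted sum of the three terms. The weights, the fields and the single fixed memory centre are all hypothetical, chosen only to show that the present position plus the stored map fully determine the output:

```python
import numpy as np

# Hypothetical weights for terms (a), (b), (c); illustration only.
W_STATE, W_LOCAL, W_MEMORY = 1.0, 0.5, 0.8

def next_step(pos, internal_bias, local_gradient, memory_map):
    """Markov-compliant update: the new vector is a superposition of
    (a) internal state, (b) locally perceived conditions, and (c) the
    current memory map. Only the present state enters; re-running this
    function with or without the full path history attached gives the
    same output."""
    mem_vec = memory_map - pos            # attraction toward remembered centre
    step = (W_STATE * internal_bias
            + W_LOCAL * local_gradient
            + W_MEMORY * mem_vec / (np.linalg.norm(mem_vec) + 1e-9))
    return pos + step

pos = np.array([10.0, 5.0])
bias = np.array([0.2, 0.0])               # (a) e.g. hungry: drift toward food
grad = np.array([0.0, -0.1])              # (b) locally sensed resource gradient
mem = np.array([0.0, 0.0])                # (c) memory-map centre of attraction

print(next_step(pos, bias, grad, mem))
```

The deterministic, history-free recalculation is the point: calling the function twice with the same present state always yields the same next position.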

In the context of site fidelity, consider that the lamppost in the comic strip above is represented by the classic convection model. Then consider that the spatial memory-extended mechanistic/Markov class of models represents the twilight zone, which the kind of approach described above helps to light up. All ecologists working on animal space use should appreciate the strength of this approach in comparison to the convection model for home ranges.

However, are Markovian memory-including models sufficiently flexible to reflect site fidelity in a realistic manner? By realism I of course refer to the acid test: confronting model output with real data, such as GPS-based space use representations. Since you are reading my blog, and perhaps also my papers and my book, you are already aware that the Markov condition (mechanistic modelling) is not the only way to implement site fidelity. I advocate that a third step away from the lamppost should be explored, entering a zone where the mechanistic modelling approach is replaced by a framework that allows for a different kind of spatio-temporal utilization of memory.

In this specific model design, at every time increment the recalculation of the next move does not necessarily give the same output whether the complete database of previous step decisions is included or not. The current process is still connected to the past, in a manner which a Markov/mechanistic process cannot handle.
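A toy walk can illustrate what such history dependence looks like. The sketch below is a simplified caricature of the return-step idea, not the published MRW formulation; the return probability and the step kernel are illustrative only. Occasional returns target a location drawn from the entire path history, so the next position cannot be computed from the present state alone:

```python
import numpy as np

rng = np.random.default_rng(7)

RETURN_PROB = 0.05       # illustrative chance per increment of a return event
N_STEPS = 5000

path = [np.zeros(2)]
for _ in range(N_STEPS):
    if rng.random() < RETURN_PROB and len(path) > 1:
        # Return event: the target is drawn from the *entire* history.
        # This dependence on the full path is what breaks the Markov property.
        nxt = path[rng.integers(len(path))].copy()
    else:
        # Exploratory move: isotropic unit step.
        angle = rng.uniform(0, 2 * np.pi)
        nxt = path[-1] + np.array([np.cos(angle), np.sin(angle)])
    path.append(nxt)

path = np.array(path)
print("path extent:", path.min(axis=0), path.max(axis=0))
```

The return events produce self-reinforcing revisits of earlier locations, i.e., constrained space use emerging from memory rather than from a pre-defined centre of attraction.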

I started this post with the upside-down question “What is the conceptual realism of the classic framework for movement ecology?” Based on our own pilot tests involving a range of vertebrate species (see the excerpt in my book), we have found that a realistic modelling framework should be able to reproduce the following three statistical properties of home range data:

  • The home range ghost: log[I(N)] = log(c) + z*log(N), where 0.4 < z < 0.6 and the grid scale for incidence calculation has been optimized.
  • The scale-free space use property: fractal dimension D of the spatial scatter, 0.9 < D < 1.3 (for example, using the box counting method).
  • The scale-free property of the step length distribution F(L): log[F(L)] ∝ −β*log(L), where 1.8 < β < 2.3 and fix sampling has been performed in the relevant frequency domain (Gautestad & Mysterud 2013).

Call it the acid test of the Markov paradigm as a null model in the context of animal site fidelity. So far, all our empirical pilot studies have shown compliance with these three properties. What about your data?
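As a sketch of how the third property might be checked against data, the slope of a log-binned step-length distribution estimates −β. The synthetic Pareto-tailed steps below, with a known β = 2 chosen for illustration, stand in for real relocation data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic step lengths with F(L) ~ L^(-beta), beta = 2 (illustrative).
beta_true = 2.0
steps = rng.pareto(beta_true - 1.0, 100_000) + 1.0

# Log-spaced bins; normalise counts by bin width to get a density.
bins = np.logspace(0, 3, 25)
counts, edges = np.histogram(steps, bins=bins)
widths = np.diff(edges)
centers = np.sqrt(edges[:-1] * edges[1:])     # geometric bin midpoints
mask = counts > 0
density = counts[mask] / widths[mask]

# Slope of log F(L) vs log L gives -beta.
slope, _ = np.polyfit(np.log(centers[mask]), np.log(density), 1)
print(f"estimated beta: {-slope:.2f}")
```

The same log-log regression applied to empirical step lengths, after the sampling-frequency considerations cited above, recovers β for comparison with the 1.8 < β < 2.3 interval.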


*) Comic strip source:


Börger, L., B. Dalziel, and J. Fryxell. 2008. Are there general mechanisms of animal home range behaviour? A review and prospects for future research. Ecology Letters 11:637-650.

Gautestad, A. O., and A. Mysterud. 2013. The Lévy flight foraging hypothesis: forgetting about memory may lead to false verification of Brownian motion. Movement Ecology 1:1-18.

van Moorter, B., D. Visscher, S. Benhamou, L. Börger, M. S. Boyce, and J.-M. Gaillard. 2009. Memory keeps you at home: a mechanistic model for home range emergence. Oikos 118:641-652.