Parallel Processing – How to Verify It

In my previous post I contrasted animal space use under parallel processing (PP) with the standard, mechanistic approach. In this post I take the illustration one step further by showing how PP – in contrast to the mechanistic approach – allows for the simultaneous execution of responses and goals at different time scales. This architecture differs substantially from traditional mechanistic models, which are locked into a serial kind of processing dynamics. This crucial difference allows for a simple statistical test to differentiate between true scale-free movement and look-alike variants; for example, a composite random walk that is fine-tuned towards producing apparently scale-free movement.

First, recall that I make a clear distinction between a mechanistic model and a dynamic model. The former is a special case of the latter, which is broader in scope by also including true scale-free processing; i.e., PP. In my previous post I rolled dice to explain the difference.

In the traditional framework there is no need to distinguish between mechanistic and dynamic evolution, simply because in this special case of dynamics time is per definition one-dimensional. In the PP framework, on the other hand, time is generally two-dimensional, to allow for parallel execution of a process (for example, movement) at different scales at any moment in time.

Ignoring this biophysical distinction has over the years produced a lot of unnecessary confusion and misinterpretation with respect to the Multi-scaled random walk (MRW) model, which is dynamic but non-mechanistic. The distinction apparently sounds paradoxical in the standard modelling world, but not in the PP world. I say it again: MRW is non-mechanistic, non-mechanistic, non-mechanistic – but still dynamic!

First, consider multi-scale movement in the comfort zone of mechanistic models. You may also call it serial processing, or Markov-compliant dynamics. In the image to the right we see a (one-dimensional) time progression over a time span t = 1, …, 8 of a series where the unit time scale per definition equals one (ε = b^0 = 1; see my previous post). Some sequences are processed at a coarser scale than the unit scale; for example, during the interval from t=2 to t=5 the animal “related to” its environment in a particularly coarse-scaled manner relative to unit time. Consider an area-restricted search (ARS) scenario, where the unit-scale moves (light blue events) represent temporally high-frequency search within a local food patch, while the coarser-scaled moves represent toggling into a mode of inter-patch movement. In other words, during this interval the animal temporarily switched to a behavioural mode whereby environmental input is less direction- and speed-influencing (as seen from the unit scale) than during intra-patch search.

Within a mechanistic framework, processing at different scales (temporal resolutions) cannot take place simultaneously. The process needs to toggle (Gautestad 2011).

Mechanistically, the ARS scenario is often parsimoniously modelled by a composite, correlated random walk. By fine-tuning the model parameters and the relative frequencies of toggling it has been shown that such a pattern may even produce an approximately scale-free distribution of displacements; i.e., Lévy-like movement (Benhamou 2007). This statistical similarity between two distinct dynamical classes has produced much fuss in the field of animal movement research.
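To make the serial toggling concrete, here is a minimal sketch of such a composite walk in Python. This is my own illustration: the function name, the two exponential rates and the switching probability are arbitrary assumptions, directional correlation is ignored for brevity, and the parameters are not fine-tuned towards Lévy look-alike precision as in Benhamou (2007).

```python
import random

def composite_walk_steps(n, rate_intra=1.0, rate_inter=0.1,
                         p_switch=0.05, seed=1):
    """Step lengths from a two-mode composite walk: exponentially
    distributed steps with a short mean (intra-patch search) or a long
    mean (inter-patch relocation), toggled serially between modes."""
    rng = random.Random(seed)
    steps, intra = [], True
    for _ in range(n):
        if rng.random() < p_switch:  # serial toggling: one mode at a time
            intra = not intra
        steps.append(rng.expovariate(rate_intra if intra else rate_inter))
    return steps

steps = composite_walk_steps(20000)
```

The resulting mixture of two exponential step-length scales can, with careful tuning, mimic a heavy tail over a limited range – but, as argued below, only at the temporal scale where the tuning was performed.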

Next, contrast the Lévy look-alike model above with a true scale-free process, shown to the right. Because the dynamics are executed over a continuum of temporal scales, we get a hierarchical structure of events. Thanks to the extra ε axis there is no intrinsic paradox – as there would be in a mechanistic system – in a mixture of simultaneous events at different resolutions. Again, I refer to my previous “rolling dice” description. Despite the potential for fine-tuning a composite random walk to look statistically scale-free, this mechanistic variant and the dynamically scale-free Lévy walk belong to different corners of the Scaling cube.

Finally, how to distinguish a PP compliant kind of scale-free dynamics from the look-alike process? Coarse-grain the time series and see if the scale-free property persists or not (Gautestad 2013)!
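As a sketch of such a coarse-graining test (my own illustration: the log2 binning and the crude least-squares slope fit are simplifying assumptions, not the published protocol):

```python
import math

def sample_step_lengths(path, lag):
    """Displacement lengths when an (x, y) path is observed at every
    lag-th fix, i.e. the series coarse-grained by a factor `lag`."""
    sub = path[::lag]
    return [math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(sub, sub[1:])]

def loglog_slope(lengths):
    """Least-squares slope of log2(frequency) versus log2(step length)
    over log2-spaced bins; an approximately constant slope across the
    tail suggests power law (scale-free) compliance."""
    lo = min(l for l in lengths if l > 0)
    counts = {}
    for l in lengths:
        if l > 0:
            k = int(math.log2(l / lo))   # log2-spaced bin index
            counts[k] = counts.get(k, 0) + 1
    pts = sorted((k, math.log2(c)) for k, c in counts.items())
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))
```

Applied to a fine-tuned composite walk, the fitted slope should hold only at the lag where the tuning was performed and then steepen towards the Brownian form at lag 10 and lag 100; a genuinely scale-free series should keep an approximately stable slope across lags.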

Simulation of a two-level Brownian motion model was performed under four conditions of the ratio lambda between the scale parameters of the respective levels, lambda2/lambda1, with the frequency of execution t2/t1 = 10 under all conditions. For each condition of lambda the simulated series were sampled at three time scales (lags, tobs): every step, 1:10 and 1:100. The original series lengths were increased proportionally in order to maintain the same sample size under each sampling scheme (20,000 steps). A double-log scatter plot (logarithmic base 2) of step length frequency, log(F), as a function of binned step length, log(L), was then made for each of the four parameter conditions and each of the three sampling schemes. (a) The result from lambda = 4 shows a linear regression slope, and thus power law compliance, over part of the tail of the distribution, with slope b = 2.9; i.e., in the transition zone between Lévy walk (1 < b < 3) and Brownian motion (b >= 3, and increasing with increasing L, leading to a steeper slope). At coarser time scales, tobs = 10 and tobs = 100, the pattern is transformed to a generic-looking Brownian motion with an exponential tail, which becomes linear in a semi-log plot: the inset shows the pattern from tobs = 100. (b) The results from lambda = 8.

Both the step length distribution (above) and visual inspection of the path at different temporal scales reveal the true nature of the model: a look-alike scale-free and pseudo-Lévy pattern when the data are studied at the unit scale where the fine-tuning of the parameters was performed, but shape-shifting towards the standard random walk at coarser scales. A true PP-compliant process would have maintained the Lévy pattern across sampling scales (Gautestad 2012).

Simulated paths of two-scale Brownian motion where 1000 steps are collected at time intervals 1:1, 1:10 and 1:100 relative to the unit scale for the simulation, with lambda2/lambda1 = 15. The pattern shifts gradually from Lévy walk-like towards Brownian motion-like with increasing temporal scale relative to the execution scale (t = 1) for the simulations. Since the number of observations is kept constant, the spatial extent of the path increases with increasing interval.

By the way, the PP conjecture also extends to the MRW-complementary population dynamical expression of animal space use, the Zoomer model. This property can be clearly seen in the Zoomer model’s mathematical expression.

 

REFERENCES

Benhamou, S. 2007. How many animals really do the Lévy walk? Ecology 88:1962-1969.

Gautestad, A. O. 2011. Memory matters: Influence from a cognitive map on animal space use. Journal of Theoretical Biology 287:26-36.

Gautestad, A. O. 2012. Brownian motion or Lévy walk? Stepping towards an extended statistical mechanics for animal locomotion. Journal of the Royal Society Interface 9:2332-2340.

Gautestad, A. O. 2013. Animal space use: Distinguishing a two-level superposition of scale-specific walks from scale-free Lévy walk. Oikos 122:612-620.

 

The Inner Working of Parallel Processing

The concept of scale-free animal space use becomes increasingly difficult to avoid in modeling and statistical analysis of data. The empirical support for power law distributions continues to pile up, whether the pattern appears in GPS fixes of black bear movement or in the spatial dispersion of a population of sycamore aphids. What is the general class of mechanism, if any? In my approach to this challenging and often frustrating field of research on complex systems, one particular conjecture – parallel processing (PP) – percolates the model architecture. PP requires a non-mechanistic kind of dynamics. Does that sound like a contradiction in terms? To illustrate PP in a simple graph, let’s roll dice!

Please note: the following description represents novel details of the PP concept, still awaiting journal publication. Thus, if you are inspired by this extended theory of statistical mechanics to the extent that it percolates into your own work, please give credit by referring to this blog post (or my book). Thank you.

The basic challenge regards how to model a process that consists of a mixture of short-term tactics and longer-term (coarser-scale) strategic goals. Consider that the concept of “now” for a tactical response regards a temporally finer-grained event than “now” at the time scale of executing a more strategic event, which consequently takes place within a more “stretched” time frame relative to the tactical scale.

Strategy is defined in a hierarchy-theoretical manner; a coarser-scale strategy consequently invokes a constraint on finer-scaled events (references in my book). For example, while an individual executes a strategic change of state, like starting a relatively large-distance displacement (towards a goal), finer-scaled events during this execution – consider shorter-term goals – are processed freely, but within the top-down constraint that they should not hinder the execution of the coarser goals. Hence, the degrees of process freedom increase with the scale distance between a given fine-scaled goal and a coarser-scaled goal.

To illustrate such a PP-compliant scale range from tactics to strategy within an extended statistical-mechanical system, consider the two-dimensional graph to the right. The x-axis represents a sequence of unidirectional classic time and the y-axis represents a log2-scaled* expression of time’s orthogonal axis, “elacs” (ε) along this sequence.

The continuous x-y plane has been discretized for simpler conceptualization, and each (x,y) pair shows a die. A die represents a potential change of state of the given process at the given point in time and at the given temporal scale. An actual change of state at a given (t,ε) location is marked by a yellow die, while a white die describes an event still in process at this scale. The respective number of eyes on each die could represent a set of available states for a given system variable at this scale. To illustrate complex dynamics (over-)simplistically in terms of concepts from quantum mechanics, consider each magnitude of ε on the y-axis to represent a wave length in a kind of “complex system” wave function; each yellow die then represents a “collapse” of this probability wave into a specific execution of the given event at a given point in time at this time scale.

As the system is viewed towards coarser time scales (larger ε), the average frequency of change of state vanishes proportionally with 1/ε = 1/b^z, where b is the logarithmic base and increasing z describes an increasing scale level b^z. In other words, the larger the z, the more “strategic” a given event at this scale. In short, consider that each die on scale level 1 (ε = b^0 = 1) is rolled at each time increment t=1, t=2, …, t=8; each die at level 2 (ε = b^1 = 2) is on average rolled every second time increment, and so on.
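The rolling-dice scheme above can be sketched in a few lines of code. This is my own minimal illustration with b = 2; the function name and the choice of a fixed per-increment probability b^-z (rather than, say, a deterministic schedule at coarse levels) are assumptions for the sake of the sketch.

```python
import random

def roll_hierarchy(T, levels, b=2, seed=3):
    """Roll one die per (time, scale) cell: at scale level z an event
    (a change of state, a 'yellow die') fires with probability b**-z
    per time increment, so level z fires on average once every b**z
    increments; level z = 0 is the unit scale and fires every step."""
    rng = random.Random(seed)
    events = []
    for t in range(1, T + 1):
        for z in range(levels + 1):
            if rng.random() < b ** -z:
                events.append((t, z))
    return events

# Several scale levels may fire at the same t: parallel execution,
# with no serial toggling between levels.
events = roll_hierarchy(T=8, levels=8)
```

Note that over a short observation window (T = 8) the coarsest levels will typically not fire at all, matching the empty top rows of the illustration.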

In the illustrative example above, no events have taken place during the eight time increments at the two coarsest scales, z = 7 (ε = 128) and z = 8 (ε = 256). A substantial increase of the observation period would be needed to increase the probability of actually observing such a coarse-scaled change of system state.

More strategic events are executed more rarely. Strategic events at a given scale b^z are initiated in a stochastic manner when observed from a finer time scale (smaller z), but appear increasingly deterministic when observed from coarser time scales. At finer scales such a strategic event may be inexplicable (thus appearing unexpectedly at a given point in time), while the causal relationship of the given process becomes established (visible) when the process is observed at the properly coarsened time scale. However, at each time scale there remains an element of surprise, due to influence from even coarser-scale constraints and the even lower-frequency changes of system state at these coarser scales.

The unit time scale, ε = b^0 = 1, captures the standard time axis, which is one-dimensional as long as the system can be described as non-complex. In other words, the y-axis dynamics do not occur, and – consequently – it makes no sense to talk about a parallel process in progress**. In this standard scale-specific framework, time is one-dimensional and describes scale-specific processes realistically. This includes the vast theory of low-order Markovian processes (“mechanistic” modeling), the mathematical theory of differential equations (calculus), and standard statistical mechanics.

For a deeper argument for why a PP kind of fundamental system expansion seems necessary for a realistic description of system complexity, read my book and my previous blog posts. By the way, it should of course be considered part of a theoretical framework in progress.

The ε-concept was introduced in my book to allow for complex dynamics within a non-Markovian physical architecture. In other words, to allow for a proper description of parallel processing the concept of time as we know it in standard modeling in my view needs to be heuristically expanded to a two-dimensional description of dynamics.

The bottom line: it works! In particular, it seems to survive the acid tests when applied to empirical data, both with respect to individual space use and population dispersion.

The environment concept is hereby expanded with a two-dimensional representation of dynamical time. This implies that an individual’s environment not only consists of its three-dimensional surroundings at a given point in time but also its temporal “surroundings”, due to the log-compliant (scale-free) stretching of time. In this manner an implementation of parallel processing turns the common Markovian, mechanistically modeled framework into a special case. In this special case of standard mechanistic dynamics, a given process may be realistically represented either by a scale-specific process at a given (unit) scale or by a trivial linear superposition of such processes (e.g., a composite random walk toggling between different magnitudes of the diffusion parameter for each “layer”). Complexity, on the other hand, arises when such a description based on one-dimensional time is not sufficient to reproduce the system realistically.

Observe that in a PP system several events (changes of system state) may be executed in parallel! In the illustration above, see for example the situation at t=5, where events at three different time scales (as defined by ε) happen to be initiated simultaneously. Such dynamics represent a paradox within the constraints of a Markovian (mechanistic) system.

An earlier illustration of the PP framework was given here. For other examples, search this blog for “parallel processing” or read my book.

Various aspects of scaling in animal space use – power law scaling of displacement lengths (Lévy-like distribution), fractal dispersion of GPS fixes (the home range ghost model) and scale-free distribution of populations (Taylor’s power law and the Zoomer model) – may be natural outcomes of systems that obey the PP conjecture.

NOTE

*) The base, b, of the logarithm does not matter; any base b > 1 introduces a scaling of the ε-axis.

**) In a standard, mechanistic process an event describes a change of system state at a given point in space at a given point in time. No “time stretching” takes place.