### The Inner Working of Parallel Processing

The concept of scale-free animal space use is becoming increasingly difficult to avoid in modeling and statistical analysis of data. The empirical support for power law distributions continues to pile up, whether the pattern appears in GPS fixes of black bear movement or in the spatial dispersion of a population of sycamore aphids. What is the general class of mechanism, if any? In my approach to this challenging and often frustrating field of research on complex systems, one particular conjecture – parallel processing (PP) – percolates through the model architecture. PP requires a non-mechanistic kind of dynamics. Does that sound like a contradiction in terms? To illustrate PP in a simple graph, let’s roll dice!

The basic challenge is how to model a process that consists of a mixture of short-term tactics and longer-term (coarser-scale) strategic goals. Consider that the concept of “now” for a tactical response concerns a temporally finer-grained event than “now” at the time scale of a more strategic event, which consequently takes place within a more “stretched” time frame relative to the tactical scale.

Strategy is defined in a hierarchy-theoretical manner: coarser-scale strategy invokes a constraint on finer-scaled events (references in my book). For example, while an individual executes a strategic change of state, like starting a relatively large-distance displacement towards a goal, finer-scaled events during this execution – consider shorter-term goals – are processed freely, but within the top-down constraint that they should not hinder the execution of the coarser goal. Hence, the degrees of process freedom increase with the scale distance between a given fine-scaled goal and a coarser-scaled goal.

To illustrate such a PP-compliant scale range from tactics to strategy within an extended statistical-mechanical system, consider the two-dimensional graph to the right. The x-axis represents a sequence of unidirectional classic time, and the y-axis represents a log2-scaled* expression of time’s orthogonal axis, “elacs” (ε), along this sequence.

The continuous x-y plane has been discretized for simpler conceptualization, and each (x,y) pair shows a die. This die represents a potential change of state of the given process at the given point in time and at the given temporal scale. An actual change of state at a given (t,ε) location is marked by a yellow die, while a white die describes an event still in process at this scale. The respective number of pips on each die could represent a set of available states for a given system variable at this scale. To illustrate complex dynamics (over-)simplistically in terms of concepts from quantum mechanics, consider each magnitude of ε on the y-axis to represent a wavelength in a kind of “complex system” wave function, and each yellow die to represent a “collapse” of this probability wave into a specific execution of the given event at a given point in time at this time scale.

As the system is viewed towards coarser time scales (larger ε), the average frequency of change of state vanishes proportionally with 1/ε = 1/b^z, where b is the logarithmic base and increasing z describes an increasing scale level b^z. In other words, the larger the z, the more “strategic” a given event at this scale. In short, consider that each die on scale level z=0 (ε = b^0 = 1) is rolled at each time increment at this subjectively chosen unit scale; i.e., a sequence of events t=1, t=2, …, t=8. At the next coarser time resolution, each die at level z=1 (ε = b^1 = 2) is on average rolled at each second time increment, and so on.
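The frequency scaling can be sketched in a few lines of code. This is my own illustrative construction, not a model from the book: a “roll” at scale level z is simplified to an independent Bernoulli trial with per-increment probability 1/b^z, so the unit-scale dice fire every increment while coarser levels fire correspondingly more rarely.

```python
import random

random.seed(42)

B = 2        # logarithmic base of the elacs (epsilon) axis
LEVELS = 8   # scale levels z = 0..7, i.e. epsilon = 1, 2, 4, ..., 128
T = 8        # unit-time increments t = 1..8, as in the illustration

# events[z] collects the time increments at which a change of state
# (a "yellow die") is executed at scale level z.
events = {z: [] for z in range(LEVELS)}

for t in range(1, T + 1):
    for z in range(LEVELS):
        # On average one event per B**z increments at level z,
        # modeled here as an independent Bernoulli trial.
        if random.random() < 1.0 / B ** z:
            events[z].append(t)

for z in range(LEVELS):
    print(f"epsilon = {B ** z:3d}: events at t = {events[z]}")
```

A typical run shows dense event sequences at fine scales and few or no events at the coarsest scales within such a short window, which is the pattern the graph conveys.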

In the illustrative example above, no events have taken place during the eight time increments at the two coarsest scales b^z, where z=7 (ε=128) and z=8 (ε=256). A substantial increase of the observation period would be needed to increase the probability of actually observing such a coarse-scaled change of system state. Such an event could, for example, be an animal suddenly initiating a relatively strategic, long-distance move.
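How much the observation period must be stretched is easy to quantify under a simplifying assumption of independent per-increment trials at the event rate 1/b^z from the text (the function below is my sketch, not part of the source):

```python
def p_at_least_one(z, T, b=2):
    """Probability of observing at least one change of state at scale
    level z within T unit increments, given an event rate of 1/b**z
    per increment and independent trials (a simplifying assumption)."""
    return 1.0 - (1.0 - 1.0 / b ** z) ** T

# Within the 8-increment window of the illustration, an event at z=7
# (epsilon = 128) is unlikely; a much longer window makes it probable.
print(round(p_at_least_one(7, 8), 3))
print(round(p_at_least_one(7, 1000), 3))
```

Within eight increments the chance of catching a z=7 event is only about 6%, while a window of a thousand increments makes it close to certain.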

More strategic events are executed more rarely. Strategic events at a given scale b^z appear to be initiated in a stochastic manner when observed from a finer time scale (smaller z), but increasingly deterministically when observed from coarser time scales. At finer scales such a strategic event may be inexplicable (thus appearing unexpectedly at a given point in time), while the causal relationship of the given process becomes established (visible) when the process is observed at the properly coarsened time scale. However, at each time scale there is an element of surprise, due to influence from even coarser-scale constraints and the even lower-frequency changes of system state at these coarser scales.

The unit time scale, ε = b^0 = 1, captures the standard time axis, which is one-dimensional as long as the system can be described as non-complex. In other words, in this standard framework no dynamics occur along the y-axis, and – consequently – it makes no sense to talk about a parallel process in progress**. In this standard scale-specific system, time is one-dimensional and hence describes scale-specific processes realistically. This includes the vast theories of low-order Markovian processes (“mechanistic” modeling), the mathematical theory of differential equations (calculus), and standard statistical mechanics.

For a deeper argument for why a PP kind of fundamental system expansion seems necessary for a realistic description of system complexity, read my book and my previous blog posts. By the way, these should at this stage be considered pieces of a theoretical framework in progress.

The ε-concept was introduced in my book to allow for complex dynamics within a non-Markovian physical architecture. In other words, to allow for a proper description of parallel processing, the concept of time as we know it from standard modeling needs, in my view, to be heuristically expanded to a two-dimensional description of dynamics.

The bottom line: it works! In particular, it seems to survive the acid tests when applied to empirical space use data, both with respect to individual space use and population dispersion.

The environment is hereby expanded with a two-dimensional representation of dynamical time. This implies that an individual’s environment consists not only of its three-dimensional surroundings at a given point in time but also of its temporal “surroundings”, due to the log-compliant (scale-free) stretching of time. In this manner an implementation of parallel processing turns the common Markovian, mechanistically modeled framework into a special case. In this special case of standard mechanistic dynamics, a given process may be realistically represented either by a scale-specific process at a given (unit) scale or by a trivial linear superposition of such processes (e.g., a composite random walk toggling between different magnitudes of the diffusion parameter for each “layer”, as was illustrated in the first Figure above). On the other hand, complexity arises when such a description based on one-dimensional time is not sufficient to reproduce the system realistically.
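A minimal sketch of that special case may be helpful (my construction; the parameter names and values are arbitrary): a one-dimensional composite random walk that toggles its diffusion parameter between layers on a fixed schedule. Time stays strictly one-dimensional throughout, which is exactly what makes it Markovian and scale-specific.

```python
import random

random.seed(1)

def composite_rw(steps, sigmas, toggle_every):
    """One-dimensional composite random walk: a scale-specific process
    toggling between diffusion layers (step-length standard deviations)
    on a fixed schedule, i.e. a linear superposition of unit-scale
    processes within ordinary one-dimensional time."""
    x, path, layer = 0.0, [0.0], 0
    for t in range(1, steps + 1):
        x += random.gauss(0.0, sigmas[layer])
        path.append(x)
        if t % toggle_every == 0:
            layer = (layer + 1) % len(sigmas)  # switch diffusion layer
    return path

path = composite_rw(steps=100, sigmas=[0.1, 1.0, 10.0], toggle_every=20)
print(f"{len(path) - 1} steps, final position {path[-1]:.2f}")
```

Each increment depends only on the current state and the current layer, so no amount of layer-toggling introduces the scale-stretched “elacs” dimension discussed above.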

In a PP system, several events (changes of system state) may be executed in parallel! In the illustration above, see for example the situation at t=5, where events at three different time scales (as defined by ε) by chance are initiated simultaneously. Such dynamics represent a paradox within the constraints of a Markovian (mechanistic) system.

An earlier illustration of the PP framework was given here. For other examples, search this blog for “parallel processing” or read my book.

Various aspects of scaling in animal space use – power-law scaling of displacement lengths (Lévy-like distributions), fractal dispersion of GPS fixes (the Home range ghost model), and scale-free distribution of populations (Taylor’s power law and the Zoomer model) – may be natural outcomes of systems that obey the PP conjecture.
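A hand-wavy numerical illustration of why such power laws could emerge (entirely my construction, not one of the cited models): if events at scale level z occur with frequency proportional to 1/b^z and each contributes a displacement of magnitude b^z, the aggregated step lengths inherit a 1/L frequency distribution.

```python
import random
from collections import Counter

random.seed(7)

B, LEVELS, N = 2, 10, 100_000

# Pick scale levels with probability proportional to their event
# frequency 1/B**z; each event displaces the animal by B**z.
weights = [1.0 / B ** z for z in range(LEVELS)]
zs = random.choices(range(LEVELS), weights=weights, k=N)
counts = Counter(B ** z for z in zs)

# Frequencies fall roughly as 1/L: doubling the length halves the count.
for L in (1, 2, 4, 8, 16):
    print(f"L = {L:2d}: {counts[L]} events")
```

Doubling the displacement length roughly halves its frequency, i.e. a straight line on a log-log plot; the point of the sketch is only that a scale-free mixture of event frequencies yields a scale-free step-length distribution, not that this reproduces any particular empirical exponent.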

NOTES

*) The base, b, of the logarithm does not matter; any base b > 1 introduces scaling of the ε-axis.

**) In a standard, mechanistic process an event describes a change of system state at a given point in space at a given point in time. No “time stretching” takes place.