Please observe: there is an unfortunate typo on page 2 of my book. The correct © year should read 2015, not 2016. Thus, please refer to the book as Gautestad (2015), which is in compliance with the ISBN record.
In empirical data, GPS fixes are never exact positions. A “fuzziness field” will always be introduced due to uncertain geolocation. When analyzing a set of fixes in the context of multi-scaled space use, are the parameter estimates sensitive to this kind of statistical error? In parallel, I also explore the effect of constraining the potential home range by disallowing sallies to the outermost range of the available area.
To explore the effect of triangulation error on space use analysis I have simulated Multi-scaled random walk in a homogeneous environment with N=10,000 fixes (of which the first 1,000 fixes were discarded) under two scenarios: a “sharp” location condition (no uncertainty polygons), and strong fuzziness. The latter introduced a random displacement to each x-y coordinate with a standard deviation (SD) of magnitude approximately equal to the system condition’s Characteristic scale of space use (CSSU). Displacements to the outermost parts of the given arena were disallowed, to study how this may influence the analyses. I then ran the following three algorithms in the MRW Simulator: (a) analysis of A(N) at the home range scale, (b) analysis of A(N) at a local scale (splitting the home range into four quadrants), and (c) analysis of the fix scatter’s fractal property.
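For readers who want to emulate the fuzziness condition on their own fix sets, the displacement step can be sketched as follows (a minimal illustration in Python; the function name is mine, and the MRW Simulator’s internal routine is not shown here):

```python
import random

def add_fuzziness(fixes, sd, rng):
    """Displace each (x, y) fix by independent Gaussian noise with
    standard deviation sd, mimicking triangulation error."""
    return [(x + rng.gauss(0, sd), y + rng.gauss(0, sd)) for x, y in fixes]

rng = random.Random(42)
sharp = [(100.0, 200.0), (150.0, 180.0), (90.0, 220.0)]
fuzzy = add_fuzziness(sharp, sd=1182.0, rng=rng)  # SD on the order of the CSSU
```

Since the jitter is independent per coordinate, coarse-scale structure of the fix set is left essentially untouched, which is the intuition behind the results below.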
The following image shows the sharp fix set and the strongest fuzziness condition.
By visual inspection it is easy to spot the effect of the spatial error (SD = 1182 length units, upper row to the right). However, the respective A(N) analyses at the home range scale generated a CSSU estimate that was only 10% larger in the fuzzy set (linear scale). When superimposing cells of CSSU magnitude onto each fix, the home ranges appear quite similar in overall appearance and size. This was to be expected, since fuzziness influences fine-resolution space use only.
Visually, both home range areas appear somewhat constrained with respect to range, due to the condition disallowing displacements to the peripheral parts of the defined arena (influencing less than 1% of the displacements).
A(N) analysis (the Home range ghost). The two conditions appeared quite similar in plots of log[I(N)], where I is the number of fix-embedding pixels at optimized pixel resolution, as described in previous posts.
However, for the fuzzy series there is a more pronounced break-point, with a transition towards exponent ∼0.5 for sample sizes larger than log2(N) ≈ 3. This break-point “lifted” the regression line somewhat for the fuzzy series, leading to a slightly larger intercept with the y-axis when interpolating towards log(N) = 0. This difference in y-intercept between the two conditions, Δlog(c) from the home range ghost formula log[I(N)] = log(c) + z*log(N), also defines the difference in CSSU between the sharp and fuzzy data sets. Recall that CSSU ≡ c.
The spatial constraint on extreme displacements relative to the respective home ranges’ centres apparently did not influence these results.
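The CSSU estimation itself boils down to a log-log regression on the home range ghost formula log[I(N)] = log(c) + z*log(N). A minimal sketch (function name and synthetic numbers are mine, not the simulated series):

```python
import math

def fit_home_range_ghost(N_values, I_values):
    """Least-squares fit of log2(I) = log2(c) + z*log2(N).
    Returns (c, z); under the MRW model c estimates the CSSU."""
    xs = [math.log2(n) for n in N_values]
    ys = [math.log2(i) for i in I_values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    z = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_c = my - z * mx
    return 2 ** log_c, z

# Synthetic example obeying I = 4 * N**0.5, i.e. c = 4 and z = 0.5
N = [2 ** j for j in range(3, 11)]
I = [4 * n ** 0.5 for n in N]
c, z = fit_home_range_ghost(N, I)  # recovers c ≈ 4 and z ≈ 0.5
```

Comparing the fitted c for the sharp and fuzzy series gives Δlog(c) directly.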
I have also superimposed local CSSU analysis for each of the four quadrants of the two home ranges. When the area extent for analysis is constrained in this manner, i.e., spatial extent is reduced to 1:4 in area terms (1:2 linearly) for each local sub-set of fixes, the respective (N,I) plots need to be adjusted by a factor that compensates for the difference in scale range.
Since the present MRW conditions were run under fractal dimension D=1, each local log2[(N,I)] plot is rescaled by adding D*log2(Δ), where Δ = 2 is the relative (linear) change of scale; i.e., a shift of +1 under the condition D=1. After this rescaling the overall CSSU and the local CSSU overlap, as shown by the regression lines in the A(N) analyses above. Overlapping CSSU implies that the four quadrants had similar space use conditions, which is true in this simplified case.
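The quadrant adjustment can be expressed as a one-line rescaling (a sketch under the stated condition D = 1; the function name is mine):

```python
import math

def rescale_local_plot(log2_I_values, scale_ratio=2, D=1):
    """Shift a quadrant's log2(I) values to make them comparable with
    the whole-arena A(N) analysis: halving the linear extent
    (scale_ratio = 2) under fractal dimension D adds D*log2(scale_ratio),
    i.e. +1 under the present condition D = 1."""
    shift = D * math.log2(scale_ratio)
    return [v + shift for v in log2_I_values]

shifted = rescale_local_plot([3.0, 4.0, 5.0])  # each value lifted by +1
```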
Fractal analysis. The Figure below shows the magnitude of incidence I as a function of relative spatial resolution k (the “box counting method”), spanning the scale range from k=1 (the entire arena, linear scale 40,000 units) down to k = 40,000/(2^12) = 9.8 units, linear scale*.
Starting from the coarsest resolution, k=1, the log-log regression obviously shows I = 1. At resolutions k=1:2 and k=1:4 (4 and 16 grid cells, respectively), I = 4 in both cases. In other words, all boxes contain fixes at k=1:2, apparently indicating a two-dimensional object, while at k=1:4 some cells (12 of 16) are peeled away as empty space from the periphery of the fix scatter at this finer resolution.
This coarse-scale pattern is a consequence of the defined space use constraint. Disallowing “occasional sallies” outside the core home range obviously influences the number of non-empty boxes relative to all available boxes at the coarse resolutions 1 ≥ k ≥ 1:4.
However, at progressively finer resolutions – below the “space fill effect” range, which extends down to ca. k=1:32 – the true fractal nature of the fix scatter begins to appear as log-log linearity, confirming a statistical fractal with a stable dimension D ≈ 1.1 over the resolution range 2^-9 < k < 2^-5 (log-log slope of -1.1). At still finer resolutions, the dilution effect flattens further expansion of I. The D ≈ 1.1 scale range is close to the expectation from the simulation condition D_true = 1, while the deviations above and below this range are trivial statistical artifacts from space filling and dilution.
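The box-counting procedure behind these estimates can be sketched as follows (a minimal illustration on a synthetic point set, not the simulator’s own code):

```python
import math

def box_count_dimension(points, extent, levels):
    """Box counting: at level j, split the arena (linear size `extent`)
    into 2**j x 2**j cells and count the non-empty cells I(j); the
    least-squares slope of log2(I) against j estimates D."""
    counts = []
    for j in levels:
        size = extent / 2 ** j
        occupied = {(int(x // size), int(y // size)) for x, y in points}
        counts.append(len(occupied))
    xs = list(levels)
    ys = [math.log2(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    D = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return D, counts

# Points along a straight line have D = 1 by construction
line = [(i * 10.0, 500.0) for i in range(4000)]
D, counts = box_count_dimension(line, extent=40000.0, levels=range(1, 9))
```

In practice the regression should of course be restricted to the resolution range between the space fill and dilution artifacts, as described above.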
The most interesting pattern regards the finest resolution range, where the fuzzy set of fixes somewhat unexpectedly follows a similar log(k,I) pattern as the non-fuzzy set. However, the slight increase in D to 1.17 may be caused by the fuzziness (a proportionally stronger space-fill effect as resolution increases).
To conclude, as long as the magnitude of position uncertainty does not exceed the individual’s CSSU scale under the actual conditions, the A(N) analyses of MRW-compliant space use do not seem to be seriously influenced by location fuzziness. The “fix scrambling” error is for the most part confined to finer scales than the CSSU.
However, the story doesn’t end here. I have superimposed a dotted red line onto the Figure above. Overlapping with the D=1 section of grid resolutions, the line is extrapolated towards the intersection with log(I) ≈ 0 at a scale that is 2^1.5 ≈ 2.8 times larger (linearly) than the actual arena for the present analyses. In other words, in the absence of area constraint (and disregarding the step length constraint from the individual’s limited movement speed) one should expect the actual set of fixes to “fill up” the missing incidence over the coarse scale range, leading to D ≈ 1 for the entire range from the dilution effect range towards coarser scales.
I have also marked the CSSU scale as the midpoint of the red line. A resolution of log2(k)=-5.5 is in fact very close to the CSSU estimate from the A(N) method (k=1:50 of actual arena using the A(N) method, versus 1:35 of the arena according to the fractal analysis). This alternative method to estimate CSSU was first published in this blog post.
A preliminary development towards this approach was explored both theoretically and empirically in Gautestad and Mysterud (2012).
*) This analysis of N = 8,000 fixes, spanning box counting of 1, 4, 16, 64, … 16.8 million grid cells at the respective scales, took ca. 10 hours per series in the MRW Simulator on a laptop computer. I suppose this enormous number crunching would be outside the practical range of a similar algorithm in R.
Gautestad, A. O., and I. Mysterud. 2012. The Dilution Effect and the Space Fill Effect: Seeking to Offset Statistical Artifacts When Analyzing Animal Space Use from Telemetry Fixes. Ecological Complexity 9:33-42.
To understand populations’ space use one needs to understand the individual’s space use. To understand the individual’s space use one needs to acknowledge the profound influence of spatio-temporal memory capacity combined with multi-scale landscape utilization, which continues to be empirically verified at a high pace in a surprisingly wide range of taxa. Complex space use has wide-ranging consequences for the traditional way of thinking when it comes to formulating these processes in models. In a nutshell, the old and hard-dying belief in the balance of nature needs a serious re-formulation, since complexity implies “strange” fluctuations of abundance over space, time and scale. A fresh perspective is needed with respect to inter-species interactions (community ecology) and environmental challenges from habitat destruction, fragmentation and chemical pollution. We need to address the challenge by rethinking also the very basic level of how we perceive an ecosystem’s constituents: how we assume individuals, populations and communities to relate to their surroundings in terms of statistical mechanics.
Stuart L. Pimm summarizes the Grand Ecological Challenge well in his book The Balance of Nature? (1991). Here he illustrates the need to rethink old perceptions linked to the implicit balancing principle of carrying capacity*, and he stresses the importance of understanding limits to how far population properties like resilience and resistance may be stretched before cascading effects appear. In particular, he advocates the need to extend the perspective from short-series, local-scale population dynamics to long-term and broad-scale community dynamics. In this regard, his book is as timely today as it was 27 years ago. However, in my view the challenge goes even deeper than the need to extend spatio-temporal scales and the web of species interactions.
My own approach towards the Grand Ecological Challenge started with similar thoughts and concerns as raised by Pimm**. However, as I gradually drifted from being a field ecologist towards actually attempting to model parsimonious population systems I found the theoretical toolbox to be void of key instruments to build realistic dynamics. In fact, the current methods were in many respects even seriously misleading, due to what I considered some key dissonant model assumptions.
In my book (Gautestad 2015), and here in my subsequent blog, I have summarized how – for example – individual-based modelling generally rests on a very unrealistic perception of site fidelity (March 23, 2017: “Why W. H. Burt is Now Hampering Progress in Modern Home Range Analysis“). I have also found it necessary to start from scratch when attempting to build what I consider a more realistic framework for population dynamics (November 8, 2017: “MRW and Ecology – Part IV: Metapopulations?“), for the time being culminating with my recent series of posts on “Simulating Populations” (parts I-X).
I guess the main take-home message from the present post is:
*) The need to rethink the concept of carrying capacity and the accompanying “balance” (density-dependent regulation) should be obvious from the simulations of the Zoomer model. Here a concept of carrying capacity (called CC) is introduced at a local scale only, where – logically – the crunch from overcrowding is felt by the individuals. By coarse-graining to a larger pixel than this finest system resolution we get a mosaic of local population densities, where each pixel contains a heterogeneous collection of intra-pixel (local) CC levels. If “standard” population dynamic principles apply, the population change when averaging the responses over a large number of pixels with similar density should be the same whether one considers the density at the coarser pixel or the average density of the embedded finer-grained sub-pixels. This mathematical simplification follows from the mean field principle. In other words, the sum equals the parts. On the other hand, if the principle of multi-scaled dynamics applies, two pixels at the coarser scale containing a similar average population density may respond differently during the next time increment, due to inter-scale influence. At any given resolution the dynamics is a function not only of the intra-pixel heterogeneity within the two pixels but also of their respective neighbourhood densities; i.e., the condition at an even coarser scale. The latter is obviously not compliant with the mean field principle, and thus requires a novel kind of population dynamical modelling.
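The point about intra-pixel heterogeneity can be illustrated with a toy calculation (logistic growth is chosen here only for concreteness; it is not the Zoomer model’s rule): two coarse pixels with identical average density but different sub-pixel variance respond differently under density-dependent local growth, so the mean field shortcut is only exact when intra-pixel variance vanishes.

```python
def logistic_step(n, r=0.5, cc=100.0):
    """Density-dependent growth within one fine-scale cell (toy rule)."""
    return n + r * n * (1 - n / cc)

# Two coarse pixels, each holding four fine-scale cells with mean density 50
homogeneous = [50.0, 50.0, 50.0, 50.0]
heterogeneous = [10.0, 30.0, 70.0, 90.0]

growth_a = sum(logistic_step(n) for n in homogeneous) - sum(homogeneous)
growth_b = sum(logistic_step(n) for n in heterogeneous) - sum(heterogeneous)
# Equal coarse-pixel means, unequal responses: 50 vs 30 new individuals
```

The Zoomer model’s inter-scale influence from neighbourhood densities goes beyond this simple effect, but the toy example already shows why the sum need not equal the parts.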
**) In the early days I was particularly inspired by Strong et al. (1984), O’Neill et al. (1986) and L. R. Taylor; for example, Taylor (1986).
Gautestad, A. O. 2015. Animal Space Use: Memory Effects, Scaling Complexity, and Biophysical Model Coherence. Indianapolis, Dog Ear Publishing.
O’Neill, R. V., D. L. DeAngelis, J. B. Wade, and T. F. H. Allen. 1986. A Hierarchical Concept of Ecosystems. Monographs in Population Biology. Princeton, Princeton University Press.
Pimm, S. L. 1991. The balance of nature? Ecological issues in the conservation of species and communities. Chicago, The University of Chicago Press.
Strong, D. E., D. Simberloff, L. G. Abele, and A. B. Thistle (eds). 1984. Ecological Communities: Conceptual Issues and the Evidence. Princeton, Princeton University Press.
Taylor, L. R. 1986. Synoptic dynamics, migration and the Rothamsted insect survey. J. Anim. Ecol. 55:1-38.
In Part IX a standard coupled map lattice model was shown to be able to display a 1/f power spectrum, given careful tuning of one of the conditions for population mixing. On the other hand, I also showed that the Zoomer model under similar general conditions showed a higher resilience with respect to the 1/f property. In the present post I explore this statistical resilience of the Zoomer model further, by stressing the population system towards other corners of extreme conditions. However, I start by elaborating on this pressing question: why this recurrent focus on 1/f noise? To stimulate your curiosity I also reproduce from my book two analyses of empirical data: the sycamore aphid Drepanosiphum platanoides and the leaf miner Leucoptera meyricki.
First, a brief bird’s-eye view of complex dynamics. I have repeatedly given partial answers to the question “why focus on 1/f noise?”, but here I seek to give you a broader perspective, starting with a citation from Scholarpedia:
1/f fluctuations are widely found in nature. During 80 years since the first observation by Johnson (1925), long-memory processes with long-term correlations and 1/f^α (with 0.5 ≲ α ≲ 1.5) behavior of power spectra at low frequencies f have been observed in physics, technology, biology, astrophysics, geophysics, economics, psychology, language and even music…
…1/f noise can not be obtained by the simple procedure of integration or of differentiation of such convenient signals. Moreover, there are no simple, even linear stochastic differential equations generating signals with 1/f noise. The widespread occurrence of signals exhibiting such behavior suggests that a generic mathematical explanation might exist. Except for some formal mathematical descriptions like fractional Brownian motion (half-integral of a white noise signal), however, no generally recognized physical explanation of 1/f noise has been proposed. Consequently, the ubiquity of 1/f noise is one of the oldest puzzles of contemporary physics and science in general.
Ward and Greenwood (2007)
As scientists – whether we are ecologists or working in other fields – we are of course curious about exploring this hard nut to crack, in particular since natural populations (paradoxically, if judged by expectation from standard theory) tend to show 1/f noise in their spatial variation and temporal fluctuations. Here are two examples from my book:
Both series show close agreement with 1/f noise; and their respective derivatives trivially show similar compliance with f noise (power changing proportionally with frequency, shown as dashed plots). The first example is a spatial transect of sycamore aphids (my own data), and the second example shows my power spectrogram analysis based on a time series collected from Bigger and Tapley (1969) of the coffee leaf miner Leucoptera meyricki. Both data sets and methods are described in detail in my book.
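For readers who want to test their own series, the exponent μ in a 1/f^μ spectrum can be estimated by a log-log regression on the periodogram. A minimal sketch (plain DFT, adequate for short series; the function name is mine):

```python
import cmath, math, random

def spectral_exponent(series):
    """Estimate mu in S(f) ~ 1/f**mu by a log-log regression on the
    periodogram. mu ≈ 0 is white noise, mu ≈ 1 pink (1/f) noise,
    mu ≈ 2 red noise."""
    n = len(series)
    mean = sum(series) / n
    xs = [v - mean for v in series]
    log_f, log_p = [], []
    for k in range(1, n // 2):
        coeff = sum(x * cmath.exp(-2j * math.pi * k * t / n)
                    for t, x in enumerate(xs))
        log_f.append(math.log(k / n))
        log_p.append(math.log(abs(coeff) ** 2 / n))
    m = len(log_f)
    mf, mp = sum(log_f) / m, sum(log_p) / m
    slope = sum((f - mf) * (p - mp) for f, p in zip(log_f, log_p)) / \
            sum((f - mf) ** 2 for f in log_f)
    return -slope

rng = random.Random(1)
white = [rng.gauss(0, 1) for _ in range(256)]
mu = spectral_exponent(white)  # close to 0 for white noise
```

For longer series an FFT routine should replace the plain DFT, and averaging over windowed segments will stabilize the estimate.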
The burning question then arises: what kind of spatially extended population model may reproduce 1/f noise over space, time and scale? Within the current paradigm the coupled map lattice (CML) approach fails, since it by design cannot reproduce long-distance and far-back influence on a present location’s dynamics*. The standard calculus approach (ordinary and partial differential equations) also fails, since models of this design rest on the mean field approximation. In short, a novel approach is needed. As previously stated, for the time being and to my knowledge the Zoomer model is the only candidate standing the test when confronting simulations with the statistical-mechanical properties of real data.
In a series of posts I have now, in a step-by-step manner, presented the Zoomer model and compared it with the paradigmatic approach in the field, the coupled map lattice (CML) design. In Part IX the CML model was shown to be more sensitive to the population’s response to local overcrowding (the CC level): only 100% redistribution of individuals where CC was temporarily exceeded led to a 1/f pattern. The Zoomer model showed 1/f whether 40% or 100% of the individuals emigrated; in other words, strong statistical resilience in this regard. However, the best way to test a model’s intrinsic dynamics is to stress it to its limit. Thus, below I test the effect of changing the redistribution behaviour of the individuals that emigrate from the “CC overshoot” localities.
So far, these individuals have been set to redistribute themselves to other cells in “swarms”, each containing X% of all swarmers at the actual point in time. For example, if two cells have an overshoot event at this point in time with a total number of dispersers of – say – 10,000 individuals, these dispersers settle in 5% of the other cells (of the 1,024 cells available) in swarms of ≈ 195 individuals per receiving cell, chosen randomly.
Below I explore two variants of this behaviour: (1) the dispersers settle randomly and individually among the available cells, and (2) the dispersers settle in only 1% of the cells. The former implies a smoother dispersal kernel, the latter a more collective mode with large swarms settling in just a few cells.
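The baseline swarm rule can be sketched as follows (a hypothetical helper, not the simulator’s code; variant (2) corresponds to frac_cells = 0.01, while variant (1) amounts to drawing a cell independently for every disperser):

```python
import random

def redistribute_swarms(n_dispersers, n_cells, frac_cells, rng):
    """Split the dispersers into near-equal swarms over a randomly
    chosen fraction of the cells (the baseline CC-overshoot rule)."""
    k = max(1, round(n_cells * frac_cells))
    targets = rng.sample(range(n_cells), k)
    base = n_dispersers // k
    swarms = {cell: base for cell in targets}
    for cell in targets[: n_dispersers - base * k]:  # hand out the remainder
        swarms[cell] += 1
    return swarms

rng = random.Random(7)
swarms = redistribute_swarms(10_000, 1024, 0.05, rng)
# 5% of 1,024 cells -> 51 receiving cells of roughly 196 individuals each
```

Sweeping frac_cells from 1/n_dispersers-like smoothness towards 0.01 then spans the contagion gradient explored below.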
In short, the power spectrum of the time series under the smooth dispersal condition shows white noise (1/f^0), while the extremely clumped dispersal shows something closer to red noise (1/f^1.8).**
To conclude, the 1/f property in the Zoomer model seems to be critically sensitive to the behaviour of the dispersing individuals during local crowding (boom/bust) events. Whether the bust led to local meltdown (0% remaining) or not (60% remaining), the 1/f pattern was robust. However, how the emigrating individuals re-settled (independently or in a contagious manner) determined 1/f compliance. An intermediate level of contagion was in best compliance with 1/f, and thereby with the empirical results shown above. Intermediate contagion implies an intermediate level of population disturbance frequency***.
This conclusion also fits well with the general model property of zooming, which reflects conspecific attraction in overall terms. The “balanced” and intermediate kind of zooming follows from the generic condition that the strength of zooming is distributed evenly over the actual range of spatial scales (k^-b zoomers to a k^b larger grid area, with b=1). However, while zoomers re-distribute themselves contagiously in a spatially explicit manner, through a process that depends on the individuals’ spatial memory of past experiences, the swarms of emigrants in the current Zoomer design settle randomly. “Stressed” individuals (emigrants from local crunch events) are assumed to redistribute themselves in a tactical manner, temporarily following the collective behaviour of a swarm during the actual time increment.
Interestingly, the spatial log(M,V) pattern for the white noise condition shows – as expected – a similarly rattled pattern, with slope b ≈ 1 and log(a) >> 0. However, in the contagious dispersal scenario the spatially self-similar property b ≈ 2 and log(a) ≈ 0 is maintained. Hence, the spatial pattern is still fractal-compliant, with similar properties as for other levels of contagion during re-distribution events, despite the more classic “random walk-like” time series of fluctuations.
To conclude, by studying a population’s variation of abundance over space, time and scale it should be possible to analyze a wide range of key ecological signatures from the data series’ statistical properties; for example to what extent the population under the given environmental conditions adheres to a multi-scaled and scale-free kind of intrinsically driven population dynamics/kinetics, and how the population responds to local crowding events. Hopefully, this statistical-mechanical approach may lead to more realistic theory and thus better predictive power. Bringing population dynamical modelling and some basic empirical properties of real populations in closer agreement is long overdue in this important field of ecology.
Crucially, these basic analyses should also be able to cast light on to what extent the dynamics are compliant with expectations from a classical modeling regime; i.e., a Markov- and mean field compliant kind of statistical mechanics, or the alternative framework based on the parallel processing conjecture for individual space use (the MRW model and its population level version, the Zoomer model). This challenge is of course the first obstacle that has to be passed. Hopefully the two insect examples above will inspire others to perform broader and deeper tests – for example, based on my recent series of blog posts.
Bigger, M., and R. G. Tapley. 1969. Prediction of outbreaks of coffee leaf-miners on Kilimanjaro. Bulletin of Entomological Research 58:601-618.
Ward, L. M. and P. E. Greenwood. 2007. 1/f noise. Scholarpedia 2(12):1537.
*) The 1/f pattern from a CML example in Part IX was a 1/f look-alike due to some tweaking of conditions, but it failed when other statistical aspects were scrutinized.
**) As in previous scenarios, the time scale is set such that the population’s net growth rate is only 0-1% per increment, putting the focus on the higher-rate population mixing processes: diffusion of 1% for the CML examples, zooming of 5% in the Zoomer examples (distributed with 1% per spatial scale), and a stochastic magnitude of intrinsic re-distribution of individuals from local CC-linked events under both platforms.
***) By intermediate frequency of disturbance from intrinsic reshuffling of individuals I mean that a given locality’s frequency of disruption by swarms of dispersing individuals from other localities depends on swarm size: larger swarms mean fewer locations where they settle. On one hand, large swarms (stronger contagion among emigrants) arrive more rarely at a given locality in statistical terms, but have a stronger local impact when a swarm does arrive. On the other hand, an additional disruptive event from such an influx may happen within the next time increment if the sum of existing and arriving individuals surpasses the local CC level. One also has to consider that a higher local abundance due to arriving immigrants may change a declining local abundance into an increasing one, due to the zooming process of conspecific attraction. The abundance level as seen from a broader neighbourhood scale may increase sufficiently to toggle this neighbourhood (including the actual cell receiving the immigrants) from being a net supplier of individuals (subject to net local population decline) to becoming a net receiver in the time ahead.
This perspective again underscores the importance of studying population dynamics not only over space and time, but also over respective scale axes and in a Parallel processing manner. Hence, a Markovian-compliant model framework is not sufficient to understand complex dynamics/kinetics, including the 1/f property.
In the previous post I presented a time series of the Zoomer model, verifying a 1/f (pink) noise signature. For the sake of comparison I here present a series from scale-specific dynamics (a coupled map lattice model) using a comparable system setup. Interestingly, an apparent similarity with some statistical aspects of the Zoomer result appears, given specific simulation conditions (“fine tuning”, if you wish). These critical aspects may reveal important insight into the complex dynamics of spatially extended systems, with respect to whether individual (“particle”) mixing is driven by spatial memory and inter-individual congregation (multi-scaled zooming) or standard mixing (a CML model with classical, single-scaled diffusion at fine resolutions).
The present CML model conditions are the same as for the Zoomer version with 5-scale zooming of 1% zoomers per scale level and time increment, except for replacing zooming with 1% diffusion between neighbour cells at unit scale. Recall from previous posts in this series – and from the older post where the Zoomer model was described in mathematical terms – that zooming has a component of intraspecific cohesion. In contrast, diffusion is a memory-less (Markov-compliant) kind of re-distribution of individuals.
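The memory-less mixing rule can be sketched as a simple lattice update (an illustration under assumed details: 4-neighbour diffusion with reflecting borders; the simulator may differ):

```python
def diffusion_step(grid, rate=0.01):
    """One memory-less mixing step: every cell sends `rate` of its
    individuals, split equally among its nearest neighbours
    (4-neighbourhood, reflecting borders). Mass is conserved."""
    n = len(grid)
    new = [[cell * (1 - rate) for cell in row] for row in grid]
    for i in range(n):
        for j in range(n):
            neighbours = [(i + di, j + dj)
                          for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                          if 0 <= i + di < n and 0 <= j + dj < n]
            share = grid[i][j] * rate / len(neighbours)
            for a, b in neighbours:
                new[a][b] += share
    return new

grid = [[100.0 if (i, j) == (2, 2) else 0.0 for j in range(5)] for i in range(5)]
after = diffusion_step(grid)  # 1% of the central cell spreads to 4 neighbours
```

Iterating this step smooths the density surface, which is exactly the mean-field-friendly behaviour described below.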
The spatial density snapshot (upper left) shows the smooth surface expected from even a weak magnitude of memory-less inter-cell mixing. Also observe the intercept log(a) << 0 in the log(M,V) plot.
By the way, such a smooth surface is what modelers of population dynamics tend to love, since it adheres to the mean field property. On the other hand, empirical data typically shows a very rugged surface over a wide range of resolutions (even in a relatively homogeneous environment), statistically in close compliance with what the Zoomer model generates.
At this particular instance in time of the CML simulation above one also sees the effect of a local cell overshooting its carrying capacity (CC) and redistributing its individuals as small swarms among some of the other cells (seen as temporary “warts” on the density surface). The log(M,V) plot shows a slope close to b = 2 and log(a) << 0, as in previous examples of scale-specific CML dynamics. I have previously pointed out that log(a) << 0 (in combination with b ≈ 2) is a hallmark of scale-specific rather than scale-free population dispersion; it does not comply with the “CV = 1” law for scale-free dynamics (b ≈ 2, log(a) ≈ 0). Now and then a larger number of simultaneous local population breakdowns happens, and – as was shown also for the Zoomer condition – this led to a temporary change to b ≈ 1 and log(a) >> 0.
The actual time series (above) appears qualitatively different from the Zoomer equivalent (see Part VIII), being less rugged. Despite this, the correlogram from a series sampled at frequency 1:80 does not reveal any significant difference from the Zoomer model result. However, the log(Mt,V) plot is both more clustered and of a generally smaller magnitude of log(V) than was found in the Zoomer example.
Surprisingly, the power spectrogram (log2-transformed) satisfies pink noise, 1/f^μ with μ ≈ 1, and is thus similar to the Zoomer example:
However, if local population survival when density overshoots CC is increased from 0% to 60%, the spectrum is more flattened (more like white noise). In comparison, a similar change to the CC overshoot behaviour for the Zoomer model did not change the spectrum away from 1/f. Both spectra are shown below.
As regards the other statistical aspects of the present Zoomer example with CC survival of 60%, these are reproduced below:
Despite a more “noisy” time series than under the 0% local survival condition (see the image above for 60%, and compare with Part VIII for 0%), which is also reflected in the spatial log(M,V) plot with b ≈ 1.5 and log(a) >> 0 dominating most of the time (during more quiet intervals the CV = 1 law applies), the temporal log(M,V) plot shows larger log(V) for a given log(M) relative to the CML example. Thus, despite the narrow range of log(M), the level of log(V) is at least close to the line for b ≈ 2.
In summary, the traditional population dynamical modelling approach to apply scale-specific coupled map lattice models may to some degree mimic scale-free pattern generation with respect to the power spectrum by setting CC overshoot survival low (in this instance, survival = 0%, where “survival” implies remaining individuals in the given cell while most of the emigrants survive but redistribute themselves to other cells). On the other hand, the Zoomer model shows stronger statistical resilience over a broader range of aspects, including both the power spectrum and the CV ≈ 1 property.
In a follow-up post I will reveal further theoretical details, by linking the Zoomer model to so-called self-organized criticality in statistical mechanics of complex systems. Step-by-step I’m approaching a shift towards ecological application of the Zoomer model. However; by necessity, theory first.
On rare occasions one’s research effort may lead to a Eureka! moment. While exploring and experimenting with the latest refinement of my Zoomer model for population dynamics I stumbled over a key condition for tuning the model in and out of full coherence between spatial and temporal variability, from the perspective of a self-similar statistical pattern. Fractal-like variability over space and time has been repeatedly reported in the ecological literature, but rarely have such patterns been studied simultaneously as two aspects of the same population. My hope is that the Zoomer model also has broader potential, by casting stronger light on the strange 1/f noise phenomenon (also called pink noise, or flicker noise), which still tends to create paradoxes between empirical patterns and theoretical models in many fields of science.
First, consider again some basic statistical aspects of a Zoomer-based simulation. Above I show the spatial dispersion of individuals within the given arena. See previous posts I-VII in this series for other examples, system conditions and technical details. The new aspect in the present post is a toggle towards temporal variation of the spatial dispersion, and fractal properties in that respect (called a self-affine pattern in a series of measurements, consistent with 1/f noise*). This phenomenon usually indicates the presence of a broad distribution of time scales in the system.
In particular, observe the rightmost log(M,V) regression. As in previous examples, the variable M in Taylor’s power law V = aM^b represents – in my application of the law – a combination of the average number of individuals in a set of samples at different locations at a given scale, supplemented by samples of M at different scales. The latter normally contributes the most to the linearity of the log(M,V) plot, due to a better spread of the range of M. During the present scenario, which was run for 5,000 time steps after skipping an initial transient period, the self-similar and self-organized spatial pattern of slope b ≈ 2 and log(a) ≈ 0 was dominating, only occasionally interrupted by some short intervals with b << 2 and log(a) >> 0. These episodes were caused by simultaneous disruption of a relatively high number of local populations in cells that exceeded their respective carrying capacity level (CC), leading to much simultaneous re-positioning of these individuals to other parts of the arena in accordance with the actual simulation rules. Thus, some temporary “disruptive noise” could appear now and then, occasionally leading to a relatively scrambled population dispersion with b ≈ 1 (Poisson distribution) and log(a) >> 0 for short periods of time.
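Fitting Taylor’s power law from a set of (M, V) pairs is again a log-log regression. A minimal sketch on synthetic numbers (the scale-free case b = 2, log(a) = 0; function name is mine):

```python
import math

def fit_taylor_power_law(means, variances):
    """Least-squares fit of log10(V) = log10(a) + b*log10(M).
    b ≈ 2 with log(a) ≈ 0 is the scale-free 'CV = 1' signature;
    b ≈ 1 indicates a Poisson-like (scrambled) dispersion."""
    xs = [math.log10(m) for m in means]
    ys = [math.log10(v) for v in variances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    log_a = my - b * mx
    return log_a, b

# Synthetic scale-free case: V = M**2 exactly, i.e. log(a) = 0 and b = 2
M = [2.0, 4.0, 8.0, 16.0, 32.0]
V = [m ** 2 for m in M]
log_a, b = fit_taylor_power_law(M, V)
```

Applied to a simulated or empirical series, sudden excursions of the fitted (log_a, b) pair away from (0, 2) flag the scramble episodes described above.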
The actual time series for the present example is given above, showing local density at a given location (a cell at unit scale) over a period of 5,000 increments. The scramble events due to many simultaneous population crashes in other cells over the arena can indirectly be seen as the strongest density peaks in the series. The sudden inflow of immigrants to the given locality caused a disruption to the monitored local population, which typically crashed during the next increment due to density > CC.
In many respects the series illustrates the “frustrating” phenomenon seen in long time series in ecology: the longer the observation period, the larger the accumulated magnitude of fluctuations (Pimm and Redfearn 1988, Pimm 1991). However, in the present scenario, where conditions are known, there is no uncertainty about whether the complex fluctuations are caused by extrinsic (environmental) or intrinsic dynamics. Since CC was set to be uniform over space and time and only varied due to some level of stochastic noise (a constant + a smaller random number), the time series above expresses intrinsically driven, self-organized fluctuations over space, time and scale.
The key lesson from the present analysis is to understand local variability as a function not only of local conditions, but also of coarser-scale events and conditions in the surroundings of the actual population being monitored. The latter is often referred to as long-range dependence. In the Zoomer model this phenomenon is rooted in the emergence of scale-free space use of the population, due to scale-free space use at the individual level (the Multi-scaled random walk model). In the present post I illustrate long-range dependence also in the temporal variability.
To remove most of the serial autocorrelation, the original series above was sampled 1:80. The correlogram to the right shows this frequency to be a feasible compromise between full non-autocorrelation and a reasonably long remaining series length.
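The logic of finding such a compromise interval can be sketched as follows. Again this is only an illustrative Python sketch, not the actual analysis code; the AR(1) toy series, the 95% significance bound 1.96/√n and the function names are my assumptions for the example.

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation function of a 1-D series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    c0 = np.dot(x, x) / len(x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / (len(x) * c0)
                             for k in range(1, max_lag + 1)])

def thinning_interval(series, max_step=200):
    """Smallest sampling interval 1:step at which the thinned series'
    lag-1 autocorrelation drops below the approximate 95% significance
    bound 1.96/sqrt(n), while keeping a reasonably long series."""
    for step in range(1, max_step + 1):
        sampled = series[::step]
        if len(sampled) < 20:
            break
        if abs(acf(sampled, 1)[1]) < 1.96 / np.sqrt(len(sampled)):
            return step, sampled
    return max_step, series[::max_step]

# Toy example: a strongly autocorrelated AR(1) series (phi = 0.95).
rng = np.random.default_rng(1)
x = np.zeros(5000)
for t in range(1, 5000):
    x[t] = 0.95 * x[t - 1] + rng.normal()
step, sampled = thinning_interval(x)
```

The trade-off in the text is visible here: a larger step removes more autocorrelation but leaves a shorter series for the subsequent analysis.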
After this transformation towards observing the series at a different scale, the sampled series was subject to log(Mt,V) analysis, where Mt represents – in a sliding window manner – the mean abundance over a few time steps of the temporally coarser-scaled series, and V represents these intervals’ respective variance.
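The sliding-window log(Mt,V) procedure can be sketched like this. The code is an illustrative Python stand-in for the description above, not the actual analysis tool; the window length of 5 and the multiplicative toy series (constant coefficient of variation, so that V scales as Mt squared) are my assumptions.

```python
import numpy as np

def temporal_taylor(series, window=5):
    """Sliding-window log(Mt, V): mean and variance of abundance within
    each short window of the coarsely sampled series, then a log-log fit."""
    log_m, log_v = [], []
    for i in range(len(series) - window + 1):
        w = series[i:i + window]
        m, v = w.mean(), w.var(ddof=1)
        if m > 0 and v > 0:
            log_m.append(np.log10(m))
            log_v.append(np.log10(v))
    b, log_a = np.polyfit(log_m, log_v, 1)
    return b, log_a

# Toy series with multiplicative (constant-CV) fluctuations, for which
# V is proportional to Mt^2; i.e., b should come out near 2.
rng = np.random.default_rng(2)
levels = np.repeat(rng.lognormal(0, 1, size=200), 5)  # slowly varying mean
series = levels * rng.lognormal(0, 0.3, size=1000)
b, log_a = temporal_taylor(series, window=5)
```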
Interestingly, the log(Mt,V) pattern was reasonably similar to the log(M,V) pattern from the spatial transect above; i.e., the marked line with b ≈ 2 and log(a) ≈ 0, under the condition that the series was analyzed at a sufficiently coarse temporal scale from sampling to avoid most of the serial autocorrelation.
The most intriguing result is perhaps the Figure below, showing the log-log transformed power spectrogram of the original time series**. Over a temporal scale range up to a frequency of 1500 (the x-axis), the distribution of power (the y-axis; i.e., squared amplitudes) satisfies 1/f noise!
See my book for an introduction to this statistical “beast” ***, where I devote several chapters to it. At higher frequencies further down towards the “rock bottom” of 4096 (the dimmed area), the power shows inflation. This is probably due to an integer effect at the finest resolutions of the time series, since individuals always come as whole numbers rather than fractions. Thus, the finest-resolved peak in power was probably influenced by this effect.
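A power spectrum of this kind, and the exponent ν of its power-law decay, can be computed with a standard FFT. The sketch below is my own illustrative Python version, not the tool used for the Figure; restricting the fit to the low-frequency part of the spectrum (the fmax parameter is my assumption) mirrors the point above that the highest frequencies show inflated power and should be excluded.

```python
import numpy as np

def spectral_slope(series, fmax=0.05):
    """Estimate nu in S(f) ~ 1/f^nu from the log-log periodogram,
    fitting only the low-frequency range where the power-law shape
    holds (the high-frequency tail may be inflated)."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    power = np.abs(np.fft.rfft(x)) ** 2          # squared amplitudes
    freq = np.fft.rfftfreq(len(x))
    keep = (freq > 0) & (freq <= fmax)
    slope, _ = np.polyfit(np.log10(freq[keep]), np.log10(power[keep]), 1)
    return -slope                                # power falls off as 1/f^nu

# Sanity check on brown noise (a random walk), which should give nu near 2.
rng = np.random.default_rng(3)
nu = spectral_slope(np.cumsum(rng.normal(size=4096)))
```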
To my knowledge, no other model framework has been able to reproduce simultaneously the often empirically reported fractal-like spatial population abundance (the spatial expression of Taylor’s power law) and a fractal-like temporal variation (the temporal Taylor’s power law). Models have previously reproduced only either-or results, among the thousands of papers on this topic over the last 60 years or so.
For an introduction to 1/f noise in an ecological context, you may find this link to a review by Halley and Inchausti (2004) helpful. I cite:
1/f-noises share with ecological time series a number of important properties: features on many scales (fractality), variance growth and long-term memory. (…) A key feature of 1/f-noises is their long memory. (…) Given the undoubted ubiquity and importance of 1/f-noise, it is surprising that so much work still revolves about either observing the 1/f-noise in (yet more) novel situations, or chasing what seems to have become something of a “holy grail” in physics: a universal mechanism for 1/f-noise. Meanwhile, important statistical questions remain poorly understood.
What about ecological methods based on the theory above? By exploring the statistical pattern of animal dispersion over space and its variability over time – and mimicking this pattern in the Zoomer model – a wide range of ecological aspects may be studied within a hopefully more realistic framework than the presently dominating approach using coupled map lattice models or differential equations. Statistical signatures of quite novel kind can be extracted from real data and interpreted in the light of complex space use theory.
In upcoming posts I will on one hand dig into the set of Zoomer model “knobs” that steer the statistical pattern in and out of – for example – the 1/f noise condition, and on the other hand I will exemplify the potential for ecological applications of the model.
Halley, J. M. and P. Inchausti. 2004. The increasing importance of 1/f noises as models of ecological variability. Fluctuation and Noise Letters 4:R1–R26.
Pimm, S. L. 1991. The balance of nature? Ecological issues in the conservation of species and communities. Chicago: The University of Chicago Press.
Pimm, S. L., and A. Redfearn. 1988. The variability of animal populations. Nature 334:613–614.
*) “1/f-noise” refers to 1/f^ν-noise for which 0 ≤ ν ≤ 2; “near-pink 1/f-noise” refers to cases where 0.5 ≤ ν ≤ 1.5, and “pink noise” refers to the specific case where ν = 1. All 1/f^ν-noises are defined by the shape of their power spectrum S(ω):
S ∝ 1/ω^ν
Here ω = 2πf is the angular frequency. “White noise” is characterized by ν ≈ 0, and its integral, “red” or “brown” noise, has ν ≈ 2. A classical random walk along a time axis is an example of the latter, and its derivative produces white noise.
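For readers who want a synthetic 1/f^ν series to experiment with, a standard way to generate one is spectral synthesis: shape white noise in the frequency domain by scaling each Fourier amplitude by f^(−ν/2), then transform back. This sketch is my own illustration, not taken from any of the tools discussed above.

```python
import numpy as np

def one_over_f_noise(n, nu, rng):
    """Generate 1/f^nu noise by spectral synthesis: random phases,
    Fourier amplitudes scaled as f^(-nu/2), inverse FFT to a real series."""
    freq = np.fft.rfftfreq(n)
    amp = np.zeros_like(freq)
    amp[1:] = freq[1:] ** (-nu / 2.0)        # zero the f = 0 (mean) term
    phases = rng.uniform(0, 2 * np.pi, size=len(freq))
    phases[0] = phases[-1] = 0.0             # DC and Nyquist must be real
    spectrum = amp * np.exp(1j * phases)
    return np.fft.irfft(spectrum, n)

rng = np.random.default_rng(4)
pink = one_over_f_noise(4096, 1.0, rng)      # nu = 1: pink noise
```

Setting nu = 0 gives white noise and nu = 2 gives brown noise, matching the classification in the footnote above.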
**) Since an FFT analysis requires a series length satisfying a power of 2, only the first 4096 steps were included.
***) 1/f noise is challenging, both mathematically and conceptually:
The non-integrability in the cases ν ≥ 1 is associated with infinite power in low frequency events; this is called the infrared catastrophe. Conversely for ν ≤ 1, which contains infinite power at high frequencies, it is called the ultraviolet catastrophe. Pink noise (ν = 1) is non-integrable at both ends of the spectrum. The upper and lower frequencies of observation are limited by the length of the time series and the resolution of the measurement, respectively.
Halley and Inchausti (2004), p R7.
In my continued quest for a more realistic statistical-mechanical theory for spatially extended population dynamics I have previously pointed out a specific property of the inter-scale spatial coefficient of variation as one of the hallmarks of scale-free dispersion (see Part III). In the present post I study another statistical property, the spatial autocorrelation, which may provide an additional cue about the population’s compliance with standard or complex space use.
First, consider the standard theory, based on mean field compliant population redistribution (mixing). The following three images show a typical example, where the population is subject to a 5% diffusion rate at unit (pixel) scale, a net population growth of 1%, no Allée effect, and over-all population density below carrying capacity. As previously described in this series, diffusion tends to smoothen the density surface. The log(M,V) plot over a scale range typically shows a y-intercept [log(a)] substantially below zero – a hallmark of fine-scale smoothness – under the condition that the slope b ≈ 2. I refer to previous posts in this series for technical details.
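The mean-field benchmark can be sketched as a simple lattice diffusion step. This is an illustrative Python stand-in under my own assumptions (a 64 x 64 toroidal arena, 100 time steps), not the actual simulation code; it shows the smoothing effect that produces the sub-zero log(a).

```python
import numpy as np

def diffuse(density, rate=0.05):
    """One mean-field diffusion step on a toroidal 2-D lattice: each cell
    sends `rate` of its content, split equally among its 4 neighbours."""
    nbr = (np.roll(density, 1, 0) + np.roll(density, -1, 0) +
           np.roll(density, 1, 1) + np.roll(density, -1, 1))
    return (1.0 - rate) * density + rate / 4.0 * nbr

rng = np.random.default_rng(5)
d = rng.random((64, 64))           # rough initial density surface
s0, cv0 = d.sum(), d.std() / d.mean()
for _ in range(100):
    d = diffuse(d) * 1.01          # 5% diffusion plus 1% net growth
cv = d.std() / d.mean()            # diffusion smooths: cv falls well below cv0
```

Mass is conserved by the diffusion step itself (each step only redistributes), while the factor 1.01 adds the 1% net growth; the falling coefficient of variation is the fine-scale smoothness referred to above.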
The correlogram (above) reflects the undulating density surface.
Next, consider the following scenario under the same conditions, except for 5% standard diffusion at unit scale (k=1) being replaced by 5% scale-free “zooming” with 1% per scale level.
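To give a feel for what “1% per scale level” means, here is a hypothetical sketch of a zooming step. I stress that this is not the actual Zoomer algorithm, only my own simplified stand-in: it assumes that at each of five scale levels a fixed 1% of every cell’s content is pooled over the enclosing 2^k x 2^k block and spread evenly within it, so that in total 5% of the population is relocated across scales.

```python
import numpy as np

def zoom_step(density, rate_per_level=0.01, levels=5):
    """Hypothetical sketch of scale-free "zooming" (NOT the actual Zoomer
    code): at each scale level k, a fraction of every cell's content is
    pooled over its 2^k x 2^k block and redistributed evenly within it."""
    d = density.astype(float).copy()
    n = d.shape[0]
    for k in range(1, levels + 1):
        s = 2 ** k                      # linear block size at level k
        moved = rate_per_level * density
        d -= moved
        # Pool the moved fraction per block, then spread it evenly.
        blocks = moved.reshape(n // s, s, n // s, s).sum(axis=(1, 3))
        d += np.repeat(np.repeat(blocks, s, axis=0), s, axis=1) / s ** 2
    return d

rng = np.random.default_rng(6)
d0 = rng.random((64, 64))
d1 = zoom_step(d0)                      # 5 levels x 1% = 5% relocated
```

Unlike diffusion, which only moves individuals between neighbouring cells, this kind of step redistributes a small fraction of the population at every spatial scale simultaneously, which is the qualitative contrast the scenario above explores.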
The present snapshot of population dispersion represents the population a few time increments after a general population crash (bottleneck episode), when the population is in the process of re-organizing itself. Despite the short time span since the episode (5-6 time increments), the log(M,V) plot has already adjusted itself from log(a) >> 0 and b ≈ 1 (random mixing due to external forcing) to log(a) ≈ 0 and b ≈ 2.
Observe that in this additional example of complex population dynamics the log(M,V) plot again satisfies log(a) ≈ 0 when b ≈ 2. Here the rule for overshooting local carrying capacity – locally and elsewhere – was changed from 50% of the population remaining at the actual location at the next time increment (as in previous examples) to zero individuals remaining. In other words, this hallmark of Zoomer-like dynamics is quite resilient to this modification of ecological conditions.
The interesting aspect in the present context is its spatial transect correlogram at unit scale k=1. It shows a low level of autocorrelation at all spatial lags except lag 0, which trivially illustrates that the local population correlates 100% with itself at lag zero. Despite the non-significant autocorrelation, the log(M,V) parameters still satisfy log(a) ≈ 0 and b ≈ 2.
Thus, scale-free population abundance is compatible both with spatially autocorrelated transects (as shown in previous parts of this series) and – as shown here – with non-autocorrelated transects during a short period following a perturbation [during the first 3-5 increments after the event*, log(a) >> 0 and b ≈ 1].
In this respect I refer to the so-called Z-paradox, which is resolved under the Zoomer model but not under the standard framework of population modelling. Thus, my proposed model may also provide a novel approach towards the famous and controversial Taylor’s Power law.
*) Under the condition that the bottleneck lasts for one time increment only. If the change of conditions is permanent, it may typically take 20-50 time steps to restore log(a) ≈ 0 and b ≈ 2.