**In Part I of this set of posts I described animal space use from the perspective of ergodicity. This is a key concept of standard statistical mechanics, and it is of similar importance for the analysis of individual paths and home range data under the extended theoretical framework (the parallel processing conjecture, expressed by the MRW model). Below I elaborate on this theme, and indicate by simulation examples the transition from a fully ergodic state at the home range scale to a narrowing of this scale as the series of fixes becomes increasingly autocorrelated (higher-frequency sampling).**

From a biophysical perspective, the following system description may point towards my most important theoretical development since the Scaling cube. This will become clearer as my upcoming posts gradually turn from describing statistical-mechanical properties towards exploring these properties in novel methods for ecological inference.

First, consider the general principle that the transition from non-autocorrelated data towards increased time-dependency (stronger serial autocorrelation) narrows the spatio-temporal scale range over which the statistical-mechanical system description is applicable. In the limit of very high-frequency sampling we reach the biological resolution, where the animal’s behaviour is directly observable. In this limit “the hidden layer” has vanished. In my book I introduced the hidden layer concept to explain the ergodic principle in the context of animal GPS series (thus allowing for a statistical-mechanical system description), and related ergodicity to the degree of serial autocorrelation.

Thus, by varying the sampling frequency of an animal’s path one passes through a transition between two kinds of system representation of animal space use: from direct observation of the system’s dynamics as a succession of causal events, to indirect observation of space use as the hidden layer deepens. The former representation requires a high-frequency observational approach, while the latter requires a lower frequency of path sampling.
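The link between sampling frequency and serial autocorrelation can be illustrated with a small simulation. The sketch below is my own hypothetical stand-in for a high-resolution track (a smoothly turning path, not the MRW model itself): sub-sampling the path at increasing lag mimics lower-frequency fix collection, and the lag-1 autocorrelation of the displacement increments drops accordingly, i.e., the hidden layer deepens.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical smooth path: the heading drifts slowly, so successive
# displacements are strongly autocorrelated at full resolution.
n = 100_000
heading = rng.normal(0.0, 0.1, n).cumsum()
path = np.column_stack((np.cos(heading), np.sin(heading))).cumsum(axis=0)

def lag1_autocorr(x):
    """Lag-1 autocorrelation of the increments of a coordinate series."""
    dx = np.diff(x)
    d = dx - dx.mean()
    return float((d[:-1] * d[1:]).sum() / (d * d).sum())

# Sub-sample the path at increasing lag (= lower sampling frequency).
for lag in (1, 10, 100, 1000):
    fixes = path[::lag]
    print(f"lag {lag:5d}: lag-1 autocorr = {lag1_autocorr(fixes[:, 0]):+.2f}")
```

At lag 1 each increment nearly continues the previous step, while at the largest lag the heading has decorrelated between successive fixes, so the sub-sampled series approximates serial independence.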

**However, here comes the novel and interesting part. In several blog posts I have described another important theoretical concept – the individual’s characteristic scale of space use (CSSU). In previous posts I have underscored that its estimation requires serially non-autocorrelated fixes. In the following simulations of MRW I show for the first time (a) how the CSSU concept may be extended in a theoretically consistent manner to auto-correlated series, given that the hidden layer is still sufficiently deep to allow for a statistical-mechanical system description, and (b) how this extension towards a time-dependent process description has implications for the system’s CSSU characteristics.**

In the illustration above the standard result (serially non-autocorrelated fixes) is shown by black/white circle symbols. After a “zooming exercise” to find the proper grid resolution for *I*(N) from the Home range ghost formula *I*(N) = *c*N^{z}, CSSU is found at the resolution where the y-intercept is close to the optimum, *c* ≈ 1 [log(*c*) ≈ 0]. At this grid resolution the power exponent is expected to be close to the default MRW condition, *z* ≈ 0.5. Filled symbols show the result from a large series, N = 100,000. Open symbols show a separate analysis using the last 20% of the series; N = 20,000. As expected, *z* and *c* are of similar magnitude in the two analyses, due to the series’ non-autocorrelation property. In this domain, *z* and *c* are expected to be independent of both sampling frequency and series length. *Thus, CSSU estimation is resilient under this condition*.
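To make the zooming exercise concrete, here is a minimal sketch of how *I*(N) and the regression parameters could be computed. The helper names and the candidate cell sizes are my own illustration, not code from the actual analysis; incidence is simply the count of non-empty grid cells among the first N fixes.

```python
import numpy as np

def incidence(fixes, cell):
    """I(N): number of non-empty grid cells at resolution `cell`."""
    cells = np.floor(fixes / cell).astype(np.int64)
    return len({tuple(c) for c in cells})

def fit_power_law(fixes, cell, sample_sizes):
    """Fit log I(N) = log(c) + z*log(N) by ordinary least squares."""
    I = [incidence(fixes[:N], cell) for N in sample_sizes]
    z, logc = np.polyfit(np.log(sample_sizes), np.log(I), 1)
    return z, logc

def zoom_for_cssu(fixes, sample_sizes, candidate_cells):
    """'Zooming exercise': pick the resolution whose fitted intercept is
    closest to log(c) = 0; that cell size is the CSSU estimate."""
    return min(candidate_cells,
               key=lambda cell: abs(fit_power_law(fixes, cell, sample_sizes)[1]))

# Quick sanity check: three fixes on a unit grid occupy two cells.
demo = np.array([[0.1, 0.1], [0.2, 0.2], [1.5, 1.5]])
print(incidence(demo, 1.0))  # -> 2
```

Given a simulated series of fixes, `zoom_for_cssu(fixes, [100, 1000, 10000], [0.25, 0.5, 1.0, 2.0])` would return the candidate resolution best satisfying log(*c*) ≈ 0.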

Then turn your focus to the coloured symbols, showing the average result from four replicates. The orange and red circles show log[*I*(N)] based on the same grid resolution as for the black/white circles, but now from strongly autocorrelated series as a result of higher sampling frequency. This is a project under development, but some initial comments on this scenario follow below.

- Autocorrelated series require quite large N. The present example, based on averaging over four replicate series, shows instability of *I*(N) for small samples (N ≲ 250). The orange circles show the result for N = 100,000, and the red circles regard a subset covering the last 20% of the series.
- Observe an N-dependency expressed by the difference between the orange and the red circles: *I*(N) for a given N is somewhat larger (larger intercept) when the data are autocorrelated and the last part of the series is used. The difference is indicated by the { character. I will return with an explanation of this effect (quite trivial, in fact) in a later post.
- In addition to this N-dependency, incidence for a given N is also smaller when the series are autocorrelated and fixes are sampled at an even higher frequency than for the coloured series above (not shown here). **The stronger the autocorrelation, the smaller the y-intercept.**
- Given that the sampling frequency is still within the domain that provides a sufficiently deep hidden layer relative to the individual’s true path, **for large series the regression tends to confirm the power exponent *z* ≈ 0.5**, as for the non-autocorrelated series.
- The smaller intercept, log(*c*), for autocorrelated series implies that a smaller grid scale is required to optimize towards log(*c*) ≈ 0. Apparently this implies that CSSU is non-stationary under a varying degree of autocorrelation.
- However, consider the red triangles, based on N = 20% of the total series (data similar to the red circles). The difference between the red circles and the red triangles is due to a change of grid scale to a 1:4 finer resolution, following the procedure (to be detailed in an upcoming post in May) of optimizing grid scale towards log2(*c*) ≈ 0 with *z* = 0.5 as an assumption; i.e., interpolating to log(N) = 0 from N_{max}. **After this re-scaling of the grid, CSSU is similar in magnitude to the estimate for the non-autocorrelated sampling scheme, but a model extension is needed to account for the fact that the process is now observed in the time-dependent domain** (I will return to this model variant later). In other words, incidence (non-empty grid cells) for a given N is of similar magnitude, but the cells are smaller. The model extension thus regards the rescaling factor that accounts for the strength of time-dependency when estimating CSSU under this condition.
- The relative change in spatial resolution needed to maintain log(*c*) = 0 reflects a similar narrowing of the upper limit of the spatial scale over which system ergodicity is maintained. **Thus, higher sampling frequency in the autocorrelated domain implies a narrowing of the scale range over which ergodicity is satisfied.** If this range is narrowed further, we reach the transition from the statistical-mechanical to the biological system representation (see above).
- Autocorrelation implies a time-dependent system description, and **CSSU becomes a function of both space and time scales. In non-autocorrelated series, CSSU is independent of the time dimension.**
- In this particular example, CSSU is 1/4 as large for the given degree of autocorrelation, which is a function of the average return frequency relative to the path sampling frequency; see my book.
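The grid rescaling for autocorrelated series can be sketched as a simple search: taking *z* = 0.5 as an assumption, log2(*c*) is interpolated from the largest sample, and the cell size is refined until the intercept is near zero. The function names, the tolerance and the halving search are my own illustration of the idea, not the actual implementation.

```python
import numpy as np

def incidence(fixes, cell):
    """Number of non-empty grid cells at resolution `cell`."""
    cells = np.floor(fixes / cell).astype(np.int64)
    return len({tuple(c) for c in cells})

def rescale_grid(fixes, cell, z=0.5, tol=0.75, max_steps=10):
    """Refine the grid until log2(c) = log2 I(N) - z*log2(N) is near 0,
    interpolating from N_max and assuming the default MRW exponent z = 0.5."""
    N = len(fixes)
    for _ in range(max_steps):
        logc = np.log2(incidence(fixes, cell)) - z * np.log2(N)
        if abs(logc) <= tol:
            break
        # A finer grid raises incidence and hence the intercept, and vice versa.
        cell = cell / 2.0 if logc < 0 else cell * 2.0
    return cell
```

For autocorrelated series the intercept starts below zero, so the search moves towards finer resolutions; two halvings correspond to the 1:4 finer resolution reported for the red triangles above.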