Showing posts from September, 2016

Three Bold Steps for Movement Ecology

Linking the statistical pattern of space use to general models of movement behaviour has always been a cornerstone of animal ecology. However, over the last 10-20 years we have seen rapidly growing interest in studying these processes more explicitly from a biophysical perspective. Biologists and physicists have come together in a common arena – movement ecology – seeking to resolve some key theoretical challenges. There is now a consensus that space use is more complex (in the physical sense of the word) than the traditional textbook models have accounted for. In particular, individuals generally utilize their environment in a spatio-temporally multi-scaled manner, and species within a broad range of taxa also show a capacity for spatially explicit memory utilization (e.g., a memory map). However, despite the emergence of very sophisticated models, movement ecology still has a long way to go to fully embrace these concepts and embed them into a coherent theoretical framework.

CSSU – the Alternative Approach

In my book and in several blog posts I have described in detail how a spatial scatter of GPS fixes can be analyzed to estimate the animal’s characteristic scale of space use (CSSU). The method is based on zooming the resolution of a virtual grid until the intercept log(c) ≈ 0 and the slope z ≈ 0.5 in the log-transformed home range ghost formula log[I(N)] = log(c) + z*log(N). Incidence, I, refers to “box counting” of grid cells that embed at least one fix at the given grid resolution. In this post I present for the first time an alternative method to estimate CSSU. Space use that is influenced by multi-scaled, spatial memory utilization tends to generate a GPS fix pattern from path sampling that is self-similar; i.e., the scatter is compliant with a statistical fractal with dimension D ≈ 1. Since D << 2, the dispersion is statistically non-stationary under change of the sample size N used for its estimation. This property explains the home range ghost formula with the “paradoxical” …
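To make the zooming procedure concrete, here is a minimal sketch in Python, under stated assumptions: fixes is an (N, 2) array of coordinates in sampling order, N is grown by taking the first N fixes, and the function names (incidence, ghost_fit, estimate_cssu) are hypothetical, not taken from the book or the original analysis.

```python
import numpy as np

def incidence(fixes, cell_size):
    # Box counting: number of grid cells embedding at least one fix
    # at the given resolution of the virtual grid.
    cells = np.floor(fixes / cell_size).astype(int)
    return len(set(map(tuple, cells)))

def ghost_fit(fixes, cell_size, n_min=10, n_steps=12):
    # Regress log I(N) on log N for growing sample sizes N, returning
    # the intercept log(c) and slope z of the home range ghost formula
    # log[I(N)] = log(c) + z*log(N).
    Ns = np.unique(np.geomspace(n_min, len(fixes), n_steps).astype(int))
    I = np.array([incidence(fixes[:n], cell_size) for n in Ns])
    z, log_c = np.polyfit(np.log(Ns), np.log(I), 1)
    return log_c, z

def estimate_cssu(fixes, cell_sizes):
    # Zoom the grid resolution: the cell size where log(c) is closest
    # to 0 (with z near 0.5) approximates the CSSU.
    return min(cell_sizes, key=lambda s: abs(ghost_fit(fixes, s)[0]))
```

Under the post’s premise, a scatter with fractal dimension D ≈ 1 makes incidence grow roughly as the square root of N, which is why z ≈ 0.5 marks the characteristic scale in this regression.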

The KDE Smoothing Parameter: Approaching the Core Issue

When calculating individual space use by kernel density estimation (KDE), the smoothing parameter h must be specified. The choice of method to calculate h has a dramatic effect on the resulting estimate. Here I argue that searching for the optimal algorithm for h is probably a blind alley, for other reasons than generally acknowledged. Two methods are used extensively for KDE home-range analysis: the least squares cross-validation method (LSCV) and the method of determining the optimal h for a standard multivariate normal distribution (Href). In short, both methods have been found to have serious drawbacks. In particular, LSCV generally under-smooths the home range representation, leading to a utilization distribution (UD) that tends to be fragmented with many local “peaks”. Href, on the other hand, tends to over-smooth the UD. Thus, relative to LSCV the resulting UD suppresses local peaks in the density of fixes and tends to show a larger home range for a given isopleth.
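For concreteness, here is a minimal sketch of the Href calculation under one common formulation (a bivariate normal reference rule; exact formulations vary between home-range software packages), assuming fixes is an (n, 2) array of coordinates. This is illustrative, not the post’s own computation.

```python
import numpy as np

def h_ref(fixes):
    # Reference bandwidth for a bivariate normal kernel: a pooled
    # standard deviation of the x and y coordinates, scaled by
    # n**(-1/6). Variants of this rule exist in the literature.
    n = len(fixes)
    sigma = np.sqrt(0.5 * (np.var(fixes[:, 0], ddof=1)
                           + np.var(fixes[:, 1], ddof=1)))
    return sigma * n ** (-1.0 / 6.0)
```

Because this rule assumes the fixes are approximately bivariate normal, a multimodal home range violates the assumption, which is the usual explanation for Href’s tendency to over-smooth.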