Random movement rules

Introduction
Random movement is a relatively straightforward concept: the movement of an individual is modelled as a fully stochastic process (a simple symmetric random walk is, for example, a martingale). An agent implementing such a strategy produces a random walk, typically generated with a Monte Carlo method, and these walks usually (but not necessarily) constitute Markov processes. The realization of the stochastic sequence can depend heavily on the particular Monte Carlo method employed; methods are differentiated by the distributions from which the random numbers for step distance and/or direction are drawn.

A random walk in two dimensions can be used to represent animal movement (e.g. foraging strategies), although the specification of the type of walk (i.e. the distributions used) can have a substantial effect on the outcome, especially when the walk is iterated over many steps.
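As a minimal sketch of the idea, the following generates a two-dimensional walk with direction drawn uniformly from [0, 2π) and a fixed step length; the function and parameter names are illustrative, not from any particular library.

```python
import math
import random

def random_walk_2d(n_steps, step_length=1.0, seed=None):
    """Return the list of (x, y) positions visited by a fixed-step 2D walk."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)  # random direction
        x += step_length * math.cos(theta)
        y += step_length * math.sin(theta)
        path.append((x, y))
    return path

path = random_walk_2d(1000, seed=42)
print(len(path), path[-1])
```

Swapping the fixed `step_length` for a draw from some distribution at each step is what distinguishes the walk types listed below.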

Key Definitions

**Monte Carlo methods** (or **Monte Carlo experiments**) are a class of computational algorithms that rely on repeated random sampling to compute their results.

A **Markov process**, named after the Russian mathematician Andrey Markov, is a time-varying random phenomenon for which a specific property (the Markov property) holds. In a common description, a stochastic process with the Markov property, or memorylessness, is one for which, //conditional on the present state of the system, its future and past are independent//.

A **martingale** is a model of a fair game where no knowledge of past events can help to predict future winnings. In particular, a martingale is a sequence of random variables (i.e., a stochastic process) for which, at any particular time in the realized sequence, the expectation of the next value equals the present observed value, even given knowledge of all prior observed values.
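This property can be checked empirically (not proved) for a simple symmetric random walk, where each step is +1 or -1 with equal probability; the function name below is illustrative.

```python
import random

def simulate_next_values(current, n_trials, seed=None):
    """Average of the next value over many +1/-1 steps starting from `current`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        total += current + rng.choice((-1, 1))
    return total / n_trials

# Whatever the present value, the mean of the next value stays close to it.
print(simulate_next_values(5.0, 100_000, seed=0))  # close to 5.0
```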

A **random walk** is a mathematical formalisation of a trajectory that consists of taking successive random steps. Often, random walks are assumed to be Markov chains or Markov processes, but other, more complicated walks are also of interest.

Key Concepts

 * **2D random walk** - direction drawn uniformly from [0, 2*pi), fixed step length
 * **Brownian motion** - the continuous-time limit of a random walk; increments over any time interval are normally (not uniformly) distributed
 * **Gaussian random walk** - step size that varies according to a normal distribution
 * **Lévy walk** - random walk with step length drawn from a heavy- or fat-tailed distribution (e.g. a power-law distribution - Viswanathan 1999), with each step traversed at finite velocity, so long steps take a long time
 * **Lévy flight** - as the Lévy walk, but with each heavy-tailed step taken as an instantaneous jump, so (in terms of //t//) arbitrary distance can be covered in a single time step
 * **Cauchy flight** - random walk with step length drawn from a Cauchy distribution
 * **Rayleigh flight** - random walk with step length drawn from a normal/Gaussian distribution
 * **Wiener process** - the continuous-time process corresponding to standard Brownian motion; increments over disjoint intervals are independent and Gaussian
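The practical difference between a Gaussian and a Lévy-type walk lies in the step-length distribution, which the following sketch compares; the distribution parameters (`sigma`, `alpha`) are illustrative choices, not values from the cited papers.

```python
import random

def gaussian_steps(n, sigma=1.0, rng=None):
    """Step lengths for a Gaussian random walk (thin-tailed)."""
    rng = rng or random.Random()
    return [abs(rng.gauss(0.0, sigma)) for _ in range(n)]

def pareto_steps(n, alpha=1.5, rng=None):
    """Heavy-tailed (Pareto/power-law) step lengths, as in a Levy-type walk."""
    # P(L > l) ~ l**-alpha: rare but very long steps dominate the trajectory.
    rng = rng or random.Random()
    return [rng.paretovariate(alpha) for _ in range(n)]

rng = random.Random(0)
g = gaussian_steps(10_000, rng=rng)
p = pareto_steps(10_000, rng=rng)
print(max(g), max(p))  # the heavy-tailed walk produces far longer extremes
```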

See Also

Application of random walks to foraging strategies

BIBLIOGRAPHY

Edwards, A.M., Phillips, R.A., Watkins, N.W., Freeman, M.P., Murphy, E.J., Afanasyev, V., Buldyrev, S.V., da Luz, M.G.E., Raposo, E.P., Stanley, H.E., et al. (2007). Revisiting Lévy flight search patterns of wandering albatrosses, bumblebees and deer. //Nature// 449(7165): 1044-1048.

Viswanathan, G.M., Buldyrev, S.V., Havlin, S., da Luz, M.G.E., Raposo, E.P. and Stanley, H.E. (1999). Optimizing the success of random searches. //Nature// 401(6756): 911-914.