A wild Bessel function has appeared!
I have to tell you something: I really like Bessel functions. All of them. Plain or modified, of first or second kind. I am particularly amazed at how often they appear in completely unrelated contexts. For example, they can be used to describe the vibrations of a drum or, as I recently stumbled upon, random walks in one dimension. And that is also the example I want to show you today. To understand the problem, imagine an infinite line divided into equal steps marked by the index $n$. A ball is placed on the line at some position. At a rate $p$ it goes one step to the right, $n \to n+1$, and at a rate $q$ one step to the left, $n \to n-1$:
The ball therefore has a certain probability $P_n(t)$ of being on the line at position $n$ at time $t$. We are mainly interested in the time evolution and form of this probability, so how can we describe the dynamics of the system? We can do this using a master equation. To derive it, note that the evolution of $P_n(t)$ boils down to a balance of "outflux" and "influx" of probability at a location $n$. What are these influxes and outfluxes? First of all, the ball at $n$ can move to $n \pm 1$ at the rates $p$ or $q$, so the outflux from this site is $(p+q)\,P_n(t)$. On the other hand, the ball could move from the locations $n \mp 1$ to $n$ at the respective rates, resulting in a probability influx of $p\,P_{n-1}(t) + q\,P_{n+1}(t)$. In total, we find for the rate of change of the probability:
$$\frac{\mathrm{d} P_n(t)}{\mathrm{d} t} = p\, P_{n-1}(t) + q\, P_{n+1}(t) - (p + q)\, P_n(t),$$
which is a combined differential and recurrence equation. To solve this equation, we can use the generating function approach. We take the probabilities $P_n(t)$ as the coefficients of a power series and write
$$G(x, t) = \sum_{n=-\infty}^{\infty} P_n(t)\, x^n.$$
Inserting this expression into the master equation yields the following differential equation:
$$\frac{\partial G(x,t)}{\partial t} = \left(p x + \frac{q}{x} - (p + q)\right) G(x,t).$$
Since the prefactor of $G(x,t)$ on the right-hand side is independent of $t$, we obtain together with the initial condition $G(x,0) = 1$ (i.e., the ball starts at $n = 0$)
$$G(x,t) = \exp\left[\left(p x + \frac{q}{x} - (p + q)\right) t\right].$$
All that remains is to find the power series coefficients of the above function. This is where the Bessel function joins the game. You see, the modified Bessel function of the first kind $I_n(z)$ fulfills the rather similar-looking generating function
$$\exp\left[\frac{z}{2}\left(x + \frac{1}{x}\right)\right] = \sum_{n=-\infty}^{\infty} I_n(z)\, x^n,$$
therefore, it's a matter of some simple algebra (substituting $x \to \sqrt{p/q}\,x$ and $z = 2\sqrt{pq}\,t$) to show that using it we can write the expansion coefficients of $G(x,t)$ as
$$G(x,t) = e^{-(p+q)t} \sum_{n=-\infty}^{\infty} \left(\frac{p}{q}\right)^{n/2} I_n\!\left(2\sqrt{pq}\,t\right) x^n.$$
This, in turn, implies that the probability is given by+
$$P_n(t) = e^{-(p+q)t} \left(\frac{p}{q}\right)^{n/2} I_n\!\left(2\sqrt{pq}\,t\right).$$
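This result is easy to check numerically. Here is a minimal sketch (all function names and rate values are mine, not from the post): it evaluates the Bessel-form probability via the power series of $I_n$ and cross-checks it against a direct Euler integration of the master equation.

```python
import math

# Sketch: evaluate P_n(t) = e^{-(p+q)t} (p/q)^{n/2} I_n(2 sqrt(pq) t) and
# cross-check it against a direct Euler integration of the master equation
# dP_n/dt = p P_{n-1} + q P_{n+1} - (p+q) P_n on a truncated lattice.

def bessel_i(n, z, terms=60):
    # power series I_n(z) = sum_k (z/2)^(2k+n) / (k! (k+n)!), built term by term
    n = abs(n)  # I_{-n}(z) = I_n(z) for integer n
    term = (z / 2) ** n / math.factorial(n)
    total = term
    for k in range(1, terms):
        term *= (z / 2) ** 2 / (k * (k + n))
        total += term
    return total

def skellam(n, t, p, q):
    return math.exp(-(p + q) * t) * (p / q) ** (n / 2) * bessel_i(n, 2 * math.sqrt(p * q) * t)

def integrate_master_equation(t_end, p, q, n_max=40, dt=1e-3):
    P = [0.0] * (2 * n_max + 1)  # P[i] holds the probability of site n = i - n_max
    P[n_max] = 1.0               # the ball starts at n = 0
    for _ in range(round(t_end / dt)):
        new = P[:]
        for i in range(1, 2 * n_max):
            new[i] += dt * (p * P[i - 1] + q * P[i + 1] - (p + q) * P[i])
        P = new
    return P

p, q, t = 1.0, 0.5, 2.0  # illustrative rates, not values from the post
P = integrate_master_equation(t, p, q)
err = max(abs(P[n + 40] - skellam(n, t, p, q)) for n in range(-15, 16))
norm = sum(skellam(n, t, p, q) for n in range(-30, 31))
print(f"normalization: {norm:.6f}")
print(f"max |Euler - Bessel form|: {err:.2e}")
```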
Using the generating function, we were able to find the microscopic behavior of the random walk and show how it relates to Bessel functions (pretty cool!). In many applications, however, one is more interested in macroscopic properties of the system, such as the mean position or the spread around the mean position (aka the variance). We can also use the generating function for these situations. To see how, note that we want to compute the moments of the probability distribution, i.e.,
$$\langle n^k \rangle = \sum_{n=-\infty}^{\infty} n^k\, P_n(t).$$
The first moment is the mean position $\langle n \rangle$, while the variance of the position can be calculated from $\operatorname{Var}(n) = \langle n^2 \rangle - \langle n \rangle^2$. Since $\langle n^k \rangle = \left.(x\,\partial_x)^k G(x,t)\right|_{x=1}$, we find
$$\langle n \rangle = (p - q)\,t, \qquad \langle n^2 \rangle = (p + q)\,t + (p - q)^2 t^2,$$
and for the mean and variance
$$\langle n \rangle = (p - q)\,t, \qquad \operatorname{Var}(n) = \langle n^2 \rangle - \langle n \rangle^2 = (p + q)\,t.$$
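The moment trick is also easy to verify numerically. A minimal sketch (names and parameter values are mine), extracting mean and variance from $G(x,t) = \exp[(px + q/x - (p+q))t]$ with central finite differences at $x = 1$:

```python
import math

# Sketch: moments from the generating function via finite differences, using
# <n> = dG/dx|_{x=1} and <n(n-1)> = d^2G/dx^2|_{x=1} (note G(1,t) = 1).
def G(x, t, p, q):
    return math.exp(t * (p * x + q / x - (p + q)))

def moments(t, p, q, h=1e-4):
    g1 = (G(1 + h, t, p, q) - G(1 - h, t, p, q)) / (2 * h)                    # <n>
    g2 = (G(1 + h, t, p, q) - 2 * G(1, t, p, q) + G(1 - h, t, p, q)) / h**2   # <n(n-1)>
    mean = g1
    var = g2 + g1 - g1**2
    return mean, var

mean, var = moments(t=3.0, p=1.0, q=0.5)  # illustrative values
print(f"mean ≈ {mean:.4f}, exact (p-q)t = 1.5")
print(f"variance ≈ {var:.4f}, exact (p+q)t = 4.5")
```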
In the case of a biased random walk, $p \neq q$, the ball is expected to move on average with a "velocity" $p - q$, while the uncertainty of the position spreads linearly with the factor $p + q$. This behavior is not really apparent from the Bessel form of the probability. However, we can derive another form of $P_n(t)$ that makes these properties (and an interesting connection to another process) obvious. To do this, we look at the long-term dynamics of the system, i.e., what happens to $P_n(t)$ for $t \to \infty$. Our starting point is a very convenient integral representation* of our probability, namely
$$P_n(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp\left[\left(p\, e^{ik} + q\, e^{-ik} - (p + q)\right) t\right] e^{-ikn}\, \mathrm{d}k.$$
In the limit of large times, we can very well approximate the exponent by a power series up to second order, $\left(p\, e^{ik} + q\, e^{-ik} - (p+q)\right) t \approx i(p - q)\,t\,k - \frac{(p+q)\,t}{2}\,k^2$, and extend the integration bounds to infinity#, which gives us
$$P_n(t) \approx \frac{1}{2\pi} \int_{-\infty}^{\infty} \exp\left[i(p - q)\,t\,k - \frac{(p+q)\,t}{2}\,k^2\right] e^{-ikn}\, \mathrm{d}k.$$
This integral is just the Fourier transform of a shifted Gaussian, so we have
$$P_n(t) \approx \frac{1}{\sqrt{2\pi (p+q)\,t}} \exp\left[-\frac{\left(n - (p-q)\,t\right)^2}{2 (p+q)\,t}\right],$$
which, after identifying $v = p - q$ and $D = \frac{p+q}{2}$, gives the well-known formula for diffusion with drift:
$$P(x,t) = \frac{1}{\sqrt{4\pi D t}} \exp\left[-\frac{(x - v t)^2}{4 D t}\right].$$
Therefore, we can understand diffusion (with drift) at the microscopic level as a collection of (biased) random walks of the individual particles. And finally, here is a plot of the probability $P_n(t)$ for fixed rates $p$ and $q$ at a fixed time (red line) compared with the Gaussian approximation (black dashed):
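Lacking the original figure, here is a numerical sketch of that comparison (names, rates, and time are my illustrative choices). It evaluates $I_n$ through the integral representation $I_n(z) = \frac{1}{\pi}\int_0^\pi e^{z\cos k}\cos(nk)\,\mathrm{d}k$ with the trapezoidal rule, which stays stable at large arguments:

```python
import math

# Sketch: exact Bessel-form probability vs. the Gaussian long-time limit.
def bessel_i(n, z, steps=2000):
    # trapezoidal rule for I_n(z) = (1/pi) * integral_0^pi e^{z cos k} cos(nk) dk
    h = math.pi / steps
    s = 0.5 * (math.exp(z) + math.exp(-z) * math.cos(n * math.pi))
    s += sum(math.exp(z * math.cos(j * h)) * math.cos(n * j * h) for j in range(1, steps))
    return s * h / math.pi

def skellam(n, t, p, q):
    return math.exp(-(p + q) * t) * (p / q) ** (n / 2) * bessel_i(n, 2 * math.sqrt(p * q) * t)

def gaussian(n, t, p, q):
    var = (p + q) * t
    return math.exp(-(n - (p - q) * t) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

p, q, t = 1.0, 0.5, 20.0  # illustrative rates and time, not the post's values
err = max(abs(skellam(n, t, p, q) - gaussian(n, t, p, q)) for n in range(-10, 31))
print(f"max |exact - Gaussian| at t = {t}: {err:.2e}")
```

At this time the two curves already agree to a few parts in a thousand, which is the convergence the plot illustrates.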
Now that we know how to deal with single steps, we can further generalize this approach to next-nearest-neighbor steps. The general idea is quite simple: before, we only had rates for steps from $n$ to $n \pm 1$; now we have a second set of rates for steps from $n$ to $n \pm 2$. Let's denote the 1-step rates by $p_1$ and $q_1$ and the 2-step rates by $p_2$ and $q_2$, namely:
Similar to before, the master equation describing the time evolution of the probability is given by
$$\frac{\mathrm{d} P_n(t)}{\mathrm{d} t} = p_1 P_{n-1}(t) + q_1 P_{n+1}(t) + p_2 P_{n-2}(t) + q_2 P_{n+2}(t) - (p_1 + q_1 + p_2 + q_2)\, P_n(t).$$
Using the generating function approach, we find for the function $G(x,t)$:
$$G(x,t) = \exp\left[\left(p_1 x + \frac{q_1}{x} + p_2 x^2 + \frac{q_2}{x^2} - (p_1 + q_1 + p_2 + q_2)\right) t\right].$$
Again, we need to find the power series coefficients of this function to find the probability of the system. Since the generating function can be factored into the functions
$$G(x,t) = \exp\left[\left(p_1 x + \frac{q_1}{x}\right) t\right] \exp\left[\left(p_2 x^2 + \frac{q_2}{x^2}\right) t\right] e^{-(p_1 + q_1 + p_2 + q_2)\,t},$$
we can expand the first two exponentials as power series:
$$G(x,t) = e^{-(p_1+q_1+p_2+q_2)t} \left[\sum_{l=-\infty}^{\infty} \left(\frac{p_1}{q_1}\right)^{l/2} I_l\!\left(2\sqrt{p_1 q_1}\,t\right) x^l\right] \left[\sum_{m=-\infty}^{\infty} \left(\frac{p_2}{q_2}\right)^{m/2} I_m\!\left(2\sqrt{p_2 q_2}\,t\right) x^{2m}\right].$$
This expression is just a product of two Laurent series and can be simplified using the Cauchy product formula as follows:
$$G(x,t) = e^{-(p_1+q_1+p_2+q_2)t} \sum_{n=-\infty}^{\infty} \left[\sum_{m=-\infty}^{\infty} \left(\frac{p_1}{q_1}\right)^{\frac{n-2m}{2}} I_{n-2m}\!\left(2\sqrt{p_1 q_1}\,t\right) \left(\frac{p_2}{q_2}\right)^{\frac{m}{2}} I_m\!\left(2\sqrt{p_2 q_2}\,t\right)\right] x^n.$$
Therefore, the probability describing the system is given by the expression
$$P_n(t) = e^{-(p_1+q_1+p_2+q_2)t} \sum_{m=-\infty}^{\infty} \left(\frac{p_1}{q_1}\right)^{\frac{n-2m}{2}} I_{n-2m}\!\left(2\sqrt{p_1 q_1}\,t\right) \left(\frac{p_2}{q_2}\right)^{\frac{m}{2}} I_m\!\left(2\sqrt{p_2 q_2}\,t\right),$$
where the sum cannot be simplified any further (to my knowledge). As for the one-step random walk, we expect it to behave like a Gaussian in the long-time limit,
$$P_n(t) \approx \frac{1}{\sqrt{4\pi D t}} \exp\left[-\frac{(n - v t)^2}{4 D t}\right].$$
With the help of the generating function, we find for the diffusion constant and velocity $D = \frac{1}{2}\left[p_1 + q_1 + 4(p_2 + q_2)\right]$ and $v = p_1 - q_1 + 2(p_2 - q_2)$, respectively.
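A sketch of the two-step result (function names and rate values are mine): it sums the Cauchy-product formula numerically and checks normalization, the mean $v t$, and the variance $2 D t$:

```python
import math

# Sketch: P_n(t) for the next-nearest-neighbor walk as a Cauchy product of
# two modified-Bessel series, truncated at |m| <= m_max.
def bessel_i(n, z, terms=60):
    # power series I_n(z) = sum_k (z/2)^(2k+n) / (k! (k+n)!)
    n = abs(n)
    term = (z / 2) ** n / math.factorial(n)
    total = term
    for k in range(1, terms):
        term *= (z / 2) ** 2 / (k * (k + n))
        total += term
    return total

def prob_two_step(n, t, p1, q1, p2, q2, m_max=25):
    pref = math.exp(-(p1 + q1 + p2 + q2) * t)
    return pref * sum(
        (p1 / q1) ** ((n - 2 * m) / 2) * bessel_i(n - 2 * m, 2 * math.sqrt(p1 * q1) * t)
        * (p2 / q2) ** (m / 2) * bessel_i(m, 2 * math.sqrt(p2 * q2) * t)
        for m in range(-m_max, m_max + 1))

t, p1, q1, p2, q2 = 1.5, 1.0, 0.5, 0.3, 0.2  # illustrative values
probs = {n: prob_two_step(n, t, p1, q1, p2, q2) for n in range(-25, 26)}
norm = sum(probs.values())
mean = sum(n * P for n, P in probs.items())
var = sum(n * n * P for n, P in probs.items()) - mean**2
print(f"normalization: {norm:.6f}")
print(f"mean: {mean:.4f}  (v t = {(p1 - q1 + 2 * (p2 - q2)) * t})")
print(f"variance: {var:.4f}  (2 D t = {(p1 + q1 + 4 * (p2 + q2)) * t})")
```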
In the last section, I want to take a quick look at the most general random walk with arbitrary step sizes. In this case, the random walker moves $s$ steps to the right with rate $p_s$ and $s$ steps to the left with rate $q_s$. Similar to the previous two special cases, the master equation of this arbitrary-step random walk is given by
$$\frac{\mathrm{d} P_n(t)}{\mathrm{d} t} = \sum_{s=1}^{\infty} \left[p_s P_{n-s}(t) + q_s P_{n+s}(t) - (p_s + q_s)\, P_n(t)\right].$$
We first compute the macroscopic properties of the system, the mean position and the variance of the position. By mathematical induction, we find
$$\langle n \rangle = t \sum_{s=1}^{\infty} s\,(p_s - q_s), \qquad \operatorname{Var}(n) = t \sum_{s=1}^{\infty} s^2\,(p_s + q_s).$$
From these follow the drift velocity and diffusion constant
$$v = \sum_{s=1}^{\infty} s\,(p_s - q_s), \qquad D = \frac{1}{2} \sum_{s=1}^{\infty} s^2\,(p_s + q_s).$$
An interesting observation from these expressions is that unless the rate constants vanish beyond some maximum step size, the rates must decay faster than $1/s^3$ for the long-time diffusion constant to be finite. Otherwise, similar to the Cauchy distribution, the arbitrary-step random walk has no well-defined second moment (or any higher moment) in the long-time limit. Figuratively speaking, the random walker occasionally wanders off arbitrarily far from the mean position. On the other hand, in the case of infinitely many rates where the second moment does converge, such as $p_s = q_s = 1/s^4$, even the infinitely many possible step sizes and the occasional large steps are not enough to "smear out" the random walker's position.
To elaborate on the last example, take the symmetric rates $p_s = q_s = 1/s^4$. The drift velocity and diffusion constant in this case are given by
$$v = 0, \qquad D = \frac{1}{2} \sum_{s=1}^{\infty} s^2 \cdot \frac{2}{s^4} = \sum_{s=1}^{\infty} \frac{1}{s^2} = \frac{\pi^2}{6}.$$
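These sums are quick to check numerically. A sketch (truncation cutoff and names are mine), using the general formulas $v = \sum_s s (p_s - q_s)$ and $D = \frac{1}{2}\sum_s s^2 (p_s + q_s)$:

```python
import math

# Sketch: drift velocity and diffusion constant for the symmetric power-law
# rates p_s = q_s = 1/s^4, truncated at a large cutoff S.
S = 100_000
rates = [1 / s**4 for s in range(1, S + 1)]
v = sum(s * (ps - qs) for s, (ps, qs) in enumerate(zip(rates, rates), start=1))
D = 0.5 * sum(s**2 * (ps + qs) for s, (ps, qs) in enumerate(zip(rates, rates), start=1))
print(f"v = {v}")
print(f"D = {D:.6f}  vs  pi^2/6 = {math.pi**2 / 6:.6f}")
```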
To calculate the form of the probability, we again consider the generating function, which is now given by
$$G(x,t) = \exp\left[t \sum_{s=1}^{\infty} \frac{1}{s^4}\left(x^s + \frac{1}{x^s} - 2\right)\right] = \prod_{s=1}^{\infty} \exp\left[\frac{t}{s^4}\left(x^s + \frac{1}{x^s} - 2\right)\right].$$
To compute the coefficients of the infinite product, we first consider the case of only two step sizes with symmetric rates $p_1 = q_1$ and $p_2 = q_2$. From earlier we know that the coefficients are given by
$$P_n(t) = e^{-2(p_1 + p_2)t} \sum_{m=-\infty}^{\infty} I_{n-2m}(2 p_1 t)\, I_m(2 p_2 t),$$
which we can write as a product of integrals using the integral representation $I_n(z) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{z \cos k}\, e^{-ikn}\, \mathrm{d}k$:
$$P_n(t) = \frac{e^{-2(p_1+p_2)t}}{(2\pi)^2} \sum_{m=-\infty}^{\infty} \int_{-\pi}^{\pi} e^{2 p_1 t \cos k}\, e^{-i(n-2m)k}\, \mathrm{d}k \int_{-\pi}^{\pi} e^{2 p_2 t \cos k'}\, e^{-imk'}\, \mathrm{d}k'.$$
Switching the order of integration and summation, we can identify the well-known Fourier series representation of the Dirac delta, $\sum_{m} e^{im\theta} = 2\pi \sum_{j} \delta(\theta - 2\pi j)$, which gives us
$$P_n(t) = \frac{e^{-2(p_1+p_2)t}}{2\pi} \int_{-\pi}^{\pi}\!\int_{-\pi}^{\pi} e^{2t\left(p_1 \cos k + p_2 \cos k'\right)}\, e^{-ikn} \sum_{j=-\infty}^{\infty} \delta(2k - k' - 2\pi j)\, \mathrm{d}k'\, \mathrm{d}k$$
and after integrating over $k'$:
$$P_n(t) = \frac{e^{-2(p_1+p_2)t}}{2\pi} \int_{-\pi}^{\pi} e^{2t\left(p_1 \cos k + p_2 \cos 2k\right)}\, e^{-ikn}\, \mathrm{d}k.$$
We can repeat these steps for larger and larger step sizes $s$, insert the rates $p_s = 1/s^4$, and arrive at (and I leave this as an exercise for the reader):
$$P_n(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \exp\left[2t \sum_{s=1}^{\infty} \frac{\cos(sk) - 1}{s^4}\right] e^{-ikn}\, \mathrm{d}k.$$
The sum in the exponent is a special case of the Clausen function (more precisely, the Glaisher-Clausen function $\mathrm{Sl}_4(k) = \sum_{s=1}^{\infty} \cos(sk)/s^4$, with $\mathrm{Sl}_4(0) = \zeta(4)$), and the probability can thus be written as
$$P_n(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} e^{2t\left[\mathrm{Sl}_4(k) - \zeta(4)\right]}\, e^{-ikn}\, \mathrm{d}k.$$
But we can actually do a little bit better. First, we take advantage of the symmetry of the integrand (it is even in $k$) and write the integral as
$$P_n(t) = \frac{1}{\pi} \int_{0}^{\pi} e^{2t\left[\mathrm{Sl}_4(k) - \zeta(4)\right]} \cos(nk)\, \mathrm{d}k.$$
For the interval $0 \le k \le 2\pi$, the Glaisher-Clausen function of order 4 has the simple closed form
$$\mathrm{Sl}_4(k) = \frac{\pi^4}{90} - \frac{\pi^2 k^2}{12} + \frac{\pi k^3}{12} - \frac{k^4}{48},$$
thus, the above integral can also be expressed as
$$P_n(t) = \frac{1}{\pi} \int_{0}^{\pi} \exp\left[-t\left(\frac{\pi^2 k^2}{6} - \frac{\pi k^3}{6} + \frac{k^4}{24}\right)\right] \cos(nk)\, \mathrm{d}k.$$
And just to round things off, compare this to the much simpler long time limit:
$$P_n(t) \approx \frac{1}{\sqrt{4\pi D t}} \exp\left[-\frac{n^2}{4 D t}\right], \qquad D = \frac{\pi^2}{6}.$$
Here is a quick comparison of the integral expression with the Gaussian approximation:
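As a numerical stand-in for the plot, here is a sketch (names and the chosen time are mine) evaluating the closed-form integral with Simpson's rule alongside the Gaussian limit with $D = \pi^2/6$:

```python
import math

# Sketch: exact probability of the 1/s^4 walk from the Glaisher-Clausen
# integral vs. the Gaussian long-time approximation.
def prob_exact(n, t, steps=2000):
    def f(k):
        expo = -t * (math.pi**2 * k**2 / 6 - math.pi * k**3 / 6 + k**4 / 24)
        return math.cos(n * k) * math.exp(expo)
    h = math.pi / steps  # Simpson's rule on [0, pi]; steps must be even
    s = f(0.0) + f(math.pi)
    s += 4 * sum(f(j * h) for j in range(1, steps, 2))
    s += 2 * sum(f(j * h) for j in range(2, steps, 2))
    return s * h / (3 * math.pi)

def prob_gauss(n, t, D=math.pi**2 / 6):
    return math.exp(-n**2 / (4 * D * t)) / math.sqrt(4 * math.pi * D * t)

t = 10.0  # illustrative time
for n in (0, 5, 10):
    print(f"n = {n:2d}: integral {prob_exact(n, t):.5f}, Gaussian {prob_gauss(n, t):.5f}")
```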
Whew! What a ride! This has probably been the most laborious post so far, but it’s been great fun to dive back into Bessel functions and also a bit into diffusion and random walks. But that’s it for now.
See you next time, Cheers!
I don't really have any reading suggestions, but anything related to Bessel functions is probably good.
*For a derivation, see my response to this math.stackexchange post
+This probability density is also called the Skellam distribution.
#This is called a saddlepoint approximation