Pseudo-Riemannian Manifold of Random Ideas

A blog where I share some of my personal research and random thoughts on maths and physics

Additional thoughts on: Random walks in one dimension

The math behind Sisyphos.

After I wrote the blog post about discrete random walks in 1D and their connection to Bessel functions, I started wondering about their continuous counterpart. In particular, one question that got stuck in my mind was what happens if you have discrete steps in one direction but continuous motion in the other. As I thought about the motivations for this case, I realized that it could be linked to a famous figure in Greek mythology: Sisyphos. “How?” you may ask. To give a brief summary of his story, Sisyphos was the king of Corinth and was known for his cunning and cleverness. Depending on the story, he cheated death twice and was subsequently punished by the gods for this trickery and his arrogant belief that he was smarter than them. As punishment for his crimes, Hades made Sisyphos endlessly roll a huge boulder up a steep hill in Tartarus, only for it to roll back down each time it neared the top, repeating the action for eternity. More abstractly, what we have here is a walker moving up an inclined plane of some inclination angle \theta in discrete steps with rate k_\mathrm{R} (from now on this is the right direction), interrupted by phases of continuously sliding back down the hill some distance* with rate k_\mathrm{L} (from now on this is the left direction). We can now ask whether our walker will ever climb the hill, or whether he will make no progress at all.

As in the original post on random walks, we will approach the question of stepping with backsliding using the probability p(x,t) for the walker (Sisyphos) to be at position x at time t. The time evolution of p(x,t) is described by the continuous analog of the master equation:

\displaystyle \frac{\mathrm{d}}{\mathrm{d}t}p(x|x_0;t)=\int\mathrm{d}x'\,\bigl[W(x|x')p(x'|x_0;t)-W(x'|x)p(x|x_0;t)\bigr] ,

where W(x|x') is the continuous transition rate from position x' to x. As an ansatz for the transition rate, we use the following:

W(x|x')=k_\mathrm{L}X_\mathrm{slide}(x'-x)\Theta(x'-x)+k_\mathrm{R}\delta(x'+\Delta-x) .

The first term is the rate of sliding in from the right to a position x, with sliding distances distributed according to X_\mathrm{slide} (note that whenever X_\mathrm{slide} is sharply localized at a single sliding distance, i.e., a delta function, the model degenerates back to an ordinary random walker in 1D), while the second term is the rate of stepping in from the left to x with step size \Delta. Inserting this transition rate back into the master equation and further defining p(x,t)\equiv p(x|0;t) (we start at the origin), we get the equation

\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}p(x,t)=k_\mathrm{L}\int_0^\infty\mathrm{d}x'\,X_\mathrm{slide}(x')p(x+x',t)+k_\mathrm{R}p(x-\Delta,t)-(k_\mathrm{L}+k_\mathrm{R})p(x,t)} .
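As a quick sanity check: if the sliding distances are sharply localized at the step size, X_\mathrm{slide}(x')=\delta(x'-\Delta), the integral collapses and we indeed recover the master equation of the ordinary biased random walk from the original post,

\displaystyle \frac{\mathrm{d}}{\mathrm{d}t}p(x,t)=k_\mathrm{L}p(x+\Delta,t)+k_\mathrm{R}p(x-\Delta,t)-(k_\mathrm{L}+k_\mathrm{R})p(x,t) .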

Without defining what the sliding distance distribution is, we cannot solve for the probability itself. However, we can already answer the question of whether Sisyphos will make any progress pushing the boulder up the hill. For this, we multiply the master equation by x and integrate over \mathbb{R}. This gives us

\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}\langle x\rangle=k_\mathrm{L}(\langle x\rangle-\langle x_\mathrm{slide}\rangle)+k_\mathrm{R}(\langle x\rangle+\Delta)-(k_\mathrm{L}+k_\mathrm{R})\langle x\rangle} ,

and after solving the differential equation with the condition \langle x\rangle(0)=0 we get the average position of the sliding walker:

\displaystyle \langle x\rangle(t)=(k_\mathrm{R}\Delta-k_\mathrm{L}\langle x_\mathrm{slide}\rangle)t ,

where \langle x_\mathrm{slide}\rangle is the average of the sliding distance distribution. We immediately see that the walker will on average only go up the hill if k_\mathrm{R}\Delta>k_\mathrm{L}\langle x_\mathrm{slide}\rangle (which is not surprising, really). To complete the picture, we will also compute the variance. The relevant differential equation is

\displaystyle {\frac{\mathrm{d}}{\mathrm{d}t}\langle x^2\rangle=k_\mathrm{L}(\langle x^2\rangle-2\langle x\rangle\langle x_\mathrm{slide}\rangle+\langle x_\mathrm{slide}^2\rangle)+k_\mathrm{R}(\langle x^2\rangle+2\Delta\langle x\rangle+\Delta^2)-(k_\mathrm{L}+k_\mathrm{R})\langle x^2\rangle} ,

which leads to

\displaystyle \mathrm{Var}[x](t)=(k_\mathrm{R}\Delta^2+k_\mathrm{L}\langle x_\mathrm{slide}^2\rangle)t ,

i.e., the usual \sigma_x\propto\sqrt{t} behavior of standard diffusion.
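If you want to see the drift and the \sqrt{t} spreading in action, here is a minimal Gillespie-type simulation sketch in Python; the function name and the parameter values are arbitrary choices of mine, and the exponential form of X_\mathrm{slide} anticipates the distribution derived further below:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sisyphos(t_max, k_R, k_L, step, mean_slide):
    """Simulate one trajectory of the stepping/backsliding walker up to time t_max.

    Steps of size `step` to the right occur with rate k_R; slides to the left
    occur with rate k_L, with sliding distances drawn from an exponential
    X_slide of mean `mean_slide` (an illustrative choice). Returns x(t_max).
    """
    t, x = 0.0, 0.0
    total_rate = k_R + k_L
    while True:
        t += rng.exponential(1.0 / total_rate)   # waiting time until the next event
        if t > t_max:
            return x
        if rng.random() < k_R / total_rate:      # a step up the hill
            x += step
        else:                                    # a slide back down
            x -= rng.exponential(mean_slide)

# crude check of <x>(t) and Var[x](t) against the formulas above
k_R, k_L, step, mean_slide, t_max = 1.0, 0.8, 1.0, 1.0, 50.0
final = np.array([simulate_sisyphos(t_max, k_R, k_L, step, mean_slide) for _ in range(20000)])
print("mean:", final.mean(), " predicted:", (k_R * step - k_L * mean_slide) * t_max)
# for an exponential X_slide, <x_slide^2> = 2*mean_slide**2
print("var :", final.var(), " predicted:", (k_R * step**2 + k_L * 2 * mean_slide**2) * t_max)
```

With these example numbers k_\mathrm{R}\Delta>k_\mathrm{L}\langle x_\mathrm{slide}\rangle, so the walker slowly drifts up the hill while spreading out diffusively.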

Although we didn’t need the full distribution of sliding distances for our question (only its first two moments entered), once you want to simulate this process it’s good to have an idea of what X_\mathrm{slide} might look like (and it’s also a nice exercise in model building, classical mechanics, and probability density transforms). The starting point is the standard textbook equation of motion of a particle sliding on an inclined plane with a friction term:

\displaystyle m\frac{\mathrm{d}^2x}{\mathrm{d}t^2}=\sin(\theta)mg+F_\mathrm{F} .

One can check through a stability analysis that a friction force of the form F_\mathrm{F}=f(\dot{x}) alone will not lead to a fixed point of the dynamics (loosely speaking, a velocity-dependent friction can at best balance gravity at some terminal velocity, but it cannot bring the particle to rest), i.e. the particle will slide forever and never stop. This implies that we need a distance-dependent friction term. Therefore we choose F_\mathrm{F}=-\mu(x)\cos(\theta)mg. To model the stopping of the motion and to find the point (and thus the distance) where the sliding ends, we have to think about the form of the friction coefficient \mu(x). It is usually set to a constant, but we will make a simple linear ansatz:

\displaystyle \frac{\mathrm{d}^2x}{\mathrm{d}t^2}=\sin(\theta)g-(\mu_0+\mu_1x)\cos(\theta)g .

To motivate having a linearly increasing coefficient of friction, imagine you’re climbing a hill and you start to slide down. Usually what happens is that you try to hold on to the ground more tightly, or you start “digging” into the ground with your foot, thus increasing your coefficient of friction. Since this is expected to increase progressively the further you slide, a linear approximation is justified. As we are only interested in the position, we can rewrite the derivative on the left-hand side using the chain rule, \ddot{x}=v\,\mathrm{d}v/\mathrm{d}x, to get rid of the time dependence and arrive at

\displaystyle v\frac{\mathrm{d}v}{\mathrm{d}x}=\sin(\theta)g-(\mu_0+\mu_1x)\cos(\theta)g ,

whose solution, with the initial condition v(0)=0 (the slide starts from rest), simply is

\displaystyle v(x)=\sqrt{gx}\sqrt{2\sin(\theta)-\cos(\theta)(2\mu_0+\mu_1x)} .

We define the point at which the velocity returns to zero (apart from the trivial root at x=0) as the distance at which the sliding stops. Therefore, we find the sliding distance to be given by

\displaystyle x_\mathrm{slide}=\frac{2}{\mu_1}(\tan(\theta)-\mu_0) .
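If you don’t trust the algebra, the solution and the stopping distance are easy to verify symbolically; here is a small sympy sketch (the variable names are mine):

```python
import sympy as sp

x, g, theta, mu0, mu1 = sp.symbols('x g theta mu_0 mu_1')

# v(x)^2 from the solution above; working with v^2 avoids carrying square roots around
v_sq = g * x * (2 * sp.sin(theta) - sp.cos(theta) * (2 * mu0 + mu1 * x))

# the equation of motion reads v dv/dx = (1/2) d(v^2)/dx = g sin(theta) - (mu0 + mu1*x) g cos(theta)
residual = sp.Rational(1, 2) * sp.diff(v_sq, x) - (g * sp.sin(theta) - (mu0 + mu1 * x) * g * sp.cos(theta))
print(sp.simplify(residual))                       # -> 0, so v(x) solves the equation

# the nonzero root of v(x) = 0 is the sliding distance
roots = sp.solve(sp.Eq(v_sq, 0), x)
print([sp.simplify(r) for r in roots if r != 0])   # equivalent to 2*(tan(theta) - mu_0)/mu_1
```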

Two observations right away: as expected, the better your ability to stop yourself (the greater \mu_1), the less you slide, and if \mu_0 is greater than \tan(\theta), there is no sliding at all. Now, to use this expression to find a distribution of sliding distances, we promote the two friction coefficients \mu_0 and \mu_1 to random variables with distributions of their own and transform x_\mathrm{slide} accordingly:

\displaystyle X_\mathrm{slide}=\int_0^\infty\mathrm{d}\nu_0\int_0^\infty\mathrm{d}\nu_1\,\delta\!\left(\!x_\mathrm{slide}-\frac{2}{\nu_1}(\tan(\theta)-\nu_0)\!\right)M_{\mu_0}(\nu_0)M_{\mu_1}(\nu_1) ,

where M_{\mu_0} and M_{\mu_1} are the distributions of \mu_0 and \mu_1, respectively. Performing the integral over \nu_1 gives

\displaystyle X_\mathrm{slide}=\int_0^\infty\mathrm{d}\nu_0\,\frac{2(\tan(\theta)-\nu_0)}{x_\mathrm{slide}^2}M_{\mu_0}(\nu_0)M_{\mu_1}\!\left(\frac{2(\tan(\theta)-\nu_0)}{x_\mathrm{slide}}\right) ,

and to keep things simple, we will take the zeroth-order friction coefficient to be constant, i.e., M_{\mu_0}(\nu_0)=\delta(\nu_0-\bar{\mu}_0). This gives as the distribution of sliding distances

\displaystyle X_\mathrm{slide}=\frac{2(\tan(\theta)-\bar{\mu}_0)}{x_\mathrm{slide}^2}M_{\mu_1}\!\left(\frac{2(\tan(\theta)-\bar{\mu}_0)}{x_\mathrm{slide}}\right) .

All that remains is to motivate a form for the distribution of the first-order friction coefficient. One condition that M_{\mu_1} should satisfy is that \mu_1=0 is sufficiently suppressed, so that Sisyphos doesn’t slide back down the whole hill; in other words, the transformed distribution should not be too heavy-tailed. Another desirable condition on the transformed distribution is that it “preserves” the non-stochastic sliding distance, i.e., that the mean satisfies \langle x_\mathrm{slide}\rangle=2(\tan(\theta)-\bar{\mu}_0)/\bar{\mu}_1. If we assume that M_{\mu_1} is inverse-gamma distributed (with shape parameter 1 and scale \bar{\mu}_1) according to

\displaystyle M_{\mu_1}(\nu_1)=\frac{\bar{\mu}_1}{\nu_1^2}\mathrm{e}^{-\frac{\bar{\mu}_1}{\nu_1}} ,

then these conditions are satisfied and we get a simple exponential distribution for the sliding distances:

\displaystyle X_\mathrm{slide}(x_\mathrm{slide})=\frac{\bar{\mu}_1}{2(\tan(\theta)-\bar{\mu}_0)}\mathrm{e}^{-\frac{\bar{\mu}_1}{2(\tan(\theta)-\bar{\mu}_0)}x_\mathrm{slide}} .
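To convince ourselves that the transform works, here is a quick Monte Carlo sketch: draw \mu_1 from the inverse-gamma density above (which can be sampled as \bar{\mu}_1 divided by a standard exponential variate), push each sample through x_\mathrm{slide}=2(\tan(\theta)-\bar{\mu}_0)/\mu_1, and compare the first two moments with those of the exponential distribution; the parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

theta, mu0_bar, mu1_bar = 0.4, 0.2, 2.0     # arbitrary example values with tan(theta) > mu0_bar
c = 2 * (np.tan(theta) - mu0_bar)           # shorthand for 2*(tan(theta) - mu0_bar)

# inverse-gamma(shape 1, scale mu1_bar) samples: mu1 = mu1_bar / E with E ~ Exp(1)
mu1 = mu1_bar / rng.exponential(1.0, size=1_000_000)
x_slide = c / mu1                           # transform each sampled friction coefficient

# an exponential distribution has equal mean and standard deviation
print("sample mean:", x_slide.mean(), " sample std:", x_slide.std())
print("predicted  :", c / mu1_bar)
```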

Neat. Now, one could actually try to solve the integro-differential equation that our master equation represents. However, I don’t think there is an analytical solution to this equation, but you never know.
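That said, nothing stops us from integrating the equation numerically. Below is a rough sketch of how one might do it: discretize x on a grid whose spacing divides \Delta, approximate the sliding integral by a correlation sum over a truncated (and renormalized) exponential X_\mathrm{slide}, and time-step with explicit Euler. All numerical choices here (grid, kernel cutoff, step sizes, parameter values) are crude assumptions of mine, just good enough to compare the drift with the formula for \langle x\rangle(t):

```python
import numpy as np

# parameters (arbitrary example values)
k_R, k_L, delta = 1.0, 0.8, 1.0
lam = 1.0                                   # rate of the exponential X_slide, i.e. 1/<x_slide>

# x-grid with spacing h chosen such that delta is an integer number of grid points
h = 0.1
x = np.arange(-60.0, 60.0, h)               # truncated domain; p is assumed ~0 at the edges
n_delta = int(round(delta / h))

p = np.zeros_like(x)
p[np.argmin(np.abs(x))] = 1.0 / h           # initial condition p(x,0) ~ delta(x)

# truncated exponential sliding kernel, renormalized so that sum(kernel)*h = 1
offsets = np.arange(0.0, 15.0, h)
kernel = lam * np.exp(-lam * offsets)
kernel /= kernel.sum() * h
mean_slide = np.sum(offsets * kernel) * h   # <x_slide> of the discretized kernel

dt, t_max = 0.005, 20.0
for _ in range(int(round(t_max / dt))):
    # sliding gain: int_0^inf dx' X_slide(x') p(x+x',t), evaluated as a correlation sum
    slide_gain = np.correlate(p, kernel, mode='full')[len(kernel) - 1:len(kernel) - 1 + len(p)] * h
    # stepping gain: p(x - delta, t)
    step_gain = np.zeros_like(p)
    step_gain[n_delta:] = p[:-n_delta]
    # explicit Euler step of the master equation
    p = p + dt * (k_L * slide_gain + k_R * step_gain - (k_L + k_R) * p)

# compare the numerical mean with the drift formula <x>(t) = (k_R*Delta - k_L*<x_slide>)*t
print("numerical <x>:", np.sum(x * p) * h)
print("predicted    :", (k_R * delta - k_L * mean_slide) * t_max)
```

Note that the comparison uses the mean of the discretized kernel, since that is the sliding-distance distribution the grid actually sees.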

This was a rather tailored post for the specific question of how to describe discrete stepping with continuous backsliding. Still, it was interesting for me to use the continuous counterpart of the master equation, since you usually only use the discrete version. It was also a good chance to do some classical mechanics and probability density transformations again, as it’s been a long time since I’ve done either of those. Although I basically chose M_{\mu_1} to get an exponential distribution of sliding distances, I wouldn’t say that the steps leading up to it weren’t well motivated, and it was nice to finally see it emerge at the end. But that should be it for now!

See you next time, Cheers!


*In Sisyphos’ case, the walker would slide back down to the foot of the hill, but for our problem, imagine that the hill is infinitely long, so that only finite sliding distances are allowed.
