
Computing $\alpha$-Stable Densities Using the FFT

Mon, 9 Apr 2012 11:45:28 GMT


1. Foreword

Yesterday I reopened a box of handwritten documents that I wrote back when I had the time to study and think about math $24/7$. Those boxes were packed when I moved from Lille to Paris almost two years ago and have remained unopened since then, sitting near my shelf collecting dust.

I am really happy to have my hands on those gems again, and today I'll talk about a small and clever computation that I found inside that box while quickly scanning through it, namely: how to compute the density function of an $\alpha$-stable distribution using the fast Fourier transform algorithm.

2. Mathematical Description

The Lévy $\alpha$-stable laws form a four-parameter family of probability distributions that generalizes the normal distribution and contains several other famous distributions as particular cases. Unfortunately, not all the density functions of this family have a closed-form formula. The general case is expressed in terms of its characteristic function, $$ \phi(t; \mu, c, \alpha, \beta) = e^{it\mu - |\,ct\,|^\alpha (1- i\beta\, \mathrm{sgn}(t)\Phi)} $$ where $\mathrm{sgn}(t)$ is the sign of $t$ and, $$ \Phi = \begin{cases} -\frac{2}{\pi} \log |\,t\,| & \text{if $\alpha=1$,} \\ \tan(\frac{\pi\alpha}{2}) & \text{otherwise.} \end{cases} $$ The density of the corresponding stable distribution is then, $$ f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \phi(t)\,e^{-ixt}\,dt $$
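
For reference, here is a minimal sketch of this characteristic function in Python with NumPy; the default parameter values and the special handling of $t = 0$ when $\alpha = 1$ are my own illustrative choices, not part of the definition above.

```python
import numpy as np

def stable_cf(t, mu=0.0, c=1.0, alpha=1.5, beta=0.0):
    """Characteristic function phi(t; mu, c, alpha, beta) of the Levy alpha-stable law."""
    t = np.asarray(t, dtype=float)
    if alpha == 1.0:
        # Phi = -(2/pi) log|t|; its value at t = 0 does not matter because |c t|^alpha
        # vanishes there, but we must avoid log(0) turning the product into a NaN.
        with np.errstate(divide="ignore"):
            Phi = np.where(t == 0.0, 0.0, -(2.0 / np.pi) * np.log(np.abs(t)))
    else:
        Phi = np.tan(np.pi * alpha / 2.0)
    return np.exp(1j * t * mu - np.abs(c * t) ** alpha * (1.0 - 1j * beta * np.sign(t) * Phi))
```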

3. Simplification

To simplify the exposition, I'll cheat a bit and remove most of the parameters by setting $\mu = 0$, $c = 1$, $\beta = 0$. The choices for $\mu$ and $c$ are harmless, as they are mere location and scale factors. The choice for $\beta$ is really cheating, as it destroys an important degree of freedom. The benefit of cheating like that is that we are left with the very simple formula, $$ f(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{- |\,t\,|^\alpha}e^{-ixt}\,dt $$ and by the following computation (splitting the integral at $0$ and substituting $t \mapsto -t$ on the negative half), it simplifies a bit more, \begin{align*} f(x) &= \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{- |\,t\,|^\alpha}e^{-ixt}\,dt \\ &= \frac{1}{2\pi} \int_{0}^{+\infty} e^{- |\,t\,|^\alpha}e^{-ixt}\,dt + \frac{1}{2\pi}\int_{0}^{+\infty} e^{- |\,-t\,|^\alpha}e^{ixt}\,dt \\ &= \frac{1}{2\pi} \int_{0}^{+\infty} \left( e^{-ixt} + e^{ixt}\right)e^{- t^\alpha}\,dt \\ &= \frac{1}{\pi} \int_{0}^{+\infty} \cos(xt)\,e^{- t^\alpha}\,dt. \end{align*} It looks much simpler now, doesn't it?
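
As a sanity check on this simplified formula, one can evaluate the integral by adaptive quadrature and compare it with the two members of the family that do have closed forms: $\alpha = 1$ gives the standard Cauchy density $1/\big(\pi(1+x^2)\big)$, and $\alpha = 2$ gives a centered Gaussian of variance $2$, that is $e^{-x^2/4}/(2\sqrt{\pi})$. A minimal sketch with SciPy (the quadrature settings are illustrative):

```python
import numpy as np
from scipy.integrate import quad

def stable_pdf_quad(x, alpha):
    """f(x) = (1/pi) * integral_0^inf cos(x*t) * exp(-t**alpha) dt, by adaptive quadrature."""
    val, _ = quad(lambda t: np.cos(x * t) * np.exp(-t ** alpha), 0.0, np.inf, limit=200)
    return val / np.pi

print(stable_pdf_quad(0.5, 1.0), 1.0 / (np.pi * (1.0 + 0.5 ** 2)))                   # Cauchy
print(stable_pdf_quad(0.5, 2.0), np.exp(-0.5 ** 2 / 4.0) / (2.0 * np.sqrt(np.pi)))   # Gaussian
```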

4. Series Expansion

One can derive a Taylor series expansion by expanding the cosine into its own Taylor series and integrating term by term, \begin{align*} f(x) &= \frac{1}{\pi} \int_{0}^{+\infty} \cos(xt)\,e^{- t^\alpha}\,dt \\ &= \frac{1}{\pi} \sum_{k \ge 0} \frac{(-1)^k x^{2k}}{(2k)!} \int_{0}^{+\infty} t^{2k}\,e^{- t^\alpha}\,dt \end{align*} The integral on the right looks very much like the Euler integral for the gamma function, $$ \Gamma(1+z) = \int_0^{+\infty} u^z \, e^{-u} \, du $$ and indeed, with the little change of variable $u = t^\alpha$, $du = \alpha\, t^{\alpha-1}\, dt$, we get, \begin{align*} \Gamma(1+z) &= \alpha \int_0^{+\infty} t^{\alpha z} e^{-t^\alpha} t^{\alpha - 1}\, dt \\ &= \alpha \int_0^{+\infty} t^{\alpha (z + 1) - 1} e^{-t^\alpha} \, dt \end{align*} By equating the exponents of $t$, that is $\alpha(z+1) - 1 = 2k$, we get, $$ \int_{0}^{+\infty} t^{2k}\,e^{- t^\alpha}\,dt = \frac{1}{\alpha} \,\Gamma\left(\frac{2k+1}{\alpha}\right) $$ Finally, $$ f(x) = \frac{1}{\pi \alpha} \sum_{k \ge 0} \frac{(-1)^k}{(2k)!}\Gamma\left(\frac{2k+1}{\alpha}\right) \, x^{2k} $$
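
Here is a small sketch of a partial sum of this series in Python; the truncation order kmax is an arbitrary choice, and the partial sum is only meaningful close to $x = 0$ (and not for too small an $\alpha$):

```python
import numpy as np
from scipy.special import gamma

def stable_pdf_series(x, alpha, kmax=40):
    """Partial sum of f(x) = 1/(pi*alpha) * sum_k (-1)^k Gamma((2k+1)/alpha) / (2k)! * x^(2k)."""
    k = np.arange(kmax)
    terms = (-1.0) ** k * gamma((2 * k + 1) / alpha) / gamma(2 * k + 1) * x ** (2 * k)
    return terms.sum() / (np.pi * alpha)

# For alpha = 1 and |x| < 1 the series sums to the Cauchy density 1/(pi*(1+x^2)).
print(stable_pdf_series(0.5, 1.0), 1.0 / (np.pi * (1.0 + 0.5 ** 2)))
```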

Consequence. While not exceedingly terrific, due to its small radius of convergence, this formula has the merit of giving us the value of $f$ and all its derivatives at $0$, $$ f(0) = \frac{1}{\pi \alpha} \Gamma\left(\frac{1}{\alpha}\right) $$ and more generally, \begin{align*} \left.\frac{d^{n}}{dx^{n}}\,f(x)\right|_{x=0} = \begin{cases} \displaystyle \frac{(-1)^k}{\pi \alpha} \Gamma\left(\frac{2k+1}{\alpha}\right) & \text{if $n = 2k$,} \\ \\ 0 &\text{if $n$ is odd.} \end{cases} \end{align*}
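
As a quick sanity check on these values: for $\alpha = 1$ this gives $f(0) = \Gamma(1)/\pi = 1/\pi$, which is indeed the value at the origin of the standard Cauchy density $1/\big(\pi(1+x^2)\big)$, and for $\alpha = 2$ it gives $f(0) = \Gamma(1/2)/(2\pi) = 1/(2\sqrt{\pi})$, the value at the origin of a centered Gaussian of variance $2$, whose characteristic function is $e^{-t^2}$.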

5. Fourier Transform

Now the real thing, ha ha! The first part of the method is to express the desired integral as a Riemann sum, \begin{align*} f(x) &= \frac{1}{\pi} \int_{0}^{+\infty} \cos(xt)\,e^{- t^\alpha}\,dt \\ &= \lim_{\Delta t \rightarrow 0} \, \frac{1}{\pi} \, \sum_{k \ge 0} \cos( x\, k\, \Delta t) \, e^{-(k\, \Delta t)^\alpha}\, \Delta t. \end{align*} For those out there who don't know the trick, it's simple: basically, you replace $t$ by $k\,\Delta t$ and $dt$ by $\Delta t$, and you're good.
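
A direct, brute-force version of this Riemann sum (one value of $x$ at a time) is easy to write and already usable; the step $\Delta t$ and the truncation point $T$ below are illustrative and have to be adapted to $\alpha$ and to the range of $x$ one is interested in:

```python
import numpy as np
from scipy.special import gamma

def stable_pdf_riemann(x, alpha, dt=1e-3, T=50.0):
    """Truncated left Riemann sum for f(x) = (1/pi) * int_0^inf cos(x*t) * exp(-t**alpha) dt."""
    t = np.arange(0.0, T, dt)
    return np.sum(np.cos(x * t) * np.exp(-t ** alpha)) * dt / np.pi

# Check against the exact value f(0) = Gamma(1/alpha) / (pi * alpha) from the series above.
print(stable_pdf_riemann(0.0, 1.5), gamma(1.0 / 1.5) / (np.pi * 1.5))
```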

If we denote by $W[\,j\,]$ the discrete Fourier transform of the vector $V[\,k\,]$, the formula is, $$ W[\,j\,] = \frac{1}{\sqrt{N}}\, \sum_{k=1}^{N} e^{\frac{2i\pi}{N}(j-1)(k-1)}\, V[\,k\,] $$ Hmm, how to reconcile the two?
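
A side note on conventions: this is the unitary transform with a $+i$ in the exponent, whereas NumPy's numpy.fft.fft uses a $-i$ and no $1/\sqrt{N}$ factor. If one wants exactly the transform written above, it can be obtained through ifft; a minimal sketch:

```python
import numpy as np

def dft_plus_unitary(V):
    """W[j] = (1/sqrt(N)) * sum_k exp(+2i*pi*(j-1)*(k-1)/N) * V[k], with 1-based j, k.
    numpy.fft.ifft(V)[j] = (1/N) * sum_k V[k] * exp(+2i*pi*j*k/N), hence the sqrt(N) factor."""
    N = len(V)
    return np.sqrt(N) * np.fft.ifft(V)
```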

Well, here we have to be creative and look for analogies between the two formulae. The first step is to identify the analog of $xt$ in the formula for the discrete Fourier transform. It seems that a good choice might be to set, $$ xt = \frac{2\pi}{N}(j-1)(k-1) $$ From that, I chose to take, \begin{align*} x &= (j-1) \, \Delta x, & \Delta x &= \sqrt{\frac{2\pi}{N}}, \\ t &= (k-1) \, \Delta t , & \Delta t &= \sqrt{\frac{2\pi}{N}}. \end{align*}

But really, this choice for $\Delta x$ and $\Delta t$ is arbitrary. You can very well choose anything you want for $\Delta x$, just as long as the following relation is satisfied, $$ \Delta x \, \Delta t = \frac{2\pi}{N}. $$ Once your choice is made for $\Delta x$, the value of $\Delta t$ is completely determined and we have, $$ \Delta t = \frac{2\pi}{N \, \Delta x}. $$
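
For instance, with $N = 2^{14} = 16384$ points and $\Delta x = 0.01$, the relation forces $\Delta t = 2\pi/(N\,\Delta x) \approx 0.0384$, so the $t$ grid extends up to $N\,\Delta t = 2\pi/\Delta x \approx 628$ while the $x$ grid extends up to $N\,\Delta x \approx 164$ (these particular values are only an illustration).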

Final step. We are guided by the relation, $$ f\left((j-1) \, \Delta x \right) = \Re\, W[\,j\,] $$ Now, it's time to vanquish or perish. Comparing with the Riemann sum above, \begin{align*} \Re\, W[\,j\,] &= \frac{1}{\sqrt{N}}\, \sum_{k=1}^{N} \cos\left( (j-1) \, \Delta x \,(k-1) \, \Delta t\right) \, e^{-\left((k-1) \, \Delta t \right)^\alpha} \frac{\Delta t}{\pi}\sqrt{N} \end{align*} so it suffices to feed the transform with, $$ V[k] = e^{-\left((k-1) \, \Delta t \right)^\alpha} \frac{\Delta t}{\pi}\sqrt{N}. $$ It's done!

Whaa! I love this feeling, when every piece of the puzzle falls into place.
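
To make the recipe concrete, here is a minimal sketch in Python with NumPy; the values of $N$ and $\Delta x$ are the illustrative ones above, and the half-weight given to the first sample is a small trapezoidal-rule refinement that is not part of the derivation above:

```python
import numpy as np

def stable_pdf_fft(alpha, N=2**14, dx=0.01):
    """Symmetric (mu = 0, c = 1, beta = 0) alpha-stable density on the grid
    x_j = (j - 1) * dx, j = 1..N, computed with a single FFT as described above."""
    dt = 2.0 * np.pi / (N * dx)            # forced by the relation dx * dt = 2*pi / N
    t = np.arange(N) * dt                  # t_k = (k - 1) * dt, written 0-based
    V = np.exp(-t ** alpha) * dt / np.pi   # the sqrt(N) in V[k] cancels the 1/sqrt(N) of W[j]
    V[0] *= 0.5                            # trapezoidal half-weight (refinement, not in the post)
    # The DFT above has a +i in the exponent while numpy.fft.fft uses -i; since V is
    # real, both transforms have the same real part, so fft can be used directly.
    f = np.fft.fft(V).real
    x = np.arange(N) * dx                  # x_j = (j - 1) * dx
    return x, f

x, f = stable_pdf_fft(alpha=1.0)                        # alpha = 1 : standard Cauchy
print(f[0], 1.0 / np.pi)                                # density at x = 0
print(f[100], 1.0 / (np.pi * (1.0 + x[100] ** 2)))      # density at x = 1
```

Only the low-$x$ part of the output should be trusted: for the $j$-th abscissa the cosine is sampled in steps of $x_j\,\Delta t = 2\pi(j-1)/N$, so near the end of the grid there are only a few samples per oscillation and the sum no longer approximates the integral; in practice one keeps the first fraction of the grid or increases $N$.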

3 Comments

Samuel VIDAL  posted 2012-04-16 00:03:30

I've found the Maple programs that I wrote when doing this small research project. I'll update this post soon with source code and graphics.

Alex  posted 2012-05-18 18:11:17

What is the $\Delta t$ term in the answer?

Samuel Vidal  posted 2012-05-20 20:05:55

Ok, very good question, Alex. The number $\Delta t$ is the discretization step for the characteristic function.

Here we have a function $\phi(t)$ and we want to compute its Fourier transform using the fast Fourier transform algorithm of Cooley and Tukey. To do that, we first replace the function $\phi(t)$ by its discretized version, which is the vector $V$ of its values taken at equally spaced abscissas $t = t_1, ..., t_k, ...$. The value $\Delta t$ is the distance between two successive such abscissas, $$ \Delta t = t_{k+1} - t_k. $$ If we choose to start this sequence of abscissas at $t_1 = 0$, we have $t_k = (k-1) \, \Delta t$. If we chose to start at $t_0 = 0$ instead, we would have $t_k = k \, \Delta t$.

The same is true for the function $f(x)$, which is the function we wanted to evaluate in the first place. The continuum parameterized by $x$ is replaced by the sequence of equally spaced values $x = x_1, ..., x_j, ...$ and, $$ \Delta x = x_{j+1} - x_j $$ We took the convention of starting at $x_1 = 0$, so we have $x_j = (j - 1) \, \Delta x$.
