
If {f: {\bf R}^n \rightarrow {\bf C}} and {g: {\bf R}^n \rightarrow {\bf C}} are two absolutely integrable functions on a Euclidean space {{\bf R}^n}, then the convolution {f*g: {\bf R}^n \rightarrow {\bf C}} of the two functions is defined by the formula

\displaystyle  f*g(x) := \int_{{\bf R}^n} f(y) g(x-y)\ dy = \int_{{\bf R}^n} f(x-z) g(z)\ dz.

A simple application of the Fubini-Tonelli theorem shows that the convolution {f*g} is well-defined almost everywhere, and yields another absolutely integrable function. In the case that {f=1_F}, {g=1_G} are indicator functions, the convolution simplifies to

\displaystyle  1_F*1_G(x) = m( F \cap (x-G) ) = m( (x-F) \cap G ) \ \ \ \ \ (1)

where {m} denotes Lebesgue measure. One can also define convolution on more general locally compact groups than {{\bf R}^n}, but we will restrict attention to the Euclidean case in this post.
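
As a quick numerical sanity check of (1) in one dimension, here is a minimal sketch (with the illustrative choice {F = G = [0,1]}, so that {1_F*1_G(x) = m(F \cap (x-G))} is the tent function {x \mapsto \max(0,1-|x-1|)}; the grid and step size are of course ad hoc choices):

    import numpy as np

    dx = 1e-3
    x = np.arange(-1.0, 3.0, dx)                     # grid for the input functions
    f = ((x >= 0) & (x <= 1)).astype(float)          # 1_F with F = [0,1]
    g = ((x >= 0) & (x <= 1)).astype(float)          # 1_G with G = [0,1]

    conv = np.convolve(f, g) * dx                    # Riemann-sum approximation of 1_F * 1_G
    xs = 2 * x[0] + dx * np.arange(conv.size)        # grid for the output variable
    tent = np.maximum(0.0, 1.0 - np.abs(xs - 1.0))   # exact value m(F \cap (xs - G))
    print(np.max(np.abs(conv - tent)))               # discrepancy of order dx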

The convolution {f*g} can also be defined by duality by observing the identity

\displaystyle  \int_{{\bf R}^n} f*g(x) h(x)\ dx = \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ f(y) dy g(z) dz

for any bounded measurable function {h: {\bf R}^n \rightarrow {\bf C}}. Motivated by this observation, we may define the convolution {\mu*\nu} of two finite Borel measures on {{\bf R}^n} by the formula

\displaystyle  \int_{{\bf R}^n} h(x)\ d\mu*\nu(x) := \int_{{\bf R}^n} \int_{{\bf R}^n} h(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (2)

for any bounded (Borel) measurable function {h: {\bf R}^n \rightarrow {\bf C}}, or equivalently that

\displaystyle  \mu*\nu(E) = \int_{{\bf R}^n} \int_{{\bf R}^n} 1_E(y+z)\ d\mu(y) d\nu(z) \ \ \ \ \ (3)

for all Borel measurable {E}. (In another equivalent formulation: {\mu*\nu} is the pushforward of the product measure {\mu \times \nu} with respect to the addition map {+: {\bf R}^n \times {\bf R}^n \rightarrow {\bf R}^n}.) This can easily be verified to again be a finite Borel measure.

If {\mu} and {\nu} are probability measures, then the convolution {\mu*\nu} also has a simple probabilistic interpretation: it is the law (i.e. probability distribution) of a random variable of the form {X+Y}, where {X, Y} are independent random variables taking values in {{\bf R}^n} with law {\mu,\nu} respectively. Among other things, this interpretation makes it obvious that the support of {\mu*\nu} is the closure of the sumset of the supports of {\mu} and {\nu}, and that {\mu*\nu} will also be a probability measure.
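
The probabilistic interpretation also lends itself to quick Monte Carlo experiments. Here is a minimal sketch (with the illustrative choice {n=1} and {\mu = \nu} the uniform distribution on {[0,1]}, in which case {\mu*\nu} is the triangular distribution on {[0,2]} with density {s \mapsto \min(s,2-s)}):

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=10**6)     # samples from mu
    Y = rng.uniform(0.0, 1.0, size=10**6)     # independent samples from nu

    hist, edges = np.histogram(X + Y, bins=200, range=(0.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    exact = np.minimum(centers, 2.0 - centers)    # density of the triangular law
    print(np.max(np.abs(hist - exact)))           # small statistical/binning error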

While the above discussion gives a perfectly rigorous definition of the convolution of two measures, it does not always give helpful guidance as to how to compute the convolution of two explicit measures (e.g. the convolution of two surface measures on explicit examples of surfaces, such as the sphere). In simple cases, one can work from first principles directly from the definition (2), (3), perhaps after some application of tools from several variable calculus, such as the change of variables formula. Another technique proceeds by regularisation, approximating the measures {\mu, \nu} involved as the weak limit (or vague limit) of absolutely integrable functions

\displaystyle  \mu = \lim_{\epsilon \rightarrow 0} f_\epsilon; \quad \nu =\lim_{\epsilon \rightarrow 0} g_\epsilon

(where we identify an absolutely integrable function {f} with the associated absolutely continuous measure {dm_f(x) := f(x)\ dx}) which then implies (assuming that the sequences {f_\epsilon,g_\epsilon} are tight) that {\mu*\nu} is the weak limit of the {f_\epsilon * g_\epsilon}. The latter convolutions {f_\epsilon * g_\epsilon}, being convolutions of functions rather than measures, can be computed (or at least estimated) by traditional integration techniques, at which point the only difficulty is to ensure that one has enough uniformity in {\epsilon} to maintain control of the limit as {\epsilon \rightarrow 0}.
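
To see the regularisation strategy in a toy one-dimensional setting, take {\mu = \delta_a} and {\nu = \delta_b} (so that {\mu*\nu = \delta_{a+b}}), approximated by {f_\epsilon := \frac{1}{\epsilon} 1_{[a,a+\epsilon]}} and {g_\epsilon := \frac{1}{\epsilon} 1_{[b,b+\epsilon]}}. The following minimal sketch (my own choice of example and discretisation) checks numerically that {\int h(x)\ f_\epsilon*g_\epsilon(x)\ dx} approaches {h(a+b)} as {\epsilon \rightarrow 0} for a bounded continuous test function {h}:

    import numpy as np

    a, b = 0.3, 1.1
    h = np.cos                                    # a bounded continuous test function

    for eps in [0.1, 0.01, 0.001]:
        dx = eps / 200.0
        y = np.arange(a, a + eps, dx)             # grid covering the support of f_eps
        z = np.arange(b, b + eps, dx)             # grid covering the support of g_eps
        f_eps = np.full_like(y, 1.0 / eps)
        g_eps = np.full_like(z, 1.0 / eps)
        conv = np.convolve(f_eps, g_eps) * dx     # samples of f_eps * g_eps
        xs = y[0] + z[0] + dx * np.arange(conv.size)
        print(eps, abs(np.sum(h(xs) * conv) * dx - h(a + b)))   # tends to 0 with eps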

A third method proceeds using the Fourier transform

\displaystyle  \hat \mu(\xi) := \int_{{\bf R}^n} e^{-2\pi i x \cdot \xi}\ d\mu(x)

of {\mu} (and of {\nu}). We have

\displaystyle  \widehat{\mu*\nu}(\xi) = \hat{\mu}(\xi) \hat{\nu}(\xi)

and so one can (in principle, at least) compute {\mu*\nu} by taking Fourier transforms, multiplying them together, and applying the (distributional) inverse Fourier transform. Heuristically, this formula implies that the Fourier transform of {\mu*\nu} should be concentrated in the intersection of the frequency region where the Fourier transform of {\mu} is supported, and the frequency region where the Fourier transform of {\nu} is supported. As the regularity of a measure is related to decay of its Fourier transform, this also suggests that the convolution {\mu*\nu} of two measures will typically be more regular than each of the two original measures, particularly if the Fourier transforms of {\mu} and {\nu} are concentrated in different regions of frequency space (which should happen if the measures {\mu,\nu} are suitably “transverse”). In particular, it can happen that {\mu*\nu} is an absolutely continuous measure, even if {\mu} and {\nu} are both singular measures.
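
Here is a minimal numerical check of the identity {\widehat{\mu*\nu} = \hat{\mu} \hat{\nu}} (again with the toy choice {n=1} and {\mu=\nu} the uniform distribution on {[0,1]}, so that {\mu*\nu} has the triangular density {s \mapsto \min(s,2-s)} on {[0,2]}); the discretisation is a rough Riemann sum and is only meant as an illustration:

    import numpy as np

    dx = 1e-4
    x = np.arange(0.0, 1.0, dx)
    w = np.full_like(x, dx)                  # discretisation of the uniform measure on [0,1]

    def ft(points, weights, xi):
        # illustrative helper: hat{mu}(xi) = int e^{-2 pi i x xi} dmu(x)
        # for the discrete measure sum_j weights[j] delta_{points[j]}
        return np.sum(weights * np.exp(-2j * np.pi * points * xi))

    s = np.arange(0.0, 2.0, dx)
    dens = np.minimum(s, 2.0 - s)            # density of mu*nu (triangular law on [0,2])

    for xi in [0.3, 1.0, 2.7]:
        lhs = ft(x, w, xi) ** 2                                  # hat{mu}(xi) hat{nu}(xi)
        rhs = np.sum(dens * np.exp(-2j * np.pi * s * xi)) * dx   # direct computation of widehat{mu*nu}(xi)
        print(xi, abs(lhs - rhs))                                # small discretisation error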

Using intuition from microlocal analysis, we can combine our understanding of the spatial and frequency behaviour of convolution into the following heuristic: a convolution {\mu*\nu} should be supported in regions of phase space {\{ (x,\xi): x \in {\bf R}^n, \xi \in {\bf R}^n \}} of the form {(x,\xi) = (x_1+x_2,\xi)}, where {(x_1,\xi)} lies in the region of phase space where {\mu} is concentrated, and {(x_2,\xi)} lies in the region of phase space where {\nu} is concentrated. It is a challenge to make this intuition perfectly rigorous, as one has to somehow deal with the obstruction presented by the Heisenberg uncertainty principle, but it can be made rigorous in various asymptotic regimes, for instance using the machinery of wave front sets (which describes the high frequency limit of the phase space distribution).

Let us illustrate these three methods and the final heuristic with a simple example. Let {\mu} be a singular measure on the horizontal unit interval {[0,1] \times \{0\} = \{ (x,0): 0 \leq x \leq 1 \}}, given by weighting Lebesgue measure on that interval by some test function {\phi: {\bf R} \rightarrow {\bf C}} supported on {[0,1]}:

\displaystyle  \int_{{\bf R}^2} f(x,y)\ d\mu(x,y) := \int_{\bf R} f(x,0) \phi(x)\ dx.

Similarly, let {\nu} be a singular measure on the vertical unit interval {\{0\} \times [0,1] = \{ (0,y): 0 \leq y \leq 1 \}} given by weighting Lebesgue measure on that interval by another test function {\psi: {\bf R} \rightarrow {\bf C}} supported on {[0,1]}:

\displaystyle  \int_{{\bf R}^2} g(x,y)\ d\nu(x,y) := \int_{\bf R} g(0,y) \psi(y)\ dy.

We can compute the convolution {\mu*\nu} using (2), which in this case becomes

\displaystyle  \int_{{\bf R}^2} h( x, y ) d\mu*\nu(x,y) = \int_{{\bf R}^2} \int_{{\bf R}^2} h(x_1+x_2, y_1+y_2)\ d\mu(x_1,y_1) d\nu(x_2,y_2)

\displaystyle  = \int_{\bf R} \int_{\bf R} h( x_1, y_2 )\ \phi(x_1) dx_1 \psi(y_2) dy_2

and we thus conclude that {\mu*\nu} is an absolutely continuous measure on {{\bf R}^2} with density function {(x,y) \mapsto \phi(x) \psi(y)}:

\displaystyle  d(\mu*\nu)(x,y) = \phi(x) \psi(y) dx dy. \ \ \ \ \ (4)

In particular, {\mu*\nu} is supported on the unit square {[0,1]^2}, which is of course the sumset of the two intervals {[0,1] \times\{0\}} and {\{0\} \times [0,1]}.
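
One can also test (4) against the probabilistic interpretation. Here is a minimal Monte Carlo sketch, with the convenient (non-smooth, but still absolutely integrable) choice of {\phi=\psi} equal to the Beta(2,2) density {t \mapsto 6t(1-t)} on {[0,1]}, so that {\mu,\nu} are probability measures and {\mu*\nu} is the law of {(U,0)+(0,V) = (U,V)} with {U,V} independent Beta(2,2) random variables, whose density is indeed {\phi(x)\psi(y)}:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 10**6
    U = rng.beta(2, 2, n)                       # samples from the weight phi
    V = rng.beta(2, 2, n)                       # samples from the weight psi

    H, xe, ye = np.histogram2d(U, V, bins=20, range=[[0, 1], [0, 1]], density=True)
    phi = lambda t: 6.0 * t * (1.0 - t)
    xc = 0.5 * (xe[:-1] + xe[1:]); yc = 0.5 * (ye[:-1] + ye[1:])
    exact = np.outer(phi(xc), phi(yc))          # the density phi(x) psi(y) from (4)
    print(np.max(np.abs(H - exact)))            # small statistical/binning error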

We can arrive at the same conclusion from the regularisation method; the computations become lengthier, but more geometric in nature, and emphasise the role of transversality between the two segments supporting {\mu} and {\nu}. One can view {\mu} as the weak limit of the functions

\displaystyle  f_\epsilon(x,y) := \frac{1}{\epsilon} \phi(x) 1_{[0,\epsilon]}(y)

as {\epsilon \rightarrow 0} (where we continue to identify absolutely integrable functions with absolutely continuous measures, and of course we keep {\epsilon} positive). We can similarly view {\nu} as the weak limit of

\displaystyle  g_\epsilon(x,y) := \frac{1}{\epsilon} 1_{[0,\epsilon]}(x) \psi(y).

Let us first look at the model case when {\phi=\psi=1_{[0,1]}}, so that {f_\epsilon,g_\epsilon} are renormalised indicator functions of thin rectangles:

\displaystyle  f_\epsilon = \frac{1}{\epsilon} 1_{[0,1]\times [0,\epsilon]}; \quad g_\epsilon = \frac{1}{\epsilon} 1_{[0,\epsilon] \times [0,1]}.

By (1), the convolution {f_\epsilon*g_\epsilon} is then given by

\displaystyle  f_\epsilon*g_\epsilon(x,y) := \frac{1}{\epsilon^2} m( E_\epsilon )

where {E_\epsilon} is the intersection of two rectangles:

\displaystyle  E_\epsilon := ([0,1] \times [0,\epsilon]) \cap ((x,y) - [0,\epsilon] \times [0,1]).

When {(x,y)} lies in the square {[\epsilon,1] \times [\epsilon,1]}, one readily sees (especially if one draws a picture) that {E_\epsilon} consists of an {\epsilon \times \epsilon} square and thus has measure {\epsilon^2}; on the other hand, if {(x,y)} lies outside {[0,1+\epsilon] \times [0,1+\epsilon]}, {E_\epsilon} is empty and thus has measure zero. In the intermediate region, {E_\epsilon} will have some measure between {0} and {\epsilon^2}. From this we see that {f_\epsilon*g_\epsilon} converges pointwise almost everywhere to {1_{[0,1] \times [0,1]}} while also being dominated by an absolutely integrable function (such as {1_{[0,2] \times [0,2]}}), and so converges weakly to {1_{[0,1] \times [0,1]}}, giving a special case of the formula (4).
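
The measure {m(E_\epsilon)} can in fact be written in closed form, since {E_\epsilon} is a product of two interval intersections; the following minimal sketch (the helper conv_eps is my own ad hoc notation) evaluates {\frac{1}{\epsilon^2} m(E_\epsilon)} at an interior point of the unit square and at a point outside {[0,1+\epsilon] \times [0,1+\epsilon]}, illustrating the pointwise convergence to {1_{[0,1] \times [0,1]}}:

    def conv_eps(x, y, eps):
        # illustrative helper (not from the post): f_eps * g_eps (x, y) via formula (1)
        # m([0,1] \cap [x - eps, x]), the first factor of E_eps
        len1 = max(0.0, min(1.0, x) - max(0.0, x - eps))
        # m([0,eps] \cap [y - 1, y]), the second factor of E_eps
        len2 = max(0.0, min(eps, y) - max(0.0, y - 1.0))
        return len1 * len2 / eps**2

    for eps in [0.1, 0.01, 0.001]:
        print(eps, conv_eps(0.5, 0.5, eps), conv_eps(1.5, 0.5, eps))
    # the first value equals 1 and the second equals 0 as soon as eps < 1/2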

Exercise 1 Use a similar method to verify (4) in the case that {\phi, \psi} are continuous functions on {[0,1]}. (The argument also works for absolutely integrable {\phi,\psi}, but one needs to invoke the Lebesgue differentiation theorem to make it run smoothly.)

Now we compute with the Fourier-analytic method. The Fourier transform {\hat \mu(\xi,\eta)} of {\mu} is given by

\displaystyle  \hat \mu(\xi,\eta) =\int_{{\bf R}^2} e^{-2\pi i (x \xi + y \eta)}\ d\mu(x,y)

\displaystyle  = \int_{\bf R} \phi(x) e^{-2\pi i x \xi}\ dx

\displaystyle  = \hat \phi(\xi)

where we abuse notation slightly by using {\hat \phi} to refer to the one-dimensional Fourier transform of {\phi}. In particular, {\hat \mu} decays in the {\xi} direction (by the Riemann-Lebesgue lemma) but has no decay in the {\eta} direction, which reflects the horizontally grained structure of {\mu}. Similarly we have

\displaystyle  \hat \nu(\xi,\eta) = \hat \psi(\eta),

so that {\hat \nu} decays in the {\eta} direction. The Fourier transform of the convolution {\mu*\nu} then has decay in both the {\xi} and {\eta} directions,

\displaystyle  \widehat{\mu*\nu}(\xi,\eta) = \hat \phi(\xi) \hat \psi(\eta)

and by inverting the Fourier transform we obtain (4).
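
This Fourier computation can also be mimicked numerically, with a discrete Fourier transform standing in for the continuous one. Here is a minimal sketch (my own discretisation, in the non-smooth case {\phi=\psi=1_{[0,1]}}): we place {\mu} and {\nu} as discrete measures on a grid, multiply their transforms, and invert, recovering an approximation to the density {1_{[0,1]^2}} of {\mu*\nu}:

    import numpy as np

    N, L = 400, 4.0                 # grid resolution and box size (large enough to avoid wraparound)
    dx = L / N
    idx = np.arange(N) * dx

    wmu = np.zeros((N, N)); wmu[idx < 1.0, 0] = dx    # mass dx at (i*dx, 0) for i*dx in [0,1)
    wnu = np.zeros((N, N)); wnu[0, idx < 1.0] = dx    # mass dx at (0, j*dx) for j*dx in [0,1)

    conv = np.real(np.fft.ifft2(np.fft.fft2(wmu) * np.fft.fft2(wnu)))   # discrete mu*nu
    density = conv / dx**2                            # approximate density of mu*nu

    print(density[N // 8, N // 8], density[3 * N // 8, 3 * N // 8])
    # roughly 1 at (0.5, 0.5) inside the unit square, roughly 0 at (1.5, 1.5) outside it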

Exercise 2 Let {AB} and {CD} be two non-parallel line segments in the plane {{\bf R}^2}. If {\mu} is the uniform probability measure on {AB} and {\nu} is the uniform probability measure on {CD}, show that {\mu*\nu} is the uniform probability measure on the parallelogram {AB + CD} with vertices {A+C, A+D, B+C, B+D}. What happens in the degenerate case when {AB} and {CD} are parallel?
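
Before attempting a proof of Exercise 2, one can of course test the claim numerically. Here is a minimal Monte Carlo sketch with one concrete choice of segments of my own, namely {AB} from {(0,0)} to {(1,0)} and {CD} from {(0,0)} to {(1,1)}, so that {AB+CD} is a parallelogram of unit area and the limiting density should be identically {1} there:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10**6
    s = rng.uniform(0.0, 1.0, n)
    t = rng.uniform(0.0, 1.0, n)
    pts = np.column_stack([s + t, t])      # (s, 0) + (t, t): uniform point on AB plus uniform point on CD

    H, xedges, yedges = np.histogram2d(pts[:, 0], pts[:, 1], bins=50,
                                       range=[[0, 2], [0, 1]], density=True)
    print(H[25, 25])   # density near (1.0, 0.5), well inside the parallelogram; close to 1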

Finally, we compare the above answers with what one gets from the microlocal analysis heuristic. The measure {\mu} is supported on the horizontal interval {[0,1] \times \{0\}}, and the conormal direction to this interval at any of its points is vertical. Thus, the wave front set of {\mu} should be supported on those points {((x_1,x_2),(\xi_1,\xi_2))} in phase space with {x_1 \in [0,1]}, {x_2 = 0} and {\xi_1=0}. Similarly, the wave front set of {\nu} should be supported at those points {((y_1,y_2),(\xi_1,\xi_2))} with {y_1 = 0}, {y_2 \in [0,1]}, and {\xi_2=0}. The convolution {\mu * \nu} should then have wave front set supported on those points {((x_1+y_1,x_2+y_2), (\xi_1,\xi_2))} with {x_1 \in [0,1]}, {x_2 = 0}, {\xi_1=0}, {y_1=0}, {y_2 \in [0,1]}, and {\xi_2=0}, i.e. it should be spatially supported on the unit square and have zero (rescaled) frequency, so the heuristic predicts a smooth function on the unit square, which is indeed what happens. (The situation is slightly more complicated in the non-smooth case {\phi=\psi=1_{[0,1]}}, because {\mu} and {\nu} then acquire some additional singularities at the endpoints; namely, the wave front set of {\mu} now also contains those points {((x_1,x_2),(\xi_1,\xi_2))} with {x_1 \in \{0,1\}}, {x_2=0}, and {\xi_1,\xi_2} arbitrary, and {\nu} similarly contains those points {((y_1,y_2), (\xi_1,\xi_2))} with {y_1=0}, {y_2 \in \{0,1\}}, and {\xi_1,\xi_2} arbitrary. I’ll leave it as an exercise to the reader to compute what this predicts for the wave front set of {\mu*\nu}, and how this compares with the actual wave front set.)

Exercise 3 Let {\mu} be the uniform measure on the unit sphere {S^{n-1}} in {{\bf R}^n} for some {n \geq 2}. Use as many of the above methods as possible to establish multiple proofs of the following fact: the convolution {\mu*\mu} is an absolutely continuous multiple {f(x)\ dx} of Lebesgue measure, with {f(x)} supported on the ball {B(0,2)} of radius {2} and obeying the bounds

\displaystyle  |f(x)| \ll \frac{1}{|x|}

for {|x| \leq 1} and

\displaystyle  |f(x)| \ll (2-|x|)^{(n-3)/2}

for {1 \leq |x| \leq 2}, where the implied constants are allowed to depend on the dimension {n}. (Hint: try the {n=2} case first, which is particularly simple due to the fact that the addition map {+: S^1 \times S^1 \rightarrow {\bf R}^2} is mostly a local diffeomorphism. The Fourier-based approach is instructive, but requires either asymptotics of Bessel functions or the principle of stationary phase.)
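
As a numerical consistency check on the claimed bounds in Exercise 3 (not one of the suggested methods of proof), here is a minimal Monte Carlo sketch for the case {n=3}: by the classical Archimedes projection fact, {X \cdot Y} is uniformly distributed on {[-1,1]} for independent uniform {X,Y} on {S^2}, so {|X+Y| = \sqrt{2+2X\cdot Y}} should have radial density {r/2} on {[0,2]}; this corresponds to a density {f(x)} comparable to {1/|x|}, consistent with the stated estimates (note that {(2-|x|)^{(n-3)/2}=1} when {n=3}):

    import numpy as np

    rng = np.random.default_rng(2)
    n = 10**6
    X = rng.normal(size=(n, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)   # uniform on S^2
    Y = rng.normal(size=(n, 3)); Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # uniform on S^2, independent

    r = np.linalg.norm(X + Y, axis=1)
    hist, edges = np.histogram(r, bins=100, range=(0.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    print(np.max(np.abs(hist - centers / 2.0)))   # small statistical error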
