Let {\Omega} be some domain (such as the real numbers). For any natural number {p}, let {L(\Omega^p)_{sym}} denote the space of symmetric real-valued functions {F^{(p)}: \Omega^p \rightarrow {\bf R}} on {p} variables {x_1,\dots,x_p \in \Omega}, thus

\displaystyle F^{(p)}(x_{\sigma(1)},\dots,x_{\sigma(p)}) = F^{(p)}(x_1,\dots,x_p)

for any permutation {\sigma: \{1,\dots,p\} \rightarrow \{1,\dots,p\}}. For instance, for any natural numbers {k,p}, the elementary symmetric polynomial

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} x_{i_1} \dots x_{i_k}

will be an element of {L({\bf R}^p)_{sym}}. With the pointwise product operation, {L(\Omega^p)_{sym}} becomes a commutative real algebra. We include the case {p=0}, in which case {L(\Omega^0)_{sym}} consists solely of the real constants.

Given two natural numbers {k,p}, one can “lift” a symmetric function {F^{(k)} \in L(\Omega^k)_{sym}} of {k} variables to a symmetric function {[F^{(k)}]_{k \rightarrow p} \in L(\Omega^p)_{sym}} of {p} variables by the formula

\displaystyle [F^{(k)}]_{k \rightarrow p}(x_1,\dots,x_p) = \sum_{1 \leq i_1 < i_2 < \dots < i_k \leq p} F^{(k)}(x_{i_1}, \dots, x_{i_k})

\displaystyle = \frac{1}{k!} \sum_\pi F^{(k)}( x_{\pi(1)}, \dots, x_{\pi(k)} )

where {\pi} ranges over all injections from {\{1,\dots,k\}} to {\{1,\dots,p\}} (the latter formula making it clearer that {[F^{(k)}]_{k \rightarrow p}} is symmetric). Thus for instance

\displaystyle [F^{(1)}(x_1)]_{1 \rightarrow p} = \sum_{i=1}^p F^{(1)}(x_i)

\displaystyle [F^{(2)}(x_1,x_2)]_{2 \rightarrow p} = \sum_{1 \leq i < j \leq p} F^{(2)}(x_i,x_j)

and

\displaystyle e_k^{(p)}(x_1,\dots,x_p) = [x_1 \dots x_k]_{k \rightarrow p}.

Also we have

\displaystyle [1]_{k \rightarrow p} = \binom{p}{k} = \frac{p(p-1)\dots(p-k+1)}{k!}.

With these conventions, we see that {[F^{(k)}]_{k \rightarrow p}} vanishes for {p=0,\dots,k-1}, and is equal to {F^{(k)}} if {k=p}. We also have the transitivity

\displaystyle [F^{(k)}]_{k \rightarrow p} = \frac{1}{\binom{p-k}{p-l}} [[F^{(k)}]_{k \rightarrow l}]_{l \rightarrow p}

if {k \leq l \leq p}.
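For the reader who wants to experiment, the lifting operation is easy to implement by brute force. The following Python snippet is a minimal illustrative sketch (the helper {lift} and the test function are my own naming, not from any library): it realises {[F^{(k)}]_{k \rightarrow p}} as a sum over {k}-element subsets and numerically confirms the transitivity identity for a randomly chosen input.

```python
# Illustrative sketch: brute-force lifting [F]_{k->p} of a symmetric function,
# plus a numerical check of the transitivity identity
#   [F]_{k->p} = [[F]_{k->l}]_{l->p} / binom(p-k, p-l)   for k <= l <= p.
from itertools import combinations
from math import comb
import random

def lift(F, k, p):
    """Lift a symmetric k-variable function F to p variables."""
    def lifted(*xs):
        return sum(F(*(xs[i] for i in idx)) for idx in combinations(range(p), k))
    return lifted

F2 = lambda x, y: x * y + x + y          # a symmetric function of two variables
k, l, p = 2, 3, 5
xs = [random.random() for _ in range(p)]

direct = lift(F2, k, p)(*xs)
iterated = lift(lift(F2, k, l), l, p)(*xs)
assert abs(direct - iterated / comb(p - k, p - l)) < 1e-9
```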

The lifting map {[]_{k \rightarrow p}} is a linear map from {L(\Omega^k)_{sym}} to {L(\Omega^p)_{sym}}, but it is not a ring homomorphism. For instance, when {\Omega={\bf R}}, one has

\displaystyle [x_1]_{1 \rightarrow p} [x_1]_{1 \rightarrow p} = (\sum_{i=1}^p x_i)^2 \ \ \ \ \ (1)

 

\displaystyle = \sum_{i=1}^p x_i^2 + 2 \sum_{1 \leq i < j \leq p} x_i x_j

\displaystyle = [x_1^2]_{1 \rightarrow p} + 2 [x_1 x_2]_{1 \rightarrow p}

\displaystyle \neq [x_1^2]_{1 \rightarrow p}.

In general, one has the identity

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow p} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow p} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (2)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow p}

for all natural numbers {k,l,p} and {F^{(k)} \in L(\Omega^k)_{sym}}, {G^{(l)} \in L(\Omega^l)_{sym}}, where {\pi, \rho} range over all injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}} with {\pi(\{1,\dots,k\}) \cup \rho(\{1,\dots,l\}) = \{1,\dots,m\}}. Combinatorially, the identity (2) follows from the fact that given any injections {\tilde \pi: \{1,\dots,k\} \rightarrow \{1,\dots,p\}} and {\tilde \rho: \{1,\dots,l\} \rightarrow \{1,\dots,p\}} with total image {\tilde \pi(\{1,\dots,k\}) \cup \tilde \rho(\{1,\dots,l\})} of cardinality {m}, one has {k,l \leq m \leq k+l}, and furthermore there exist precisely {m!} triples {(\pi, \rho, \sigma)} of injections {\pi: \{1,\dots,k\} \rightarrow \{1,\dots,m\}}, {\rho: \{1,\dots,l\} \rightarrow \{1,\dots,m\}}, {\sigma: \{1,\dots,m\} \rightarrow \{1,\dots,p\}} such that {\tilde \pi = \sigma \circ \pi} and {\tilde \rho = \sigma \circ \rho}.

Example 1 When {\Omega = {\bf R}}, one has

\displaystyle [x_1 x_2]_{2 \rightarrow p} [x_1]_{1 \rightarrow p} = [\frac{1}{2! 1!}( 2 x_1^2 x_2 + 2 x_1 x_2^2 )]_{2 \rightarrow p} + [\frac{1}{2! 1!} 6 x_1 x_2 x_3]_{3 \rightarrow p}

\displaystyle = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow p} + [3x_1 x_2 x_3]_{3 \rightarrow p}

which is just a restatement of the identity

\displaystyle (\sum_{i < j} x_i x_j) (\sum_k x_k) = \sum_{i<j} (x_i^2 x_j + x_i x_j^2) + \sum_{i < j < k} 3 x_i x_j x_k.
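Identities of this type are easy to test numerically; here is a quick illustrative check of the preceding example, reusing the brute-force lift from the earlier sketch:

```python
# Check: [x1 x2]_{2->p} [x1]_{1->p} = [x1^2 x2 + x1 x2^2]_{2->p} + [3 x1 x2 x3]_{3->p}
from itertools import combinations
import random

def lift(F, k, p):
    def lifted(*xs):
        return sum(F(*(xs[i] for i in idx)) for idx in combinations(range(p), k))
    return lifted

p = 6
xs = [random.random() for _ in range(p)]
lhs = lift(lambda a, b: a * b, 2, p)(*xs) * lift(lambda a: a, 1, p)(*xs)
rhs = (lift(lambda a, b: a**2 * b + a * b**2, 2, p)(*xs)
       + lift(lambda a, b, c: 3 * a * b * c, 3, p)(*xs))
assert abs(lhs - rhs) < 1e-9
```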

Note that the coefficients appearing in (2) do not depend on the final number of variables {p}. We may therefore abstract the role of {p} from the law (2) by introducing the real algebra {L(\Omega^*)_{sym}} of formal sums

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

where for each {k}, {F^{(k)}} is an element of {L(\Omega^k)_{sym}} (with only finitely many of the {F^{(k)}} being non-zero), and with the formal symbol {[]_{k \rightarrow *}} being formally linear, thus

\displaystyle [F^{(k)}]_{k \rightarrow *} + [G^{(k)}]_{k \rightarrow *} := [F^{(k)} + G^{(k)}]_{k \rightarrow *}

and

\displaystyle c [F^{(k)}]_{k \rightarrow *} := [cF^{(k)}]_{k \rightarrow *}

for {F^{(k)}, G^{(k)} \in L(\Omega^k)_{sym}} and scalars {c \in {\bf R}}, and with multiplication given by the analogue

\displaystyle [F^{(k)}(x_1,\dots,x_k)]_{k \rightarrow *} [G^{(l)}(x_1,\dots,x_l)]_{l \rightarrow *} = \sum_{k,l \leq m \leq k+l} \frac{1}{k! l!} \ \ \ \ \ (3)

 

\displaystyle [\sum_{\pi, \rho} F^{(k)}(x_{\pi(1)},\dots,x_{\pi(k)}) G^{(l)}(x_{\rho(1)},\dots,x_{\rho(l)})]_{m \rightarrow *}

of (2). Thus for instance, in this algebra {L(\Omega^*)_{sym}} we have

\displaystyle [x_1]_{1 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2]_{1 \rightarrow *} + 2 [x_1 x_2]_{2 \rightarrow *}

and

\displaystyle [x_1 x_2]_{2 \rightarrow *} [x_1]_{1 \rightarrow *} = [x_1^2 x_2 + x_1 x_2^2]_{2 \rightarrow *} + [3 x_1 x_2 x_3]_{3 \rightarrow *}.

Informally, {L(\Omega^*)_{sym}} is an abstraction (or “inverse limit”) of the concept of a symmetric function of an unspecified number of variables, formed by summing terms that each involve only a bounded number of these variables at a time. One can check (somewhat tediously) that {L(\Omega^*)_{sym}} is indeed a commutative real algebra, with a unit {[1]_{0 \rightarrow *}}. (I do not know if this algebra has previously been studied in the literature; it is somewhat analogous to the abstract algebra of finite linear combinations of Schur polynomials, with multiplication given by a Littlewood-Richardson rule.)

For natural numbers {p}, there is an obvious specialisation map {[]_{* \rightarrow p}} from {L(\Omega^*)_{sym}} to {L(\Omega^p)_{sym}}, defined by the formula

\displaystyle [\sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}]_{* \rightarrow p} := \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}.

Thus, for instance, {[]_{* \rightarrow p}} maps {[x_1]_{1 \rightarrow *}} to {[x_1]_{1 \rightarrow p}} and {[x_1 x_2]_{2 \rightarrow *}} to {[x_1 x_2]_{2 \rightarrow p}}. From (2) and (3) we see that this map {[]_{* \rightarrow p}: L(\Omega^*)_{sym} \rightarrow L(\Omega^p)_{sym}} is an algebra homomorphism, even though the maps {[]_{k \rightarrow *}: L(\Omega^k)_{sym} \rightarrow L(\Omega^*)_{sym}} and {[]_{k \rightarrow p}: L(\Omega^k)_{sym} \rightarrow L(\Omega^p)_{sym}} are not homomorphisms. By inspecting the {p^{th}} component of {L(\Omega^*)_{sym}} we see that the homomorphism {[]_{* \rightarrow p}} is in fact surjective.
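The formal algebra and the specialisation map can also be encoded directly. In the sketch below (illustrative only, under obvious finiteness assumptions), an element of {L(\Omega^*)_{sym}} is represented as a dictionary mapping each {k} to the component {F^{(k)}}, the multiplication law (3) is implemented by brute force over pairs of injections, and the multiplicativity of {[]_{* \rightarrow p}} is confirmed numerically:

```python
# Sketch of L(Omega^*)_sym: elements are dicts {k: F^(k)}; multiplication
# follows (3), summing over injections pi, rho whose images cover {1,...,m};
# we then check that specialisation [.]_{*->p} is an algebra homomorphism.
from itertools import combinations, permutations
from math import factorial
import random

def lift(F, k, p):
    def lifted(*xs):
        return sum(F(*(xs[i] for i in idx)) for idx in combinations(range(p), k))
    return lifted

def multiply(Fstar, Gstar):
    prod = {}
    for k, F in Fstar.items():
        for l, G in Gstar.items():
            for m in range(max(k, l), k + l + 1):
                def H(*xs, F=F, G=G, k=k, l=l, m=m):
                    total = 0.0
                    for pi in permutations(range(m), k):
                        for rho in permutations(range(m), l):
                            if set(pi) | set(rho) == set(range(m)):
                                total += (F(*(xs[i] for i in pi))
                                          * G(*(xs[i] for i in rho)))
                    return total / (factorial(k) * factorial(l))
                prev = prod.get(m)
                prod[m] = (lambda *xs, H=H, prev=prev:
                           H(*xs) + (prev(*xs) if prev else 0.0))
    return prod

def specialise(Fstar, p, xs):
    return sum(lift(F, k, p)(*xs) for k, F in Fstar.items())

Fstar = {1: lambda a: a}                 # [x_1]_{1 -> *}
Gstar = {2: lambda a, b: a * b}          # [x_1 x_2]_{2 -> *}
p = 5
xs = [random.random() for _ in range(p)]
lhs = specialise(multiply(Fstar, Gstar), p, xs)
rhs = specialise(Fstar, p, xs) * specialise(Gstar, p, xs)
assert abs(lhs - rhs) < 1e-9
```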

Now suppose that we have a measure {\mu} on the space {\Omega}, which then induces a product measure {\mu^p} on every product space {\Omega^p}. To avoid degeneracies we will assume that the total mass {\int_\Omega\ d\mu} is strictly positive. Assuming suitable measurability and integrability hypotheses, a function {F \in L(\Omega^p)_{sym}} can then be integrated against this product measure to produce a number

\displaystyle \int_{\Omega^p} F\ d\mu^p.

In the event that {F} arises as a lift {[F^{(k)}]_{k \rightarrow p}} of another function {F^{(k)} \in L(\Omega^k)_{sym}}, then from Fubini’s theorem we obtain the formula

\displaystyle \int_{\Omega^p} F\ d\mu^p = \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}.

Thus for instance, if {\Omega={\bf R}},

\displaystyle \int_{{\bf R}^p} [x_1]_{1 \rightarrow p}\ d\mu^p = p (\int_{\bf R} x\ d\mu(x)) (\int_{\bf R}\ d\mu)^{p-1} \ \ \ \ \ (4)

 

and

\displaystyle \int_{{\bf R}^p} [x_1 x_2]_{2 \rightarrow p}\ d\mu^p = \binom{p}{2} (\int_{{\bf R}^2} x_1 x_2\ d\mu(x_1) d\mu(x_2)) (\int_{\bf R}\ d\mu)^{p-2}. \ \ \ \ \ (5)

 

On summing, we see that if

\displaystyle F^{(*)} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow *}

is an element of the formal algebra {L(\Omega^*)_{sym}}, then

\displaystyle \int_{\Omega^p} [F^{(*)}]_{* \rightarrow p}\ d\mu^p = \sum_{k=0}^\infty \binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ d\mu)^{p-k}. \ \ \ \ \ (6)

 

Note that by hypothesis, only finitely many terms on the right-hand side are non-zero.

Now for a key observation: whereas the left-hand side of (6) only makes sense when {p} is a natural number, the right-hand side is meaningful when {p} takes a fractional value (or even when it takes negative or complex values!), interpreting the binomial coefficient {\binom{p}{k}} as a polynomial {\frac{p(p-1) \dots (p-k+1)}{k!}} in {p}. As such, this suggests a way to introduce a “virtual” concept of a symmetric function on a fractional power space {\Omega^p} for such values of {p}, and even to integrate such functions against product measures {\mu^p}, even if the fractional power {\Omega^p} does not exist in the usual set-theoretic sense (and {\mu^p} similarly does not exist in the usual measure-theoretic sense). More precisely, for arbitrary real or complex {p}, we now define {L(\Omega^p)_{sym}} to be the space of abstract objects

\displaystyle F^{(p)} = [F^{(*)}]_{* \rightarrow p} = \sum_{k=0}^\infty [F^{(k)}]_{k \rightarrow p}

with {F^{(*)} \in L(\Omega^*)_{sym}}, and with {[]_{* \rightarrow p}} (and {[]_{k \rightarrow p}}) now interpreted as formal symbols, with the structure of a commutative real algebra inherited from {L(\Omega^*)_{sym}}, thus

\displaystyle [F^{(*)}]_{* \rightarrow p} + [G^{(*)}]_{* \rightarrow p} := [F^{(*)} + G^{(*)}]_{* \rightarrow p}

\displaystyle c [F^{(*)}]_{* \rightarrow p} := [c F^{(*)}]_{* \rightarrow p}

\displaystyle [F^{(*)}]_{* \rightarrow p} [G^{(*)}]_{* \rightarrow p} := [F^{(*)} G^{(*)}]_{* \rightarrow p}.

In particular, the multiplication law (2) continues to hold for such values of {p}, thanks to (3). Given any measure {\mu} on {\Omega}, we formally define a measure {\mu^p} on {\Omega^p} with respect to which we can integrate elements {F^{(p)}} of {L(\Omega^p)_{sym}} by the formula (6) (provided one has sufficient measurability and integrability to make sense of this formula), giving a sort of “fractional dimensional integral” for symmetric functions. For instance, with this formalism the identities (4), (5) now hold for fractional values of {p}, even though the formal space {{\bf R}^p} no longer makes sense as a set, and the formal measure {\mu^p} no longer makes sense as a measure. (The formalism here is somewhat reminiscent of the technique of dimensional regularisation employed in the physics literature in order to assign values to otherwise divergent integrals. See also this post for an unrelated abstraction of the integration concept involving integration over supercommutative variables (and in particular over fermionic variables).)
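To see the definition in action, here is a small symbolic sketch (using sympy, with an assumed discrete measure on a two-point space {\Omega}; the helper names are purely illustrative) that evaluates fractional dimensional integrals through the right-hand side of (6), with {\binom{p}{k}} read as a polynomial in a symbolic {p}:

```python
# Fractional dimensional integration via (6) for a discrete measure mu on a
# finite Omega, with binom(p,k) interpreted as p(p-1)...(p-k+1)/k!.
import itertools
import sympy as sp

p = sp.symbols('p')

def binom_poly(p, k):
    return sp.prod([p - i for i in range(k)]) / sp.factorial(k)

def frac_integral(components, mu, p):
    """components: dict k -> F^(k); mu: dict point -> mass."""
    mass = sum(mu.values())
    total = 0
    for k, F in components.items():
        Ik = sum(F(*pt) * sp.prod([mu[x] for x in pt])   # integral over Omega^k
                 for pt in itertools.product(mu, repeat=k))
        total += binom_poly(p, k) * Ik * mass ** (p - k)
    return sp.expand(total)

mu = {0: sp.Rational(1, 2), 1: sp.Rational(1, 2)}        # fair coin on {0, 1}
print(frac_integral({1: lambda x: x}, mu, p))            # (4): prints p/2
print(frac_integral({1: lambda x: x**2,                  # (7): prints p**2/4 + p/4
                     2: lambda x, y: 2 * x * y}, mu, p))
```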

Example 2 Suppose {\mu} is a probability measure on {\Omega}, and {X: \Omega \rightarrow {\bf R}} is a random variable; on any power {\Omega^k}, we let {X_1,\dots,X_k: \Omega^k \rightarrow {\bf R}} be the usual independent copies of {X} on {\Omega^k}, thus {X_j(\omega_1,\dots,\omega_k) := X(\omega_j)} for {(\omega_1,\dots,\omega_k) \in \Omega^k}. Then for any real or complex {p}, the formal integral

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p

can be evaluated by first using the identity

\displaystyle [X_1]_{1 \rightarrow p}^2 = [X_1^2]_{1 \rightarrow p} + 2[X_1 X_2]_{2 \rightarrow p}

(cf. (1)) and then using (6) and the probability measure hypothesis {\int_\Omega\ d\mu = 1} to conclude that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = \binom{p}{1} \int_{\Omega} X^2\ d\mu + 2 \binom{p}{2} \int_{\Omega^2} X_1 X_2\ d\mu^2

\displaystyle = p (\int_\Omega X^2\ d\mu - (\int_\Omega X\ d\mu)^2) + p^2 (\int_\Omega X\ d\mu)^2

or in probabilistic notation

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^2\ d\mu^p = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2. \ \ \ \ \ (7)

 

For {p} a natural number, this identity has the probabilistic interpretation

\displaystyle \mathbf{E}( X_1 + \dots + X_p)^2 = p \mathbf{Var}(X) + p^2 \mathbf{E}(X)^2 \ \ \ \ \ (8)

 

whenever {X_1,\dots,X_p} are jointly independent copies of {X}, which reflects the well known fact that the sum {X_1 + \dots + X_p} has expectation {p \mathbf{E} X} and variance {p \mathbf{Var}(X)}. One can thus view (7) as an abstract generalisation of (8) to the case when {p} is fractional, negative, or even complex, despite the fact that there is no sensible way in this case to talk about {p} independent copies {X_1,\dots,X_p} of {X} in the standard framework of probability theory.

In this particular case, the quantity (7) is non-negative for every nonnegative {p}, which looks plausible given the form of the left-hand side. Unfortunately, this sort of non-negativity does not always hold; for instance, if {X} has mean zero, one can check that

\displaystyle \int_{\Omega^p} [X_1]_{1 \rightarrow p}^4\ d\mu^p = p \mathbf{Var}(X^2) + p(3p-2) (\mathbf{E}(X^2))^2

and the right-hand side can become negative for {p < 2/3}. This is a shame, because otherwise one could hope to start endowing {L(\Omega^p)_{sym}} with some sort of commutative von Neumann algebra type structure (or the abstract probability structure discussed in this previous post) and then interpret it as a genuine measure space rather than as a virtual one. (This failure of positivity is related to the fact that the characteristic function of a random variable, when raised to the {p^{th}} power, need not be a characteristic function of any random variable once {p} is no longer a natural number: “fractional convolution” does not preserve positivity!) However, one vestige of positivity remains: if {F: \Omega \rightarrow {\bf R}} is non-negative, then so is

\displaystyle \int_{\Omega^p} [F]_{1 \rightarrow p}\ d\mu^p = p (\int_\Omega F\ d\mu) (\int_\Omega\ d\mu)^{p-1}.
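Both the second moment identity (7) (against a classical computation for natural number {p}) and the failure of positivity of the fourth moment below {p = 2/3} can be checked numerically. The sketch below does so for a Rademacher variable, for which {\mathbf{E} X = 0} and {X^2 = 1} (so {\mathbf{Var}(X^2) = 0} and the formal fourth moment reduces to {p(3p-2)}):

```python
# Check E[(X_1+...+X_p)^2] = p and E[(X_1+...+X_p)^4] = p(3p-2) for Rademacher
# X and natural number p, then evaluate the fourth-moment polynomial at p = 1/2.
import itertools

def sum_moment(vals, probs, p, power):
    total = 0.0
    for idx in itertools.product(range(len(vals)), repeat=p):
        pr = 1.0
        for i in idx:
            pr *= probs[i]
        total += sum(vals[i] for i in idx) ** power * pr
    return total

vals, probs = [-1.0, 1.0], [0.5, 0.5]    # Rademacher: mean 0, variance 1
for p in range(1, 6):
    assert abs(sum_moment(vals, probs, p, 2) - p) < 1e-9
    assert abs(sum_moment(vals, probs, p, 4) - p * (3 * p - 2)) < 1e-9

p = 0.5
print(p * (3 * p - 2))    # -0.25: negative, so no genuine measure space here
```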

One can wonder what the point is to all of this abstract formalism and how it relates to the rest of mathematics. For me, this formalism originated implicitly in an old paper I wrote with Jon Bennett and Tony Carbery on the multilinear restriction and Kakeya conjectures, though we did not have a good language for working with it at the time, instead working first with the case of natural number exponents {p} and appealing to a general extrapolation theorem to then obtain various identities in the fractional {p} case. The connection between these fractional dimensional integrals and more traditional integrals ultimately arises from the simple identity

\displaystyle (\int_\Omega\ d\mu)^p = \int_{\Omega^p}\ d\mu^p

(where the right-hand side should be viewed as the fractional dimensional integral of the unit {[1]_{0 \rightarrow p}} against {\mu^p}). As such, one can manipulate {p^{th}} powers of ordinary integrals using the machinery of fractional dimensional integrals. A key lemma in this regard is

Lemma 3 (Differentiation formula) Suppose that a positive measure {\mu = \mu(t)} on {\Omega} depends on some parameter {t} and varies by the formula

\displaystyle \frac{d}{dt} \mu(t) = a(t) \mu(t) \ \ \ \ \ (9)

 

for some function {a(t): \Omega \rightarrow {\bf R}}. Let {p} be any real or complex number. Then, assuming sufficient smoothness and integrability of all quantities involved, we have

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}\ d\mu(t)^p = \int_{\Omega^p} F^{(p)} [a(t)]_{1 \rightarrow p}\ d\mu(t)^p \ \ \ \ \ (10)

 

for all {F^{(p)} \in L(\Omega^p)_{sym}} that are independent of {t}. If we allow {F^{(p)}(t)} to now depend on {t} also, then we have the more general total derivative formula

\displaystyle \frac{d}{dt} \int_{\Omega^p} F^{(p)}(t)\ d\mu(t)^p \ \ \ \ \ (11)

 

\displaystyle = \int_{\Omega^p} \frac{d}{dt} F^{(p)}(t) + F^{(p)}(t) [a(t)]_{1 \rightarrow p}\ d\mu(t)^p,

again assuming sufficient amounts of smoothness and regularity.

Proof: We just prove (10), as (11) then follows by the same argument used to prove the usual product rule. By linearity it suffices to verify this identity in the case {F^{(p)} = [F^{(k)}]_{k \rightarrow p}} for some natural number {k} and some symmetric function {F^{(k)} \in L(\Omega^k)_{sym}}. By (6), the left-hand side of (10) is then

\displaystyle \frac{d}{dt} [\binom{p}{k} (\int_{\Omega^k} F^{(k)}\ d\mu(t)^k) (\int_\Omega\ d\mu(t))^{p-k}]. \ \ \ \ \ (12)

 

Differentiating under the integral sign using (9) we have

\displaystyle \frac{d}{dt} \int_\Omega\ d\mu(t) = \int_\Omega\ a(t)\ d\mu(t)

and similarly

\displaystyle \frac{d}{dt} \int_{\Omega^k} F^{(k)}\ d\mu(t)^k = \int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu(t)^k

where {a_1,\dots,a_k} are the standard {k} copies of {a = a(t)} on {\Omega^k}:

\displaystyle a_j(\omega_1,\dots,\omega_k) := a(\omega_j).

By the product rule, we can thus expand (12) as

\displaystyle \binom{p}{k} (\int_{\Omega^k} F^{(k)}(a_1+\dots+a_k)\ d\mu^k ) (\int_\Omega\ d\mu)^{p-k}

\displaystyle + \binom{p}{k} (p-k) (\int_{\Omega^k} F^{(k)}\ d\mu^k) (\int_\Omega\ a\ d\mu) (\int_\Omega\ d\mu)^{p-k-1}

where we have suppressed the dependence on {t} for brevity. Since {\binom{p}{k} (p-k) = \binom{p}{k+1} (k+1)}, we can write this expression using (6) as

\displaystyle \int_{\Omega^p} [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p}\ d\mu^p

where {F^{(k)} \ast a \in L(\Omega^{k+1})_{sym}} is the symmetric function

\displaystyle F^{(k)} \ast a(\omega_1,\dots,\omega_{k+1}) := \sum_{j=1}^{k+1} F^{(k)}(\omega_1,\dots,\omega_{j-1},\omega_{j+1},\dots,\omega_{k+1}) a(\omega_j).

But from (2) one has

\displaystyle [F^{(k)} (a_1 + \dots + a_k)]_{k \rightarrow p} + [ F^{(k)} \ast a ]_{k+1 \rightarrow p} = [F^{(k)}]_{k \rightarrow p} [a]_{1 \rightarrow p}

and the claim follows. \Box

Remark 4 It is also instructive to prove this lemma in the special case when {p} is a natural number, in which case the fractional dimensional integral {\int_{\Omega^p} F^{(p)}\ d\mu(t)^p} can be interpreted as a classical integral. In this case, the identity (10) is immediate from applying the product rule to (9) to conclude that

\displaystyle \frac{d}{dt} d\mu(t)^p = [a(t)]_{1 \rightarrow p} d\mu(t)^p.

One could in fact derive (10) for arbitrary real or complex {p} from the case when {p} is a natural number by an extrapolation argument; see the appendix of my paper with Bennett and Carbery for details.
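As a further sanity check on Lemma 3, one can verify (10) by finite differences in a toy situation. The sketch below uses my own illustrative data ({\Omega} a two-point set, {\mu(t) = e^{at} \mu_0} so that (9) holds exactly, {F^{(p)} = [F]_{1 \rightarrow p}}, and a fractional {p}), computing both sides through (6) and the product identity (2):

```python
# Finite-difference check of the differentiation formula (10) on a two-point
# space, with a fractional exponent p; all integrals computed through (6).
import math

omega = [0, 1]
mu0 = {0: 0.3, 1: 0.7}
a = {0: 1.0, 1: -2.0}          # d/dt mu(t) = a mu(t), as in (9)
F = {0: 2.0, 1: 5.0}
p = 0.5

def mu(t):
    return {w: math.exp(a[w] * t) * mu0[w] for w in omega}

def I(t):
    """Integral of [F]_{1->p} d mu(t)^p, via (6)."""
    m = mu(t)
    mass = sum(m.values())
    return p * sum(F[w] * m[w] for w in omega) * mass ** (p - 1)

def rhs(t):
    """Integral of [F]_{1->p} [a]_{1->p} d mu(t)^p: by (2) the integrand is
    [F a]_{1->p} + [F(x1) a(x2) + F(x2) a(x1)]_{2->p}, then integrate via (6)."""
    m = mu(t)
    mass = sum(m.values())
    SFa = sum(F[w] * a[w] * m[w] for w in omega)
    SF = sum(F[w] * m[w] for w in omega)
    Sa = sum(a[w] * m[w] for w in omega)
    # the 2->p component contributes binom(p,2) * 2 * SF * Sa * mass^(p-2)
    return p * SFa * mass ** (p - 1) + p * (p - 1) * SF * Sa * mass ** (p - 2)

t, h = 0.4, 1e-6
assert abs((I(t + h) - I(t - h)) / (2 * h) - rhs(t)) < 1e-6
```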

Let us give a simple PDE application of this lemma as illustration:

Proposition 5 (Heat flow monotonicity) Let {u: [0,+\infty) \times {\bf R}^d \rightarrow {\bf R}} be a solution to the heat equation {u_t = \Delta u} with initial data {\mu_0} a rapidly decreasing finite non-negative Radon measure, or more explicitly

\displaystyle u(t,x) = \frac{1}{(4\pi t)^{d/2}} \int_{{\bf R}^d} e^{-|x-y|^2/4t}\ d\mu_0(y)

for all {t>0}. Then for any {p>0}, the quantity

\displaystyle Q_p(t) := t^{\frac{d}{2} (p-1)} \int_{{\bf R}^d} u(t,x)^p\ dx

is monotone non-decreasing in {t \in (0,+\infty)} for {1 < p < \infty}, constant for {p=1}, and monotone non-increasing for {0 < p < 1}.

Proof: By a limiting argument we may assume that {d\mu_0} is absolutely continuous, with Radon-Nikodym derivative a test function; this is more than enough regularity to justify the arguments below.

For any {(t,x) \in (0,+\infty) \times {\bf R}^d}, let {\mu(t,x)} denote the Radon measure

\displaystyle d\mu(t,x)(y) := \frac{1}{(4\pi)^{d/2}} e^{-|x-y|^2/4t}\ d\mu_0(y).

Then the quantity {Q_p(t)} can be written as a fractional dimensional integral

\displaystyle Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p\ dx.

Observe that

\displaystyle \frac{\partial}{\partial t} d\mu(t,x) = \frac{|x-y|^2}{4t^2} d\mu(t,x)

and thus by Lemma 3 and the product rule

\displaystyle \frac{d}{dt} Q_p(t) = -\frac{d}{2t} Q_p(t) + t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [\frac{|x-y|^2}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx \ \ \ \ \ (13)

 

where we use {y} for the variable of integration in the factor space {{\bf R}^d} of {({\bf R}^d)^p}.

To simplify this expression we will take advantage of integration by parts in the {x} variable. Specifically, in any direction {x_j}, we have

\displaystyle \frac{\partial}{\partial x_j} d\mu(t,x) = -\frac{x_j-y_j}{2t} d\mu(t,x)

and hence by Lemma 3

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}\ d\mu(t,x)^p = - \int_{({\bf R}^d)^p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p.

Multiplying by {x_j} and integrating by parts, we see that

\displaystyle d Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} x_j [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx

where we use the Einstein summation convention in {j}. Similarly, if {F_j(y)} is any reasonable function depending only on {y}, we have

\displaystyle \frac{\partial}{\partial x_j} \int_{({\bf R}^d)^p}[F_j(y)]_{1 \rightarrow p}\ d\mu(t,x)^p

\displaystyle = - \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p

and hence on integration by parts

\displaystyle 0 = \int_{{\bf R}^d} \int_{({\bf R}^d)^p} [F_j(y)]_{1 \rightarrow p} [\frac{x_j-y_j}{2t}]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

We conclude that

\displaystyle \frac{d}{2t} Q_p(t) = t^{-d/2} \int_{{\bf R}^d} \int_{({\bf R}^d)^p} (x_j - [F_j(y)]_{1 \rightarrow p}) [\frac{x_j-y_j}{4t^2}]_{1 \rightarrow p} d\mu(t,x)^p\ dx

and thus by (13)

\displaystyle \frac{d}{dt} Q_p(t) = \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{({\bf R}^d)^p}

\displaystyle [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p} - (x_j - [F_j(y)]_{1 \rightarrow p}) [x_j - y_j]_{1 \rightarrow p}\ d\mu(t,x)^p\ dx.

The choice of {F_j} that then achieves the most cancellation turns out to be {F_j(y) = \frac{1}{p} y_j} (this cancels the terms that are linear or quadratic in the {x_j}), so that {x_j - [F_j(y)]_{1 \rightarrow p} = \frac{1}{p} [x_j - y_j]_{1 \rightarrow p}}. Repeating the calculations establishing (7), one has

\displaystyle \int_{({\bf R}^d)^p} [(x_j-y_j)(x_j-y_j)]_{1 \rightarrow p}\ d\mu^p = p \mathop{\bf E} |x-Y|^2 (\int_{{\bf R}^d}\ d\mu)^{p}

and

\displaystyle \int_{({\bf R}^d)^p} [x_j-y_j]_{1 \rightarrow p} [x_j-y_j]_{1 \rightarrow p}\ d\mu^p

\displaystyle = (p \mathbf{Var}(x-Y) + p^2 |\mathop{\bf E}(x-Y)|^2) (\int_{{\bf R}^d}\ d\mu)^{p}

where {Y} is the random variable drawn from {{\bf R}^d} with the normalised probability measure {\mu / \int_{{\bf R}^d}\ d\mu}. Since {\mathop{\bf E} |x-Y|^2 = \mathbf{Var}(x-Y) + |\mathop{\bf E}(x-Y)|^2}, one thus has

\displaystyle \frac{d}{dt} Q_p(t) = (p-1) \frac{1}{4t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \mathbf{Var}(x-Y) (\int_{{\bf R}^d}\ d\mu)^{p}\ dx. \ \ \ \ \ (14)

 

This expression is clearly non-negative for {p>1}, equal to zero for {p=1}, and non-positive for {0 < p < 1}, giving the claim. (One could simplify {\mathbf{Var}(x-Y)} here as {\mathbf{Var}(Y)} if desired, though it is not strictly necessary to do so for the proof.) \Box

Remark 6 As with Remark 4, one can also establish the identity (14) first for natural numbers {p} by direct computation avoiding the theory of fractional dimensional integrals, and then extrapolate to the case of more general values of {p}. This particular identity is also simple enough that it can be directly established by integration by parts without much difficulty, even for fractional values of {p}.
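Proposition 5 is also easy to observe numerically for fractional {p}. The following sketch (with an assumed initial measure consisting of two Dirac masses in dimension {d=1}; the grid and time values are arbitrary) computes {Q_p(t)} by quadrature:

```python
# Numerical illustration of Proposition 5 in d = 1: Q_p(t) increases in t for
# p > 1 and decreases for 0 < p < 1, with initial data delta_{-1} + 2 delta_3.
import numpy as np

XS = np.linspace(-60.0, 60.0, 120001)
DX = XS[1] - XS[0]

def u(t, x):
    return (np.exp(-(x + 1.0) ** 2 / (4 * t))
            + 2.0 * np.exp(-(x - 3.0) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

def Q(p, t):
    """Q_p(t) = t^{d(p-1)/2} * int u(t,x)^p dx, with d = 1, by Riemann sum."""
    return t ** ((p - 1) / 2.0) * np.sum(u(t, XS) ** p) * DX

for p, sign in [(2.5, 1.0), (0.5, -1.0)]:
    qs = np.array([Q(p, t) for t in (0.5, 1.0, 2.0, 4.0)])
    assert (sign * np.diff(qs) > 0).all()
    print(p, qs)
```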

A more complicated version of this argument establishes the non-endpoint multilinear Kakeya inequality (without any logarithmic loss in a scale parameter {R}); this was established in my previous paper with Jon Bennett and Tony Carbery, but using the “natural number {p} first” approach rather than using the current formalism of fractional dimensional integration. However, the arguments can be translated into this formalism without much difficulty; we do so below the fold. (To simplify the exposition slightly we will not address issues of establishing enough regularity and integrability to justify all the manipulations, though in practice this can be done by standard limiting arguments.)

— 1. Multilinear heat flow monotonicity —

Before we give a multilinear variant of Proposition 5 of relevance to the multilinear Kakeya inequality, we first need to briefly set up the theory of finite products

\displaystyle \Omega_1^{p_1} \times \dots \times \Omega_k^{p_k}

of fractional powers of spaces {\Omega_1,\dots,\Omega_k}, where {p_1,\dots,p_k} are real or complex numbers. The functions {F^{(p_1,\dots,p_k)}} to integrate here lie in the tensor product space

\displaystyle L(\Omega_1^{p_1})_{sym} \otimes \dots \otimes L(\Omega_k^{p_k})_{sym}, \ \ \ \ \ (15)

 

which is generated by tensor powers

\displaystyle F^{(p_1,\dots,p_k)} = F_1^{(p_1)} \otimes \dots \otimes F_k^{(p_k)}

with {F_j^{(p_j)} \in L(\Omega_j^{p_j})_{sym}}, with the usual tensor product identifications and algebra operations. One can evaluate fractional dimensional integrals of such functions against “virtual product measures” {d\mu_1^{p_1} \dots d\mu_k^{p_k}}, with {\mu_j} a measure on {\Omega_j}, by the natural formula

\displaystyle \int_{\Omega_1^{p_1} \times \dots \times \Omega_k^{p_k}} F_1^{(p_1)} \otimes \dots \otimes F_k^{(p_k)} d\mu_1^{p_1} \dots d\mu_k^{p_k} := \prod_{j=1}^k ( \int_{\Omega_j^{p_j}} F_j^{(p_j)}\ d\mu_j^{p_j} )

assuming sufficient measurability and integrability hypotheses. We can lift functions {F_j^{(m)} \in L(\Omega_j^m)_{sym}} to an element {[F_j^{(m)}]_{m \rightarrow p_j; j}} of the space (15) by the formula

\displaystyle [F_j^{(m)}]_{m \rightarrow p_j; j} := 1^{\otimes j-1} \otimes [F_j^{(m)}]_{m \rightarrow p_j} \otimes 1^{\otimes k-j}.

For each {j}, the embedding {G^{(p_j)} \mapsto 1^{\otimes j-1} \otimes G^{(p_j)} \otimes 1^{\otimes k-j}} of {L(\Omega_j^{p_j})_{sym}} into the space (15) is easily seen to be an algebra homomorphism (though, as before, the lifting maps {[]_{m \rightarrow p_j; j}} are not).

Example 7 If {F_1: \Omega_1 \rightarrow {\bf R}} and {F_2: \Omega_2 \rightarrow {\bf R}} are functions and {\mu_1, \mu_2} are measures on {\Omega_1, \Omega_2} respectively, then (assuming sufficient measurability and integrability) the multiple fractional dimensional integral

\displaystyle \int_{\Omega_1^{p_1} \times \Omega_2^{p_2}} [F_1]_{1 \rightarrow p_1; 1} [F_2]_{1 \rightarrow p_2;2}\ d\mu_1^{p_1} d\mu_2^{p_2}

is equal to

\displaystyle p_1 (\int_{\Omega_1} F_1\ d\mu_1) (\int_{\Omega_1}\ d\mu_1)^{p_1-1} p_2 (\int_{\Omega_2} F_2\ d\mu_2) (\int_{\Omega_2}\ d\mu_2)^{p_2-1}.

In the case that {p_1,p_2} are natural numbers, one can view the “virtual” integrand {[F_1]_{1 \rightarrow p_1; 1} [F_2]_{1 \rightarrow p_2;2}} here as an actual function on {\Omega_1^{p_1} \times \Omega_2^{p_2}}, namely

\displaystyle (\omega_{1;1},\dots,\omega_{p_1;1}), (\omega_{1;2},\dots,\omega_{p_2;2}) \mapsto \sum_{i_1=1}^{p_1} F_1(\omega_{i_1;1}) \sum_{i_2=1}^{p_2} F_2(\omega_{i_2;2})

in which case the above evaluation of the integral can be achieved classically.
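For natural number exponents the factorisation in Example 7 can be confirmed classically; here is an illustrative check with made-up discrete data:

```python
# Classical verification of Example 7 for natural numbers p1, p2 and discrete
# measures: the integral of the product of lifts factors as
#   p1 (int F1 dmu1) mass1^(p1-1) * p2 (int F2 dmu2) mass2^(p2-1).
import itertools

mu1 = {0: 0.4, 1: 1.1}; F1 = {0: 3.0, 1: -1.0}
mu2 = {0: 0.7, 1: 0.2}; F2 = {0: 1.5, 1: 2.5}
p1, p2 = 3, 2

classical = 0.0
for om1 in itertools.product(mu1, repeat=p1):
    for om2 in itertools.product(mu2, repeat=p2):
        weight = 1.0
        for w in om1:
            weight *= mu1[w]
        for w in om2:
            weight *= mu2[w]
        classical += sum(F1[w] for w in om1) * sum(F2[w] for w in om2) * weight

mass1, mass2 = sum(mu1.values()), sum(mu2.values())
factored = (p1 * sum(F1[w] * mu1[w] for w in mu1) * mass1 ** (p1 - 1)
            * p2 * sum(F2[w] * mu2[w] for w in mu2) * mass2 ** (p2 - 1))
assert abs(classical - factored) < 1e-9
```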

From a routine application of Lemma 3 and various forms of the product rule, we see that if each {\mu_j(t)} varies with respect to a time parameter {t} by the formula

\displaystyle \frac{d}{dt} \mu_j(t) = a_j(t) \mu_j(t)

and {F^{(p_1,\dots,p_k)}(t)} is a time-varying function in (15), then (assuming sufficient regularity and integrability), the time derivative

\displaystyle \frac{d}{dt} \int_{\Omega_1^{p_1} \times \dots \times \Omega_k^{p_k}} F^{(p_1,\dots,p_k)}(t)\ d\mu_1(t)^{p_1} \dots d\mu_k(t)^{p_k}

is equal to

\displaystyle \int_{\Omega_1^{p_1} \times \dots \times \Omega_k^{p_k}} \frac{d}{dt} F^{(p_1,\dots,p_k)}(t) + F^{(p_1,\dots,p_k)}(t) \sum_{j=1}^k [a_j(t)]_{1 \rightarrow p_j; j}\ d\mu_1(t)^{p_1} \dots d\mu_k(t)^{p_k}. \ \ \ \ \ (16)

 

Now suppose that for each space {\Omega_j} one has a non-negative measure {\mu_j^0}, a vector-valued function {y_j: \Omega_j \rightarrow {\bf R}^d}, and a matrix-valued function {A_j: \Omega_j \rightarrow {\bf R}^{d \times d}} taking values in real symmetric positive semi-definite {d \times d} matrices. Let {p_1,\dots,p_k} be positive real numbers; we make the abbreviations

\displaystyle \vec p := (p_1,\dots,p_k)

\displaystyle \Omega^{\vec p} := \Omega_1^{p_1} \times \dots \times \Omega_k^{p_k}.

For any {t > 0} and {x \in {\bf R}^d}, we define the modified measures

\displaystyle d\mu_j(t,x) := e^{-\pi \langle A_j(x-y_j), (x-y_j) \rangle/t}\ d\mu_j^0

and then the product fractional power measure

\displaystyle d\mu(t,x)^{\vec p} := d\mu_1(t,x)^{p_1} \dots d\mu_k(t,x)^{p_k}.

If we then define the heat-type functions

\displaystyle u_j(t,x) := \int_{{\bf R}^d}\ d\mu_j(t,x) = \int_{{\bf R}^d} e^{-\pi \langle A_j(x-y_j), (x-y_j) \rangle/t}\ d\mu_j^0

(where we drop the normalising power of {t} for simplicity) we see in particular that

\displaystyle \int_{{\bf R}^d} \prod_{j=1}^k u_j(t,x)^{p_j}\ dx = \int_{{\bf R}^d} \int_{\Omega^{\vec p}}\ d\mu(t,x)^{\vec p}\ dx \ \ \ \ \ (17)

 

hence we can interpret the multilinear integral in the left-hand side of (17) as a product fractional dimensional integral. (We remark that in my paper with Bennett and Carbery, a slightly different parameterisation is used, replacing {x} with {t x}, and also replacing {t} with {1/t}.)

If the functions {A_j: \Omega_j \rightarrow {\bf R}^{d \times d}} were constant in {\Omega_j}, then the functions {u_j(t,x)} would obey some heat-type partial differential equation, and the situation is now very analogous to Proposition 5 (and is also closely related to Brascamp-Lieb inequalities, as discussed for instance in this paper of Carlen, Lieb, and Loss, or this paper of mine with Bennett, Carbery, and Christ). However, for applications to the multilinear Kakeya inequality, we permit {A_j} to vary slightly in the {\Omega_j} variable, and now the {u_j} do not directly obey any PDE.

A naive extension of Proposition 5 would then seek to establish monotonicity of the quantity (17). While such monotonicity is available in the “Brascamp-Lieb case” of constant {A_j}, as discussed in the above papers, this does not quite seem to be true for variable {A_j}. To fix this problem, a weight is introduced in order to avoid having to take matrix inverses (which are not always available in this algebra). On the product fractional dimensional space {\Omega^{\vec p}}, we have a matrix-valued function {A_*} defined by

\displaystyle A_* := \sum_{j=1}^k [A_j]_{1 \rightarrow p_j; j}.

The determinant {\mathrm{det}(A_*)} is then a scalar element of the algebra (15). We then define the quantity

\displaystyle Q_{\vec p}(t) := t^{-d/2} \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \mathrm{det}(A_*)\ d\mu(t,x)^{\vec p}\ dx. \ \ \ \ \ (18)

 

Example 8 Suppose we take {d=k=2} and let {p_1,p_2} be natural numbers. Then {A_*} can be viewed as the {2 \times 2}-matrix valued function

\displaystyle A_*(\omega_{1;1},\dots,\omega_{p_1;1},\omega_{1;2},\dots,\omega_{p_2;2}) = \sum_{i=1}^{p_1} A_1(\omega_{i;1}) + \sum_{i=1}^{p_2} A_2(\omega_{i;2}).

By slight abuse of notation, we write the determinant {\mathrm{det}(A)} of a {2 \times 2} matrix as {X \wedge Y}, where {X} and {Y} are the first and second rows of {A}. Then

\displaystyle \mathrm{det}(A_*) = \sum_{1 \leq i,i' \leq p_1} X_1(\omega_{i;1}) \wedge Y_1(\omega_{i';1})

\displaystyle + \sum_{i=1}^{p_1} \sum_{i'=1}^{p_2} X_1(\omega_{i;1}) \wedge Y_2(\omega_{i';2}) + X_2(\omega_{i';2}) \wedge Y_1(\omega_{i;1})

\displaystyle + \sum_{1 \leq i,i' \leq p_2} X_2(\omega_{i;2}) \wedge Y_2(\omega_{i';2})

and after some calculation, one can then write {Q_{\vec p}(t)} as

\displaystyle p_1 t^{-d/2} \int_{{\bf R}^d} (\int_{\Omega_1} X_1 \wedge Y_1\ d\mu_1(t,x)) u_1(t,x)^{p_1-1} u_2(t,x)^{p_2}\ dx

\displaystyle + p_1(p_1-1)t^{-d/2} \int_{{\bf R}^d} (\int_{\Omega_1^2} X_1(\omega_1) \wedge Y_1(\omega_2)\ d\mu^2_1(t,x)(\omega_1,\omega_2))

\displaystyle u_1(t,x)^{p_1-2} u_2(t,x)^{p_2}\ dx

\displaystyle + p_1 p_2 t^{-d/2} \int_{{\bf R}^d} (\int_{\Omega_1} X_1\ d\mu_1(t,x) \wedge \int_{\Omega_2} Y_2\ d\mu_2(t,x)

\displaystyle + \int_{\Omega_2} X_2\ d\mu_2(t,x) \wedge \int_{\Omega_1} Y_1\ d\mu_1(t,x)) u_1(t,x)^{p_1-1} u_2(t,x)^{p_2-1}\ dx

\displaystyle + p_2 t^{-d/2}\int_{{\bf R}^d} (\int_{\Omega_2} X_2 \wedge Y_2\ d\mu_2(t,x)) u_1(t,x)^{p_1} u_2(t,x)^{p_2-1}\ dx

\displaystyle + p_2(p_2-1)t^{-d/2} \int_{{\bf R}^d} (\int_{\Omega_2^2} X_2(\omega_1) \wedge Y_2(\omega_2)\ d\mu^2_2(t,x)(\omega_1,\omega_2))

\displaystyle u_1(t,x)^{p_1} u_2(t,x)^{p_2-2}\ dx.

By a polynomial extrapolation argument, this formula is then also valid for fractional values of {p_1,p_2}; this can also be checked directly from the definitions after some tedious computation. Thus we see that while the compact-looking fractional dimensional integral (18) can be expressed in terms of more traditional integrals, the formulae get rather messy, even in the {d=2} case. As such, the fractional dimensional calculus (based heavily on derivative identities such as (16)) gives a more convenient framework to manipulate these otherwise quite complicated expressions.

Suppose the functions {A_j: \Omega_j \rightarrow {\bf R}^{d \times d}} are close to constant {d \times d} matrices {A_j^0 \in {\bf R}^{d \times d}}, in the sense that

\displaystyle A_j = A_j^0 + O(\varepsilon) \ \ \ \ \ (19)

 

uniformly on {\Omega_j} for some small {\varepsilon>0} (where we use for instance the operator norm to measure the size of matrices, and we allow implied constants in the {O()} notation to depend on {d, \vec p}, and the {A_j^0}). Then we can write {A_j = A_j^0 + \varepsilon C_j} for some bounded matrix {C_j}, and then we can write

\displaystyle A_* = \sum_{j=1}^k ([A_j^0]_{1 \rightarrow p_j;j} + \varepsilon [C_j]_{1 \rightarrow p_j;j}) = \sum_{j=1}^k p_j A_j^0 + \varepsilon \sum_{j=1}^k [C_j]_{1 \rightarrow p_j;j}.

We can therefore write

\displaystyle \mathrm{det}(A_*) = \mathrm{det}(A_*^0) + \varepsilon C_*

where {A_*^0 := \sum_{j=1}^k p_j A_j^0} and {C_*} is some polynomial combination of the coefficients of the {[C_j]_{1 \rightarrow p_j;j}}, with all coefficients in this polynomial of bounded size. As a consequence, and on expanding out all the fractional dimensional integrals, one obtains a formula of the form

\displaystyle Q_{\vec p}(t) = t^{-d/2} (\mathrm{det}(A_*^0) + O(\varepsilon)) \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \ d\mu(t,x)^{\vec p}\ dx

\displaystyle = t^{-d/2} (\mathrm{det}(A_*^0) + O(\varepsilon)) \int_{{\bf R}^d} \prod_{j=1}^k u_j(t,x)^{p_j}\ dx .

Thus, as long as {A_*^0} is strictly positive definite and {\varepsilon} is small enough, this quantity {Q_{\vec p}(t)} is comparable to the classical integral

\displaystyle t^{-d/2} \int_{{\bf R}^d} \prod_{j=1}^k u_j(t,x)^{p_j}\ dx.

Now we compute the time derivative of {Q_{\vec p}(t)}. We have

\displaystyle \frac{\partial}{\partial t} \mu_j(t,x) = \frac{\pi}{t^2} \langle A_j(x-y_j),(x-y_j) \rangle \mu_j(t,x)

so by (16), one can write {\frac{d}{dt} Q_{\vec p}(t)} as

\displaystyle -\frac{d}{2t} Q_{\vec p}(t) + \frac{\pi}{t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \ \ \ \ \ (20)

 

\displaystyle \mathrm{det}(A_*) \sum_{j=1}^k [\langle A_j(x-y_j),(x-y_j) \rangle]_{1 \rightarrow p_j;j}\ d\mu(t,x)^{\vec p}\ dx

where we use {y_j} as the coordinate for the copy of {{\bf R}^d} that is being lifted to {({\bf R}^d)^{p_j}}.

As before, we can take advantage of some cancellation in this expression using integration by parts. Since

\displaystyle \frac{\partial}{\partial x_i} \mu_j(t,x) = -\frac{2\pi}{t} \langle A_j(x-y_j), e_i\rangle \mu_j(t,x)

where {e_1,\dots,e_d} are the standard basis for {{\bf R}^d}, we see from (16) and integration by parts that

\displaystyle d Q_{\vec p}(t) = \frac{2\pi}{t^{\frac{d}{2}+1}} \int_{{\bf R}^d} \int_{\Omega^{\vec p}}\mathrm{det}(A_*) \sum_{j=1}^k x_i [\langle A_j(x-y_j), e_i\rangle]_{1 \rightarrow p_j;j}\ d\mu(t,x)^{\vec p}\ dx

with the usual summation conventions on the index {i}. Also, similarly to before, if we have an element {F_i} of (15) for each {i} that does not depend on {x}, then by (16) and integration by parts

\displaystyle \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \sum_{j=1}^k F_i [\langle A_j(x-y_j), e_i\rangle]_{1 \rightarrow p_j;j}\ d\mu(t,x)^{\vec p}\ dx = 0

or, writing {F = (F_1,\dots,F_d)},

\displaystyle \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \sum_{j=1}^k \langle [A_j(x-y_j)]_{1 \rightarrow p_j;j}, F \rangle\ d\mu(t,x)^{\vec p}\ dx = 0.

We can thus write (20) as

\displaystyle \frac{d}{dt} Q_{\vec p}(t) = \frac{\pi}{t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{\Omega^{\vec p}} G\ d\mu(t,x)^{\vec p}\ dx \ \ \ \ \ (21)

 

where {G = G(x)} is the element of (15) given by

\displaystyle G := \mathrm{det}(A_*) \sum_{j=1}^k [\langle A_j(x-y_j),(x-y_j) \rangle]_{1 \rightarrow p_j;j} \ \ \ \ \ (22)

 

\displaystyle - \langle \sum_{j=1}^k [A_j(x-y_j)]_{1 \rightarrow p_j;j}, \mathrm{det}(A_*) x - F \rangle.

The terms in {G} that are quadratic in {x} cancel. The linear term can be rearranged as

\displaystyle \langle x, A_* F - \mathrm{det}(A_*) \sum_{j=1}^k [A_j y_j]_{1 \rightarrow p_j; j} \rangle.

To cancel this, one would like to set {F} equal to

\displaystyle F = A_*^{-1}\mathrm{det}(A_*) \sum_{j=1}^k [A_j y_j]_{1 \rightarrow p_j; j} .

Now in the commutative algebra (15), the inverse {A_*^{-1}} does not necessarily exist. However, because of the weight factor {\mathrm{det}(A_*)}, one can work instead with the adjugate matrix {\mathrm{adj}(A_*)}, which is such that {\mathrm{adj}(A_*) A_* = A_* \mathrm{adj}(A_*) = \mathrm{det}(A_*) I} where {I} is the identity matrix. We therefore set {F} equal to the expression

\displaystyle F := \mathrm{adj}(A_*) \sum_{j=1}^k [A_j y_j]_{1 \rightarrow p_j; j}

and now the expression in (22) does not contain any linear or quadratic terms in {x}. In particular it is completely independent of {x}, and thus we can write

\displaystyle G = \mathrm{det}(A_*) \sum_{j=1}^k [\langle A_j(\overline{y}-y_j),(\overline{y}-y_j) \rangle]_{1 \rightarrow p_j;j}

\displaystyle - \langle \sum_{j=1}^k [A_j(\overline{y}-y_j)]_{1 \rightarrow p_j;j}, \mathrm{det}(A_*) \overline{y} - F \rangle

where {\overline{y} = \overline{y}(t,x)} is an arbitrary element of {{\bf R}^d} that we will select later to obtain a useful cancellation. We can rewrite this a little as

\displaystyle G = \mathrm{det}(A_*) \sum_{j=1}^k [\langle A_j(\overline{y}-y_j),(\overline{y}-y_j) \rangle]_{1 \rightarrow p_j;j}

\displaystyle - \langle \sum_{j=1}^k [A_j(\overline{y}-y_j)]_{1 \rightarrow p_j;j}, \mathrm{adj}(A_*) \sum_{j'=1}^k [A_{j'}(\overline{y} - y_{j'})]_{1 \rightarrow p_{j'}; j'} \rangle.

If we now introduce the matrix functions

\displaystyle B_j := A_j^{1/2}

and the vector functions

\displaystyle w_j := B_j( \overline{y} - y_j)

then this can be rewritten as

\displaystyle G = \mathrm{det}(A_*) \sum_{j=1}^k [\|w_j\|^2]_{1 \rightarrow p_j;j}

\displaystyle - \langle \sum_{j=1}^k [B_j w_j]_{1 \rightarrow p_j;j}, \mathrm{adj}(A_*) \sum_{j'=1}^k [B_{j'} w_{j'}]_{1 \rightarrow p_{j'}; j'} \rangle.

Similarly to (19), suppose that we have

\displaystyle B_j = B_j^0 + O(\varepsilon)

uniformly on {\Omega_j}, where {B_j^0 := (A_j^0)^{1/2}}, thus we can write

\displaystyle B_j = B_j^0 + \varepsilon D_j \ \ \ \ \ (23)

 

for some bounded matrix-valued functions {D_j}. Inserting this into the previous expression (and expanding out {A_*} appropriately) one can eventually write

\displaystyle G = G^0 + \varepsilon H

where

\displaystyle G^0 = \mathrm{det}(A^0_*) (\sum_{j=1}^k [\|w_j\|^2]_{1 \rightarrow p_j;j}

\displaystyle - \langle \sum_{j=1}^k B^0_j [w_j]_{1 \rightarrow p_j;j}, (A^0_*)^{-1} \sum_{j'=1}^k B^0_{j'} [w_{j'}]_{1 \rightarrow p_{j'}; j'} \rangle )

and {H} is some polynomial combination of the {D_j} and {w_j} (or more precisely, of the quantities {[D_j]_{1 \rightarrow p_j;j}}, {[w_j]_{1 \rightarrow p_j;j}}, {[D_j w_j]_{1 \rightarrow p_j;j}}, {[\|w_j\|^2]_{1 \rightarrow p_j;j}}) that is quadratic in the {w_j} variables, with bounded coefficients. As a consequence, after expanding out the product fractional dimensional integrals and applying some Cauchy-Schwarz to control cross-terms, we have

\displaystyle \frac{d}{dt} Q_{\vec p}(t) = \frac{\pi}{t^{\frac{d}{2}+2}} \int_{{\bf R}^d} \int_{\Omega^{\vec p}} G^0\ d\mu(t,x)^{\vec p}\ dx

\displaystyle + O( \varepsilon t^{-\frac{d}{2}-2} \int_{{\bf R}^d} \int_{\Omega^{\vec p}} \sum_{j=1}^k [\| w_j \|^2]_{1 \rightarrow p_j;j}\ d\mu(t,x)^{\vec p}\ dx).

Now we simplify {G^0}. We let

\displaystyle \overline{w_j} := \frac{\int_{\Omega_j} w_j\ d\mu_j}{\int_{\Omega_j}\ d\mu_j}

be the average value of {w_j}; for each {t,x} this is just a vector in {{\bf R}^d}. We then split {w_j = \overline{w_j} + (w_j - \overline{w_j})}, leading to the identities

\displaystyle [\|w_j\|^2]_{1 \rightarrow p_j;j} = p_j \|\overline{w_j}\|^2 + 2 \langle \overline{w_j}, [w_j - \overline{w_j}]_{1 \rightarrow p_j;j}\rangle

\displaystyle + [\| w_j - \overline{w_j} \|^2]_{1 \rightarrow p_j;j}

and

\displaystyle \sum_{j=1}^k B^0_j [w_j]_{1 \rightarrow p_j;j} = \sum_{j=1}^k p_j B^0_j \overline{w_j} + \sum_{j=1}^k B^0_j [(w_j - \overline{w_j})]_{1 \rightarrow p_j;j}.

The term {\sum_{j=1}^k p_j B^0_j \overline{w_j}} is problematic, but we can eliminate it as follows. By construction one has (suppressing the dependence on {t,x})

\displaystyle \sum_{j=1}^k p_j B^0_j \overline{w_j} \int_{\Omega^{\vec p}} \ d\mu^{\vec p} = \int_{\Omega^{\vec p}} \sum_{j=1}^k B^0_j [w_j]_{1 \rightarrow p_j; j} \ d\mu^{\vec p}

\displaystyle = \int_{\Omega^{\vec p}}\sum_{j=1}^k B^0_j [B_j(\overline{y} - y_j)]_{1 \rightarrow p_j; j} \ d\mu^{\vec p}

\displaystyle = (\int_{\Omega^{\vec p}}\sum_{j=1}^k B^0_j [B_j]_{1 \rightarrow p_j;j}\ d\mu^{\vec p}) \overline{y} - \int_{\Omega^{\vec p}}\sum_{j=1}^k B^0_j [B_j y_j]_{1 \rightarrow p_j; j} \ d\mu^{\vec p}.

By construction, one has

\displaystyle \int_{\Omega^{\vec p}}\sum_{j=1}^k B^0_j [B_j]_{1 \rightarrow p_j;j}\ d\mu^{\vec p} = (\sum_{j=1}^k p_j B^0_j B^0_j + O(\varepsilon)) \int_{\Omega^{\vec p}}\ d\mu^{\vec p}

\displaystyle = (A^0_* + O(\varepsilon)) \int_{\Omega^{\vec p}}\ d\mu^{\vec p}.

Thus if {A^0_*} is positive definite and {\varepsilon} is small enough, this matrix is invertible, and we can choose {\overline{y}} so that the expression {\sum_{j=1}^k p_j B^0_j \overline{w_j} } vanishes. Making this choice, we then have

\displaystyle \sum_{j=1}^k [B^0_j w_j]_{1 \rightarrow p_j;j} = \sum_{j=1}^k [B^0_j (w_j - \overline{w_j})]_{1 \rightarrow p_j;j}.

Observe that the fractional dimensional integral of

\displaystyle \langle \overline{w_j}, [w_j - \overline{w_j}]_{1 \rightarrow p_j;j} \rangle

or

\displaystyle \langle [w_j - \overline{w_j}]_{1 \rightarrow p_j;j}, M [w_{j'} - \overline{w_{j'}}]_{1 \rightarrow p_{j'};j'} \rangle

for {j \neq j'} and arbitrary constant matrices {M} against {d\mu^{\vec p}} vanishes. As a consequence, we can now simplify the integral

\displaystyle \int_{\Omega^{\vec p}} G^0\ d\mu^{\vec p} \ \ \ \ \ (24)

 

as

\displaystyle \mathrm{det}(A^0_*) \int_{\Omega^{\vec p}} \sum_{j=1}^k p_j \| \overline{w_j}\|^2

\displaystyle + \langle [w_j - \overline{w_j}]_{1 \rightarrow p_j;j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) [w_j - \overline{w_j}]_{1 \rightarrow p_j; j} \rangle\ d\mu^{\vec p}.

Using (2), we can split

\displaystyle \langle [w_j - \overline{w_j}]_{1 \rightarrow p_j;j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) [w_j - \overline{w_j}]_{1 \rightarrow p_j; j} \rangle

as the sum of

\displaystyle [\langle w_j - \overline{w_j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) (w_j - \overline{w_j}) \rangle ]_{1 \rightarrow p_j; j}

and

\displaystyle 2[\langle w_j(\omega_1) - \overline{w_j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) (w_j(\omega_2) - \overline{w_j}) \rangle ]_{2 \rightarrow p_j; j}.

The latter also integrates to zero by the mean zero nature of {w_j - \overline{w_j}}. Thus we have simplified (24) to

\displaystyle \mathrm{det}(A^0_*) \int_{\Omega^{\vec p}} \sum_{j=1}^k p_j \| \overline{w_j}\|^2

\displaystyle + [\langle w_j - \overline{w_j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) (w_j - \overline{w_j}) \rangle ]_{1 \rightarrow p_j; j} \ d\mu^{\vec p}.

Now let us make the key hypothesis that the matrix

\displaystyle 1 - B^0_j (A^0_*)^{-1} B^0_j

is strictly positive definite, or equivalently that

\displaystyle \sum_{j'=1}^k p_{j'} A_{j'}^0 > A_j^0

for all {j=1,\dots,k}, where the ordering is in the sense of positive definite matrices. Then we have the pointwise bound

\displaystyle \langle w_j - \overline{w_j}, (1 - B^0_j (A^0_*)^{-1} B^0_j) (w_j - \overline{w_j}) \rangle \gtrsim \| w_j - \overline{w_j} \|^2

and thus

\displaystyle \frac{d}{dt} Q_{\vec p}(t) \gtrsim t^{-\frac{d}{2}-2} \int_{{\bf R}^d} \int_{\Omega^{\vec p}}

\displaystyle \sum_{j=1}^k [\|\overline{w_j}\|^2 + \| w_j - \overline{w_j} \|^2 - O(\varepsilon) \|w_j\|^2 ]_{1 \rightarrow p_j;j}\ d\mu^{\vec p}\ dx.

For {\varepsilon} small enough, the expression inside the {[]_{1 \rightarrow p_j;j}} is non-negative, and we conclude the monotonicity

\displaystyle \frac{d}{dt} Q_{\vec p}(t) \geq 0.

We have thus proven the following statement, which is essentially Proposition 4.1 of my paper with Bennett and Carbery:

Proposition 9 Let {d,k \geq 1}, let {A_1^0,\dots,A_k^0} be positive semi-definite real symmetric {d \times d} matrices, and let {p_1,\dots,p_k>0} be such that

\displaystyle p_1 A_1^0 + \dots + p_k A_k^0 > A_j^0 \ \ \ \ \ (25)

 

for {j=1,\dots,k}. Then for any positive measure spaces {\Omega_1,\dots,\Omega_k} with measures {\mu_1^0,\dots,\mu_k^0} and any functions {A_j, y_j} on {\Omega_j} with {A_j = A_j^0 + O(\varepsilon)} for a sufficiently small {\varepsilon>0}, the quantity {Q_{\vec p}(t)} is non-decreasing in {t \in (0,+\infty)}, and is also equal to

\displaystyle t^{-d/2} (\mathrm{det}(A_*^0) + O(\varepsilon)) \int_{{\bf R}^d} \prod_{j=1}^k u_j(t,x)^{p_j}\ dx.

In particular, we have

\displaystyle t^{-d/2} \int_{{\bf R}^d} \prod_{j=1}^k u_j(t,x)^{p_j}\ dx \lesssim \limsup_{T \rightarrow \infty} T^{-d/2} \int_{{\bf R}^d} \prod_{j=1}^k u_j(T,x)^{p_j}\ dx

for any {t>0}.

A routine calculation shows that for reasonable choices of {\mu_j^0} (e.g. discrete measures of finite support), one has

\displaystyle \limsup_{T \rightarrow \infty} T^{-d/2} \int_{{\bf R}^d} \prod_{j=1}^k u_j(T,x)^{p_j}\ dx \lesssim \prod_{j=1}^k \mu_j^0({\bf R}^d)^{p_j}

and hence (setting {t=1}) we have

\displaystyle \int_{{\bf R}^d} \prod_{j=1}^k u_j(1,x)^{p_j}\ dx \lesssim \prod_{j=1}^k \mu_j^0({\bf R}^d)^{p_j}.

If we choose the {\mu_j^0} to be the sum of {N_j} Dirac masses, and each {A_j^0} to be the diagonal matrix {A_j^0 = I - e_j e_j^T}, then the key condition (25) is obeyed for {p_1=\dots=p_k = p > \frac{1}{d-1}}, and one arrives at the multilinear Kakeya inequality

\displaystyle \int_{{\bf R}^d} \prod_{j=1}^k (\sum_{i=1}^{N_j} 1_{T_{j,i}})^p \lesssim \prod_{j=1}^k N_j^{p}

whenever {T_{j,i}} are infinite tubes in {{\bf R}^d} of width {1} and oriented within {\varepsilon} of the basis vector {e_j}, for a sufficiently small absolute constant {\varepsilon}. (The hypothesis on the directions can then be relaxed to a transversality hypothesis by applying some linear transformations and the triangle inequality.)