
Let {S} be a non-empty finite set. If {X} is a random variable taking values in {S}, the Shannon entropy {{\bf H}[X]} of {X} is defined as

\displaystyle {\bf H}[X] = -\sum_{s \in S} {\bf P}[X = s] \log {\bf P}[X = s].

There is a nice variational formula that lets one compute logs of sums of exponentials in terms of this entropy:

Lemma 1 (Gibbs variational formula) Let {f: S \rightarrow {\bf R}} be a function. Then

\displaystyle  \log \sum_{s \in S} \exp(f(s)) = \sup_X {\bf E} f(X) + {\bf H}[X]. \ \ \ \ \ (1)

Proof: Note that shifting {f} by a constant affects both sides of (1) the same way, so we may normalize {\sum_{s \in S} \exp(f(s)) = 1}. Then {\exp(f(s))} is now the probability distribution of some random variable {Y}, and the claimed identity can be rewritten as

\displaystyle  0 = \sup_X \sum_{s \in S} {\bf P}[X = s] \log {\bf P}[Y = s] -\sum_{s \in S} {\bf P}[X = s] \log {\bf P}[X = s].

But this is precisely the Gibbs inequality. (The expression inside the supremum can also be written as {-D_{KL}(X||Y)}, where {D_{KL}} denotes Kullback-Leibler divergence. One can also interpret this inequality as a special case of the Fenchel–Young inequality relating the conjugate convex functions {x \mapsto e^x} and {y \mapsto y \log y - y}.) \Box
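As a quick sanity check, specializing (1) to the zero function {f=0} recovers the standard fact that entropy is maximized by the uniform distribution:

\displaystyle  \log |S| = \sup_X {\bf H}[X],

with the supremum attained precisely when {X} is uniformly distributed on {S}.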

In this note I would like to use this variational formula (which is also known as the Donsker-Varadhan variational formula) to give another proof of the following inequality of Carbery.

Theorem 2 (Generalized Cauchy-Schwarz inequality) Let {n \geq 0}, let {S, T_1,\dots,T_n} be finite non-empty sets, and let {\pi_i: S \rightarrow T_i} be functions for each {i=1,\dots,n}. Let {K: S \rightarrow {\bf R}^+} and {f_i: T_i \rightarrow {\bf R}^+} be positive functions for each {i=1,\dots,n}. Then

\displaystyle  \sum_{s \in S} K(s) \prod_{i=1}^n f_i(\pi_i(s)) \leq Q \prod_{i=1}^n (\sum_{t_i \in T_i} f_i(t_i)^{n+1})^{1/(n+1)}

where {Q} is the quantity

\displaystyle  Q := (\sum_{(s_0,\dots,s_n) \in \Omega_n} K(s_0) \dots K(s_n))^{1/(n+1)}

where {\Omega_n} is the set of all tuples {(s_0,\dots,s_n) \in S^{n+1}} such that {\pi_i(s_{i-1}) = \pi_i(s_i)} for {i=1,\dots,n}.

Thus for instance, the inequality is trivially an equality for {n=0}. When {n=1}, the inequality reads

\displaystyle  \sum_{s \in S} K(s) f_1(\pi_1(s)) \leq (\sum_{s_0,s_1 \in S: \pi_1(s_0)=\pi_1(s_1)} K(s_0) K(s_1))^{1/2}

\displaystyle  ( \sum_{t_1 \in T_1} f_1(t_1)^2)^{1/2},

which is easily proven by Cauchy-Schwarz as follows.
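Writing the left-hand side as

\displaystyle  \sum_{t_1 \in T_1} f_1(t_1) \sum_{s \in S: \pi_1(s) = t_1} K(s),

the Cauchy-Schwarz inequality in the {t_1} variable bounds this expression by

\displaystyle  (\sum_{t_1 \in T_1} f_1(t_1)^2)^{1/2} (\sum_{t_1 \in T_1} (\sum_{s \in S: \pi_1(s) = t_1} K(s))^2)^{1/2},

and expanding out the square in the second factor recovers the quantity {\sum_{s_0,s_1 \in S: \pi_1(s_0)=\pi_1(s_1)} K(s_0) K(s_1)}. For {n=2}, the inequality reads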

\displaystyle  \sum_{s \in S} K(s) f_1(\pi_1(s)) f_2(\pi_2(s))

\displaystyle  \leq (\sum_{s_0,s_1, s_2 \in S: \pi_1(s_0)=\pi_1(s_1); \pi_2(s_1)=\pi_2(s_2)} K(s_0) K(s_1) K(s_2))^{1/3}

\displaystyle (\sum_{t_1 \in T_1} f_1(t_1)^3)^{1/3} (\sum_{t_2 \in T_2} f_2(t_2)^3)^{1/3}

which can also be proven by elementary means. However, even for {n=3}, the existing proofs require the “tensor power trick” in order to reduce to the case when the {f_i} are step functions (in which case the inequality can be proven elementarily, as discussed in the above paper of Carbery).

We now prove this inequality. We write {K(s) = \exp(k(s))} and {f_i(t_i) = \exp(g_i(t_i))} for some functions {k: S \rightarrow {\bf R}} and {g_i: T_i \rightarrow {\bf R}}. If we take logarithms in the inequality to be proven and apply Lemma 1, the inequality becomes

\displaystyle  \sup_X {\bf E} k(X) + \sum_{i=1}^n g_i(\pi_i(X)) + {\bf H}[X]

\displaystyle  \leq \frac{1}{n+1} \sup_{(X_0,\dots,X_n)} {\bf E} k(X_0)+\dots+k(X_n) + {\bf H}[X_0,\dots,X_n]

\displaystyle  + \frac{1}{n+1} \sum_{i=1}^n \sup_{Y_i} (n+1) {\bf E} g_i(Y_i) + {\bf H}[Y_i]

where {X} ranges over random variables taking values in {S}, {X_0,\dots,X_n} range over tuples of random variables taking values in {\Omega_n}, and {Y_i} range over random variables taking values in {T_i}. Comparing the suprema, the claim now reduces to

Lemma 3 (Conditional expectation computation) Let {X} be an {S}-valued random variable. Then there exists an {\Omega_n}-valued random variable {(X_0,\dots,X_n)}, where each {X_i} has the same distribution as {X}, and

\displaystyle  {\bf H}[X_0,\dots,X_n] = (n+1) {\bf H}[X]

\displaystyle - {\bf H}[\pi_1(X)] - \dots - {\bf H}[\pi_n(X)].

Proof: We induct on {n}. When {n=0} we just take {X_0 = X}. Now suppose that {n \geq 1}, and the claim has already been proven for {n-1}, thus one has already obtained a tuple {(X_0,\dots,X_{n-1}) \in \Omega_{n-1}} with each {X_0,\dots,X_{n-1}} having the same distribution as {X}, and

\displaystyle  {\bf H}[X_0,\dots,X_{n-1}] = n {\bf H}[X] - {\bf H}[\pi_1(X)] - \dots - {\bf H}[\pi_{n-1}(X)].

By hypothesis, {\pi_n(X_{n-1})} has the same distribution as {\pi_n(X)}. For each value {t_n} attained by {\pi_n(X)}, we can take conditionally independent copies of {(X_0,\dots,X_{n-1})} and {X} conditioned to the events {\pi_n(X_{n-1}) = t_n} and {\pi_n(X) = t_n} respectively, and then concatenate them to form a tuple {(X_0,\dots,X_n)} in {\Omega_n}, with {X_n} a further copy of {X} that is conditionally independent of {(X_0,\dots,X_{n-1})} relative to {\pi_n(X_{n-1}) = \pi_n(X)}. One can then use the entropy chain rule to compute

\displaystyle  {\bf H}[X_0,\dots,X_n] = {\bf H}[\pi_n(X_n)] + {\bf H}[X_0,\dots,X_n| \pi_n(X_n)]

\displaystyle  = {\bf H}[\pi_n(X_n)] + {\bf H}[X_0,\dots,X_{n-1}| \pi_n(X_n)] + {\bf H}[X_n| \pi_n(X_n)]

\displaystyle  = {\bf H}[\pi_n(X)] + {\bf H}[X_0,\dots,X_{n-1}| \pi_n(X_{n-1})] + {\bf H}[X_n| \pi_n(X_n)]

\displaystyle  = {\bf H}[\pi_n(X)] + ({\bf H}[X_0,\dots,X_{n-1}] - {\bf H}[\pi_n(X_{n-1})])

\displaystyle + ({\bf H}[X_n] - {\bf H}[\pi_n(X_n)])

\displaystyle  ={\bf H}[X_0,\dots,X_{n-1}] + {\bf H}[X_n] - {\bf H}[\pi_n(X_n)]

and the claim now follows from the induction hypothesis. \Box

With a little more effort, one can replace {S} by a more general measure space (and use differential entropy in place of Shannon entropy), to recover Carbery’s inequality in full generality; we leave the details to the interested reader.

In my previous post, I walked through the task of formally deducing one lemma from another in Lean 4. The deduction was deliberately chosen to be short and only showcased a small number of Lean tactics. Here I would like to walk through the process I used for a slightly longer proof I worked out recently, after seeing the following challenge from Damek Davis: to formalize (in a civilized fashion) the proof of the following lemma:

Lemma. Let \{a_k\} and \{D_k\} be sequences of real numbers indexed by natural numbers k=0,1,\dots, with a_k non-increasing and D_k non-negative. Suppose also that a_k \leq D_k - D_{k+1} for all k \geq 0. Then a_k \leq \frac{D_0}{k+1} for all k.

Here I tried to draw upon the lessons I had learned from the PFR formalization project, and to first set up a human readable proof of the lemma before starting the Lean formalization – a lower-case “blueprint” rather than the fancier Blueprint used in the PFR project. The main idea of the proof here is to use the telescoping series identity

\displaystyle \sum_{i=0}^k D_i - D_{i+1} = D_0 - D_{k+1}.

Since D_{k+1} is non-negative, and a_i \leq D_i - D_{i+1} by hypothesis, we have

\displaystyle \sum_{i=0}^k a_i \leq D_0

but by the monotone hypothesis on a_i the left-hand side is at least (k+1) a_k, giving the claim.

This is already a human-readable proof, but in order to formalize it more easily in Lean, I decided to rewrite it as a chain of inequalities, starting at a_k and ending at D_0 / (k+1). With a little bit of pen and paper effort, I obtained

a_k = (k+1) a_k / (k+1)

(by field identities)

= (\sum_{i=0}^k a_k) / (k+1)

(by the formula for summing a constant)

\leq (\sum_{i=0}^k a_i) / (k+1)

(by the monotone hypothesis)

\leq (\sum_{i=0}^k D_i - D_{i+1}) / (k+1)

(by the hypothesis a_i \leq D_i - D_{i+1})

= (D_0 - D_{k+1}) / (k+1)

(by telescoping series)

\leq D_0 / (k+1)

(by the non-negativity of D_{k+1}).

I decided that this was a good enough blueprint for me to work with. The next step is to formalize the statement of the lemma in Lean. For this quick project, it was convenient to use the online Lean playground, rather than my local IDE, so the screenshots will look a little different from those in the previous post. (If you like, you can follow this tour in that playground, by clicking on the screenshots of the Lean code.) I start by importing Lean’s math library, and starting an example of a statement to state and prove:

Now we have to declare the hypotheses and variables. The main variables here are the sequences a_k and D_k, which in Lean are best modeled by functions a, D from the natural numbers ℕ to the reals ℝ. (One can choose to “hardwire” the non-negativity hypothesis into the D_k by making D take values in the nonnegative reals {\bf R}^+ (denoted NNReal in Lean), but this turns out to be inconvenient, because the laws of algebra and summation that we will need are clunkier on the non-negative reals (which are not even a group) than on the reals (which are a field).) So we add in the variables:

Now we add in the hypotheses, which in Lean convention are usually given names starting with h. This is fairly straightforward; the one thing to note is that the property of being monotone decreasing already has a name in Lean’s Mathlib, namely Antitone, and it is generally a good idea to use the Mathlib-provided terminology (because that library contains a lot of useful lemmas about such terms).

One thing to note here is that Lean is quite good at filling in implied ranges of variables. Because a and D have the natural numbers ℕ as their domain, the dummy variable k in these hypotheses is automatically being quantified over ℕ. We could have made this quantification explicit if we so chose, for instance using ∀ k : ℕ, 0 ≤ D k instead of ∀ k, 0 ≤ D k, but it is not necessary to do so. Also note that Lean does not require parentheses when applying functions: we write D k here rather than D(k) (which in fact does not compile in Lean unless one puts a space between the D and the parentheses). This is slightly different from standard mathematical notation, but is not too difficult to get used to.

This looks like the end of the hypotheses, so we could now add a colon to move to the conclusion, and then add that conclusion:

This is a perfectly fine Lean statement. But it turns out that when proving a universally quantified statement such as ∀ k, a k ≤ D 0 / (k + 1), the first step is almost always to open up the quantifier to introduce the variable k (using the Lean command intro k). Because of this, it is slightly more efficient to hide the universal quantifier by placing the variable k in the hypotheses, rather than in the quantifier (in which case we have to now specify that it is a natural number, as Lean can no longer deduce this from context):

At this point Lean is complaining of an unexpected end of input: the example has been stated, but not proved. We will temporarily mollify Lean by adding a sorry as the purported proof:
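(As the screenshots are not reproduced in this archive, here is a rough textual reconstruction of the code at this point; in particular, the spelling 0 ≤ D k of the non-negativity hypothesis is one of several equivalent choices, and the exact formatting in the screenshots may differ slightly.)

import Mathlib

example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) (k : ℕ) : a k ≤ D 0 / (k + 1) := by
  sorry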

Now Lean is content, other than giving a warning (as indicated by the yellow squiggle under the example) that the proof contains a sorry.

It is now time to follow the blueprint. The Lean tactic for proving an inequality via chains of other inequalities is known as calc. We use the blueprint to fill in the calc that we want, leaving the justifications of each step as “sorry”s for now:
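(Again, the following is a textual reconstruction of the screenshot, under the same conventions as before.)

import Mathlib
open Finset BigOperators

example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) (k : ℕ) : a k ≤ D 0 / (k + 1) := by
  calc a k = (k + 1) * a k / (k + 1) := by sorry
    _ = (∑ i in range (k + 1), a k) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by sorry
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by sorry
    _ = (D 0 - D (k + 1)) / (k + 1) := by sorry
    _ ≤ D 0 / (k + 1) := by sorry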

Here, we “open”ed the Finset namespace in order to easily access Finset‘s range function, with range k basically being the finite set of natural numbers \{0,\dots,k-1\}, and also “open”ed the BigOperators namespace to access the familiar ∑ notation for (finite) summation, in order to make the steps in the Lean code resemble the blueprint as much as possible. One could avoid opening these namespaces, but then expressions such as ∑ i in range (k+1), a i would instead have to be written as something like Finset.sum (Finset.range (k+1)) (fun i ↦ a i), which looks a lot less like standard mathematical writing. The proof structure here may remind some readers of the “two column proofs” that are somewhat popular in American high school geometry classes.

Now we have six sorries to fill. Navigating to the first sorry, Lean tells us the ambient hypotheses, and the goal that we need to prove to fill that sorry:
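(Roughly speaking, the infoview displays something like the following at this point; this is a reconstruction rather than a verbatim copy of the screenshot.)

a D : ℕ → ℝ
ha : Antitone a
hD : ∀ (k : ℕ), a k ≤ D k - D (k + 1)
hpos : ∀ (k : ℕ), 0 ≤ D k
k : ℕ
⊢ a k = (↑k + 1) * a k / (↑k + 1)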

The ⊢ symbol here is Lean’s marker for the goal. The uparrows ↑ are coercion symbols, indicating that the natural number k has to be converted to a real number in order to interact via arithmetic operations with other real numbers such as a k, but we can ignore these coercions for this tour (for this proof, it turns out Lean will basically manage them automatically without need for any explicit intervention by a human).

The goal here is a self-evident algebraic identity; it involves division, so one has to check that the denominator is non-zero, but this is self-evident. In Lean, a convenient way to establish algebraic identities is to use the tactic field_simp to clear denominators, and then ring to verify any identity that is valid for commutative rings. This works, and clears the first sorry:

field_simp, by the way, is smart enough to deduce on its own that the denominator k+1 here is manifestly non-zero (and in fact positive); no human intervention is required to point this out. Similarly for other “clearing denominator” steps that we will encounter in the other parts of the proof.

Now we navigate to the next `sorry`. Lean tells us the hypotheses and goals:

We can reduce the goal by canceling out the common denominator ↑k+1. Here we can use the handy Lean tactic congr, which tries to match two sides of an equality goal as much as possible, and leave any remaining discrepancies between the two sides as further goals to be proven. Applying congr, the goal reduces to

Here one might imagine that this is something that one can prove by induction. But this particular sort of identity – summing a constant over a finite set – is already covered by Mathlib. Indeed, searching for Finset, sum, and const soon leads us to the Finset.sum_const lemma here. But there is an even more convenient path to take here, which is to apply the powerful tactic simp, which tries to simplify the goal as much as possible using all the “simp lemmas” Mathlib has to offer (of which Finset.sum_const is an example, but there are thousands of others). As it turns out, simp completely kills off this identity, without any further human intervention:

Now we move on to the next sorry, and look at our goal:

congr doesn’t work here because we have an inequality instead of an equality, but there is a powerful relative of congr, called gcongr, that is perfectly suited for inequalities. It can also open up sums, products, and integrals, reducing global inequalities between such quantities into pointwise inequalities. If we invoke gcongr with i hi (where we tell gcongr to use i for the variable opened up, and hi for the constraint this variable will satisfy), we arrive at a greatly simplified goal (and a new ambient variable and hypothesis):

Now we need to use the monotonicity hypothesis on a, which we have named ha here. Looking at the documentation for Antitone, one finds a lemma that looks applicable here:

One can apply this lemma in this case by writing apply Antitone.imp ha, but because ha is already of type Antitone, we can abbreviate this to apply ha.imp. (Actually, as indicated in the documentation, due to the way Antitone is defined, we can even just use apply ha here.) This reduces the goal nicely:

The goal is now very close to the hypothesis hi. One could now look up the documentation for Finset.range to see how to unpack hi, but as before simp can do this for us. Invoking simp at hi, we obtain

Now the goal and hypothesis are very close indeed. Here we can just close the goal using the linarith tactic used in the previous tour:

The next sorry can be resolved by similar methods, using the hypothesis hD applied at the variable i:

Now for the penultimate sorry. As in a previous step, we can use congr to remove the denominator, leaving us in this state:

This is a telescoping series identity. One could try to prove it by induction, or one could try to see if this identity is already in Mathlib. Searching for Finset, sum, and sub will locate the right tool (as the fifth hit), but a simpler way to proceed here is to use the exact? tactic we saw in the previous tour:

A brief check of the documentation for sum_range_sub' confirms that this is what we want. Actually we can just use apply sum_range_sub' here, as the apply tactic is smart enough to fill in the missing arguments:

One last sorry to go. As before, we use gcongr to cancel denominators, leaving us with

This looks easy, because the hypothesis hpos will tell us that D (k+1) is nonnegative; specifically, the instance hpos (k+1) of that hypothesis will state exactly this. The linarith tactic will then resolve this goal once it is told about this particular instance:

We now have a complete proof – no more yellow squiggly line in the example. There are two warnings though – there are two variables i and hi introduced in the proof that Lean’s “linter” has noticed are not actually used in the proof. So we can rename them with underscores to tell Lean that we are okay with them not being used:

This is a perfectly fine proof, but upon noticing that many of the steps are similar to each other, one can do a bit of “code golf” as in the previous tour to compactify the proof a bit:
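(One possible compactified version, reconstructed along the lines described above rather than copied from the screenshot, is the following; individual justifications may need minor adjustment depending on the Mathlib version.)

import Mathlib
open Finset BigOperators

example (a D : ℕ → ℝ) (ha : Antitone a) (hD : ∀ k, a k ≤ D k - D (k + 1))
    (hpos : ∀ k, 0 ≤ D k) (k : ℕ) : a k ≤ D 0 / (k + 1) := by
  calc a k = (k + 1) * a k / (k + 1) := by field_simp; ring
    _ = (∑ i in range (k + 1), a k) / (k + 1) := by congr 1; simp
    _ ≤ (∑ i in range (k + 1), a i) / (k + 1) := by
        -- reduce to a pointwise comparison, then use the monotonicity hypothesis
        gcongr with i hi
        apply ha
        simp at hi
        linarith
    _ ≤ (∑ i in range (k + 1), (D i - D (i + 1))) / (k + 1) := by
        -- reduce to the pointwise hypothesis hD
        gcongr with i _hi
        exact hD i
    _ = (D 0 - D (k + 1)) / (k + 1) := by congr 1; apply sum_range_sub'
    _ ≤ D 0 / (k + 1) := by gcongr; linarith [hpos (k + 1)]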

With enough familiarity with the Lean language, this proof actually tracks quite closely with (an optimized version of) the human blueprint.

This concludes the tour of a lengthier Lean proving exercise. I am finding the pre-planning step of the proof (using an informal “blueprint” to break the proof down into extremely granular pieces) to make the formalization process significantly easier than in the past (when I often adopted a sequential process of writing one line of code at a time without first sketching out a skeleton of the argument). (The proof here took only about 15 minutes to create initially, although for this blog post I had to recreate it with screenshots and supporting links, which took significantly more time.) I believe that a realistic near-term goal for AI is to be able to fill in automatically a significant fraction of the sorts of atomic “sorry“s of the size one saw in this proof, allowing one to convert a blueprint to a formal Lean proof even more rapidly.

One final remark: in this tour I filled in the “sorry“s in the order in which they appeared, but there is actually no requirement that one does this, and once one has used a blueprint to atomize a proof into self-contained smaller pieces, one can fill them in in any order. Importantly for a group project, these micro-tasks can be parallelized, with different contributors claiming whichever “sorry” they feel they are qualified to solve, and working independently of each other. (And, because Lean can automatically verify if their proof is correct, there is no need to have a pre-existing bond of trust with these contributors in order to accept their contributions.) Furthermore, because the specification of a “sorry” is self-contained, someone can make a meaningful contribution to the proof by working on an extremely localized component of it without needing the mathematical expertise to understand the global argument. This is not particularly important in this simple case, where the entire lemma is not too hard to understand for a trained mathematician, but can become quite relevant for complex formalization projects.

I have just uploaded to the arXiv my paper “A Maclaurin type inequality“. This paper concerns a variant of the Maclaurin inequality for the elementary symmetric means

\displaystyle  s_k(y) := \frac{1}{\binom{n}{k}} \sum_{1 \leq i_1 < \dots < i_k \leq n} y_{i_1} \dots y_{i_k}

of {n} real numbers {y_1,\dots,y_n}. This inequality asserts that

\displaystyle  s_\ell(y)^{1/\ell} \leq s_k(y)^{1/k}

whenever {1 \leq k \leq \ell \leq n} and {y_1,\dots,y_n} are non-negative. It can be proven as a consequence of the Newton inequality

\displaystyle  s_{k-1}(y) s_{k+1}(y) \leq s_k(y)^2

valid for all {1 \leq k < n} and arbitrary real {y_1,\dots,y_n} (in particular, here the {y_i} are allowed to be negative). Note that the {k=1, n=2} case of this inequality is just the arithmetic mean-geometric mean inequality

\displaystyle  y_1 y_2 \leq (\frac{y_1+y_2}{2})^2;

the general case of this inequality can be deduced from this special case by a number of standard manipulations (the most non-obvious of which is the operation of differentiating the real-rooted polynomial {\prod_{i=1}^n (z-y_i)} to obtain another real-rooted polynomial, thanks to Rolle’s theorem; the key point is that this operation preserves all the elementary symmetric means up to {s_{n-1}}). One can think of Maclaurin’s inequality as providing a refined version of the arithmetic mean-geometric mean inequality on {n} variables (which corresponds to the case {k=1}, {\ell=n}).

Whereas Newton’s inequality works for arbitrary real {y_i}, the Maclaurin inequality breaks down once one or more of the {y_i} are permitted to be negative. A key example occurs when {n} is even, half of the {y_i} are equal to {+1}, and half are equal to {-1}. Here, one can verify that the elementary symmetric means {s_k(y)} vanish for odd {k} and are equal to { (-1)^{k/2} \frac{\binom{n/2}{k/2}}{\binom{n}{k}}} for even {k}. In particular, some routine estimation then gives the order of magnitude bound

\displaystyle  |s_k(y)|^{\frac{1}{k}} \asymp \frac{k^{1/2}}{n^{1/2}} \ \ \ \ \ (1)

for {0 < k \leq n} even, thus giving a significant violation of the Maclaurin inequality even after putting absolute values around the {s_k(y)}. In particular, vanishing of one {s_k(y)} does not imply vanishing of all subsequent {s_\ell(y)}.

On the other hand, it was observed by Gopalan and Yehudayoff that if two consecutive values {s_k(y), s_{k+1}(y)} are small, then this makes all subsequent values {s_\ell(y)} small as well. More precise versions of this statement were subsequently observed by Meka-Reingold-Tal and Doron-Hatami-Hoza, who obtained estimates of the shape

\displaystyle  |s_\ell(y)|^{\frac{1}{\ell}} \ll \ell^{1/2} \max (|s_k(y)|^{\frac{1}{k}}, |s_{k+1}(y)|^{\frac{1}{k+1}}) \ \ \ \ \ (2)

whenever {1 \leq k \leq \ell \leq n} and {y_1,\dots,y_n} are real (but possibly negative). For instance, setting {k=1, \ell=n} we obtain the inequality

\displaystyle  (y_1 \dots y_n)^{1/n} \ll n^{1/2} \max( |s_1(y)|, |s_2(y)|^{1/2})

which can be established by combining the arithmetic mean-geometric mean inequality

\displaystyle  (y_1 \dots y_n)^{2/n} \leq \frac{y_1^2 + \dots + y_n^2}{n}

with the Newton identity

\displaystyle  y_1^2 + \dots + y_n^2 = n^2 s_1(y)^2 - n(n-1) s_2(y).
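Indeed, combining these two facts gives

\displaystyle  (y_1 \dots y_n)^{2/n} \leq n s_1(y)^2 - (n-1) s_2(y) \leq 2n \max( s_1(y)^2, |s_2(y)| ),

and the claimed inequality follows upon taking square roots.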

As with the proof of the Newton inequalities, the general case of (2) can be obtained from this special case after some standard manipulations (including the differentiation operation mentioned previously).

However, if one inspects the bound (2) against the bounds (1) given by the key example, we see a mismatch – the right-hand side of (2) is larger than the left-hand side by a factor of about {k^{1/2}}. The main result of the paper rectifies this by establishing the optimal (up to constants) improvement

\displaystyle  |s_\ell(y)|^{\frac{1}{\ell}} \ll \frac{\ell^{1/2}}{k^{1/2}} \max (|s_k(y)|^{\frac{1}{k}}, |s_{k+1}(y)|^{\frac{1}{k+1}}) \ \ \ \ \ (3)

of (2). This answers a question posed on MathOverflow.

Unlike the previous arguments, we do not rely primarily on the arithmetic mean-geometric mean inequality. Instead, our primary tool is a new inequality

\displaystyle  \sum_{m=0}^\ell \binom{\ell}{m} |s_m(y)| r^m \geq (1+ |s_\ell(y)|^{2/\ell} r^2)^{\ell/2}, \ \ \ \ \ (4)

valid for all {1 \leq \ell \leq n} and {r>0}. Roughly speaking, the bound (3) would follow from (4) by setting {r \asymp (k/\ell)^{1/2} |s_\ell(y)|^{-1/\ell}}, provided that we can show that the {m=k,k+1} terms of the left-hand side dominate the sum in this regime. This can be done, after a technical step of passing to tuples {y} which nearly optimize the required inequality (3).

We sketch the proof of the inequality (4) as follows. One can use some standard manipulations to reduce to the case where {\ell=n} and {|s_n(y)|=1}, and after replacing {r} with {1/r} one is now left with establishing the inequality

\displaystyle  \sum_{m=0}^n \binom{n}{m} |s_m(y)| r^{n-m} \geq (1+r^2)^{n/2}.

Note that equality is attained in the previously discussed example with half of the {y_i} equal to {+1} and the other half equal to {-1}, thanks to the binomial theorem.

To prove this inequality, we consider the polynomial

\displaystyle  \prod_{j=1}^n (z - y_j) = \sum_{m=0}^n (-1)^m \binom{n}{m} s_m(y) z^{n-m}.

Evaluating this polynomial at {ir}, taking absolute values, using the triangle inequality, and then taking logarithms, we conclude that

\displaystyle  \frac{1}{2} \sum_{j=1}^n \log(y_j^2 + r^2) \leq \log(\sum_{m=0}^n \binom{n}{m} |s_m(y)| r^{n-m}).

A convexity argument gives the lower bound

\displaystyle  \log(y_j^2 + r^2) \geq \log(1+r^2) + \frac{2}{1+r^2} \log |y_j|

while the normalization {|s_n(y)|=1} gives

\displaystyle  \sum_{j=1}^n \log |y_j| = 0

and the claim follows. Indeed, summing the convexity bound over {j} yields
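\displaystyle  \frac{1}{2} \sum_{j=1}^n \log(y_j^2 + r^2) \geq \frac{n}{2} \log(1+r^2) + \frac{1}{1+r^2} \sum_{j=1}^n \log |y_j| = \frac{n}{2} \log(1+r^2),

and the required inequality follows by combining this with the earlier upper bound and exponentiating.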

A common task in analysis is to obtain bounds on sums

\displaystyle  \sum_{n \in A} f(n)

or integrals

\displaystyle  \int_A f(x)\ dx

where {A} is some simple region (such as an interval) in one or more dimensions, and {f} is an explicit (and elementary) non-negative expression involving one or more variables (such as {n} or {x}), and possibly also some additional parameters. Often, one would be content with an order of magnitude upper bound such as

\displaystyle  \sum_{n \in A} f(n) \ll X

or

\displaystyle  \int_A f(x)\ dx \ll X

where we use {X \ll Y} (or {Y \gg X} or {X = O(Y)}) to denote the bound {|X| \leq CY} for some constant {C}; sometimes one wishes to also obtain the matching lower bound, thus obtaining

\displaystyle  \sum_{n \in A} f(n) \asymp X

or

\displaystyle  \int_A f(x)\ dx \asymp X

where {X \asymp Y} is synonymous with {X \ll Y \ll X}. Finally, one may wish to obtain a more precise bound, such as

\displaystyle  \sum_{n \in A} f(n) = (1+o(1)) X

where {o(1)} is a quantity that goes to zero as the parameters of the problem go to infinity (or some other limit). (For a deeper dive into asymptotic notation in general, see this previous blog post.)

Here are some typical examples of such estimation problems, drawn from recent questions on MathOverflow:

  • (i) (From this question) If {d,p \geq 1} and {a>d/p}, is the expression

    \displaystyle  \sum_{j \in {\bf Z}} 2^{(\frac{d}{p}+1-a)j} \int_0^\infty e^{-2^j s} \frac{s^a}{1+s^{2a}}\ ds

    finite?
  • (ii) (From this question; one way to proceed is sketched just after this list) If {h,m \geq 1}, how can one show that

    \displaystyle  \sum_{d=0}^\infty \frac{2d+1}{2h^2 (1 + \frac{d(d+1)}{h^2}) (1 + \frac{d(d+1)}{h^2m^2})^2} \ll 1 + \log(m^2)?

  • (iii) (From this question) Can one show that

    \displaystyle  \sum_{k=1}^{n-1} \frac{k^{2n-4k-3}(n^2-2nk+2k^2)}{(n-k)^{2n-4k-1}} = (c+o(1)) \sqrt{n}

    as {n \rightarrow \infty} for an explicit constant {c}, and what is this constant?
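To give some of the flavor of these problems, here is a sketch of one way one could attack problem (ii) (offered purely as an illustration; it is not necessarily the approach taken in the answers to that question). For {d \geq 1} one has {2d+1 \asymp d} and {d(d+1) \asymp d^2}, so after disposing of the {d=0} term (which is {O(1)}), the summand is comparable to

\displaystyle  \frac{d}{h^2 (1 + \frac{d^2}{h^2}) (1 + \frac{d^2}{h^2 m^2})^2}.

The range {1 \leq d \leq h} contributes {\ll \sum_{1 \leq d \leq h} \frac{d}{h^2} \ll 1}; the range {h < d \leq hm} contributes {\ll \sum_{h < d \leq hm} \frac{1}{d} \ll 1 + \log m}; and the range {d > hm} contributes {\ll \sum_{d > hm} \frac{h^4 m^4}{d^5} \ll 1}. Summing, one obtains the required bound of {\ll 1 + \log(m^2)}.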

Compared to other estimation tasks, such as that of controlling oscillatory integrals, exponential sums, singular integrals, or expressions involving one or more unknown functions (that are only known to lie in some function spaces, such as an {L^p} space), high-dimensional geometry (or alternatively, large numbers of random variables), or number-theoretic structures (such as the primes), estimation of sums or integrals of non-negative elementary expressions is a relatively straightforward task, and can be accomplished by a variety of methods. The art of obtaining such estimates is typically not explicitly taught in textbooks, other than through some examples and exercises; it is typically picked up by analysts (or those working in adjacent areas, such as PDE, combinatorics, or theoretical computer science) as graduate students, while they work through their thesis or their first few papers in the subject.

Somewhat in the spirit of this previous post on analysis problem solving strategies, I am going to try here to collect some general principles and techniques that I have found useful for these sorts of problems. As with the previous post, I hope this will be something of a living document, and encourage others to add their own tips or suggestions in the comments.


Jon Bennett and I have just uploaded to the arXiv our paper “Adjoint Brascamp-Lieb inequalities“. In this paper, we observe that the family of multilinear inequalities known as the Brascamp-Lieb inequalities (or Hölder-Brascamp-Lieb inequalities) admit an adjoint formulation, and explore the theory of these adjoint inequalities and some of their consequences.

To motivate matters let us review the classical theory of adjoints for linear operators. If one has a bounded linear operator {T: L^p(X) \rightarrow L^q(Y)} for some measure spaces {X,Y} and exponents {1 < p, q < \infty}, then one can define an adjoint linear operator {T^*: L^{q'}(Y) \rightarrow L^{p'}(X)} involving the dual exponents {\frac{1}{p}+\frac{1}{p'} = \frac{1}{q}+\frac{1}{q'} = 1}, obeying (formally at least) the duality relation

\displaystyle  \langle Tf, g \rangle = \langle f, T^* g \rangle \ \ \ \ \ (1)

for suitable test functions {f, g} on {X, Y} respectively. Using the dual characterization

\displaystyle  \|f\|_{L^{p'}(X)} = \sup_{g: \|g\|_{L^p(X)} \leq 1} |\langle f, g \rangle|

of {L^{p'}(X)} (and similarly for {L^{q'}(Y)}), one can show that {T^*} has the same operator norm as {T}.

There is a slightly different way to proceed using Hölder’s inequality. For sake of exposition let us make the simplifying assumption that {T} (and hence also {T^*}) maps non-negative functions to non-negative functions, and ignore issues of convergence or division by zero in the formal calculations below. Then for any reasonable function {g} on {Y}, we have

\displaystyle  \| T^* g \|_{L^{p'}(X)}^{p'} = \langle (T^* g)^{p'-1}, T^* g \rangle = \langle T (T^* g)^{p'-1}, g \rangle

\displaystyle  \leq \|T\|_{op} \|(T^* g)^{p'-1}\|_{L^p(X)} \|g\|_{L^{p'}(Y)}

\displaystyle  = \|T\|_{op} \|T^* g \|_{L^{p'}(X)}^{p'-1} \|g\|_{L^{p'}(Y)};

by (1) and Hölder; dividing out by {\|T^* g \|_{L^{p'}(X)}^{p'-1}} we obtain {\|T^*\|_{op} \leq \|T\|_{op}}, and a similar argument also recovers the reverse inequality.

The first argument also extends to some extent to multilinear operators. For instance if one has a bounded bilinear operator {B: L^p(X) \times L^q(Y) \rightarrow L^r(Z)} for {1 < p,q,r < \infty} then one can then define adjoint bilinear operators {B^{*1}: L^q(Y) \times L^{r'}(Z) \rightarrow L^{p'}(X)} and {B^{*2}: L^p(X) \times L^{r'}(Z) \rightarrow L^{q'}(Y)} obeying the relations

\displaystyle  \langle B(f, g),h \rangle = \langle B^{*1}(g,h), f \rangle = \langle B^{*2}(f,h), g \rangle

and with exactly the same operator norm as {B}. It is also possible, formally at least, to adapt the Hölder inequality argument to reach the same conclusion.

In this paper we observe that the Hölder inequality argument can be modified in the case of Brascamp-Lieb inequalities to obtain a different type of adjoint inequality. (Continuous) Brascamp-Lieb inequalities take the form

\displaystyle  \int_{{\bf R}^d} \prod_{i=1}^k f_i^{c_i} \circ B_i \leq \mathrm{BL}(\mathbf{B},\mathbf{c}) \prod_{i=1}^k (\int_{{\bf R}^{d_i}} f_i)^{c_i}

for various exponents {c_1,\dots,c_k} and surjective linear maps {B_i: {\bf R}^d \rightarrow {\bf R}^{d_i}}, where {f_i: {\bf R}^{d_i} \rightarrow {\bf R}} are arbitrary non-negative measurable functions and {\mathrm{BL}(\mathbf{B},\mathbf{c})} is the best constant for which this inequality holds for all such {f_i}. [There is also another inequality involving variances with respect to log-concave distributions that is also due to Brascamp and Lieb, but it is not related to the inequalities discussed here.] Well known examples of such inequalities include Hölder’s inequality and the sharp Young convolution inequality; another is the Loomis-Whitney inequality, the first non-trivial example of which is

\displaystyle  \int_{{\bf R}^3} f(y,z)^{1/2} g(x,z)^{1/2} h(x,y)^{1/2}

\displaystyle  \leq (\int_{{\bf R}^2} f)^{1/2} (\int_{{\bf R}^2} g)^{1/2} (\int_{{\bf R}^2} h)^{1/2} \ \ \ \ \ (2)

for all non-negative measurable {f,g,h: {\bf R}^2 \rightarrow {\bf R}}. There are also discrete analogues of these inequalities, in which the Euclidean spaces {{\bf R}^d, {\bf R}^{d_i}} are replaced by discrete abelian groups, and the surjective linear maps {B_i} are replaced by discrete homomorphisms.

The operation {f \mapsto f \circ B_i} of pulling back a function on {{\bf R}^{d_i}} by a linear map {B_i: {\bf R}^d \rightarrow {\bf R}^{d_i}} to create a function on {{\bf R}^d} has an adjoint pushforward map {(B_i)_*}, which takes a function on {{\bf R}^d} and basically integrates it on the fibers of {B_i} to obtain a “marginal distribution” on {{\bf R}^{d_i}} (possibly multiplied by a normalizing determinant factor). The adjoint Brascamp-Lieb inequalities that we obtain take the form

\displaystyle  \|f\|_{L^p({\bf R}^d)} \leq \mathrm{ABL}( \mathbf{B}, \mathbf{c}, \theta, p) \prod_{i=1}^k \|(B_i)_* f \|_{L^{p_i}({\bf R}^{d_i})}^{\theta_i}

for non-negative {f: {\bf R}^d \rightarrow {\bf R}} and various exponents {p, p_i, \theta_i}, where {\mathrm{ABL}( \mathbf{B}, \mathbf{c}, \theta, p)} is the optimal constant for which the above inequality holds for all such {f}; informally, such inequalities control the {L^p} norm of a non-negative function in terms of its marginals. It turns out that every Brascamp-Lieb inequality generates a family of adjoint Brascamp-Lieb inequalities (with the exponent {p} being less than or equal to {1}). For instance, the adjoints of the Loomis-Whitney inequality (2) are the inequalities

\displaystyle  \| f \|_{L^p({\bf R}^3)} \leq \| (B_1)_* f \|_{L^{p_1}({\bf R}^2)}^{\theta_1} \| (B_2)_* f \|_{L^{p_2}({\bf R}^2)}^{\theta_2} \| (B_3)_* f \|_{L^{p_3}({\bf R}^2)}^{\theta_3}

for all non-negative measurable {f: {\bf R}^3 \rightarrow {\bf R}}, all {\theta_1, \theta_2, \theta_3>0} summing to {1}, and all {0 < p \leq 1}, where the {p_i} exponents are defined by the formula

\displaystyle  \frac{1}{2} (1-\frac{1}{p}) = \theta_i (1-\frac{1}{p_i})

and the {(B_i)_* f:{\bf R}^2 \rightarrow {\bf R}} are the marginals of {f}:

\displaystyle  (B_1)_* f(y,z) := \int_{\bf R} f(x,y,z)\ dx

\displaystyle  (B_2)_* f(x,z) := \int_{\bf R} f(x,y,z)\ dy

\displaystyle  (B_3)_* f(x,y) := \int_{\bf R} f(x,y,z)\ dz.

One can derive these adjoint Brascamp-Lieb inequalities from their forward counterparts by a version of the Hölder inequality argument mentioned previously, in conjunction with the observation that the pushforward maps {(B_i)_*} are mass-preserving (i.e., they preserve the {L^1} norm on non-negative functions). Conversely, it turns out that the adjoint Brascamp-Lieb inequalities are only available when the forward Brascamp-Lieb inequalities are. In the discrete case the forward and adjoint Brascamp-Lieb constants are essentially identical, but in the continuous case they can (and often do) differ by up to a constant. Furthermore, whereas in the forward case there is a famous theorem of Lieb that asserts that the Brascamp-Lieb constants can be computed by optimizing over gaussian inputs, the same statement is only true up to constants in the adjoint case, and in fact in most cases the gaussians will fail to optimize the adjoint inequality. The situation appears to be complicated; roughly speaking, the adjoint inequalities only use a portion of the range of possible inputs of the forward Brascamp-Lieb inequality, and this portion often misses the gaussian inputs that would otherwise optimize the inequality.

We have located a modest number of applications of the adjoint Brascamp-Lieb inequality (but hope that there will be more in the future):

  • The inequalities become equalities at {p=1}; taking a derivative at this value (in the spirit of the replica trick in physics) we recover the entropic Brascamp-Lieb inequalities of Carlen and Cordero-Erausquin. For instance, the derivative of the adjoint Loomis-Whitney inequalities at {p=1} yields Shearer’s inequality.
  • The adjoint Loomis-Whitney inequalities, together with a few more applications of Hölder’s inequality, imply the log-concavity of the Gowers uniformity norms on non-negative functions, which was previously observed by Shkredov and by Manners.
  • Averaging the adjoint Loomis-Whitney inequalities over coordinate systems gives reverse {L^p} inequalities for the X-ray transform and other tomographic transforms that appear to be new in the literature. In particular, we obtain some monotonicity of the {L^{p_k}} norms or entropies of the {k}-plane transform in {k} (if the exponents {p_k} are chosen in a dimensionally consistent fashion).

We also record a number of variants of the adjoint Brascamp-Lieb inequalities, including discrete variants, and a reverse inequality involving {L^p} norms with {p>1} rather than {p<1}.

The “epsilon-delta” nature of analysis can be daunting and unintuitive to students, due to its heavy reliance on inequalities rather than equalities. But it occurred to me recently that one might be able to leverage the intuition one already has from “deals” – of the type one often sees advertised by corporations – to get at least some informal understanding of these concepts.

Take for instance the concept of an upper bound {X \leq A} or a lower bound {X \geq B} on some quantity {X}. From an economic perspective, one could think of the upper bound as an assertion that {X} can be “bought” for {A} units of currency, and the lower bound can similarly be viewed as an assertion that {X} can be “sold” for {B} units of currency. Thus for instance, a system of inequalities and equations like

\displaystyle  2 \leq Y \leq 5

\displaystyle  X+Y \leq 7

\displaystyle  X+Y+Z = 10

\displaystyle  Y+Z \leq 6

could be viewed as analogous to a currency rate exchange board, of the type one sees for instance in airports:

Currency | We buy at | We sell at
{Y} | {2} | {5}
{X+Y} |  | {7}
{X+Y+Z} | {10} | {10}
{Y+Z} |  | {6}

Someone with an eye for spotting “deals” might now realize that one can actually buy {Y} for {3} units of currency rather than {5}, by purchasing one copy each of {X+Y} and {Y+Z} for {7+6=13} units of currency, then selling off {X+Y+Z} to recover {10} units of currency back. In more traditional mathematical language, one can improve the upper bound {Y \leq 5} to {Y \leq 3} by taking the appropriate linear combination of the inequalities {X+Y \leq 7}, {Y+Z \leq 6}, and {X+Y+Z=10}. More generally, this way of thinking is useful when faced with a linear programming situation (and of course linear programming is a key foundation for operations research), although this analogy begins to break down when one wants to use inequalities in a more non-linear fashion.

Asymptotic estimates such as {X = O(Y)} (also often written {X \lesssim Y} or {X \ll Y}) can be viewed as some sort of liquid market in which {Y} can be used to purchase {X}, though depending on market rates, one may need a large number of units of {Y} in order to buy a single unit of {X}. An asymptotic estimate like {X=o(Y)} represents an economic situation in which {Y} is so much more highly desired than {X} that, if one is a patient enough haggler, one can eventually convince someone to give up a unit of {X} for even just a tiny amount of {Y}.

When it comes to the basic analysis concepts of convergence and continuity, one can similarly view these concepts as various economic transactions involving the buying and selling of accuracy. One could for instance imagine the following hypothetical range of products, in which one would need to spend more money to obtain higher accuracy when measuring a weight in grams:

Object | Accuracy | Price
Low-end kitchen scale | {\pm 1} gram | {\$ 5}
High-end bathroom scale | {\pm 0.1} grams | {\$ 15}
Low-end lab scale | {\pm 0.01} grams | {\$ 50}
High-end lab scale | {\pm 0.001} grams | {\$ 250}

The concept of convergence {x_n \rightarrow x} of a sequence {x_1,x_2,x_3,\dots} to a limit {x} could then be viewed as somewhat analogous to a rewards program, of the type offered for instance by airlines, in which various tiers of perks are offered when one hits a certain level of “currency” (e.g., frequent flyer miles). For instance, the convergence of the sequence {x_n := 2 + \frac{1}{\sqrt{n}}} to its limit {x := 2} offers the following accuracy “perks” depending on one’s level {n} in the sequence:

Status | Accuracy benefit | Eligibility
Basic status | {|x_n - x| \leq 1} | {n \geq 1}
Bronze status | {|x_n - x| \leq 0.1} | {n \geq 10^2}
Silver status | {|x_n - x| \leq 0.01} | {n \geq 10^4}
Gold status | {|x_n - x| \leq 0.001} | {n \geq 10^6}
{\dots} | {\dots} | {\dots}

With this conceptual model, convergence means that any status level of accuracy can be unlocked if one’s number {n} of “points earned” is high enough.

In a similar vein, continuity becomes analogous to a conversion program, in which accuracy benefits from one company can be traded in for new accuracy benefits in another company. For instance, the continuity of the function {f(x) = 2 + \sqrt{x}} at the point {x_0=0} can be viewed in terms of the following conversion chart:

Accuracy benefit of {x} to trade in | Accuracy benefit of {f(x)} obtained
{|x - x_0| \leq 1} | {|f(x) - f(x_0)| \leq 1}
{|x - x_0| \leq 0.01} | {|f(x) - f(x_0)| \leq 0.1}
{|x - x_0| \leq 0.0001} | {|f(x) - f(x_0)| \leq 0.01}
{\dots} | {\dots}

Again, the point is that one can purchase any desired level of accuracy of {f(x)} provided one trades in a suitably high level of accuracy of {x}.

At present, the above conversion chart is only available at the single location {x_0}. The concept of uniform continuity can then be viewed as advertising copy asserting that “offer prices are valid in all store locations”. In a similar vein, the concept of equicontinuity for a class {{\mathcal F}} of functions is a guarantee that the “offer applies to all functions {f} in the class {{\mathcal F}}”, without any price discrimination. The combined notion of uniform equicontinuity is then of course the claim that the offer is valid in all locations and for all functions.

In a similar vein, differentiability can be viewed as a deal in which one can trade in accuracy of the input for approximately linear behavior of the output; to oversimplify slightly, smoothness can similarly be viewed as a deal in which one trades in accuracy of the input for high-accuracy polynomial approximability of the output. Measurability of a set or function can be viewed as a deal in which one trades in a level of resolution for an accurate approximation of that set or function at the given resolution. And so forth.

Perhaps readers can propose some other examples of mathematical concepts being re-interpreted as some sort of economic transaction?

In orthodox first-order logic, variables and expressions are only allowed to take one value at a time; a variable {x}, for instance, is not allowed to equal {+3} and {-3} simultaneously. We will call such variables completely specified. If one really wants to deal with multiple values of objects simultaneously, one is encouraged to use the language of set theory and/or logical quantifiers to do so.

However, the ability to allow expressions to become only partially specified is undeniably convenient, and also rather intuitive. A classic example here is that of the quadratic formula:

\displaystyle  \hbox{If } x,a,b,c \in {\bf R} \hbox{ with } a \neq 0, \hbox{ then }

\displaystyle  ax^2+bx+c=0 \hbox{ if and only if } x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}. \ \ \ \ \ (1)

Strictly speaking, the expression {x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}} is not well-formed according to the grammar of first-order logic; one should instead use something like

\displaystyle x = \frac{-b - \sqrt{b^2-4ac}}{2a} \hbox{ or } x = \frac{-b + \sqrt{b^2-4ac}}{2a}

or

\displaystyle x \in \left\{ \frac{-b - \sqrt{b^2-4ac}}{2a}, \frac{-b + \sqrt{b^2-4ac}}{2a} \right\}

or

\displaystyle x = \frac{-b + \epsilon \sqrt{b^2-4ac}}{2a} \hbox{ for some } \epsilon \in \{-1,+1\}

in order to strictly adhere to this grammar. But none of these three reformulations are as compact or as conceptually clear as the original one. In a similar spirit, a mathematical English sentence such as

\displaystyle  \hbox{The sum of two odd numbers is an even number} \ \ \ \ \ (2)

is also not a first-order sentence; one would instead have to write something like

\displaystyle  \hbox{For all odd numbers } x, y, \hbox{ the number } x+y \hbox{ is even} \ \ \ \ \ (3)

or

\displaystyle  \hbox{For all odd numbers } x,y \hbox{ there exists an even number } z \ \ \ \ \ (4)

\displaystyle  \hbox{ such that } x+y=z

instead. These reformulations are not all that hard to decipher, but they do have the aesthetically displeasing effect of cluttering an argument with temporary variables such as {x,y,z} which are used once and then discarded.

Another example of partially specified notation is the innocuous {\ldots} notation. For instance, the assertion

\displaystyle \pi=3.14\ldots,

when written formally using first-order logic, would become something like

\displaystyle \pi = 3 + \frac{1}{10} + \frac{4}{10^2} + \sum_{n=3}^\infty \frac{a_n}{10^n} \hbox{ for some sequence } (a_n)_{n=3}^\infty

\displaystyle  \hbox{ with } a_n \in \{0,1,2,3,4,5,6,7,8,9\} \hbox{ for all } n,

which is not exactly an elegant reformulation. Similarly with statements such as

\displaystyle \tan x = x + \frac{x^3}{3} + \ldots \hbox{ for } |x| < \pi/2

or

\displaystyle \tan x = x + \frac{x^3}{3} + O(|x|^5) \hbox{ for } |x| < \pi/2.

Below the fold I’ll try to assign a formal meaning to partially specified expressions such as (1), for instance allowing one to condense (2), (3), (4) to just

\displaystyle  \hbox{odd} + \hbox{odd} = \hbox{even}.

When combined with another common (but often implicit) extension of first-order logic, namely the ability to reason using ambient parameters, we become able to formally introduce asymptotic notation such as the big-O notation {O()} or the little-o notation {o()}. We will explain how to do this at the end of this post.


A few months ago I posted a question about analytic functions that I received from a bright high school student, which turned out to have already been studied and resolved by de Bruijn. Based on this positive resolution, I thought I might try my luck again and list three further questions that this student asked which do not seem to be trivially resolvable.

  1. Does there exist a smooth function {f: {\bf R} \rightarrow {\bf R}} which is nowhere analytic, but is such that the Taylor series {\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!} (x-x_0)^n} converges for every {x, x_0 \in {\bf R}}? (Of course, this series would not then converge to {f}, but instead to some analytic function {f_{x_0}(x)} for each {x_0}.) I have a vague feeling that perhaps the Baire category theorem should be able to resolve this question, but it seems to require a bit of effort. (Update: answered by Alexander Shaposhnikov in comments.)
  2. Is there a function {f: {\bf R} \rightarrow {\bf R}} which meets every polynomial {P: {\bf R} \rightarrow {\bf R}} to infinite order in the following sense: for every polynomial {P}, there exists {x_0} such that {f^{(n)}(x_0) = P^{(n)}(x_0)} for all {n=0,1,2,\dots}? Such a function would be rather pathological, perhaps resembling a space-filling curve. (Update: solved for smooth {f} by Aleksei Kulikov in comments. The situation currently remains unclear in the general case.)
  3. Is there a power series {\sum_{n=0}^\infty a_n x^n} that diverges everywhere (except at {x=0}), but which becomes pointwise convergent after dividing each of the monomials {a_n x^n} into pieces {a_n x^n = \sum_{j=1}^\infty a_{n,j} x^n} for some {a_{n,j}} summing absolutely to {a_n}, and then rearranging, i.e., there is some rearrangement {\sum_{m=1}^\infty a_{n_m, j_m} x^{n_m}} of {\sum_{n=0}^\infty \sum_{j=1}^\infty a_{n,j} x^n} that is pointwise convergent for every {x}? (Update: solved by Jacob Manaker in comments.)

Feel free to post answers or other thoughts on these questions in the comments.

Louis Esser, Burt Totaro, Chengxi Wang, and I have just uploaded to the arXiv our preprint “Varieties of general type with many vanishing plurigenera, and optimal sine and sawtooth inequalities“. This is an interdisciplinary paper that arose because, in order to optimize a certain algebraic geometry construction, it became necessary to solve a purely analytic question which, while simple, did not seem to have been previously studied in the literature. We were able to solve the analytic question exactly and thus fully optimize the algebraic geometry construction, though the analytic question may have some independent interest.

Let us first discuss the algebraic geometry application. Given a smooth complex {n}-dimensional projective variety {X} there is a standard line bundle {K_X} attached to it, known as the canonical line bundle; {n}-forms on the variety become sections of this bundle. The bundle may not actually admit global sections; that is to say, the dimension {h^0(X, K_X)} of global sections may vanish. But as one raises the canonical line bundle {K_X} to higher and higher powers to form further line bundles {mK_X}, the number of global sections tends to increase; in particular, the dimension {h^0(X, mK_X)} of global sections (known as the {m^{th}} plurigenus) always obeys an asymptotic of the form

\displaystyle  h^0(X, mK_X) = \mathrm{vol}(X) \frac{m^n}{n!} + O( m^{n-1} )

as {m \rightarrow \infty} for some non-negative number {\mathrm{vol}(X)}, which is called the volume of the variety {X}; this is an invariant that reveals some information about the birational geometry of {X}. For instance, if the canonical line bundle is ample (or more generally, nef), this volume is equal to the intersection number {K_X^n} (roughly speaking, the number of common zeroes of {n} generic sections of the canonical line bundle); this is a special case of the asymptotic Riemann-Roch theorem. In particular, the volume {\mathrm{vol}(X)} is a natural number in this case. However, it is possible for the volume to also be fractional in nature. One can then ask: how small can the volume {\mathrm{vol}(X)} get without vanishing entirely? (By definition, varieties with non-vanishing volume are known as varieties of general type.)

It follows from a deep result obtained independently by Hacon–McKernan, Takayama and Tsuji that there is a uniform lower bound for the volume {\mathrm{vol}(X)} of all {n}-dimensional projective varieties of general type. However, the precise lower bound is not known, and the current paper is a contribution towards probing this bound by constructing varieties of particularly small volume in the high-dimensional limit {n \rightarrow \infty}. Prior to this paper, the best such constructions of {n}-dimensional varieties basically had exponentially small volume, with a construction of volume at most {e^{-(1+o(1))n \log n}} given by Ballico–Pignatelli–Tasin, and an improved construction with a volume bound of {e^{-\frac{1}{3} n \log^2 n}} given by Totaro and Wang. In this paper, we obtain a variant construction with the somewhat smaller volume bound of {e^{-(1-o(1)) n^{3/2} \log^{1/2} n}}; the method also gives comparable bounds for some other related algebraic geometry statistics, such as the largest {m} for which the pluricanonical map associated to the linear system {|mK_X|} is not a birational embedding into projective space.

The space {X} is constructed by taking a general hypersurface of a certain degree {d} in a weighted projective space {P(a_0,\dots,a_{n+1})} and resolving the singularities. These varieties are relatively tractable to work with, as one can use standard algebraic geometry tools (such as the Reid–Tai inequality) to provide sufficient conditions to guarantee that the hypersurface has only canonical singularities and that the canonical bundle is a reflexive sheaf, which allows one to calculate the volume exactly in terms of the degree {d} and weights {a_0,\dots,a_{n+1}}. The problem then reduces to optimizing the resulting volume given the constraints needed for the above-mentioned sufficient conditions to hold. After working with a particular choice of weights (which consist of products of mostly consecutive primes, with each product occurring with suitable multiplicities {c_0,\dots,c_{b-1}}), the problem eventually boils down to trying to minimize the total multiplicity {\sum_{j=0}^{b-1} c_j}, subject to certain congruence conditions and other bounds on the {c_j}. Using crude bounds on the {c_j} eventually leads to a construction with volume at most {e^{-0.8 n^{3/2} \log^{1/2} n}}, but by taking advantage of the ability to “dilate” the congruence conditions and optimizing over all dilations, we are able to improve the {0.8} constant to {1-o(1)}.

Now it is time to turn to the analytic side of the paper by describing the optimization problem that we solve. We consider the sawtooth function {g: {\bf R} \rightarrow (-1/2,1/2]}, with {g(x)} defined as the unique real number in {(-1/2,1/2]} that is equal to {x} mod {1}. We consider a (Borel) probability measure {\mu} on the real line, and then compute the average value of this sawtooth function

\displaystyle  \mathop{\bf E}_\mu g(x) := \int_{\bf R} g(x)\ d\mu(x)

as well as various dilates

\displaystyle  \mathop{\bf E}_\mu g(kx) := \int_{\bf R} g(kx)\ d\mu(x)

of this expectation. Since {g} is bounded above by {1/2}, we certainly have the trivial bound

\displaystyle  \min_{1 \leq k \leq m} \mathop{\bf E}_\mu g(kx) \leq \frac{1}{2}.

However, this bound is not very sharp. For instance, the only way in which {\mathop{\bf E}_\mu g(x)} could attain the value of {1/2} is if the probability measure {\mu} was supported on half-integers, but in that case {\mathop{\bf E}_\mu g(2x)} would vanish. For the algebraic geometry application discussed above one is then led to the following question: for a given choice of {m}, what is the best upper bound {c^{\mathrm{saw}}_m} on the quantity {\min_{1 \leq k \leq m} \mathop{\bf E}_\mu g(kx)} that holds for all probability measures {\mu}?

If one considers the deterministic case in which {\mu} is a Dirac mass supported at some real number {x_0}, then the Dirichlet approximation theorem tells us that there is {1 \leq k \leq m} such that {kx_0} is within {\frac{1}{m+1}} of an integer, so we have

\displaystyle  \min_{1 \leq k \leq m} \mathop{\bf E}_\mu g(kx) \leq \frac{1}{m+1}

in this case, and this bound is sharp for deterministic measures {\mu}. Thus we have

\displaystyle  \frac{1}{m+1} \leq c^{\mathrm{saw}}_m \leq \frac{1}{2}.

However, both of these bounds turn out to be far from the truth, and the optimal value of {c^{\mathrm{saw}}_m} is comparable to {\frac{\log 2}{\log m}}. In fact we were able to compute this quantity precisely:

Theorem 1 (Optimal bound for sawtooth inequality) Let {m \geq 1}.
  • (i) If {m = 2^r} for some natural number {r}, then {c^{\mathrm{saw}}_m = \frac{1}{r+2}}.
  • (ii) If {2^r < m \leq 2^{r+1}} for some natural number {r}, then {c^{\mathrm{saw}}_m = \frac{2^r}{2^r(r+1) + m}}.
In particular, we have {c^{\mathrm{saw}}_m = \frac{\log 2 + o(1)}{\log m}} as {m \rightarrow \infty}.

We establish this bound through duality. Indeed, suppose we could find non-negative coefficients {a_1,\dots,a_m} such that one had the pointwise bound

\displaystyle  \sum_{k=1}^m a_k g(kx) \leq 1 \ \ \ \ \ (1)

for all real numbers {x}. Integrating this against an arbitrary probability measure {\mu}, we would conclude

\displaystyle  (\sum_{k=1}^m a_k) \min_{1 \leq k \leq m} \mathop{\bf E}_\mu g(kx) \leq \sum_{k=1}^m a_k \mathop{\bf E}_\mu g(kx) \leq 1

and hence

\displaystyle  c^{\mathrm{saw}}_m \leq \frac{1}{\sum_{k=1}^m a_k}.

Conversely, one can find lower bounds on {c^{\mathrm{saw}}_m} by selecting suitable candidate measures {\mu} and computing the means {\mathop{\bf E}_\mu g(kx)}. The theory of linear programming duality tells us that this method must give us the optimal bound, but one has to locate the optimal measure {\mu} and optimal weights {a_1,\dots,a_m}. This we were able to do by first doing some extensive numerics to discover these weights and measures for small values of {m}, and then doing some educated guesswork to extrapolate these examples to the general case, and then to verify the required inequalities. In case (i) the situation is particularly simple, as one can take {\mu} to be the discrete measure that assigns a probability {\frac{1}{r+2}} to the numbers {\frac{1}{2}, \frac{1}{4}, \dots, \frac{1}{2^r}} and the remaining probability of {\frac{2}{r+2}} to {\frac{1}{2^{r+1}}}, while the optimal weighted inequality (1) turns out to be

\displaystyle  2g(x) + \sum_{j=1}^r g(2^j x) \leq 1

which is easily proven by telescoping series. However the general case turned out to be significantly trickier to work out, and the verification of the optimal inequality required a delicate case analysis (reflecting the fact that equality is attained in this inequality in a large number of places).
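Both halves of case (i) are easy to sanity-check numerically. The following minimal sketch (again my own illustration, not taken from the paper) tests the certificate on a fine grid and confirms that the discrete measure described above attains the value {\frac{1}{r+2}}:

```python
import numpy as np

def g(x):
    # sawtooth: the representative of x mod 1 lying in (-1/2, 1/2]
    y = np.mod(x, 1.0)
    return np.where(y <= 0.5, y, y - 1.0)

r = 3
m = 2 ** r

# the certificate 2 g(x) + sum_{j=1}^r g(2^j x) <= 1, checked on one period
x = np.linspace(0.0, 1.0, 100001)
lhs = 2 * g(x) + sum(g(2.0 ** j * x) for j in range(1, r + 1))
print(lhs.max())                         # should be 1, attained e.g. at x = 1/2

# the candidate measure: mass 1/(r+2) at each of 1/2, ..., 1/2^r,
# and the remaining mass 2/(r+2) at 1/2^(r+1)
atoms = [2.0 ** -j for j in range(1, r + 2)]
probs = [1.0 / (r + 2)] * r + [2.0 / (r + 2)]
means = [sum(p * g(k * a) for a, p in zip(atoms, probs)) for k in range(1, m + 1)]
print(min(means), 1.0 / (r + 2))         # both should equal 1/(r+2)
```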

After solving the sawtooth problem, we became interested in the analogous question for the sine function, that is to say what is the best bound {c^{\sin}_m} for the inequality

\displaystyle  \min_{1 \leq k \leq m} \mathop{\bf E}_\mu \sin(kx) \leq c^{\sin}_m.

The left-hand side is the smallest imaginary part of the first {m} Fourier coefficients of {\mu}. To our knowledge this quantity has not previously been studied in the Fourier analysis literature. By adopting an approach similar to the one used for the sawtooth problem, we were able to compute this quantity exactly as well:

Theorem 2 For any {m \geq 1}, one has

\displaystyle  c^{\sin}_m = \frac{m+1}{2 \sum_{1 \leq j \leq m: j \hbox{ odd}} \cot \frac{\pi j}{2m+2}}.

In particular,

\displaystyle  c^{\sin}_m = \frac{\frac{\pi}{2} + o(1)}{\log m}.

Interestingly, a closely related cotangent sum recently appeared in this MathOverflow post. Verifying the lower bound on {c^{\sin}_m} boils down to choosing the right test measure {\mu}; it turns out that one should pick the probability measure supported on the points {\frac{\pi j}{m+1}} with {1 \leq j \leq m} odd, with probability proportional to {\cot \frac{\pi j}{2m+2}}, and the lower bound verification eventually follows from a classical identity

\displaystyle  \frac{m+1}{2} = \sum_{1 \leq j \leq m; j \hbox{ odd}} \cot \frac{\pi j}{2m+2} \sin \frac{\pi jk}{m+1}

for {1 \leq k \leq m}, first posed by Eisenstein in 1844 and proved by Stern in 1861. The upper bound arises from establishing the trigonometric inequality

\displaystyle  \frac{2}{(m+1)^2} \sum_{1 \leq k \leq m; k \hbox{ odd}}

\displaystyle \cot \frac{\pi k}{2m+2} ( (m+1-k) \sin kx + k \sin(m+1-k)x ) \leq 1

for all real numbers {x}, which to our knowledge is new; the left-hand side has a Fourier-analytic interpretation as convolving the Fejér kernel with a certain discretized square wave function, and this interpretation is used heavily in our proof of the inequality.
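For readers who wish to experiment, here is a short numerical sanity check (my own sketch) of the value in Theorem 2, the Eisenstein–Stern identity, the candidate measure, and the trigonometric inequality, for a small value of {m}:

```python
import numpy as np

m = 5
odd = np.arange(1, m + 1, 2)
cots = 1.0 / np.tan(np.pi * odd / (2 * m + 2))

c_sin = (m + 1) / (2 * cots.sum())               # the value in Theorem 2

# the Eisenstein-Stern identity, for each 1 <= k <= m
for k in range(1, m + 1):
    assert np.isclose((cots * np.sin(np.pi * odd * k / (m + 1))).sum(), (m + 1) / 2)

# candidate measure: atoms pi j/(m+1) (j odd), probabilities proportional to cot
probs = cots / cots.sum()
means = [(probs * np.sin(np.pi * odd * k / (m + 1))).sum() for k in range(1, m + 1)]
print(min(means), c_sin)                         # these should agree

# the trigonometric inequality, checked on a grid
x = np.linspace(0.0, 2 * np.pi, 200001)
lhs = sum(c * ((m + 1 - k) * np.sin(k * x) + k * np.sin((m + 1 - k) * x))
          for k, c in zip(odd, cots)) * 2 / (m + 1) ** 2
print(lhs.max())                                 # should be at most 1
```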

In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms {\| \|_{U^k}} for {k=1,2,\dots}. For finitely supported functions {f: {\bf Z} \rightarrow {\bf C}}, one can define the (non-normalised) Gowers norm {\|f\|_{\tilde U^k({\bf Z})}} by the formula

\displaystyle  \|f\|_{\tilde U^k({\bf Z})}^{2^k} := \sum_{n,h_1,\dots,h_k \in {\bf Z}} \prod_{\omega_1,\dots,\omega_k \in \{0,1\}} {\mathcal C}^{\omega_1+\dots+\omega_k} f(n+\omega_1 h_1 + \dots + \omega_k h_k)

where {{\mathcal C}} denotes complex conjugation; then, for any discrete interval {[N] = \{1,\dots,N\}} and any function {f: [N] \rightarrow {\bf C}}, we can define the (normalised) Gowers norm

\displaystyle  \|f\|_{U^k([N])} := \| f 1_{[N]} \|_{\tilde U^k({\bf Z})} / \|1_{[N]} \|_{\tilde U^k({\bf Z})}

where {f 1_{[N]}: {\bf Z} \rightarrow {\bf C}} is the extension of {f} by zero to all of {{\bf Z}}. Thus for instance

\displaystyle  \|f\|_{U^1([N])} = |\mathop{\bf E}_{n \in [N]} f(n)|

(which technically makes {\| \|_{U^1([N])}} a seminorm rather than a norm), and one can calculate

\displaystyle  \|f\|_{U^2([N])} \asymp (N \int_0^1 |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)|^4\ d\alpha)^{1/4} \ \ \ \ \ (1)

where {e(\theta) := e^{2\pi i \theta}}, and we use the averaging notation {\mathop{\bf E}_{n \in A} f(n) = \frac{1}{|A|} \sum_{n \in A} f(n)}.
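As a quick check on these normalisations: for the unnormalised norm one has the exact identities {\|f 1_{[N]}\|_{\tilde U^2({\bf Z})}^4 = \sum_h |\sum_n f(n) \overline{f(n+h)}|^2 = \int_0^1 |\sum_{n \in [N]} f(n) e(-\alpha n)|^4\ d\alpha}, while the normalised statement (1) holds only up to constants. A minimal numerical sketch of this equivalence, which is my own illustration rather than anything from the literature:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 12
f = rng.standard_normal(N) + 1j * rng.standard_normal(N)
F = lambda n: f[n] if 0 <= n < N else 0.0        # extension by zero off [N]

# the defining quadruple sum for ||f 1_[N]||_{tilde U^2}^4
quad = sum(F(n) * np.conj(F(n + h1)) * np.conj(F(n + h2)) * F(n + h1 + h2)
           for n in range(N) for h1 in range(-N + 1, N) for h2 in range(-N + 1, N))

# the same quantity via correlation sums
corr = sum(abs(sum(F(n) * np.conj(F(n + h)) for n in range(N))) ** 2
           for h in range(-N + 1, N))

# ... and via the fourth moment of the exponential sum (the equally spaced
# Riemann sum is exact here, as the integrand is a trigonometric polynomial)
alphas = np.arange(4096) / 4096
S4 = np.mean([abs(np.sum(f * np.exp(-2j * np.pi * a * np.arange(N)))) ** 4
              for a in alphas])

print(quad.real, corr, S4)                       # all three should agree
```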

The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials {P_1,\dots,P_m: {\bf Z}^d \rightarrow {\bf Z}} and functions {f_1,\dots,f_m: [N] \rightarrow {\bf C}}, we define the multilinear form

\displaystyle  \Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m) := \sum_{n \in {\bf Z}^d} \prod_{j=1}^m f_j 1_{[N]}(P_j(n)) / \sum_{n \in {\bf Z}^d} \prod_{j=1}^m 1_{[N]}(P_j(n))

(assuming that the denominator is finite and non-zero). Thus for instance

\displaystyle  \Lambda^{\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N]} f(n)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}}(f,g) = (\mathop{\bf E}_{n \in [N]} f(n)) (\mathop{\bf E}_{n \in [N]} g(n))

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N,N]} f(n) g(n+r) h(n+2r)

\displaystyle  \Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h) \asymp \mathop{\bf E}_{n \in [N]} \mathop{\bf E}_{r \in [-N^{1/2},N^{1/2}]} f(n) g(n+r) h(n+r^2)

where we view {\mathrm{n}, \mathrm{r}} as formal (indeterminate) variables, and {f,g,h: [N] \rightarrow {\bf C}} are understood to be extended by zero to all of {{\bf Z}}. These forms are used to count patterns in various sets; for instance, the quantity {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+2\mathrm{r}}(1_A,1_A,1_A)} is closely related to the number of length three arithmetic progressions contained in {A}. Let us informally say that a form {\Lambda^{P_1,\dots,P_m}(f_1,\dots,f_m)} is controlled by the {U^k[N]} norm if the form is small whenever {f_1,\dots,f_m: [N] \rightarrow {\bf C}} are {1}-bounded functions with at least one of the {f_j} small in {U^k[N]} norm. This definition was made more precise by Gowers and Wolf, who then defined the true complexity of a form {\Lambda^{P_1,\dots,P_m}} to be the least {s} such that {\Lambda^{P_1,\dots,P_m}} is controlled by the {U^{s+1}[N]} norm. For instance,
  • {\Lambda^{\mathrm{n}}} and {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}}} have true complexity {0};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}}} has true complexity {1};
  • {\Lambda^{\mathrm{n}, \mathrm{n} + \mathrm{r}, \mathrm{n} + \mathrm{2r}, \mathrm{n} + \mathrm{3r}}} has true complexity {2};
  • The form {\Lambda^{\mathrm{n}, \mathrm{n}+2}} (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
Roughly speaking, patterns of complexity {1} or less are amenable to being studied by classical Fourier analytic tools (the Hardy-Littlewood circle method); patterns of higher complexity can be handled (in principle, at least) by the methods of higher order Fourier analysis; and patterns of infinite complexity are out of range of both methods and are generally quite difficult to study. See these recent slides of mine (or this video of the lecture) for some further discussion.
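Returning to the forms {\Lambda^{P_1,\dots,P_m}}: they are straightforward to evaluate by brute force for small {N}, which can be a useful sanity check when normalisations matter. A minimal sketch (my own, with a hypothetical example set {A}):

```python
N = 100
A = {n for n in range(1, N + 1) if n % 3 != 0}   # a hypothetical example set
f = lambda n: 1.0 if n in A else 0.0             # 1_A, zero off [N]

# Lambda^{n, n+r, n+2r}(1_A, 1_A, 1_A): normalised count of 3-term progressions
num = sum(f(n) * f(n + r) * f(n + 2 * r)
          for n in range(1, N + 1) for r in range(-N, N + 1))
den = sum(1 for n in range(1, N + 1) for r in range(-N, N + 1)
          if 1 <= n + r <= N and 1 <= n + 2 * r <= N)
print(num / den)      # note progressions with r = 0 are included in this count
```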

Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials {P_1,\dots,P_m}; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.

The {U^1} (semi-)norm is so weak that it barely controls any averages at all. For instance, the average

\displaystyle  \Lambda^{2\mathrm{n}}(f) = \mathop{\bf E}_{n \in [N], \hbox{ even}} f(n)

is not controlled by the {U^1[N]} semi-norm: it is perfectly possible for a {1}-bounded function {f: [N] \rightarrow {\bf C}} to have vanishing {U^1([N])} norm while {\Lambda^{2\mathrm{n}}(f)} is large (consider for instance the parity function {f(n) := (-1)^n}).
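This example is easy to verify numerically; a tiny sketch:

```python
import numpy as np

N = 1000
n = np.arange(1, N + 1)
f = (-1.0) ** n                        # the parity function
print(abs(f.mean()))                   # ||f||_{U^1([N])}: exactly 0 for even N
print(f[n % 2 == 0].mean())            # Lambda^{2n}(f): equals 1
```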

Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the {U^1} and {U^2} norms, which I will call the {U^{1^+}} (or “profinite {U^1}“) norm:

\displaystyle  \| f\|_{U^{1^+}[N]} := \frac{1}{N} \sup_P |\sum_{n \in P} f(n)| = \sup_P | \mathop{\bf E}_{n \in [N]} f 1_P(n)|

where {P} ranges over all arithmetic progressions in {[N]}. This can easily be seen to be a norm on functions {f: [N] \rightarrow {\bf C}} that controls the {U^1[N]} norm. It is also basically controlled by the {U^2[N]} norm for {1}-bounded functions {f}; indeed, if {P} is an arithmetic progression in {[N]} of some spacing {q \geq 1}, then we can write {P} as the intersection of an interval {I} with a residue class modulo {q}, and from Fourier expansion we have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \sup_\alpha |\mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)|.

If we let {\psi} be a standard bump function supported on {[-1,1]} with total mass one, and let {\delta>0} be a parameter, then

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_I(n) e(\alpha n)

\displaystyle \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N})

\displaystyle  1_I(n+h+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle  \ll |\mathop{\bf E}_{n \in [-N,2N]; h, k \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k) f(n+h+k) e(\alpha(n+h+k))|

\displaystyle + \delta

(extending {f} by zero outside of {[N]}), as can be seen by using the triangle inequality and the estimate

\displaystyle  \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+h+k) - \mathop{\bf E}_{h \in [-N,N]} \frac{1}{\delta} \psi(\frac{h}{\delta N}) 1_I(n+k)

\displaystyle \ll (1 + \mathrm{dist}(n+k, I) / \delta N)^{-2}.

After some Fourier expansion of {\frac{1}{\delta} \psi(\frac{h}{\delta N})} we now have

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \sup_{\alpha,\beta} |\mathop{\bf E}_{n \in [N]; h, k \in [-N,N]} e(\beta h + \alpha (n+h+k))

\displaystyle 1_P(n+k) f(n+h+k)| + \delta.

Writing {\beta h + \alpha(n+h+k)} as a linear combination of {n, n+h, n+k} and using the Gowers–Cauchy–Schwarz inequality, we conclude

\displaystyle  \mathop{\bf E}_{n \in [N]} f 1_P(n) \ll \frac{1}{\delta} \|f\|_{U^2([N])} + \delta

hence on optimizing in {\delta} we have

\displaystyle  \| f\|_{U^{1^+}[N]} \ll \|f\|_{U^2[N]}^{1/2}.

Forms which are controlled by the {U^{1^+}} norm (but not {U^1}) would then have their true complexity adjusted to {0^+} with this insertion.
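The {U^{1^+}[N]} norm is also easy to compute by brute force for small {N}: every progression in {[N]} is a prefix of a maximal progression with a given start and spacing, so it suffices to scan partial sums. This gives a way to sanity-check the bound {\| f\|_{U^{1^+}[N]} \ll \|f\|_{U^2[N]}^{1/2}} numerically; the following is my own sketch (and of course the comparison is only up to the implied constants):

```python
import numpy as np

def U1plus(f):
    # brute-force sup over progressions P in [N] of |sum_{n in P} f(n)| / N;
    # each P is a prefix of the maximal progression with start a and spacing q
    N = len(f)
    best = 0.0
    for a in range(N):
        for q in range(1, N + 1):
            s = 0.0
            for n in range(a, N, q):
                s += f[n]
                best = max(best, abs(s))
    return best / N

def U2(f):
    # normalised U^2([N]) norm via the correlation-sum identity noted earlier
    N = len(f)
    F = lambda n: f[n] if 0 <= n < N else 0.0
    quad = sum(abs(sum(F(n) * np.conj(F(n + h)) for n in range(N))) ** 2
               for h in range(-N + 1, N))
    ones = sum((N - abs(h)) ** 2 for h in range(-N + 1, N))  # same sum for 1_[N]
    return (quad / ones) ** 0.25

rng = np.random.default_rng(1)
N = 40
f = np.exp(2j * np.pi * rng.random(N))   # a random 1-bounded function
print(U1plus(f), U2(f) ** 0.5)           # the first should be O(the second)
```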

The {U^{1^+}} norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}, \mathrm{n}+\mathrm{r}^2}(f,g,h)} had true complexity {0^+} in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function {h}; for the first two functions {f,g} one needs to localize the {U^{1^+}} norm to intervals of length {\sim \sqrt{N}}. But I will ignore this technical point to keep the exposition simple.] The weaker claim that {\Lambda^{\mathrm{n}, \mathrm{n}+\mathrm{r}^2}(f,g)} has true complexity {0^+} is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).

The well known inverse theorem for the {U^2} norm tells us that if a {1}-bounded function {f} has {U^2[N]} norm at least {\eta} for some {0 < \eta < 1}, then there is a Fourier phase {n \mapsto e(\alpha n)} such that

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2;

this follows easily from (1) and Plancherel’s theorem. Conversely, from the Gowers–Cauchy–Schwarz inequality one has

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \|f\|_{U^2[N]}.

For {U^1[N]} one has a trivial inverse theorem; by definition, the {U^1[N]} norm of {f} is at least {\eta} if and only if

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n)| \geq \eta.

Thus the frequency {\alpha} appearing in the {U^2} inverse theorem can be taken to be zero when working instead with the {U^1} norm.

For {U^{1^+}} one has the intermediate situation in which the frequency {\alpha} is not taken to be zero, but is instead major arc. Indeed, suppose that {f} is {1}-bounded with {\|f\|_{U^{1^+}[N]} \geq \eta}, thus

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_P(n) f(n)| \geq \eta

for some progression {P}. This forces the spacing {q} of this progression to be {\ll 1/\eta}. We write the above inequality as

\displaystyle  |\mathop{\bf E}_{n \in [N]} 1_{n=b\ (q)} 1_I(n) f(n)| \geq \eta

for some residue class {b\ (q)} and some interval {I}. By Fourier expansion and the triangle inequality we then have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) 1_I(n) f(n)| \geq \eta

for some integer {a}. Convolving {1_I} by {\psi_\delta: n \mapsto \frac{1}{N\delta} \psi(\frac{n}{N\delta})} for {\delta} a small multiple of {\eta} and {\psi} a Schwartz function of unit mass with Fourier transform supported on {[-1,1]}, we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) (1_I * \psi_\delta)(n) f(n)| \gg \eta.

The Fourier transform {\xi \mapsto \sum_n 1_I * \psi_\delta(n) e(- \xi n)} of {1_I * \psi_\delta} is bounded by {O(N)} and supported on {[-\frac{1}{\delta N},\frac{1}{\delta N}]}, thus by Fourier expansion and the triangle inequality we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} e(-an/q) e(-\xi n) f(n)| \gg \eta^2

for some {\xi \in [-\frac{1}{\delta N},\frac{1}{\delta N}]}, so in particular {\xi = O(\frac{1}{\eta N})}. Thus we have

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \gg \eta^2 \ \ \ \ \ (2)

for some {\alpha} of the major arc form {\alpha = \frac{a}{q} + O(\frac{1}{\eta N})} with {1 \leq q \leq 1/\eta}. Conversely, for {\alpha} of this form, some routine summation by parts gives the bound

\displaystyle  |\mathop{\bf E}_{n \in [N]} f(n) e(-\alpha n)| \ll \frac{q}{\eta} \|f\|_{U^{1^+}[N]} \ll \frac{1}{\eta^2} \|f\|_{U^{1^+}[N]}

so if (2) holds for a {1}-bounded {f} then one must have {\|f\|_{U^{1^+}[N]} \gg \eta^4}.
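This dichotomy is also visible numerically: a major arc phase has a noticeably larger {U^{1^+}} norm than a generic phase. A small illustration (my own sketch; the specific frequencies are arbitrary choices):

```python
import numpy as np

def U1plus(f):
    # brute-force sup over progressions of |sum_{n in P} f(n)| / N, as before
    N = len(f)
    best = 0.0
    for a in range(N):
        for q in range(1, N + 1):
            s = 0.0
            for n in range(a, N, q):
                s += f[n]
                best = max(best, abs(s))
    return best / N

N = 120
n = np.arange(N)
major = np.exp(2j * np.pi * (n / 3 + 0.4 * n / N))   # alpha = 1/3 + O(1/N)
minor = np.exp(2j * np.pi * np.sqrt(2) * n)          # a generic frequency
print(U1plus(major), U1plus(minor))                  # noticeably large vs small
```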

Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes {{\mathcal F}} of functions (where each class of functions {{\mathcal F}} induces a dual norm {\| f \|_{{\mathcal F}^*} := \sup_{\phi \in {\mathcal F}} \mathop{\bf E}_{n \in[N]} f(n) \overline{\phi(n)}}):

Here I have included the three classes of functions that one can choose from for the {U^3} inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.

The Gowers norms have counterparts for measure-preserving systems {(X,T,\mu)}, known as Host-Kra seminorms. The {U^1(X)} norm can be defined for {f \in L^\infty(X)} as

\displaystyle  \|f\|_{U^1(X)} := \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^n f|\ d\mu

and the {U^2} norm can be defined as

\displaystyle  \|f\|_{U^2(X)}^4 := \lim_{N \rightarrow \infty} \mathop{\bf E}_{n \in [N]} \| T^n f \overline{f} \|_{U^1(X)}^2.

The {U^1(X)} seminorm is orthogonal to the invariant factor {Z^0(X)} (generated by the (almost everywhere) invariant measurable subsets of {X}) in the sense that a function {f \in L^\infty(X)} has vanishing {U^1(X)} seminorm if and only if it is orthogonal to all {Z^0(X)}-measurable (bounded) functions. Similarly, the {U^2(X)} norm is orthogonal to the Kronecker factor {Z^1(X)}, generated by the eigenfunctions of {X} (that is to say, those {f} obeying an identity {Tf = \lambda f} for some {T}-invariant {\lambda}); for ergodic systems, it is the largest factor isomorphic to rotation on a compact abelian group. In analogy to the Gowers {U^{1^+}[N]} norm, one can then define the Host-Kra {U^{1^+}(X)} seminorm by

\displaystyle  \|f\|_{U^{1^+}(X)} := \sup_{q \geq 1} \frac{1}{q} \lim_{N \rightarrow \infty} \int_X |\mathop{\bf E}_{n \in [N]} T^{qn} f|\ d\mu;

it is orthogonal to the profinite factor {Z^{0^+}(X)}, generated by the periodic sets of {X} (or equivalently, by those eigenfunctions whose eigenvalue is a root of unity); for ergodic systems, it is the largest factor isomorphic to rotation on a profinite abelian group.
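To see the distinction concretely, consider the cyclic system {X = {\bf Z}/12{\bf Z}} with the shift {Tx = x+1} and an eigenfunction whose eigenvalue is a root of unity: its {U^1(X)} seminorm vanishes, while its {U^{1^+}(X)} seminorm equals {\frac{1}{12}}. A finite-{N} numerical sketch of this computation (my own illustration; the truncations of the limit in {N} and of the supremum in {q} are ad hoc):

```python
import numpy as np

X = 12
f = np.exp(2j * np.pi * np.arange(X) / X)    # eigenfunction with eigenvalue e(1/12)

def ergodic_avg(q, N=6000):
    # E_{n in [N]} T^{qn} f on X = Z/12Z with the shift T x = x + 1
    return np.mean([np.roll(f, -q * n) for n in range(1, N + 1)], axis=0)

U1 = np.mean(np.abs(ergodic_avg(1)))                                  # U^1(X)
U1plus = max(np.mean(np.abs(ergodic_avg(q))) / q for q in range(1, 2 * X + 1))
print(U1, U1plus)                            # 0 versus 1/12
```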
