Louis Esser, Burt Totaro, Chengxi Wang, and myself have just uploaded to the arXiv our preprint “Varieties of general type with many vanishing plurigenera, and optimal sine and sawtooth inequalities“. This is an interdisciplinary paper that arose because in order to optimize a certain algebraic geometry construction it became necessary to solve a purely analytic question which, while simple, did not seem to have been previously studied in the literature. We were able to solve the analytic question exactly and thus fully optimize the algebraic geometry construction, though the analytic question may have some independent interest.
Let us first discuss the algebraic geometry application. Given a smooth complex $n$-dimensional projective variety $X$, there is a standard line bundle $K_X$ attached to it, known as the canonical line bundle; $n$-forms on the variety become sections of this bundle. The bundle may not actually admit global sections; that is to say, the dimension $h^0(X, K_X)$ of the space of global sections may vanish. But as one raises the canonical line bundle $K_X$ to higher and higher powers to form further line bundles $K_X^{\otimes m}$, the number of global sections tends to increase; in particular, the dimension $h^0(X, K_X^{\otimes m})$ of the space of global sections (known as the $m^{\mathrm{th}}$ plurigenus) always obeys an asymptotic of the form
$$ h^0(X, K_X^{\otimes m}) = \left( \mathrm{vol}(X) + o(1) \right) \frac{m^n}{n!} $$
as $m \rightarrow \infty$, for a non-negative quantity $\mathrm{vol}(X)$ known as the volume of $X$.
It follows from a deep result obtained independently by Hacon–McKernan, Takayama and Tsuji that there is a uniform lower bound for the volume of all $n$-dimensional projective varieties of general type. However, the precise lower bound is not known, and the current paper is a contribution towards probing this bound by constructing varieties of particularly small volume in the high-dimensional limit $n \rightarrow \infty$. Prior to this paper, the best such constructions of $n$-dimensional varieties basically had exponentially small volume, with a construction given by Ballico–Pignatelli–Tasin, and an improved volume bound given by Totaro and Wang. In this paper, we obtain a variant construction with a somewhat smaller volume bound; the method also gives comparable bounds for some other related algebraic geometry statistics, such as the largest $m$ for which the pluricanonical map associated to the linear system $|mK_X|$ is not a birational embedding into projective space.
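As a point of comparison (this is a standard adjunction computation, not taken from the paper): for a smooth hypersurface $X$ of degree $d$ in ordinary projective space ${\bf P}^{n+1}$ one has $K_X = {\mathcal O}_X(d-n-2)$, so $X$ is of general type once $d \geq n+3$, and then
$$ \mathrm{vol}(X) = (K_X)^n = (d-n-2)^n d \geq n+3. $$
In particular, hypersurfaces in ordinary projective space can never have small volume, which is one reason why the constructions below work with hypersurfaces in weighted projective spaces instead.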
The space $X$ is constructed by taking a general hypersurface of a certain degree $d$ in a weighted projective space ${\bf P}(a_0,\dots,a_{n+1})$ and resolving the singularities. These varieties are relatively tractable to work with, as one can use standard algebraic geometry tools (such as the Reid–Tai inequality) to provide sufficient conditions to guarantee that the hypersurface has only canonical singularities and that the canonical bundle is a reflexive sheaf, which allows one to calculate the volume exactly in terms of the degree $d$ and the weights $a_0,\dots,a_{n+1}$. The problem then reduces to optimizing the resulting volume given the constraints needed for the above-mentioned sufficient conditions to hold. After working with a particular choice of weights (which consist of products of mostly consecutive primes, with each product occurring with suitable multiplicities), the problem eventually boils down to trying to minimize the total multiplicity, subject to certain congruence conditions and other bounds on the individual multiplicities. Using crude bounds on the multiplicities eventually leads to a construction with a certain volume bound, but by taking advantage of the ability to “dilate” the congruence conditions and optimizing over all dilations, we are able to improve the constant in this bound.
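For readers who like to experiment, here is a small computational sketch (my own illustration, not code from the paper) of the standard volume formula for a hypersurface $X_d$ in ${\bf P}(a_0,\dots,a_{n+1})$, namely $\mathrm{vol}(X) = (d - \sum_i a_i)^n \, d / (a_0 \cdots a_{n+1})$, which is valid under suitable hypotheses (well-formedness, quasi-smoothness, canonical singularities) of the sort discussed above; the function name and sample inputs are mine.

    from fractions import Fraction

    def weighted_hypersurface_volume(d, weights):
        """Volume of a general degree-d hypersurface X in P(a_0,...,a_{n+1}),
        via the formula vol(X) = (d - sum(a_i))^n * d / (a_0*...*a_{n+1}).
        The formula requires suitable hypotheses (well-formedness,
        quasi-smoothness, canonical singularities); this sketch does not
        check them.  Here n = len(weights) - 2 is the dimension of X."""
        n = len(weights) - 2
        amplitude = d - sum(weights)   # K_X = O_X(d - sum of weights)
        if amplitude <= 0:
            raise ValueError("need d > sum of weights for general type")
        denom = 1
        for a in weights:
            denom *= a
        return Fraction(amplitude ** n * d, denom)

    # Sanity check: a smooth sextic surface in ordinary P^3 has K_X = O_X(2)
    # and volume K_X^2 = 2^2 * 6 = 24.
    print(weighted_hypersurface_volume(6, [1, 1, 1, 1]))
    # A toy weighted example (weights chosen only for illustration):
    print(weighted_hypersurface_volume(66, [1, 5, 22, 33]))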
Now it is time to turn to the analytic side of the paper by describing the optimization problem that we solve. We consider the sawtooth function $x \mapsto \{x\}$, with $\{x\}$ defined as the unique real number in an interval of length one around the origin that is equal to $x$ mod $1$. We consider a (Borel) probability measure $\mu$ on the real line, and then compute the average values
$$ \int_{\bf R} \{ n x \}\ d\mu(x) $$
of this sawtooth function at the dilates $nx$ for natural numbers $n$. If one considers the deterministic case in which $\mu$ is a Dirac mass supported at some real number $\alpha$, then the Dirichlet approximation theorem tells us that for any natural number $N$ there is $1 \leq n \leq N$ such that $n\alpha$ is within $\frac{1}{N+1}$ of an integer, so we have
$$ \min_{1 \leq n \leq N} \left| \int_{\bf R} \{ n x \}\ d\mu(x) \right| = \min_{1 \leq n \leq N} |\{ n \alpha \}| \leq \frac{1}{N+1}. $$
Theorem 1 (Optimal bound for sawtooth inequality) Let.
In particular, we have:
- (i) If for some natural number , then .
- (ii) If for some natural number , then .
as .
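To make the setup concrete, here is a small numerical experiment (my own; the choice of half-open interval for the sawtooth and all variable names are conventions adopted here for illustration, not necessarily those of the paper), checking the Dirichlet-type bound in the deterministic case and evaluating the sawtooth averages against a general discrete measure:

    import math

    def sawtooth(x):
        """The representative of x mod 1 in [-1/2, 1/2) (one possible convention)."""
        return (x + 0.5) % 1.0 - 0.5

    def sawtooth_average(measure, n):
        """Average of {n x} against a discrete probability measure, given as a
        list of (point, weight) pairs with the weights summing to 1."""
        return sum(w * sawtooth(n * x) for x, w in measure)

    # Deterministic case: a Dirac mass at alpha.  Dirichlet approximation
    # guarantees some 1 <= n <= N with |{n*alpha}| <= 1/(N+1).
    alpha, N = math.sqrt(2), 20
    dirac = [(alpha, 1.0)]
    best = min(abs(sawtooth_average(dirac, n)) for n in range(1, N + 1))
    print(best, "<=", 1 / (N + 1), best <= 1 / (N + 1))

    # A non-deterministic example: the averages E_mu {n x} for a measure
    # spread over two points, for the first few values of n.
    mu = [(1 / 3, 0.5), (2 / 3 + 0.01, 0.5)]
    print([round(sawtooth_average(mu, n), 3) for n in range(1, 8)])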
We establish this bound through duality. Indeed, suppose we could find non-negative coefficients such that one had the pointwise bound
After solving the sawtooth problem, we became interested in the analogous question for the sine function, that is to say what is the best bound for the inequality
Theorem 2 For any, one has
In particular,
Interestingly, a closely related cotangent sum recently appeared in this MathOverflow post. Verifying the lower bound on boils down to choosing the right test measure
; it turns out that one should pick the probability measure supported on the points with odd, with probability proportional to , and the lower bound verification eventually follows from a classical identity
In the modern theory of higher order Fourier analysis, a key role is played by the Gowers uniformity norms $\| \|_{U^k}$ for $k = 1, 2, 3, \dots$. For finitely supported functions $f: {\bf Z} \rightarrow {\bf C}$, one can define the (non-normalised) Gowers norm $\|f\|_{U^k({\bf Z})}$ by the formula
$$ \|f\|_{U^k({\bf Z})}^{2^k} := \sum_{x, h_1,\dots,h_k \in {\bf Z}} \prod_{\omega \in \{0,1\}^k} {\mathcal C}^{|\omega|} f(x + \omega_1 h_1 + \dots + \omega_k h_k), $$
where ${\mathcal C}: z \mapsto \overline{z}$ denotes complex conjugation and $|\omega| := \omega_1 + \dots + \omega_k$.
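To make the definition concrete, here is a brute-force evaluation of the $k=2$ case for a finitely supported function (an illustrative sketch of my own, hard-coding $k=2$ and making no attempt at efficiency):

    import itertools

    def gowers_u2_fourth_power(f):
        """The (non-normalised) U^2 norm of a finitely supported f: Z -> C,
        raised to the fourth power:
            sum over x, h1, h2 of f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2).
        Here f is a dict {integer: complex value}; values off the support are 0,
        so it suffices to let x, x+h1, x+h2 range over the support."""
        support = list(f)
        total = 0j
        for x, y, z in itertools.product(support, repeat=3):
            w = y + z - x        # w = x + h1 + h2 with h1 = y - x, h2 = z - x
            if w in f:
                total += f[x] * f[y].conjugate() * f[z].conjugate() * f[w]
        return total.real        # the sum is automatically real and non-negative

    # Example: the indicator function of the interval {0, ..., 9}.
    f = {n: 1 + 0j for n in range(10)}
    print(gowers_u2_fourth_power(f))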
The significance of the Gowers norms is that they control other multilinear forms that show up in additive combinatorics. Given any polynomials $P_1,\dots,P_k$ and functions $f_1,\dots,f_k$, we define the multilinear form (a brute-force numerical sketch of one representative example of such a form appears after the list below)
- and have true complexity ;
- has true complexity ;
- has true complexity ;
- The form (which among other things could be used to count twin primes) has infinite true complexity (which is quite unfortunate for applications).
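As promised above, here is a brute-force sketch (my own illustration; the specific forms and complexities in the bullet points are not reproduced here) of one representative multilinear form of this type, the classical three-term arithmetic progression count, which has true complexity $1$ (it is controlled by the $U^2$ norm):

    def three_ap_form(f1, f2, f3):
        """Brute-force evaluation of the three-term progression form
            sum over n, r of f1(n) * f2(n + r) * f3(n + 2r)
        for finitely supported real-valued functions given as dicts.
        The shift r ranges over all integers (including zero and negatives)."""
        total = 0.0
        for n, v1 in f1.items():
            for m, v2 in f2.items():
                r = m - n
                v3 = f3.get(n + 2 * r)
                if v3 is not None:
                    total += v1 * v2 * v3
        return total

    # Example: counting three-term progressions inside {0, ..., 19}.
    ind = {n: 1.0 for n in range(20)}
    print(three_ap_form(ind, ind, ind))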
Gowers and Wolf formulated a conjecture on what this complexity should be, at least for linear polynomials $P_1,\dots,P_k$; Ben Green and I thought we had resolved this conjecture back in 2010, though it turned out there was a subtle gap in our arguments and we were only able to resolve the conjecture in a partial range of cases. However, the full conjecture was recently resolved by Daniel Altman.
The $U^1$ (semi-)norm is so weak that it barely controls any averages at all. For instance the average
Because of this, I propose inserting an additional norm in the Gowers uniformity norm hierarchy between the $U^1$ and $U^2$ norms, which I will call the $U^{1^+}$ (or “profinite $U^1$”) norm:
The $U^{1^+}$ norm recently appeared implicitly in work of Peluse and Prendiville, who showed that the form had true complexity in this notation (with polynomially strong bounds). [Actually, strictly speaking this control was only shown for the third function $f_3$; for the first two functions $f_1, f_2$ one needs to localize the $U^{1^+}$ norm to intervals of length . But I will ignore this technical point to keep the exposition simple.] The weaker claim that has true complexity is substantially easier to prove (one can apply the circle method together with Gauss sum estimates).
The well known inverse theorem for the $U^2$ norm tells us that if a $1$-bounded function $f$ has $U^2$ norm at least $\eta$ for some $0 < \eta \leq 1$, then there is a Fourier phase $n \mapsto e(\alpha n)$ such that
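For orientation, the conclusion here takes the standard shape (I will not be careful about the exact normalisation conventions, which may differ from the ones used elsewhere in this post)
$$ \left| {\bf E}_{n \in [N]} f(n) e(-\alpha n) \right| \gg \eta^2, $$
so that largeness of the $U^2$ norm forces correlation with a single linear phase $n \mapsto e(\alpha n)$.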
For the $U^1$ norm one has a trivial inverse theorem; by definition, the $U^1$ norm of $f$ is at least $\eta$ if and only if $|{\bf E}_{n \in [N]} f(n)| \geq \eta$, that is to say if and only if $f$ correlates with the constant phase (the case $\alpha = 0$ of a Fourier phase).
For the $U^{1^+}$ norm one has the intermediate situation in which the frequency $\alpha$ is not taken to be zero, but is instead required to be major arc (that is, close to a rational with small denominator). Indeed, suppose that $f$ is $1$-bounded with $\|f\|_{U^{1^+}[N]} \geq \eta$, thus
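As a crude numerical illustration of this major-arc/“profinite” point of view, here is a proxy quantity of my own devising (purely for experimentation; it is not the precise definition of the norm above): the largest normalised average of $f$ along residue classes to small moduli, which is large exactly when $f$ correlates with periodic structure.

    import cmath

    def periodic_correlation(f, N, Q):
        """A rough 'profinite U^1'-style proxy (illustrative only): the largest
        normalised average of f over a residue class a (mod q) inside [0, N),
        over all moduli 1 <= q <= Q."""
        best = 0.0
        for q in range(1, Q + 1):
            for a in range(q):
                vals = [f(n) for n in range(a, N, q)]
                if vals:
                    best = max(best, abs(sum(vals)) / len(vals))
        return best

    N, Q = 10_000, 20
    # A periodic phase of period 7 correlates perfectly with a residue class ...
    print(periodic_correlation(lambda n: cmath.exp(2j * cmath.pi * 3 * n / 7), N, Q))
    # ... while a minor-arc phase such as e(n*sqrt(2)) gives a small proxy value.
    print(periodic_correlation(lambda n: cmath.exp(2j * cmath.pi * (2 ** 0.5) * n), N, Q))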
Here is a diagram showing some of the control relationships between various Gowers norms, multilinear forms, and duals of classes of functions (where each class of functions ${\mathcal F}$ induces a dual norm $\| f \|_{{\mathcal F}^*} := \sup_{F \in {\mathcal F}} |\langle f, F \rangle|$):
Here I have included the three classes of functions that one can choose from for the inverse theorem, namely degree two nilsequences, bracket quadratic phases, and local quadratic phases, as well as the more narrow class of globally quadratic phases.
The Gowers norms have counterparts for measure-preserving systems $(X, {\mathcal X}, \mu, T)$, known as Host-Kra seminorms. The $U^k$ seminorm can be defined for $f \in L^\infty(X)$ as
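One standard way to set this up (I will not be careful here about matching the exact normalisation conventions) is inductive: writing $Tf := f \circ T$, one takes
$$ \|f\|_{U^1(X)} := \lim_{H \rightarrow \infty} \Big\| \frac{1}{H} \sum_{h=1}^H T^h f \Big\|_{L^2(X)}, \qquad \|f\|_{U^{k+1}(X)}^{2^{k+1}} := \lim_{H \rightarrow \infty} \frac{1}{H} \sum_{h=1}^H \big\| f \cdot T^h \overline{f} \big\|_{U^k(X)}^{2^k}, $$
with the existence of the limits (via the mean ergodic theorem and the Host-Kra theory respectively) being part of the construction; these are the ergodic analogues of the combinatorial formulae above.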
Joni Teräväinen and myself have just uploaded to the arXiv our preprint “Quantitative bounds for Gowers uniformity of the Möbius and von Mangoldt functions“. This paper makes quantitative the Gowers uniformity estimates on the Möbius function $\mu$ and the von Mangoldt function $\Lambda$.
To discuss the results we first discuss the situation of the Möbius function, which is technically simpler in some (though not all) ways. We assume familiarity with Gowers norms and standard notations around these norms, such as the averaging notation ${\bf E}_{n \in [N]} f(n) := \frac{1}{N} \sum_{n=1}^N f(n)$ and the exponential notation $e(\theta) := e^{2\pi i \theta}$. The prime number theorem in qualitative form asserts that
$$ {\bf E}_{n \in [N]} \mu(n) = o(1) $$
as $N \rightarrow \infty$.
Once one restricts to arithmetic progressions, the situation gets worse: the Siegel-Walfisz theorem gives the bound
$$ {\bf E}_{n \in [N]} \mu(n) 1_{n \equiv a\ (q)} \ll_A \log^{-A} N \ \ \ \ \ (1) $$
for any residue class $a\ (q)$ and any $A > 0$, but with an implied constant that is ineffective. In 1937, Davenport was able to show the discorrelation estimate
$$ \sup_{\alpha \in {\bf R}} |{\bf E}_{n \in [N]} \mu(n) e(\alpha n)| \ll_A \log^{-A} N $$
for any $A > 0$, again with an ineffective implied constant.
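As a quick numerical illustration of this kind of cancellation (a toy experiment of my own, using a hand-rolled Möbius sieve; nothing here is taken from the paper), one can compute the normalised exponential sum ${\bf E}_{n \in [N]} \mu(n) e(\alpha n)$ for a few frequencies $\alpha$ and observe that it is small:

    import cmath

    def mobius_sieve(N):
        """Return a list mu[0..N] of Mobius function values via a simple sieve."""
        mu = [1] * (N + 1)
        is_prime = [True] * (N + 1)
        for p in range(2, N + 1):
            if is_prime[p]:
                for m in range(p, N + 1, p):
                    if m > p:
                        is_prime[m] = False
                    mu[m] *= -1
                for m in range(p * p, N + 1, p * p):
                    mu[m] = 0
        mu[0] = 0
        return mu

    def mobius_exp_sum(mu, N, alpha):
        """The normalised exponential sum E_{n in [N]} mu(n) e(alpha n)."""
        return sum(mu[n] * cmath.exp(2j * cmath.pi * alpha * n)
                   for n in range(1, N + 1)) / N

    N = 10_000
    mu = mobius_sieve(N)
    for alpha in (0.0, 1 / 3, 2 ** 0.5 - 1):
        print(alpha, abs(mobius_exp_sum(mu, N, alpha)))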
For the situation with the $U^3$ norm the previously known results were much weaker. Ben Green and I showed that
For higher norms , the situation is even worse, because the quantitative inverse theory for these norms is poorer, and indeed it was only with the recent work of Manners that any such bound is available at all (at least for
). Basically, Manners establishes if
Our first result gives an effective decay bound:

Theorem 1 For any $k \geq 2$, we have
$$ \| \mu \|_{U^k([N])} \ll_k (\log\log N)^{-c_k} $$
for some $c_k > 0$. The implied constants are effective.
This is off by a logarithm from the best effective bound (2) in the $k=2$ case. In the $k=3$ case there is some hope to remove this logarithm based on the improved quantitative inverse theory currently available in this case, but there is a technical obstruction to doing so which we will discuss later in this post. For $k \geq 4$ the above bound is the best one could hope to achieve purely using the quantitative inverse theory of Manners.
We have analogues of all the above results for the von Mangoldt function $\Lambda$. Here a complication arises that $\Lambda$ does not have mean close to zero, and one has to subtract off some suitable approximant to $\Lambda$ before one would expect good Gowers norm bounds. For the prime number theorem one can just use the approximant $1$, giving
$$ {\bf E}_{n \in [N]} (\Lambda(n) - 1) = o(1) $$
as $N \rightarrow \infty$.
Theorem 2 For any, we have
for some
. The implied constants are effective.
By standard methods, this result also gives quantitative asymptotics for counting solutions to various systems of linear equations in primes, with error terms that gain a factor of with respect to the main term.
We now discuss the methods of proof, focusing first on the case of the Möbius function. Suppose first that there is no “Siegel zero”, by which we mean a quadratic character $\chi$ of some conductor $q$ whose $L$-function $L(\cdot,\chi)$ has a real zero $\beta$ with $1 - \beta \leq \frac{c}{\log q}$ for some small absolute constant $c > 0$. In this case the Siegel-Walfisz bound (1) improves to a quasipolynomial bound
Now suppose we have a Siegel zero $\beta$. In this case the bound (5) will not hold in general, and hence also (6) will not hold either. Here, the usual way out (while still maintaining effective estimates) is to approximate $\mu$ not by $0$, but rather by a more complicated approximant that takes the Siegel zero into account, and in particular is such that one has the (effective) pseudopolynomial bound
For the analogous problem with the von Mangoldt function (assuming a Siegel zero for sake of discussion), the approximant is simpler; we ended up using
In principle, the above results can be improved for $k=3$ due to the stronger quantitative inverse theorems in the $U^3$ setting. However, there is a bottleneck that prevents us from achieving this, namely that the equidistribution theory of two-step nilmanifolds has exponents which are exponential in the dimension rather than polynomial in the dimension, and as a consequence we were unable to improve upon the doubly logarithmic results. Specifically, if one is given a sequence of bracket quadratics that fails to be equidistributed, one would need to establish a nontrivial linear relationship modulo 1 between the frequencies involved (up to small errors), with coefficients whose size is polynomial in the relevant parameters; current methods only give coefficient bounds that are exponential in the dimension. An old result of Schmidt demonstrates, as a proof of concept, that these sorts of polynomial dependencies in the exponents are possible in principle, but actually implementing Schmidt’s methods here seems to be a quite non-trivial task. There is also another possible route to removing a logarithm, which is to strengthen the inverse $U^3$ theorem to make the dimension of the nilmanifold logarithmic in the uniformity parameter rather than polynomial. Again, the Freiman-Bilu theorem (see for instance this paper of Ben and myself) provides a proof of concept that such an improvement in dimension is possible, but some work would be needed to implement it.
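To give a feel for what a “bracket quadratic” is, here is a toy equidistribution experiment (my own; the particular sequence, frequencies and test used below are arbitrary illustrative choices, not ones from the paper): we generate the sequence $n \mapsto \lfloor \alpha n \rfloor \beta n \bmod 1$ and compute a few of its Weyl sums, smallness of which indicates equidistribution.

    import cmath
    import math

    def bracket_quadratic(alpha, beta, n):
        """The bracket quadratic value floor(alpha * n) * beta * n, reduced mod 1."""
        return (math.floor(alpha * n) * beta * n) % 1.0

    def weyl_sum(values, k):
        """Normalised exponential sum (1/N) * sum_n e(k * x_n); having this be
        small for every small nonzero k is a standard equidistribution test."""
        return sum(cmath.exp(2j * cmath.pi * k * x) for x in values) / len(values)

    N = 20_000
    alpha, beta = math.sqrt(2), math.sqrt(3)   # "generic" irrational frequencies
    xs = [bracket_quadratic(alpha, beta, n) for n in range(1, N + 1)]
    for k in range(1, 5):
        print(k, abs(weyl_sum(xs, k)))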