I have just learned that Jean Bourgain passed away last week in Belgium, aged 64, after a prolonged battle with cancer. He and Eli Stein were the two mathematicians who most influenced my early career; it is something of a shock to find out that they are now both gone, having died within a few days of each other.
Like Eli, Jean remained highly active mathematically, even after his cancer diagnosis. Here is a video profile of him by National Geographic, on the occasion of his 2017 Breakthrough Prize in Mathematics, doing a surprisingly good job of describing in lay terms the sort of mathematical work he did:
When I was a graduate student in Princeton, Tom Wolff came and gave a course on recent progress on the restriction and Kakeya conjectures, starting from the breakthrough work of Jean Bourgain in a now famous 1991 paper in Geom. Funct. Anal. I struggled with that paper for many months; it was by far the most difficult paper I had to read as a graduate student, as Jean would focus on the most essential components of an argument, treating more secondary details (such as rigorously formalising the uncertainty principle) in very brief sentences. This image of my own annotated photocopy of this article may help convey some of the frustration I had when first going through it:
Eventually, though, and with the help of Eli Stein and Tom Wolff, I managed to decode the steps which had mystified me – and my impression of the paper reversed completely. I began to realise that Jean had a certain collection of tools, heuristics, and principles that he regarded as “basic”, such as dyadic decomposition and the uncertainty principle, and by working “modulo” these tools (that is, by regarding any step consisting solely of application of these tools as trivial), one could proceed much more rapidly and efficiently. By reading through Jean’s papers, I was able to add these tools to my own “basic” toolkit, which then became a fundamental starting point for much of my own research. Indeed, a large fraction of my early work could be summarised as “take one of Jean’s papers, understand the techniques used there, and try to improve upon the final results a bit”. In time, I started looking forward to reading the latest paper of Jean. I remember being particularly impressed by his 1999 JAMS paper on global solutions of the energy-critical nonlinear Schrodinger equation for spherically symmetric data. It’s hard to describe (especially in lay terms) the experience of reading through (and finally absorbing) the sections of this paper one by one; the best analogy I can come up with would be watching an expert video game player nimbly navigate his or her way through increasingly difficult levels of some video game, with the end of each level (or section) culminating in a fight with a huge “boss” that was eventually dispatched using an array of special weapons that the player happened to have at hand. (I would eventually end up spending two years with four other coauthors trying to remove that spherical symmetry assumption; we did finally succeed, but it was and still is one of the most difficult projects I have been involved in.)
While I was a graduate student at Princeton, Jean worked at the Institute for Advanced Study which was just a mile away. But I never actually had the courage to set up an appointment with him (which, back then, would be more likely done in person or by phone rather than by email). I remember once actually walking to the Institute and standing outside his office door, wondering if I dared knock on it to introduce myself. (In the end I lost my nerve and walked back to the University.)
I think eventually Tom Wolff introduced the two of us to each other during one of Jean’s visits to Tom at Caltech (though I had previously seen Jean give a number of lectures at various places). I had heard that in his younger years Jean had quite the competitive streak; however, when I met him, he was extremely generous with his ideas, and he had a way of condensing even the most difficult arguments to a few extremely information-dense sentences that captured the essence of the matter, which I invariably found to be particularly insightful (once I had finally managed to understand it). He still retained a certain amount of cocky self-confidence though. I remember posing to him (some time in early 2002, I think) a problem Tom Wolff had once shared with me about trying to prove what is now known as a sum-product estimate for subsets of a finite field of prime order, and telling him that Nets Katz and I would be able to use this estimate for several applications to Kakeya-type problems. His initial reaction was to say that this estimate should easily follow from a Fourier analytic method, and promised me a proof the following morning. The next day he came up to me and admitted that the problem was more interesting than he had initially expected, and that he would continue to think about it. That was all I heard from him for several months; but one day I received a two-page fax from Jean with a beautiful hand-written proof of the sum-product estimate, which eventually became our joint paper with Nets on the subject (and the only paper I ended up writing with Jean). Sadly, the actual fax itself has been lost despite several attempts from various parties to retrieve a copy, but a LaTeX version of the fax, typed up by Jean’s tireless assistant Elly Gustafsson, can be seen here.
About three years ago, Jean was diagnosed with cancer and began a fairly aggressive treatment. Nevertheless he remained extraordinarily productive mathematically, authoring over thirty papers in the last three years, including such breakthrough results as his solution of the Vinogradov conjecture with Guth and Demeter, or his short note on the Schrodinger maximal function and his paper with Mirek, Stein, and Wróbel on dimension-free estimates for the Hardy-Littlewood maximal function, both of which made progress on problems that had been stuck for over a decade. In May of 2016 I helped organise, and then attended, a conference at the IAS celebrating Jean’s work and impact; by then Jean was not able to easily travel to attend, but he gave a superb special lecture, not announced on the original schedule, via videoconference that was certainly one of the highlights of the meeting. (UPDATE: a video of his talk is available here. Thanks to Brad Rodgers for the link.)
I last met Jean in person in November of 2016, at the award ceremony for his Breakthrough Prize, though we had some email and phone conversations after that date. Here he is with me and Richard Taylor at that event (demonstrating, among other things, that he wears a tuxedo much better than I do):
Jean was a truly remarkable person and mathematician. Certainly the world of analysis is poorer with his passing.
[UPDATE, Dec 31: Here is the initial IAS obituary notice for Jean.]
[UPDATE, Jan 3: See also this MathOverflow question “Jean Bourgain’s Relatively Lesser Known Significant Contributions”.]
This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
There are currently two strands of activity. One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound $\Lambda \leq 0.22$. The latest version of the writeup may be found here, in this directory. The theoretical side of things has mostly been written up; the main remaining tasks to do right now are
- giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes a zero-free region for $H_t(x+iy)$ in the "barrier" range of $x$, and the Dirichlet series bound that establishes a zero-free region for $H_t(x+iy)$ beyond the barrier; and
- giving more detail on the conditional results assuming more numerical verification of RH.
Meanwhile, several of us have been exploring the behaviour of the zeroes of $H_t$ for negative $t$; this does not directly lead to any new progress on bounding $\Lambda$ (though there is a good chance that it may simplify the proof of $\Lambda \leq 0.22$), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides. One phenomenon is that for large negative $t$, many of the complex zeroes begin to organise themselves near certain explicit curves. (An example of the agreement between the zeroes and these curves may be found here.) We now have a (heuristic) theoretical explanation for this: we should have an approximation of $H_t$ by a certain series in this region (with the relevant quantities defined in equations (11), (15), (17) of the writeup), and the above curves arise from (an approximation of) those locations where two adjacent terms in this series have equal magnitude, with the other terms being of lower order.
However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative $t$: for instance, the surviving zeroes at extremely negative values of $t$ appear to lie on the curve where a certain explicitly computable quantity is close to a half-integer. The remaining zeroes exhibit a pattern that, in suitable coordinates, is approximately 1-periodic. A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.
We do not yet have a total explanation of the phenomena seen in this picture. It appears that we have an approximation of $H_t$ in this regime as a non-zero multiplier times a more tractable expression; the derivation of this formula may be found on this wiki page. However, our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture). We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
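For readers who want to experiment with these zeroes themselves, here is a minimal, self-contained numerical sketch (my own illustration, not part of the Polymath15 codebase; the truncation parameters below are ad hoc assumptions) of how one can evaluate $H_t(z) = \int_0^\infty e^{tu^2}\Phi(u)\cos(zu)\,du$ and locate a real zero with mpmath:

```python
# Minimal sketch (not the Polymath15 code): evaluate H_t(z) and find a real zero.
# H_t(z) = \int_0^\infty exp(t u^2) Phi(u) cos(zu) du, with Phi the usual
# super-exponentially decaying kernel.  The truncations N and U are illustrative.
from mpmath import mp, mpf, exp, cos, pi, quad, findroot

mp.dps = 30  # working precision in digits

def Phi(u, N=20):
    u = mpf(u)
    return sum((2 * pi**2 * n**4 * exp(9 * u) - 3 * pi * n**2 * exp(5 * u))
               * exp(-pi * n**2 * exp(4 * u)) for n in range(1, N + 1))

def H(t, z, U=6):
    # truncate the integral to [0, U]; the integrand decays extremely rapidly
    return quad(lambda u: exp(t * u**2) * Phi(u) * cos(z * u), [0, U])

# With the usual normalisation H_0(z) = (1/8) xi(1/2 + iz/2), the first real zero
# of H_0 should sit near twice the first zeta zero, i.e. near z ~ 28.27.
print(findroot(lambda x: H(0, x), 28))
```

The same approach, with complex initial guesses for findroot, can be used to explore the complex zeroes at negative $t$ discussed above, although serious experiments would of course use the project's dedicated code instead.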
I was deeply saddened to learn that Elias Stein died yesterday, aged 87.
I have talked about some of Eli’s older mathematical work in these blog posts. He continued to be quite active mathematically in recent years, for instance finishing six papers (with various co-authors including Jean Bourgain, Mariusz Mirek, Błażej Wróbel, and Pavel Zorin-Kranich) in just this year alone. I last met him at Wrocław, Poland last September for a conference in his honour; he was in good health (and good spirits) then. Here is a picture of Eli together with several of his students (including myself) who were at that meeting (taken from the conference web site):
Eli was an amazingly effective advisor; throughout my graduate studies I think he never had fewer than five graduate students, and there was often a line outside his door when he was meeting with students such as myself. (The Mathematics Genealogy Project lists 52 students of Eli, but if anything this is an under-estimate.) My weekly meetings with Eli would tend to go something like this: I would report on all the many different things I had tried over the past week, without much success, to solve my current research problem; Eli would listen patiently to everything I said, concentrate for a moment, and then go over to his filing cabinet and fish out a preprint to hand to me, saying “I think the authors in this paper encountered similar problems and resolved them using Method X”. I would then go back to my office and read the preprint, and indeed they had faced something similar and I could often adapt the techniques there to resolve my immediate obstacles (only to encounter further ones for the next week, but that’s the way research tends to go, especially as a graduate student). Amongst other things, these meetings impressed upon me the value of mathematical experience: Eli could make more progress on my problem in a handful of minutes than I had been able to accomplish in a whole week. (There is a well known story about the famous engineer Charles Steinmetz fixing a broken piece of machinery by making a chalk mark; my meetings with Eli often had a similar feel to them.)
Eli’s lectures were always masterpieces of clarity. In one hour, he would set up a theorem, motivate it, explain the strategy, and execute it flawlessly; even after twenty years of teaching my own classes, I have yet to figure out his secret of somehow always being able to arrive at the natural finale of a mathematical presentation at the end of each hour without having to improvise at least a little bit partway through the lecture. The clear and self-contained nature of his lectures (and his many books) was a large reason why I decided to specialise as a graduate student in harmonic analysis (though I would eventually return to other interests, such as analytic number theory, many years after my graduate studies).
Looking back at my time with Eli, I now realise that he was extraordinarily patient and understanding with the brash and naive teenager he had to meet with every week. A key turning point in my own career came after my oral qualifying exams, in which I very nearly failed due to my overconfidence and lack of preparation, particularly in my chosen specialty of harmonic analysis. After the exam, he sat down with me and told me, as gently and diplomatically as possible, that my performance was a disappointment, and that I seriously needed to solidify my mathematical knowledge. This turned out to be exactly what I needed to hear; I got motivated to actually work properly so as not to disappoint my advisor again.
So many of us in the field of harmonic analysis were connected to Eli in one way or another; the field always felt to me like a large extended family, with Eli as one of the patriarchs. He will be greatly missed.
[UPDATE: Here is Princeton’s obituary for Elias Stein.]
These lecture notes are a continuation of the 254A lecture notes from the previous quarter.
We consider the Euler equations for incompressible fluid flow on a Euclidean space $\mathbb{R}^d$; we will label this copy of $\mathbb{R}^d$ as the "Eulerian space" $\mathbb{R}^d_E$ (or "Euclidean space", or "physical space") to distinguish it from the "Lagrangian space" $\mathbb{R}^d_L$ (or "labels space") that we will introduce shortly (but the reader is free to also ignore the $E$ or $L$ subscripts if he or she wishes). Elements of Eulerian space $\mathbb{R}^d_E$ will be referred to by symbols such as $x$; we use $dx$ to denote Lebesgue measure on $\mathbb{R}^d_E$, we will use $x^1, \dots, x^d$ for the $d$ coordinates of $x$, and we will use indices such as $i, j, k$ to index these coordinates (with the usual summation conventions), so that for instance $\partial_i = \frac{\partial}{\partial x^i}$ denotes partial differentiation along the $x^i$ coordinate. (We use superscripts for coordinates $x^i$ instead of subscripts $x_i$ to be compatible with some differential geometry notation that we will use shortly; in particular, when using the summation notation, we will now be matching subscripts with superscripts for the pair of indices being summed.)
In Eulerian coordinates, the Euler equations read
$$\partial_t u + (u \cdot \nabla) u = -\nabla p, \qquad \nabla \cdot u = 0, \qquad (1)$$
where $u: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}^d_E$ is the velocity field and $p: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}$ is the pressure field. These are functions of the time variable $t \in [0,T]$ and the spatial location variable $x \in \mathbb{R}^d_E$. We will refer to the coordinates $(t,x)$ as Eulerian coordinates. However, if one reviews the physical derivation of the Euler equations from 254A Notes 0, before one takes the continuum limit, the fundamental unknowns were not the velocity field $u$ or the pressure field $p$, but rather the trajectories $x_a(t)$, which can be thought of as a single function $x: [0,T] \times A \to \mathbb{R}^d_E$ from the coordinates $(t,a)$ (where $t$ is a time and $a$ is an element of the label set $A$) to $\mathbb{R}^d_E$. The relationship between the trajectories $x_a(t)$ and the velocity field was given by the informal relationship
$$\frac{d}{dt} x_a(t) \approx u(t, x_a(t)). \qquad (2)$$
We will refer to the coordinates $(t,a)$ as (discrete) Lagrangian coordinates for describing the fluid.
In view of this, it is natural to ask whether there is an alternate way to formulate the continuum limit of incompressible inviscid fluids, by using a continuous version of the Lagrangian coordinates, rather than Eulerian coordinates. This is indeed the case. Suppose for instance one has a smooth solution $u, p$ to the Euler equations on a spacetime slab $[0,T] \times \mathbb{R}^d_E$ in Eulerian coordinates; assume furthermore that the velocity field $u$ is uniformly bounded. We introduce another copy $\mathbb{R}^d_L$ of $\mathbb{R}^d$, which we call Lagrangian space or labels space; we use symbols such as $a$ to refer to elements of this space, $da$ to denote Lebesgue measure on $\mathbb{R}^d_L$, and $a^1, \dots, a^d$ to refer to the $d$ coordinates of $a$. We use indices such as $\alpha, \beta, \gamma$ to index these coordinates, thus for instance $\partial_\alpha = \frac{\partial}{\partial a^\alpha}$ denotes partial differentiation along the $a^\alpha$ coordinate. We will use summation conventions for both the Eulerian coordinates $x^i$ and the Lagrangian coordinates $a^\alpha$, with an index being summed if it appears as both a subscript and a superscript in the same term. While $\mathbb{R}^d_E$ and $\mathbb{R}^d_L$ are of course isomorphic, we will try to refrain from identifying them, except perhaps at the initial time $t=0$ in order to fix the initialisation of Lagrangian coordinates.
Given a smooth and bounded velocity field $u: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}^d_E$, define a trajectory map for this velocity to be any smooth map $X: [0,T] \times \mathbb{R}^d_L \to \mathbb{R}^d_E$ that obeys the ODE
$$\partial_t X(t,a) = u(t, X(t,a)); \qquad (3)$$
in view of (2), this describes the trajectory (in $\mathbb{R}^d_E$) of a particle labeled by an element $a$ of $\mathbb{R}^d_L$. From the Picard existence theorem and the hypothesis that $u$ is smooth and bounded, such a map exists and is unique as long as one specifies the initial location $X(0,a)$ assigned to each label $a$. Traditionally, one chooses the initial condition
$$X(0,a) = a \qquad (4)$$
for $a \in \mathbb{R}^d_L$, so that we label each particle by its initial location at time $t=0$; we are also free to specify other initial conditions for the trajectory map if we please. Indeed, we have the freedom to "permute" the labels $a \in \mathbb{R}^d_L$ by an arbitrary diffeomorphism: if $X: [0,T] \times \mathbb{R}^d_L \to \mathbb{R}^d_E$ is a trajectory map, and $\pi: \mathbb{R}^d_L \to \mathbb{R}^d_L$ is any diffeomorphism (a smooth map whose inverse exists and is also smooth), then the map $(t,a) \mapsto X(t, \pi(a))$ is also a trajectory map, albeit one with different initial conditions $X(0, \pi(a))$.
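As a concrete (and purely illustrative) aside, here is a short numerical sketch of a trajectory map: the velocity field below is a hypothetical steady, divergence-free cellular flow chosen only for the example (it is not taken from these notes), and the ODE (3) is integrated with the traditional initial condition (4).

```python
# Illustrative sketch (not from the notes): numerically compute a trajectory map
# X(t, a) for a hypothetical bounded, divergence-free velocity field u on R^2 by
# integrating dX/dt = u(t, X) with initial condition X(0, a) = a.
import numpy as np
from scipy.integrate import solve_ivp

def u(t, x):
    """A steady cellular (Taylor-Green-type) flow; divergence-free by inspection."""
    x1, x2 = x
    return np.array([np.sin(x1) * np.cos(x2), -np.cos(x1) * np.sin(x2)])

def X(t, a):
    """Trajectory map: solve the ODE from label a up to time t."""
    sol = solve_ivp(u, (0.0, t), np.asarray(a, dtype=float), rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

# Finite-difference Jacobian of a -> X(T, a); incompressibility predicts det = 1.
T, h, a0 = 1.0, 1e-5, np.array([0.3, 0.7])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
J = np.column_stack([(X(T, a0 + h * e) - X(T, a0 - h * e)) / (2 * h) for e in (e1, e2)])
print("det DX ≈", np.linalg.det(J))  # should be close to 1
```

The final determinant check anticipates Lemma 3 below: for a divergence-free velocity field, the map $a \mapsto X(t,a)$ should be volume-preserving.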
Despite the popularity of the initial condition (4), we will try to keep conceptually separate the Eulerian space $\mathbb{R}^d_E$ from the Lagrangian space $\mathbb{R}^d_L$, as they play different physical roles in the interpretation of the fluid; for instance, while the Euclidean metric $\eta$ is an important feature of Eulerian space $\mathbb{R}^d_E$, it is not a geometrically natural structure to use in Lagrangian space $\mathbb{R}^d_L$. We have the following more general version of Exercise 8 from 254A Notes 2:
Exercise 1 Let $u: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}^d_E$ be smooth and bounded.
- If $X_0: \mathbb{R}^d_L \to \mathbb{R}^d_E$ is a smooth map, show that there exists a unique smooth trajectory map $X: [0,T] \times \mathbb{R}^d_L \to \mathbb{R}^d_E$ with initial condition $X(0,a) = X_0(a)$ for all $a \in \mathbb{R}^d_L$.
- Show that if $X_0$ is a diffeomorphism and $t \in [0,T]$, then the map $X(t): a \mapsto X(t,a)$ is also a diffeomorphism.
Remark 2 The first of the Euler equations (1) can now be written in the form
$$\frac{d^2}{dt^2} X(t,a) = -(\nabla p)(t, X(t,a)),$$
which can be viewed as a continuous limit of Newton's second law $m_i \frac{d^2}{dt^2} x_i(t) = F_i(t)$ for the individual particles in the discrete model.
Call a diffeomorphism $\Phi: \mathbb{R}^d_L \to \mathbb{R}^d_E$ (oriented) volume preserving if one has the equation
$$\det\big( D\Phi(a) \big) = 1$$
for all $a \in \mathbb{R}^d_L$, where the total differential $D\Phi(a)$ is the $d \times d$ matrix with entries $\partial_\alpha \Phi^i(a)$ for $i = 1, \dots, d$ and $\alpha = 1, \dots, d$, where $\Phi^1, \dots, \Phi^d$ are the components of $\Phi$. (If one wishes, one can also view $D\Phi(a)$ as a linear transformation from the tangent space $T_a \mathbb{R}^d_L$ of Lagrangian space at $a$ to the tangent space $T_{\Phi(a)} \mathbb{R}^d_E$ of Eulerian space at $\Phi(a)$.) Equivalently, $\Phi$ is orientation preserving and one has a Jacobian-free change of variables formula
$$\int_{\mathbb{R}^d_E} f(x)\, dx = \int_{\mathbb{R}^d_L} f(\Phi(a))\, da$$
for all (say) continuous, compactly supported $f: \mathbb{R}^d_E \to \mathbb{R}$, which is in turn equivalent to $\Phi(\Omega)$ having the same Lebesgue measure as $\Omega$ for any measurable set $\Omega \subset \mathbb{R}^d_L$.
The divergence-free condition $\nabla \cdot u = 0$ can then be nicely expressed in terms of volume-preserving properties of the trajectory maps $X(t)$, in a manner which confirms the interpretation of this condition as an incompressibility condition on the fluid:
Lemma 3 Let $u: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}^d_E$ be smooth and bounded, let $X_0: \mathbb{R}^d_L \to \mathbb{R}^d_E$ be a volume-preserving diffeomorphism, and let $X: [0,T] \times \mathbb{R}^d_L \to \mathbb{R}^d_E$ be the trajectory map with initial condition $X_0$. Then the following are equivalent:
- $\nabla \cdot u = 0$ on $[0,T] \times \mathbb{R}^d_E$.
- $X(t): \mathbb{R}^d_L \to \mathbb{R}^d_E$ is volume-preserving for all $t \in [0,T]$.
Proof: Since $X(0) = X_0$ is orientation-preserving, we see from continuity that $X(t)$ is also orientation-preserving for all $t \in [0,T]$. Suppose now that $X(t)$ is volume-preserving for all $t$; then for any test function $f \in C^\infty_c(\mathbb{R}^d_E)$ we have the conservation law
$$\int_{\mathbb{R}^d_L} f\big( X(t,a) \big)\, da = \int_{\mathbb{R}^d_E} f(x)\, dx$$
for all $t$. Differentiating in time using the chain rule and (3) we conclude that
$$\int_{\mathbb{R}^d_L} (u \cdot \nabla f)\big(t, X(t,a)\big)\, da = 0$$
for all $t$, and hence by change of variables
$$\int_{\mathbb{R}^d_E} (u \cdot \nabla f)(t, x)\, dx = 0,$$
which by integration by parts gives
$$\int_{\mathbb{R}^d_E} (\nabla \cdot u)(t,x)\, f(x)\, dx = 0$$
for all $t$ and all test functions $f$, so $u$ is divergence-free.
To prove the converse implication, it is convenient to introduce the labels map $A: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}^d_L$, defined by setting $A(t): \mathbb{R}^d_E \to \mathbb{R}^d_L$ to be the inverse of the diffeomorphism $X(t): \mathbb{R}^d_L \to \mathbb{R}^d_E$, thus
$$A(t, X(t,a)) = a$$
for all $(t,a) \in [0,T] \times \mathbb{R}^d_L$. By the implicit function theorem, $A$ is smooth, and by differentiating the above equation in time using (3) we see that
$$D_t A(t,x) = 0,$$
where $D_t := \partial_t + u \cdot \nabla$ is the usual material derivative acting on functions on $[0,T] \times \mathbb{R}^d_E$. If $u$ is divergence-free, we have from integration by parts that
$$\frac{d}{dt} \int_{\mathbb{R}^d_E} f(t,x)\, dx = \int_{\mathbb{R}^d_E} D_t f(t,x)\, dx$$
for any test function $f: [0,T] \times \mathbb{R}^d_E \to \mathbb{R}$. In particular, for any $g \in C^\infty_c(\mathbb{R}^d_L)$, we can calculate
$$\frac{d}{dt} \int_{\mathbb{R}^d_E} g\big( A(t,x) \big)\, dx = \int_{\mathbb{R}^d_E} D_t \big( g(A(t,x)) \big)\, dx = 0,$$
and hence
$$\int_{\mathbb{R}^d_E} g\big( A(t,x) \big)\, dx = \int_{\mathbb{R}^d_E} g\big( A(0,x) \big)\, dx$$
for any $t \in [0,T]$. Since $X_0$ is volume-preserving, so is $A(0) = X_0^{-1}$, thus
$$\int_{\mathbb{R}^d_E} g\big( A(0,x) \big)\, dx = \int_{\mathbb{R}^d_L} g(a)\, da.$$
Thus $A(t)$ is volume-preserving for all $t$, and hence $X(t)$ is also.
Exercise 4 Let $M: [0,T] \to GL_d(\mathbb{R})$ be a continuously differentiable map from the time interval $[0,T]$ to the general linear group $GL_d(\mathbb{R})$ of invertible $d \times d$ matrices. Establish Jacobi's formula
$$\frac{d}{dt} \det M(t) = \det\big( M(t) \big)\, \mathrm{tr}\Big( M(t)^{-1} \frac{d}{dt} M(t) \Big)$$
and use this and (6) to give an alternate proof of Lemma 3 that does not involve any integration in space.
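As a quick sanity check of Jacobi's formula (purely illustrative; the matrix path below is an arbitrary choice of mine, not something from the notes), one can compare a finite-difference derivative of $\det M(t)$ against $\det(M(t))\,\mathrm{tr}(M(t)^{-1} M'(t))$:

```python
# Numerical check of Jacobi's formula for an arbitrary smooth path of matrices.
import numpy as np

def M(t):
    """An arbitrary smooth path of (generically invertible) 2x2 matrices."""
    return np.array([[np.cos(t) + 2.0, np.sin(3 * t)],
                     [t**2, np.exp(t)]])

def Mdot(t, h=1e-6):
    """Centred finite-difference approximation to M'(t)."""
    return (M(t + h) - M(t - h)) / (2 * h)

t0, h = 0.7, 1e-6
lhs = (np.linalg.det(M(t0 + h)) - np.linalg.det(M(t0 - h))) / (2 * h)
rhs = np.linalg.det(M(t0)) * np.trace(np.linalg.solve(M(t0), Mdot(t0)))
print(lhs, rhs)  # the two numbers should agree to several decimal places
```

As a hint for the exercise: differentiating (3) in the label variables gives, as one can check, $\frac{d}{dt} DX(t,a) = (\nabla u)(t, X(t,a))\, DX(t,a)$, so that the trace term in Jacobi's formula applied to $M(t) = DX(t,a)$ becomes $(\nabla \cdot u)(t, X(t,a))$.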
Remark 5 One can view the use of Lagrangian coordinates as an extension of the method of characteristics. Indeed, from the chain rule we see that for any smooth function $f = f(t,x)$ of Eulerian spacetime, one has
$$\frac{d}{dt} f(t, X(t,a)) = (D_t f)(t, X(t,a)),$$
and hence any transport equation that in Eulerian coordinates takes the form
$$D_t f = g$$
for smooth functions $f, g$ of Eulerian spacetime is equivalent to the ODE
$$\frac{d}{dt} \tilde f = \tilde g,$$
where $\tilde f, \tilde g$ are the smooth functions of Lagrangian spacetime defined by
$$\tilde f(t,a) := f(t, X(t,a)), \qquad \tilde g(t,a) := g(t, X(t,a)).$$
In this set of notes we recall some basic differential geometry notation, particularly with regards to pullbacks and Lie derivatives of differential forms and other tensor fields on manifolds such as $\mathbb{R}^d_E$ and $\mathbb{R}^d_L$, and explore how the Euler equations look in this notation. Our discussion will be entirely formal in nature; we will assume that all functions have enough smoothness and decay at infinity to justify the relevant calculations. (It is possible to work rigorously in Lagrangian coordinates – see for instance the work of Ebin and Marsden – but we will not do so here.) As a general rule, Lagrangian coordinates tend to be somewhat less convenient to use than Eulerian coordinates for establishing the basic analytic properties of the Euler equations, such as local existence, uniqueness, and continuous dependence on the data; however, they are quite good at clarifying the more algebraic properties of these equations, such as conservation laws and the variational nature of the equations. It may well be that in the future we will be able to use the Lagrangian formalism more effectively on the analytic side of the subject also.
Remark 6 One can also write the Navier-Stokes equations in Lagrangian coordinates, but the equations are not expressed in a favourable form in these coordinates, as the Laplacian $\Delta$ appearing in the viscosity term becomes replaced with a time-varying Laplace-Beltrami operator. As such, we will not discuss the Lagrangian coordinate formulation of Navier-Stokes here.
Note: this post is not required reading for this course, or for the sequel course in the winter quarter.
In Notes 2, we reviewed the classical construction of Leray of global weak solutions to the Navier-Stokes equations. We did not quite follow Leray’s original proof, in that the notes relied more heavily on the machinery of Littlewood-Paley projections, which have become increasingly common tools in modern PDE. On the other hand, we did use the same “exploiting compactness to pass to weakly convergent subsequence” strategy that is the standard one in the PDE literature used to construct weak solutions.
As I discussed in a previous post, the manipulation of sequences and their limits is analogous to a “cheap” version of nonstandard analysis in which one uses the Fréchet filter rather than an ultrafilter to construct the nonstandard universe. (The manipulation of generalised functions of Colombeau-type can also be comfortably interpreted within this sort of cheap nonstandard analysis.) Augmenting the manipulation of sequences with the right to pass to subsequences whenever convenient is then analogous to a sort of “lazy” nonstandard analysis, in which the implied ultrafilter is never actually constructed as a “completed object”, but is instead lazily evaluated, in the sense that whenever membership of a given subsequence of the natural numbers in the ultrafilter needs to be determined, one either passes to that subsequence (thus placing it in the ultrafilter) or to its complement (placing it outside the ultrafilter). This process can be viewed as the initial portion of the transfinite induction that one usually uses to construct ultrafilters (as discussed using a voting metaphor in this post), except that there is generally no need in any given application to perform the induction for any uncountable ordinal (or indeed for most of the countable ordinals also).
On the other hand, it is also possible to work directly in the orthodox framework of nonstandard analysis when constructing weak solutions. This leads to an approach to the subject which is largely equivalent to the usual subsequence-based approach, though there are some minor technical differences (for instance, the subsequence approach occasionally requires one to work with separable function spaces, whereas in the ultrafilter approach the reliance on separability is largely eliminated, particularly if one imposes a strong notion of saturation on the nonstandard universe). The subject acquires a more “algebraic” flavour, as the quintessential analysis operation of taking a limit is replaced with the “standard part” operation, which is an algebra homomorphism. The notion of a sequence is replaced by the distinction between standard and nonstandard objects, and the need to pass to subsequences disappears entirely. Also, the distinction between “bounded sequences” and “convergent sequences” is largely eradicated, particularly when the space that the sequences ranged in enjoys some compactness properties on bounded sets. Also, in this framework, the notorious non-uniqueness features of weak solutions can be “blamed” on the non-uniqueness of the nonstandard extension of the standard universe (as well as on the multiple possible ways to construct nonstandard mollifications of the original standard PDE). However, many of these changes are largely cosmetic; switching from a subsequence-based theory to a nonstandard analysis-based theory does not seem to bring one significantly closer for instance to the global regularity problem for Navier-Stokes, but it could have been an alternate path for the historical development and presentation of the subject.
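To make the "standard part" remark a bit more concrete (this is a standard fact of nonstandard analysis, recalled here informally, rather than anything specific to the construction below): writing $\mathrm{st}$ for the standard part operation on the limited (i.e. bounded by a standard real) nonstandard reals, one has
$$\mathrm{st}(x + y) = \mathrm{st}(x) + \mathrm{st}(y), \qquad \mathrm{st}(x y) = \mathrm{st}(x)\, \mathrm{st}(y)$$
for all limited $x, y$, and for a standard sequence $(x_n)_{n \in \mathbb{N}}$ of reals and a standard real $L$ one has $\lim_{n \to \infty} x_n = L$ if and only if $\mathrm{st}(x_N) = L$ for every unlimited natural number $N$. Thus the single algebraic operation $\mathrm{st}$ performs the role that limits (combined with passing to subsequences) play in the classical framework.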
In any case, I would like to present below the fold this nonstandard analysis perspective, quickly translating the relevant components of real analysis, functional analysis, and distributional theory that we need to this perspective, and then use it to re-prove Leray’s theorem on existence of global weak solutions to Navier-Stokes.
Kaisa Matomäki, Maksym Radziwill, and I just uploaded to the arXiv our paper “Fourier uniformity of bounded multiplicative functions in short intervals on average”. This paper is the outcome of our attempts during the MSRI program in analytic number theory last year to attack the local Fourier uniformity conjecture for the Liouville function $\lambda$. This conjecture generalises a landmark result of Matomäki and Radziwill, who show (among other things) that one has the asymptotic
$$\int_X^{2X} \Big| \sum_{x \leq n \leq x+H} \lambda(n) \Big|\, dx = o(XH) \qquad (1)$$
whenever $H = H(X) \leq X$ and $H$ goes to infinity as $X \to \infty$. Informally, this says that the Liouville function has small mean on almost all short intervals $[x, x+H]$. The remarkable thing about this theorem is that there is no lower bound on how fast $H$ goes to infinity with $X$; one can take for instance $H$ to be an iterated logarithm of $X$. This lack of lower bound was crucial when I applied this result (or more precisely, a generalisation of this result to arbitrary non-pretentious bounded multiplicative functions) a few years ago to solve the Erdős discrepancy problem, as well as a logarithmically averaged two-point Chowla conjecture; for instance it implies that
$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1)}{n} = o(\log x).$$
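To get a concrete feel for the kind of cancellation being asserted here, here is a small, self-contained toy computation of mine (relying only on the definition $\lambda(n) = (-1)^{\Omega(n)}$; it is of course not evidence for or against any of the conjectures) of the logarithmically averaged two-point correlation above:

```python
# Toy illustration: compute (1/log N) * sum_{n < N} lambda(n) lambda(n+1) / n,
# where lambda(n) = (-1)^Omega(n) is the Liouville function.
import numpy as np

def big_omega(N):
    """Omega[n] = number of prime factors of n counted with multiplicity, n <= N."""
    Omega = np.zeros(N + 1, dtype=np.int64)
    for p in range(2, N + 1):
        if Omega[p] == 0:           # nothing recorded yet, so p is prime
            pk = p
            while pk <= N:
                Omega[pk::pk] += 1  # each multiple of p^k picks up one factor of p
                pk *= p
    return Omega

N = 10**6
lam = (-1) ** big_omega(N)   # Liouville function on [0, N]
n = np.arange(1, N)          # ensure n and n+1 both lie in [1, N]
corr = np.sum(lam[n] * lam[n + 1] / n) / np.log(N)
print(corr)                  # one expects this to be small compared to 1
```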
The local Fourier uniformity conjecture asserts the stronger asymptotic
$$\int_X^{2X} \sup_{\alpha \in \mathbb{R}} \Big| \sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n) \Big|\, dx = o(XH) \qquad (2)$$
under the same hypotheses on $H$ and $X$. As I worked out in a previous paper, this conjecture would imply a logarithmically averaged three-point Chowla conjecture, implying for instance that
$$\sum_{n \leq x} \frac{\lambda(n) \lambda(n+1) \lambda(n+2)}{n} = o(\log x).$$
This particular bound also follows from some slightly different arguments of Joni Teräväinen and myself, but the implication would also work for other non-pretentious bounded multiplicative functions, whereas the arguments of Joni and myself rely more heavily on the specific properties of the Liouville function (in particular that $\lambda(p) = -1$ for all primes $p$).
There is also a higher order version of the local Fourier uniformity conjecture in which the linear phase $e(\alpha n)$ is replaced with a polynomial phase such as $e(\alpha_d n^d + \dots + \alpha_1 n)$, or more generally a nilsequence $F(g(n)\Gamma)$; as shown in my previous paper, this conjecture implies (and is in fact equivalent to, after logarithmic averaging) a logarithmically averaged version of the full Chowla conjecture (not just the two-point or three-point versions), as well as a logarithmically averaged version of the Sarnak conjecture.
The main result of the current paper is to obtain some cases of the local Fourier uniformity conjecture:
Theorem 1 The asymptotic (2) is true when $H = X^\theta$ for a fixed $\theta > 0$.
Previously this was known for $\theta > 5/8$ by the work of Zhan (who in fact proved the stronger pointwise assertion
$$\sup_{\alpha \in \mathbb{R}} \Big| \sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n) \Big| = o(H)$$
for all $x \in [X, 2X]$ in this case). In a previous paper with Kaisa and Maksym, we also proved a weak version
$$\sup_{\alpha \in \mathbb{R}} \int_X^{2X} \Big| \sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha n) \Big|\, dx = o(XH) \qquad (3)$$
of (2) for any $H = H(X)$ growing arbitrarily slowly with $X$; this is stronger than (1) (and is in fact proven by a variant of the method) but significantly weaker than (2), because in the latter the worst-case frequency $\alpha$ is permitted to depend on the $x$ parameter, whereas in (3) $\alpha$ must remain independent of $x$.
Unfortunately, the restriction $H = X^\theta$ is not strong enough to give applications to Chowla-type conjectures (one would need something more like $H = X^{o(1)}$ for this). However, it can still be used to control some sums that had not previously been manageable. For instance, a quick application of the circle method lets one use the above theorem to derive the expected asymptotic for certain correlations of the Liouville function $\lambda$ with the von Mangoldt function $\Lambda$ in short intervals of length $H = X^\theta$, for any fixed $\theta > 0$. Amusingly, the seemingly simpler question of establishing the expected asymptotic for the analogous sum without the Liouville function is only known in a narrower range of $\theta$ (from the work of Zaccagnini). Thus we have a rare example of a number theory sum that becomes easier to control when one inserts a Liouville function!
We now give an informal description of the strategy of proof of the theorem (though for numerous technical reasons, the actual proof deviates in some respects from the description given here). If (2) failed, then for many values of $x \in [X, 2X]$ we would have the lower bound
$$\Big| \sum_{x \leq n \leq x+H} \lambda(n) e(-\alpha_x n) \Big| \gg H$$
for some frequency $\alpha_x$. We informally describe this correlation between $\lambda(n)$ and $e(\alpha_x n)$ by writing
$$\lambda(n) \approx e(\alpha_x n) \qquad (4)$$
for $x \leq n \leq x+H$ (informally, one should view this as asserting that $\lambda(n)$ “behaves like” a constant multiple of $e(\alpha_x n)$ on this interval). For sake of discussion, suppose we have this relationship for all $x \in [X, 2X]$, not just many.
As mentioned before, the main difficulty here is to understand how the frequency $\alpha_x$ varies with $x$. As it turns out, the multiplicativity properties of the Liouville function place a significant constraint on this dependence. Indeed, if we let $p$ be a fairly small prime (of size comparable to some parameter $P$, which one should think of as much smaller than $H$), and use the identity
$$\lambda(pn) = -\lambda(n)$$
for the Liouville function, we can conclude (at least heuristically) from (4) that
$$\lambda(n) \approx e(\alpha_x p n)$$
for $x/p \leq n \leq (x+H)/p$. (In practice, we will have this sort of claim for many primes $p$ rather than all primes $p$, after using tools such as the Turán-Kubilius inequality, but we ignore this distinction for this informal argument.)
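To spell out this heuristic step a little (this is my paraphrase of the reasoning, not a claim lifted from the paper): restricting (4) to the multiples $n = pm$ of $p$ lying in $[x, x+H]$ and applying the identity above gives
$$\lambda(m) = -\lambda(pm) \approx - e(\alpha_x p m) \qquad \text{for } \frac{x}{p} \leq m \leq \frac{x+H}{p},$$
that is, $\lambda$ also correlates with a linear phase, now of frequency $p \alpha_x \pmod 1$, on an interval of length $H/p$ around $x/p$; discarding the harmless constant phase $-1$ gives the displayed relation.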
Now let $p$ and $q$ be primes comparable to some fixed range $[P, 2P]$ such that
$$\Big| \frac{x}{p} - \frac{y}{q} \Big| \ll \frac{H}{P}. \qquad (5)$$
Then we have both
$$\lambda(n) \approx e(\alpha_x p n)$$
and
$$\lambda(n) \approx e(\alpha_y q n)$$
on essentially the same range of $n$ (two nearby intervals of length $\sim H/P$). This suggests that the frequencies $p \alpha_x$ and $q \alpha_y$ should be close to each other modulo $1$; in particular, one should expect the relationship
$$p \alpha_x \approx q \alpha_y \pmod 1. \qquad (6)$$
Comparing this with (5), one is led to the expectation that $\alpha_x$ should depend inversely on $x$ in some sense: for instance, one can check that $\alpha_x = \frac{T}{2\pi x}$ would solve (6) whenever $\frac{x}{p} = \frac{y}{q}$, and by Taylor expansion this would correspond to a global approximation of the form
$$\lambda(n) \approx n^{iT}. \qquad (7)$$
One now has a problem of an additive combinatorial flavour (or of a “local to global” flavour), namely to leverage the relation (6) to obtain global control on $\alpha_x$ that resembles (7).
A key obstacle in solving (6) efficiently is the fact that one only knows that $p \alpha_x$ and $q \alpha_y$ are close modulo $1$, rather than close on the real line. One can start resolving this problem by the Chinese remainder theorem, using the fact that we have the freedom to shift (say) $\alpha_x$ by an arbitrary integer. After doing so, one can arrange matters so that one in fact has the relationship
$$p \alpha_x \approx q \alpha_y \qquad (8)$$
as real numbers (not just modulo $1$) whenever $x, y$ and $p, q$ obey (5). (This may force the shifted frequencies $\alpha_x$ to become extremely large, on the order of a product of many of the primes involved, but this will not concern us.)
Now suppose that we have $x, y$ and primes $p, q$ obeying a suitable closeness condition, which we will refer to as (9). For every auxiliary prime $r$ in the same range, we can find an intermediate point $x'$ such that $x'/r$ is close to both $x/p$ and $y/q$. Applying (8) twice (once to the pair $(x,p), (x',r)$, and once to the pair $(y,q), (x',r)$) we obtain relations of the form
$$p \alpha_x \approx r \alpha_{x'} \quad \text{and} \quad q \alpha_y \approx r \alpha_{x'},$$
and thus, by the triangle inequality, a relation between $p \alpha_x$ and $q \alpha_y$ for each such auxiliary prime $r$; hence, by the Chinese remainder theorem, these relations can be combined into a single relation modulo the product of the auxiliary primes. In practice, in the regime that we are considering, this modulus is so huge we can effectively ignore it (in the spirit of the Lefschetz principle); so let us pretend that we in fact have
$$p \alpha_x \approx q \alpha_y$$
as real numbers whenever $x, y$ and $p, q$ obey (9).
Now let $k$ be an integer to be chosen later, and suppose we have primes $p_1, \dots, p_k, q_1, \dots, q_k$ in the range $[P, 2P]$ such that the difference
$$p_1 \cdots p_k - q_1 \cdots q_k$$
is small but non-zero. If $k$ is chosen so that $P^k$ is comparable to $x$ (where one is somewhat loose about what “comparable” means) then one can then find real numbers $x_0, x_1, \dots, x_k$ such that each consecutive pair $x_{j-1}, x_j$ (together with the primes $p_j, q_j$) obeys (9), for $j = 1, \dots, k$, with the convention that $x_0 = x$. We then have a chain of relations of the form
$$p_j \alpha_{x_{j-1}} \approx q_j \alpha_{x_j},$$
which telescopes to a relation comparing $p_1 \cdots p_k\, \alpha_x$ with $q_1 \cdots q_k\, \alpha_{x_k}$, and thus (since the two products are very close to each other) to good control on the discrepancy between $\alpha_x$ and $\alpha_{x_k}$, and hence to an approximate formula for $\alpha_x$ itself. In particular, for each $x$, we expect to be able to write
$$\alpha_x \approx \frac{T_x}{2\pi x} \pmod 1$$
for some real number $T_x$. This quantity $T_x$ can vary with $x$; but from (10) and a short calculation we see that
$$T_x \approx T_y$$
whenever $x, y$ obey (9) for some primes $p, q$.
Now imagine a “graph” in which the vertices are the elements $x$ of $[X, 2X]$ (or a suitable discretisation thereof), and two elements $x, y$ are joined by an edge if (9) holds for some primes $p, q$. Because of exponential sum estimates over the primes, this graph turns out to essentially be an “expander”, in the sense that any two vertices $x, y$ can be connected (in multiple ways) by fairly short paths in this graph (if one allows one to modify one of $x$ or $y$ by $O(H)$). As a consequence, we can assume that this quantity $T_x$ is essentially constant in $x$ (cf. the application of the ergodic theorem in this previous blog post), thus we now have
$$\alpha_x \approx \frac{T}{2\pi x} \pmod 1$$
for most $x$ and some quantity $T$ independent of $x$. By Taylor expansion, this implies that
$$\lambda(n) \approx n^{iT}$$
on $[x, x+H]$ for most $x$; thus the Liouville function correlates with the “Archimedean character” $n \mapsto n^{iT}$ on most short intervals. But this can be shown to contradict the Matomäki-Radziwill theorem (because the multiplicative function $n \mapsto \lambda(n) n^{-iT}$ is known to be non-pretentious).