A few months ago I posted a question about analytic functions that I received from a bright high school student, which turned out to have already been studied and resolved by de Bruijn. Based on this positive resolution, I thought I might try my luck again and list three further questions that this student asked which do not seem to be trivially resolvable.
- Does there exist a smooth function $f: {\bf R} \to {\bf R}$ which is nowhere analytic, but is such that the Taylor series $\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!} (x-x_0)^n$ converges for every $x, x_0 \in {\bf R}$? (Of course, this series would not then converge to $f$, but instead to some analytic function $f_{x_0}$ for each $x_0$.) I have a vague feeling that perhaps the Baire category theorem should be able to resolve this question, but it seems to require a bit of effort. (Update: answered by Alexander Shaposhnikov in comments.)
- Is there a function $f: {\bf R} \to {\bf R}$ which meets every polynomial $P: {\bf R} \to {\bf R}$ to infinite order in the following sense: for every polynomial $P$, there exists $x_0$ such that $f(x) - P(x) = O(|x-x_0|^n)$ as $x \to x_0$, for all $n$? Such a function would be rather pathological, perhaps resembling a space-filling curve. (Update: solved for smooth $f$ by Aleksei Kulikov in comments. The situation currently remains unclear in the general case.)
- Is there a power series $\sum_{n=0}^\infty a_n x^n$ that diverges everywhere (except at $x = 0$), but which becomes pointwise convergent after dividing each of the monomials $a_n x^n$ into pieces $a_{n,1} x^n, a_{n,2} x^n, \dots$ for some $a_{n,1}, a_{n,2}, \dots$ summing absolutely to $a_n$, and then rearranging, i.e., there is some rearrangement $\sum_{m=1}^\infty a_{n_m, j_m} x^{n_m}$ of $\sum_{n=0}^\infty \sum_{j=1}^\infty a_{n,j} x^n$ that is pointwise convergent for every $x$? (Update: solved by Jacob Manaker in comments.)
Feel free to post answers or other thoughts on these questions in the comments.
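As a small numerical warm-up to the first question, here is a sketch (my own illustration, not taken from the student or the comments) that estimates the Taylor coefficients at the origin of the classical smooth, nowhere analytic function $F(x) = \sum_{n=1}^\infty e^{-\sqrt{2^n}} \cos(2^n x)$; for this standard example the coefficients $c_m = F^{(m)}(0)/m!$ grow so fast that the Taylor series has zero radius of convergence, which is exactly why the first question asks for something more delicate.

```python
import math

# Taylor coefficients at 0 of F(x) = sum_n exp(-sqrt(2^n)) cos(2^n x).
# Only even orders m survive: |F^(m)(0)| = sum_n exp(-sqrt(2^n)) * 2^(n m).
# Work with logarithms to avoid overflow, combining terms by log-sum-exp.

def log_abs_coeff(m, nmax=400):
    # log of |c_m| = |F^(m)(0)| / m!
    logs = [m * n * math.log(2) - math.sqrt(2.0**n) for n in range(1, nmax)]
    top = max(logs)
    log_deriv = top + math.log(sum(math.exp(v - top) for v in logs))
    return log_deriv - math.lgamma(m + 1)

for m in range(2, 42, 4):
    # |c_m|^(1/m) stays bounded iff the series has positive radius of
    # convergence; here it grows roughly linearly in m, so the radius is zero.
    print(m, math.exp(log_abs_coeff(m) / m))
```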
I was asked the following interesting question by a bright high school student I am working with, and I did not immediately know the answer:
Question 1 Does there exist a smooth function $f: {\bf R} \to {\bf R}$ which is not real analytic, but such that all the differences $x \mapsto f(x+h) - f(x)$ are real analytic for every $h \in {\bf R}$?
The hypothesis implies that the Newton quotients $\frac{f(x+h)-f(x)}{h}$ are real analytic for every $h \neq 0$. If analyticity were preserved by smooth limits, this would imply that $f'$ is real analytic, which would make $f$ real analytic. However, we are not assuming any uniformity in the analyticity of the Newton quotients, so this simple argument does not seem to resolve the question immediately.
In the case that $f$ is periodic, say periodic with period $2\pi$, one can answer the question in the negative by Fourier series. Perform a Fourier expansion $f(x) = \sum_{n \in {\bf Z}} c_n e^{inx}$. If $f$ is not real analytic, then the coefficients $c_n$ do not decay exponentially, and so there is a sequence $n_j$ going to infinity such that $|c_{n_j}| = e^{-o(n_j)}$ as $j \to \infty$. From the Borel-Cantelli lemma one can then find a real number $h$ such that $|e^{i n_j h} - 1| \geq \frac{1}{n_j^2}$ (say) for infinitely many $j$, hence $|c_{n_j}| |e^{i n_j h} - 1| \geq n_j^{-2} e^{-o(n_j)}$ for infinitely many $j$. Thus the Fourier coefficients $c_n (e^{inh} - 1)$ of $f(x+h) - f(x)$ do not decay exponentially and hence this function is not analytic, a contradiction.
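To spell out the Borel-Cantelli step (a routine verification, included here for completeness rather than taken from the original argument): for each $j$, the exceptional set of shifts
$$E_j := \{ h \in [0, 2\pi) : |e^{i n_j h} - 1| < n_j^{-2} \}$$
consists of about $n_j$ arcs, one near each multiple of $2\pi/n_j$, each of length $O(n_j^{-3})$, so that $|E_j| = O(n_j^{-2})$. Since the $n_j$ are distinct positive integers, $\sum_j |E_j| = O( \sum_j n_j^{-2} ) < \infty$, and the Borel-Cantelli lemma shows that almost every $h$ lies in only finitely many of the $E_j$; any such $h$ obeys $|e^{i n_j h} - 1| \geq n_j^{-2}$ for all but finitely many $j$, which is more than enough for the argument above.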
I was not able to quickly resolve the non-periodic case, but I thought perhaps this might be a good problem to crowdsource, so I invite readers to contribute their thoughts on this problem here. In the spirit of the polymath projects, I would encourage comments that contain thoughts that fall short of a complete solution, in the event that some other reader may be able to take the thought further.
After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets. (The proposal can be viewed as a PDF, and is also available on Overleaf). As mentioned in the proposal, this database would be slightly different in focus than existing data sets such as the COVID-19 data sets hosted on Kaggle, with a focus on producing high quality cleaned data sets. (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public as it requires a non-commercial agreement to execute. Feel free to mention further relevant data sets in the comments.)
This seems like a very interesting and timely proposal to me and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning and by discussing possible platforms that such a repository could be built on. In the spirit of “building the plane while flying it”, one could begin by creating a basic github repository as a prototype and use the comments in this blog post to handle requests, and then migrate to a higher-quality platform once it becomes clear what direction this project might move in. (For instance one might eventually move beyond data cleaning to more sophisticated types of data analysis.)
UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.
UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page). The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.
[UPDATE, Feb 1, 2021: the strategy sketched out below has been successfully implemented to rigorously obtain the desired implication in this recent preprint of Giulio Bresciani.]
I recently came across this question on MathOverflow asking if there are any polynomials $f(x,y)$ of two variables with rational coefficients, such that the map $f: {\bf Q} \times {\bf Q} \to {\bf Q}$ is a bijection. The answer to this question is almost surely “no”, but it is remarkable how hard this problem resists any attempt at rigorous proof. (MathOverflow users with enough privileges to see deleted answers will find that there are no less than seventeen deleted attempts at a proof in response to this question!)
On the other hand, the one surviving response to the question does point out this paper of Poonen which shows that assuming a powerful conjecture in Diophantine geometry known as the Bombieri-Lang conjecture (discussed in this previous post), it is at least possible to exhibit polynomials which are injective.
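As a concrete way to get a feel for the problem, here is a minimal sketch (my own illustration; the polynomial $x^7 + 3y^7$ is a candidate injection often attributed to Don Zagier, and its use here is an assumption for illustration rather than something asserted in the post) that searches for collisions among rationals of small height:

```python
from fractions import Fraction
from itertools import product

def f(x, y):
    # candidate injection Q x Q -> Q (conjectural; illustration only)
    return x**7 + 3 * y**7

def rationals(height):
    # all rationals p/q with |p|, q <= height; Fraction reduces automatically
    # and the set removes duplicates
    return sorted({Fraction(p, q) for q in range(1, height + 1)
                   for p in range(-height, height + 1)})

H = 5
pts = rationals(H)
seen = {}
collisions = 0
for x, y in product(pts, repeat=2):
    v = f(x, y)
    if v in seen:
        collisions += 1
        print("collision:", seen[v], (x, y), "->", v)
    else:
        seen[v] = (x, y)
print(collisions, "collisions among", len(pts)**2, "pairs of height <=", H)
```

Of course, the absence of collisions at tiny heights proves nothing; the point of this post is precisely that such experimental evidence is very far from a rigorous argument.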
I believe that it should be possible to also rule out the existence of bijective polynomials if one assumes the Bombieri-Lang conjecture, and have sketched out a strategy to do so, but filling in the gaps requires a fair bit more algebraic geometry than I am capable of. So as a sort of experiment, I would like to see if a rigorous implication of this form (similarly to the rigorous implication of the Erdos-Ulam conjecture from the Bombieri-Lang conjecture in my previous post) can be crowdsourced, in the spirit of the polymath projects (though I feel that this particular problem should be significantly quicker to resolve than a typical such project).
Here is how I imagine a Bombieri-Lang-powered resolution of this question should proceed (modulo a large number of unjustified and somewhat vague steps that I believe to be true but have not established rigorously). Suppose for contradiction that we have a bijective polynomial $f: {\bf Q} \times {\bf Q} \to {\bf Q}$. Then for any polynomial $P: {\bf Q} \to {\bf Q}$ of one variable, the surface
$$S_P := \{ (x,y,z) : f(x,y) = P(z) \}$$
has infinitely many rational points; indeed, every rational $z$ lifts to exactly one rational point in $S_P$. I believe that for “typical” $P$ this surface $S_P$ should be irreducible. One can now split into two cases:
- (a) The rational points in $S_P$ are Zariski dense in $S_P$.
- (b) The rational points in $S_P$ are not Zariski dense in $S_P$.
Consider case (b) first. By definition, this case asserts that the rational points in $S_P$ are contained in a finite number of algebraic curves. By Faltings’ theorem (a special case of the Bombieri-Lang conjecture), any curve of genus two or higher only contains a finite number of rational points. So all but finitely many of the rational points in $S_P$ are contained in a finite union of genus zero and genus one curves. I think all the genus zero curves are birational to a line, and all the genus one curves are birational to an elliptic curve (though I don’t have an immediate reference for this). These curves $C$ can all have an infinity of rational points, but very few of them should have “enough” rational points $C({\bf Q})$ that their projection $\pi(C({\bf Q}))$ to the third coordinate $z$ is “large”. In particular, I believe
- (i) If $C$ is birational to an elliptic curve, then the number of elements of $\pi(C({\bf Q}))$ of height at most $N$ should grow at most polylogarithmically in $N$ (i.e., be of order $O(\log^{O(1)} N)$).
- (ii) If $C$ is birational to a line but not of the form $\{ (g_1(z), g_2(z), z) : z \in {\bf Q} \}$ for some rational functions $g_1, g_2$, then the number of elements of $\pi(C({\bf Q}))$ of height at most $N$ should grow slower than $N$ (in fact I think it can only grow like $O(N^{1/2})$).
I do not have proofs of these results (though I think something similar to (i) can be found in Knapp’s book, and (ii) should basically follow by using a rational parameterisation $t \mapsto (x(t), y(t), z(t))$ of $C$ with $z(t)$ nonlinear). Assuming these assertions, this would mean that there is a curve of the form $\{ (g_1(z), g_2(z), z) : z \in {\bf Q} \}$ that captures a “positive fraction” of the rational points of $S_P$, as measured by restricting the height of the third coordinate $z$ to lie below a large threshold $N$, computing density, and sending $N$ to infinity (taking a limit superior). I believe this forces an identity of the form
$$f(g_1(z), g_2(z)) = P(z) \quad\quad (1)$$
for all $z$. Such identities are certainly possible for some choices of $P$ (e.g. $P(z) = f(g_1(z), g_2(z))$ for arbitrary polynomials $g_1, g_2$ of one variable) but I believe that the only way that such identities hold for a “positive fraction” of $P$ (as measured using height as before) is if there is in fact a rational identity of the form
$$f(\tilde g_1(t), \tilde g_2(t)) = t$$
for some rational functions $\tilde g_1, \tilde g_2$ with rational coefficients (in which case we would have $g_1 = \tilde g_1 \circ P$ and $g_2 = \tilde g_2 \circ P$). But such an identity would contradict the hypothesis that $f$ is bijective, since one can take a rational point $(x_0, y_0)$ outside of the curve $\{ (\tilde g_1(t), \tilde g_2(t)) : t \in {\bf Q} \}$, and set $t_0 := f(x_0, y_0)$, in which case we have $f(x_0, y_0) = f(\tilde g_1(t_0), \tilde g_2(t_0)) = t_0$, violating the injective nature of $f$. Thus, modulo a lot of steps that have not been fully justified, we have ruled out the scenario in which case (b) holds for a “positive fraction” of $P$.
This leaves the scenario in which case (a) holds for a “positive fraction” of $P$. Assuming the Bombieri-Lang conjecture, this implies that for such $P$, any resolution of singularities of $S_P$ fails to be of general type. I would imagine that this places some very strong constraints on $f$ and $P$, since I would expect the equation $f(x,y) = P(z)$ to describe a surface of general type for “generic” choices of $f$ and $P$ (after resolving singularities). However, I do not have a good set of techniques for detecting whether a given surface is of general type or not. Presumably one should proceed by viewing the surface $S_P$ as a fibre product of the simpler surface $\{ (x,y,w) : f(x,y) = w \}$ and the curve $\{ (z,w) : P(z) = w \}$ over the line $\{ w \}$. In any event, I believe the way to handle (a) is to show that the failure of general type of $S_P$ implies some strong algebraic constraint between $f$ and $P$ (something in the spirit of (1), perhaps), and then use this constraint to rule out the bijectivity of $f$ by some further ad hoc method.
The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann $\xi$ function, and a new upper bound for the de Bruijn-Newman constant“, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant $\Lambda$. This constant can be defined as follows. The function
$$H_0(z) := \frac{1}{8} \xi\left( \frac{1}{2} + \frac{iz}{2} \right),$$
where $\xi$ is the Riemann $\xi$ function
$$\xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\left( \frac{s}{2} \right) \zeta(s),$$
has a Fourier representation
$$H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du$$
where $\Phi$ is the super-exponentially decaying function
$$\Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).$$
The Riemann hypothesis is equivalent to the claim that all the zeroes of $H_0$ are real. De Bruijn introduced (in different notation) the deformations
$$H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du$$
of $H_0$; one can view this as the solution to the backwards heat equation $\partial_t H_t(z) = -\partial_{zz} H_t(z)$ starting at $H_0$. From the work of de Bruijn and of Newman, it is known that there exists a real number $\Lambda$ – the de Bruijn-Newman constant – such that $H_t$ has all zeroes real for $t \geq \Lambda$ and has at least one non-real zero for $t < \Lambda$. In particular, the Riemann hypothesis is equivalent to the assertion $\Lambda \leq 0$. Prior to this paper, the best known bounds for this constant were
$$0 \leq \Lambda < \frac{1}{2},$$
with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to
$$\Lambda \leq 0.22. \quad\quad (1)$$
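For readers who want to experiment numerically, here is a minimal sketch (my own, not code from the paper) that evaluates $H_t$ directly from the integral representation above; as a sanity check, the zeroes of $H_0$ occur at $z = 2\gamma$ where $\frac{1}{2} + i\gamma$ ranges over the zeta zeroes, so there should be a sign change near $z \approx 28.27$:

```python
import numpy as np
from scipy.integrate import quad

def Phi(u, terms=10):
    # super-exponentially decaying kernel; a handful of terms suffice for u >= 0
    n = np.arange(1, terms + 1)
    return np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                   - 3 * np.pi * n**2 * np.exp(5 * u))
                  * np.exp(-np.pi * n**2 * np.exp(4 * u)))

def H(t, z, cutoff=6.0):
    # H_t(z) = int_0^infty e^{t u^2} Phi(u) cos(z u) du for real z;
    # the integrand decays like exp(-pi e^{4u}), so a modest cutoff is safe
    return quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(z * u),
                0.0, cutoff, limit=200)[0]

# sign change of H_0 near z = 2 * 14.1347... (twice the first zeta zero)
print(H(0.0, 28.0), H(0.0, 28.5))
# the same zero, slightly shifted, after a bit of the (backwards) heat flow
print(H(0.2, 28.0), H(0.2, 28.5))
```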
At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height $T$ (in our paper we take $T \approx 3 \times 10^{10}$) and convert this (and some other numerical verification) to an upper bound on $\Lambda$ that is of order $O(1/\log T)$. As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on $\Lambda$, although it does not seem likely that our methods could for instance improve the bound to below $0.1$ without an infeasible amount of computation.
We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of $H_{t_0}$ lie in the strip $\{ x+iy : |y| \leq y_0 \}$, then $\Lambda \leq t_0 + \frac{y_0^2}{2}$; we will verify this hypothesis with $t_0 = y_0 = 0.2$, thus giving (1). Using the symmetries and the known zero-free regions, it suffices to show that
$$H_{0.2}(x+iy) \neq 0 \quad\quad (2)$$
whenever $x \geq 0$ and $0.2 \leq y \leq 1$.
For large $x$ (specifically, $x \geq 6 \times 10^{10}$), we use effective numerical approximation to $H_t$ to establish (2), as discussed in a bit more detail below. For smaller values of $x$, the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that
$$H_0(x+iy) \neq 0$$
for $0 \leq x \leq 6 \times 10^{10}$ and $0.2 \leq y \leq 1$. The problem though is that this result only controls $H_t$ at time $t = 0$ rather than the desired time $t = 0.2$. To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that
$$H_t(x+iy) \neq 0 \quad\quad (3)$$
for $0 \leq t \leq 0.2$, $x$ in a narrow window just beyond $6 \times 10^{10}$, and $0.2 \leq y \leq 1$; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small $x$.
To enforce this barrier, and to verify (2) for large $x$, we need to approximate $H_t(x+iy)$ for positive $t$. Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape
$$H_0(x+iy) \approx B_0(x+iy) \left( \sum_{n=1}^N \frac{1}{n^{s}} + \gamma_0(x+iy) \sum_{n=1}^N \frac{1}{n^{1-s}} \right)$$
where $s := \frac{1+y+ix}{2}$, $N$ is of size roughly $\sqrt{x/4\pi}$, $B_0(x+iy)$ is an explicit “gamma factor” that decays exponentially in $x$, and $\gamma_0(x+iy)$ is a ratio of gamma functions that is roughly of size $(\frac{x}{4\pi})^{-y/2}$. Deforming this by the heat flow gives rise to an approximation roughly of the form
$$H_t(x+iy) \approx B_t(x+iy) \left( \sum_{n=1}^N \frac{b_n^t}{n^{s_*}} + \gamma_t(x+iy) \sum_{n=1}^N \frac{b_n^t}{n^{1-s_*}} \right) \quad\quad (4)$$
where $B_t$ and $\gamma_t$ are variants of $B_0$ and $\gamma_0$, $b_n^t := \exp( \frac{t}{4} \log^2 n )$, and $s_*$ is an exponent whose real part is roughly $\frac{1+y}{2} + \frac{t}{4} \log \frac{x}{4\pi}$. In particular, for positive values of $t$, the real part of $s_*$ increases (logarithmically) as $x$ increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients $b_n^t$). For very large values of $x$ (in the range $x \geq \exp(C/t)$ for a large absolute constant $C$), the $n=1$ terms of both sums dominate, and $H_t$ begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of $x$ (i.e., in the regime $x \geq \exp(C/t)$ just described). For slightly less large values of $x$, we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large $x$ (say between the barrier location near $6 \times 10^{10}$ and the very large regime above) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).
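To illustrate the mollifier mechanism schematically (a toy rendering under my own simplifications, not the precise mollifier used in the paper): multiplying the Dirichlet sum by a finite Euler factor cancels its leading oscillating coefficients, e.g.
$$\left(1 - \frac{b_2^t}{2^{s_*}}\right) \sum_{n=1}^{N} \frac{b_n^t}{n^{s_*}} = 1 + \sum_{n=3}^{2N} \frac{\tilde b_n^t}{n^{s_*}}, \qquad \tilde b_n^t := b_n^t - b_2^t b_{n/2}^t 1_{2 | n}$$
(up to boundary terms with $n > N$). The $n = 2$ coefficient cancels exactly, since $\tilde b_2^t = b_2^t - b_2^t b_1^t = 0$, and the coefficients at other small even $n$ are strongly damped because $b_n^t$ varies slowly; multiplying by further factors $1 - b_p^t p^{-s_*}$ for small primes $p$ similarly suppresses the early terms that contribute most of the oscillation.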
The most difficult computational task is the verification of the barrier (3), particularly when $t$ is close to zero, where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to $H_t$ to decide where to place the barrier in order to make our numerical approximation to $H_t$ as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of $t$ and $y$, we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on $t$ and $y$ and so can be re-used for multiple choices of $t$ and $y$ after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers $t_0$ and $y_0$ to optimise the value of $\Lambda$ accordingly, one could get the upper bound down to about $0.1$, but only assuming an enormous numerical verification of RH (up to a far greater height than is currently feasible) and a very large distributed computing project to perform the other numerical verifications.
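Here is a stripped-down sketch of the Taylor-expansion trick (my own toy version with stand-in parameters, not the project’s actual code), factoring the $t$-dependence out of one of the sums in (4) so that the expensive sums over $n$ are computed only once:

```python
import numpy as np
from math import factorial

# Evaluate S(t) = sum_{n<=N} exp(t/4 * log^2 n) * n^{-s} for many values of t
# by Taylor-expanding the t-dependent coefficients: the n-sums
#   A_k = (1/k!) * sum_n (log^2 n / 4)^k * n^{-s}
# are independent of t and are computed in a single pass.

N = 10**5
s = 0.6 + 3.0e7j                        # stand-in value; illustrative only
n = np.arange(1, N + 1, dtype=np.float64)
c = n ** (-s)                           # one-time computation, reused for all t
L = (np.log(n) ** 2) / 4.0

K = 30                                  # Taylor order; ample for 0 <= t <= 0.2
A = np.array([np.sum(L**k * c) / factorial(k) for k in range(K)])

def S_taylor(t):
    return np.polyval(A[::-1], t)       # sum_k A_k t^k

def S_direct(t):
    return np.sum(np.exp(t * L) * c)

for t in (0.0, 0.1, 0.2):
    print(t, abs(S_taylor(t) - S_direct(t)))
```

The same factoring works in the $y$ variable, and in the actual verification such precomputed sums are re-used across the whole mesh of $(t, y)$ values in the barrier.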
This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.
[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]
The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians. It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia. The most high-profile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule. In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.
For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); they have also decided how to appropriately subdivide the entire field of mathematics into sections. Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one which past Program Committees have managed to fulfill in a largely satisfactory fashion.
Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time. At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program. On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).
The first Structure Committee was constituted on 1 Jan 2019, with the following members:
- Terence Tao [Chair from 15 Feb, 2019]
- Carlos Kenig [IMU President (from 1 Jan 2019), ex officio]
- Nalini Anantharaman
- Alexei Borodin
- Annalisa Buffa
- Hélène Esnault [from 21 Mar, 2019]
- Irene Fonseca
- János Kollár [until 21 Mar, 2019]
- Laci Lovász [Chair until 15 Feb, 2019]
- Terry Lyons
- Stephane Mallat
- Hiraku Nakajima
- Éva Tardos
- Peter Teichner
- Akshay Venkatesh
- Anna Wienhard
As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit. Among the specific questions (in no particular order) for which we seek comments are the following:
- Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
- Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
- What is the correct balance between research and exposition in the lectures? For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers? Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
- Is the balance between plenary talks, sectional talks, and public talks at an optimal level? There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
- The ICM is generally perceived to be more important to pure mathematics than to applied mathematics. In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
- Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be under-represented at the ICM? What, if anything, can be done to mitigate these barriers?
Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions. Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions. We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee. We would also be interested in having readers share their personal experiences at past congresses, and how it compares with other major conferences of this type. (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)
While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.
The problem involves word maps on a matrix group, which for sake of discussion we will take to be the special orthogonal group $SO(3)$ of real $3 \times 3$ matrices (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach-Tarski paradox). Given any abstract word $w$ in two generators $g_1, g_2$ and their inverses (i.e., an element of the free group $F_2$), one can define the word map $w: SO(3) \times SO(3) \to SO(3)$ simply by substituting a pair of matrices in $SO(3)$ into these generators. For instance, if one has the word $w = g_1 g_2 g_1^{-1} g_2^{-1}$, then the corresponding word map $w: SO(3) \times SO(3) \to SO(3)$ is given by
$$w(A, B) := A B A^{-1} B^{-1}$$
for $A, B \in SO(3)$. Because $SO(3)$ contains a copy of the free group, we see that the word map is non-trivial (not identically equal to the identity) if and only if the word itself is nontrivial.
Anyway, here is the problem:
Problem. Does there exist a sequence $w_1, w_2, w_3, \dots$ of non-trivial word maps $w_n: SO(3) \times SO(3) \to SO(3)$ that converge uniformly to the identity map?
To put it another way, given any $\varepsilon > 0$, does there exist a non-trivial word $w$ such that
$$\| w(A,B) - 1 \| \leq \varepsilon$$
for all $A, B \in SO(3)$, where $\| \cdot \|$ denotes (say) the operator norm, and $1$ denotes the identity matrix in $SO(3)$?
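For readers who would like to experiment before peeking at the comments, here is a minimal numerical sketch (my own; the word strings below are arbitrary examples) for evaluating word maps on $SO(3)$ and estimating $\sup_{A,B} \|w(A,B) - 1\|$ by random sampling, which of course only gives a lower bound on the supremum:

```python
import numpy as np

def rotation(axis, theta):
    # rotation in SO(3) about a unit axis by angle theta (Rodrigues' formula)
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def word_map(word, A, B):
    # word is a string over 'a','A','b','B', with 'A' = a^{-1}, 'B' = b^{-1};
    # in SO(3) the inverse of a matrix is its transpose
    table = {'a': A, 'A': A.T, 'b': B, 'B': B.T}
    M = np.eye(3)
    for ch in word:
        M = M @ table[ch]
    return M

def deviation(word, samples=2000, seed=0):
    # crude estimate of sup ||w(A,B) - 1|| over random pairs (A, B)
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(samples):
        A = rotation(rng.standard_normal(3), rng.uniform(0, 2 * np.pi))
        B = rotation(rng.standard_normal(3), rng.uniform(0, 2 * np.pi))
        worst = max(worst, np.linalg.norm(word_map(word, A, B) - np.eye(3), 2))
    return worst

print(deviation("abAB"))      # the commutator [a, b]
print(deviation("abABaBAb"))  # a longer word, for comparison
```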
As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.
This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
There are currently two strands of activity. One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound $\Lambda \leq 0.22$. The latest version of the writeup may be found here, in this directory. The theoretical side of things has mostly been written up; the main remaining tasks to do right now are
- giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes a zero-free region for $H_t(x+iy)$ for $x$ in a narrow window near $6 \times 10^{10}$, and the Dirichlet series bound that establishes a zero-free region for all larger $x$; and
- giving more detail on the conditional results assuming more numerical verification of RH.
Meanwhile, several of us have been exploring the behaviour of the zeroes of $H_t$ for negative $t$; this does not directly lead to any new progress on bounding $\Lambda$ (though there is a good chance that it may simplify the proof of $\Lambda \leq 0.22$), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides. One phenomenon is that for large negative $t$, many of the complex zeroes begin to organise themselves near an explicit family of curves. (An example of the agreement between the zeroes and these curves may be found here.) We now have a (heuristic) theoretical explanation for this: we should have an approximation of $H_t$ in this region by a certain series (whose terms are defined in equations (11), (15), (17) of the writeup), and the above curves arise from (an approximation of) those locations where two adjacent terms in this series have equal magnitude (with the other terms being of lower order).
However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative $t$. For instance, the surviving zeroes at extremely negative values of $t$ appear to lie on the curve where a certain explicit quantity is close to a half-integer, while the remaining zeroes exhibit a pattern that is approximately 1-periodic in a suitably rescaled coordinate. A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.
We do not yet have a total explanation of the phenomena seen in this picture. It appears that we have an approximation that expresses $H_t$ in this regime as a non-zero multiplier times a more tractable series; the derivation of this formula may be found in this wiki page. However, our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture). We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for the equation $H_t(x+iy) = 0$ have been streamlined, made faster, and extended to larger heights than were previously possible. The best bound for $\Lambda$ now depends on the height to which one is willing to assume the Riemann hypothesis. Using the conservative verification up to height (slightly larger than) $3 \times 10^{10}$, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at $\Lambda \leq 0.22$. Using the verification up to height $2.4 \times 10^{12}$ claimed by Gourdon-Demichel, this improves slightly, and if one assumes the Riemann hypothesis up to even greater heights the bound improves further, contingent on a numerical computation that is still underway. (See the table below the fold for more data of this form.) This is broadly consistent with the expectation that the bound on $\Lambda$ should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.
As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project. (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of $H_t$ for negative values of $t$, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)
Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.
This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant $\Lambda$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.
We have now tentatively improved the upper bound of the de Bruijn-Newman constant to $\Lambda \leq 0.22$. Among the technical improvements in our approach, we are now able to use Taylor expansions to efficiently compute our effective approximation to $H_t(x+iy)$ for many values of $x+iy$ in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that $H_t(x+iy)$ behaves somewhat like a partial Euler product over small primes, we were able to find a good location to place the barrier in which $H_t(x+iy)$ is larger than average, hence easier to keep away from zero.
The main remaining bottleneck is that of computing the Euler mollifier bounds that keep $H_t(x+iy)$ bounded away from zero for the larger values of $x$ beyond the barrier. In going below $\Lambda \leq 0.22$ we are beginning to need quite complicated mollifiers with somewhat poor tail behavior; we may be reaching the point where none of our bounds will succeed in keeping $H_t$ bounded away from zero, so we may be close to the natural limits of our methods.
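As a toy version of the barrier-placement heuristic (my own sketch; the exponent normalization $s = \frac{1+y+ix}{2}$ is an assumption for illustration only, and the writeup has the precise weighted version), one can scan a window of $x$ values and look for where the magnitude of a partial Euler product is largest:

```python
import numpy as np

P = 100
PRIMES = np.array([p for p in range(2, P + 1)
                   if all(p % q for q in range(2, int(p**0.5) + 1))],
                  dtype=np.float64)

def euler_product(x, y=0.2):
    # partial Euler product prod_{p <= P} (1 - p^{-s})^{-1} at s = (1+y+ix)/2;
    # this normalization is an assumption for illustration only
    s = (1.0 + y + 1j * x) / 2.0
    return np.prod(1.0 / (1.0 - PRIMES ** (-s)))

# scan a window of x values near the barrier region for a peak in magnitude
xs = np.linspace(6.0e10, 6.0e10 + 2.0e4, 20001)
vals = np.abs(np.array([euler_product(x) for x in xs]))
print("largest |Euler product|", vals.max(), "at x =", xs[vals.argmax()])
```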
Participants are also welcome to add any further summaries of the situation in the comments below.