
A few months ago I posted a question about analytic functions that I received from a bright high school student, which turned out to have already been studied and resolved by de Bruijn. Based on this positive resolution, I thought I might try my luck again and list three further questions that this student asked which do not seem to be trivially resolvable.

1. Does there exist a smooth function ${f: {\bf R} \rightarrow {\bf R}}$ which is nowhere analytic, but is such that the Taylor series ${\sum_{n=0}^\infty \frac{f^{(n)}(x_0)}{n!} (x-x_0)^n}$ converges for every ${x, x_0 \in {\bf R}}$? (Of course, this series would not then converge to ${f}$, but instead to some analytic function ${f_{x_0}(x)}$ for each ${x_0}$.) I have a vague feeling that perhaps the Baire category theorem should be able to resolve this question, but it seems to require a bit of effort. (A small numerical experiment illustrating why the classical examples do not settle this question is sketched after this list.) (Update: answered by Alexander Shaposhnikov in comments.)
2. Is there a function ${f: {\bf R} \rightarrow {\bf R}}$ which meets every polynomial ${P: {\bf R} \rightarrow {\bf R}}$ to infinite order in the following sense: for every polynomial ${P}$, there exists ${x_0}$ such that ${f^{(n)}(x_0) = P^{(n)}(x_0)}$ for all ${n=0,1,2,\dots}$? Such a function would be rather pathological, perhaps resembling a space-filling curve. (Update: solved for smooth ${f}$ by Aleksei Kulikov in comments. The situation currently remains unclear in the general case.)
3. Is there a power series ${\sum_{n=0}^\infty a_n x^n}$ that diverges everywhere (except at ${x=0}$), but which becomes pointwise convergent after dividing each of the monomials ${a_n x^n}$ into pieces ${a_n x^n = \sum_{j=1}^\infty a_{n,j} x^n}$ for some ${a_{n,j}}$ summing absolutely to ${a_n}$, and then rearranging, i.e., there is some rearrangement ${\sum_{m=1}^\infty a_{n_m, j_m} x^{n_m}}$ of ${\sum_{n=0}^\infty \sum_{j=1}^\infty a_{n,j} x^n}$ that is pointwise convergent for every ${x}$? (Update: solved by Jacob Manaker in comments.)
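As some context for Question 1, one can experiment numerically with a classical example of a smooth, nowhere analytic function, such as the lacunary series ${f(x) = \sum_{k=1}^\infty e^{-\sqrt{2^k}} \cos(2^k x)}$ (nowhere analyticity follows from standard results on gap series). The sketch below, which is purely illustrative and assumes this particular choice of example, estimates the radius of convergence of the Taylor series of ${f}$ at the origin via the root test; the estimates shrink towards zero, so this standard example fails to have the convergence property asked for in Question 1.

```python
import math

# The lacunary series f(x) = sum_{k>=1} exp(-sqrt(2^k)) cos(2^k x) is a
# standard example of a smooth, nowhere analytic function.  Its Taylor
# coefficients at the origin are
#   c_{2n} = f^{(2n)}(0)/(2n)! = (-1)^n sum_k exp(-sqrt(2^k)) 2^(2nk) / (2n)!
# and the radius of convergence there is 1/limsup |c_{2n}|^{1/(2n)}.
# All sums are done in log space to avoid overflow.

def log_abs_coeff(n, kmax=120):
    """log |c_{2n}|, with the k-sum truncated at kmax."""
    logs = [2 * n * k * math.log(2) - math.sqrt(2.0 ** k)
            for k in range(1, kmax)]
    m = max(logs)
    log_sum = m + math.log(sum(math.exp(L - m) for L in logs))
    return log_sum - math.lgamma(2 * n + 1)    # divide by (2n)!

for n in (10, 20, 40, 80):
    print(n, math.exp(-log_abs_coeff(n) / (2 * n)))   # root-test radius estimate
```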

Feel free to post answers or other thoughts on these questions in the comments.

I was asked the following interesting question by a bright high school student I am working with, to which I did not immediately know the answer:

Question 1 Does there exist a smooth function ${f: {\bf R} \rightarrow {\bf R}}$ which is not real analytic, but such that all the differences ${x \mapsto f(x+h) - f(x)}$ are real analytic for every ${h \in {\bf R}}$?

The hypothesis implies that the Newton quotients ${\frac{f(x+h)-f(x)}{h}}$ are real analytic for every ${h \neq 0}$. If analyticity were preserved by smooth limits, this would imply that ${f'}$ is real analytic, which would make ${f}$ real analytic. However, we are not assuming any uniformity in the analyticity of the Newton quotients, so this simple argument does not seem to resolve the question immediately.

In the case that ${f}$ is periodic, say periodic with period ${1}$, one can answer the question in the negative by Fourier series. Perform a Fourier expansion ${f(x) = \sum_{n \in {\bf Z}} c_n e^{2\pi i nx}}$. If ${f}$ is not real analytic, then there is a sequence ${n_j}$ going to infinity such that ${|c_{n_j}| = e^{-o(n_j)}}$ as ${j \rightarrow \infty}$. From the Borel-Cantelli lemma one can then find a real number ${h}$ such that ${|e^{2\pi i h n_j} - 1| \gg \frac{1}{n^2_j}}$ (say) for infinitely many ${j}$, hence ${|(e^{2\pi i h n_j} - 1) c_{n_j}| \gg n_j^{-2} e^{-o(n_j)}}$ for infinitely many ${j}$. Thus the Fourier coefficients of ${x \mapsto f(x+h) - f(x)}$ do not decay exponentially and hence this function is not analytic, a contradiction.
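To illustrate the Borel-Cantelli step numerically, take the purely illustrative lacunary choice ${n_j = 2^j}$: for random ${h}$, the event ${|e^{2\pi i h n_j} - 1| < 1/n_j^2}$ confines ${h n_j}$ modulo ${1}$ to a set of measure ${O(1/n_j^2)}$, which is summable in ${j}$, so almost every ${h}$ triggers only finitely many of these events. The sketch below samples random ${h}$ and confirms that the number of such “small” events stays bounded.

```python
import cmath
import random

# Borel-Cantelli illustration with the lacunary choice n_j = 2^j: for
# random h, the event |exp(2 pi i h n_j) - 1| < 1/n_j^2 has probability
# O(1/n_j^2), which is summable in j, so almost every h sees only
# finitely many such events and hence satisfies the lower bound
# |exp(2 pi i h n_j) - 1| >> 1/n_j^2 for all but finitely many j.

random.seed(0)
ns = [2 ** j for j in range(1, 20)]
worst = 0
for _ in range(1000):
    h = random.random()
    small_events = sum(
        1 for n in ns
        if abs(cmath.exp(2j * cmath.pi * h * n) - 1) < 1.0 / n ** 2)
    worst = max(worst, small_events)
print("most 'small' events seen for any sampled h:", worst)
```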

I was not able to quickly resolve the non-periodic case, but I thought perhaps this might be a good problem to crowdsource, so I invite readers to contribute their thoughts on this problem here. In the spirit of the polymath projects, I would encourage comments that contain thoughts that fall short of a complete solution, in the event that some other reader may be able to take the thought further.

After some discussion with the applied math research groups here at UCLA (in particular the groups led by Andrea Bertozzi and Deanna Needell), one of the members of these groups, Chris Strohmeier, has produced a proposal for a Polymath project to crowdsource in a single repository (a) a collection of public data sets relating to the COVID-19 pandemic, (b) requests for such data sets, (c) requests for data cleaning of such sets, and (d) submissions of cleaned data sets.  (The proposal can be viewed as a PDF, and is also available on Overleaf).  As mentioned in the proposal, this database would be slightly different in focus than existing data sets such as the COVID-19 data sets hosted on Kaggle, with a focus on producing high quality cleaned data sets.  (Another relevant data set that I am aware of is the SafeGraph aggregated foot traffic data, although this data set, while open, is not quite public, as it requires executing a non-commercial agreement.  Feel free to mention further relevant data sets in the comments.)

This seems like a very interesting and timely proposal to me and I would like to open it up for discussion, for instance by proposing some seed requests for data and data cleaning and by discussing possible platforms that such a repository could be built on.  In the spirit of “building the plane while flying it”, one could begin by creating a basic github repository as a prototype and use the comments in this blog post to handle requests, and then migrate to a higher quality platform once it becomes clear what direction this project might move in.  (For instance one might eventually move beyond data cleaning to more sophisticated types of data analysis.)

UPDATE, Mar 25: a prototype page for such a clearinghouse is now up at this wiki page.

UPDATE, Mar 27: the data cleaning aspect of this project largely duplicates the existing efforts at the United against COVID-19 project, so we are redirecting requests of this type to that project (and specifically to their data discourse page).  The polymath proposal will now refocus on crowdsourcing a list of public data sets relating to the COVID-19 pandemic.

[UPDATE, Feb 1, 2021: the strategy sketched out below has been successfully implemented to rigorously obtain the desired implication in this recent preprint of Giulio Bresciani.]
I recently came across this question on MathOverflow asking if there are any polynomials ${P}$ of two variables with rational coefficients, such that the map ${P: {\bf Q} \times {\bf Q} \rightarrow {\bf Q}}$ is a bijection. The answer to this question is almost surely “no”, but it is remarkable how hard this problem resists any attempt at rigorous proof. (MathOverflow users with enough privileges to see deleted answers will find that there are no fewer than seventeen deleted attempts at a proof in response to this question!)
On the other hand, the one surviving response to the question does point out this paper of Poonen which shows that assuming a powerful conjecture in Diophantine geometry known as the Bombieri-Lang conjecture (discussed in this previous post), it is at least possible to exhibit polynomials ${P: {\bf Q} \times {\bf Q} \rightarrow {\bf Q}}$ which are injective.
I believe that it should be possible to also rule out the existence of bijective polynomials ${P: {\bf Q} \times {\bf Q} \rightarrow {\bf Q}}$ if one assumes the Bombieri-Lang conjecture, and have sketched out a strategy to do so, but filling in the gaps requires a fair bit more algebraic geometry than I am capable of. So as a sort of experiment, I would like to see if a rigorous implication of this form (similarly to the rigorous implication of the Erdos-Ulam conjecture from the Bombieri-Lang conjecture in my previous post) can be crowdsourced, in the spirit of the polymath projects (though I feel that this particular problem should be significantly quicker to resolve than a typical such project).
Here is how I imagine a Bombieri-Lang-powered resolution of this question should proceed (modulo a large number of unjustified and somewhat vague steps that I believe to be true but have not established rigorously). Suppose for contradiction that we have a bijective polynomial ${P: {\bf Q} \times {\bf Q} \rightarrow {\bf Q}}$. Then for any polynomial ${Q: {\bf Q} \rightarrow {\bf Q}}$ of one variable, the surface

$\displaystyle S_Q := \{ (x,y,z) \in \mathbb{A}^3: P(x,y) = Q(z) \}$

has infinitely many rational points; indeed, every rational ${z \in {\bf Q}}$ lifts to exactly one rational point in ${S_Q}$. I believe that for “typical” ${Q}$ this surface ${S_Q}$ should be irreducible. One can now split into two cases:

• (a) The rational points in ${S_Q}$ are Zariski dense in ${S_Q}$.
• (b) The rational points in ${S_Q}$ are not Zariski dense in ${S_Q}$.

Consider case (b) first. By definition, this case asserts that the rational points in ${S_Q}$ are contained in a finite number of algebraic curves. By Faltings’ theorem (a special case of the Bombieri-Lang conjecture), any curve of genus two or higher only contains a finite number of rational points. So all but finitely many of the rational points in ${S_Q}$ are contained in a finite union of genus zero and genus one curves. I think all genus zero curves are birational to a line, and all the genus one curves are birational to an elliptic curve (though I don’t have an immediate reference for this). These curves ${C}$ can each have infinitely many rational points, but very few of them should have “enough” rational points ${C \cap {\bf Q}^3}$ that their projection ${\pi(C \cap {\bf Q}^3) := \{ z \in {\bf Q} : (x,y,z) \in C \hbox{ for some } x,y \in {\bf Q} \}}$ to the third coordinate is “large”. In particular, I believe

• (i) If ${C \subset {\mathbb A}^3}$ is birational to an elliptic curve, then the number of elements of ${\pi(C \cap {\bf Q}^3)}$ of height at most ${H}$ should grow at most polylogarithmically in ${H}$ (i.e., be of order ${O( \log^{O(1)} H )}$).
• (ii) If ${C \subset {\mathbb A}^3}$ is birational to a line but not of the form ${\{ (f(z), g(z), z) \}}$ for some rational ${f,g}$, then the number of elements of ${\pi(C \cap {\bf Q}^3)}$ of height at most ${H}$ should grow slower than ${H^2}$ (in fact I think it can only grow like ${O(H)}$; a toy numerical count illustrating this is sketched just after this list).
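As a toy numerical illustration of (ii), consider the hypothetical curve ${C = \{(t,t,t^2)\}}$, which is birational to a line but not of the form ${\{ (f(z), g(z), z) \}}$ (since ${t \mapsto t^2}$ has no rational-function inverse). The sketch below counts the elements of ${\pi(C \cap {\bf Q}^3) = \{ t^2: t \in {\bf Q}\}}$ of height at most ${H}$, where the height of ${p/q}$ in lowest terms is ${\max(|p|,q)}$; the count appears to grow linearly in ${H}$ rather than quadratically.

```python
from fractions import Fraction
from math import gcd, isqrt

# Toy check of belief (ii) for the hypothetical curve C = {(t, t, t^2)}.
# Since height(t^2) = height(t)^2 for t in lowest terms, the points of
# the projection {t^2 : t in Q} of height at most H come exactly from
# the rationals t of height at most isqrt(H).

def rationals_up_to_height(B):
    for q in range(1, B + 1):
        for p in range(-B, B + 1):
            if gcd(abs(p), q) == 1:
                yield Fraction(p, q)

for H in (10, 100, 1000):
    values = {t * t for t in rationals_up_to_height(isqrt(H))}
    print(f"H = {H}: {len(values)} points; compare H = {H} and H^2 = {H * H}")
```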

I do not have proofs of these results (though I think something similar to (i) can be found in Knapp’s book, and (ii) should basically follow by using a rational parameterisation ${\{(f(t),g(t),h(t))\}}$ of ${C}$ with ${h}$ nonlinear). Assuming these assertions, this would mean that there is a curve of the form ${\{ (f(z),g(z),z)\}}$ that captures a “positive fraction” of the rational points of ${S_Q}$, as measured by restricting the height of the third coordinate ${z}$ to lie below a large threshold ${H}$, computing density, and sending ${H}$ to infinity (taking a limit superior). I believe this forces an identity of the form

$\displaystyle P(f(z), g(z)) = Q(z) \ \ \ \ \ (1)$

for all ${z}$. Such identities are certainly possible for some choices of ${Q}$ (e.g. ${Q(z) = P(F(z), G(z))}$ for arbitrary polynomials ${F,G}$ of one variable) but I believe that the only way that such identities hold for a “positive fraction” of ${Q}$ (as measured using height as before) is if there is in fact a rational identity of the form

$\displaystyle P( f_0(z), g_0(z) ) = z$

for some rational functions ${f_0,g_0}$ with rational coefficients (in which case we would have ${f = f_0 \circ Q}$ and ${g = g_0 \circ Q}$). But such an identity would contradict the hypothesis that ${P}$ is bijective, since one can take a rational point ${(x,y)}$ outside of the curve ${\{ (f_0(z), g_0(z)): z \in {\bf Q} \}}$, and set ${z := P(x,y)}$, in which case we have ${P(x,y) = P(f_0(z), g_0(z) )}$ violating the injective nature of ${P}$. Thus, modulo a lot of steps that have not been fully justified, we have ruled out the scenario in which case (b) holds for a “positive fraction” of ${Q}$.
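The mechanics of this last step can be checked symbolically on a toy example. In the sketch below, the choices ${P(x,y) = x+y}$, ${f_0(z) = z}$, ${g_0(z) = 0}$ are placeholders purely for illustration (this ${P}$ is of course not a candidate bijection); the point is just that any identity ${P(f_0(z), g_0(z)) = z}$ mechanically produces a collision.

```python
import sympy as sp

# Toy check: if P(f0(z), g0(z)) = z identically, then P cannot be
# injective on Q x Q.  P, f0, g0 are illustrative placeholders.
x, y, z = sp.symbols('x y z')
P = x + y
f0, g0 = z, sp.Integer(0)

# verify the identity P(f0(z), g0(z)) = z
assert sp.simplify(P.subs({x: f0, y: g0}) - z) == 0

# take a rational point (x0, y0) off the curve {(f0(z), g0(z))} ...
x0, y0 = sp.Rational(1), sp.Rational(1)   # (1, 1) is not of the form (z, 0)
z0 = P.subs({x: x0, y: y0})               # ... and set z0 := P(x0, y0) = 2

# two distinct rational points with the same image under P:
print((x0, y0), '->', P.subs({x: x0, y: y0}))
print((f0.subs(z, z0), g0), '->', P.subs({x: f0.subs(z, z0), y: g0}))
```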
This leaves the scenario in which case (a) holds for a “positive fraction” of ${Q}$. Assuming the Bombieri-Lang conjecture, this implies that for such ${Q}$, any resolution of singularities of ${S_Q}$ fails to be of general type. I would imagine that this places some very strong constraints on ${P,Q}$, since I would expect the equation ${P(x,y) = Q(z)}$ to describe a surface of general type for “generic” choices of ${P,Q}$ (after resolving singularities). However, I do not have a good set of techniques for detecting whether a given surface is of general type or not. Presumably one should proceed by viewing the surface ${\{ (x,y,z): P(x,y) = Q(z) \}}$ as a fibre product of the simpler surface ${\{ (x,y,w): P(x,y) = w \}}$ and the curve ${\{ (z,w): Q(z) = w \}}$ over the line ${\{w \}}$. In any event, I believe the way to handle (a) is to show that the failure of general type of ${S_Q}$ implies some strong algebraic constraint between ${P}$ and ${Q}$ (something in the spirit of (1), perhaps), and then use this constraint to rule out the bijectivity of ${P}$ by some further ad hoc method.

The Polymath15 paper “Effective approximation of heat flow evolution of the Riemann ${\xi}$ function, and a new upper bound for the de Bruijn-Newman constant“, submitted to Research in the Mathematical Sciences, has just been uploaded to the arXiv. This paper records the mix of theoretical and computational work needed to improve the upper bound on the de Bruijn-Newman constant ${\Lambda}$. This constant can be defined as follows. The function

$\displaystyle H_0(z) := \frac{1}{8} \xi\left(\frac{1}{2} + \frac{iz}{2}\right),$

where ${\xi}$ is the Riemann ${\xi}$ function

$\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma\left(\frac{s}{2}\right) \zeta(s)$

has a Fourier representation

$\displaystyle H_0(z) = \int_0^\infty \Phi(u) \cos(zu)\ du$

where ${\Phi}$ is the super-exponentially decaying function

$\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3\pi n^2 e^{5u} ) \exp(-\pi n^2 e^{4u} ).$

The Riemann hypothesis is equivalent to the claim that all the zeroes of ${H_0}$ are real. De Bruijn introduced (in different notation) the deformations

$\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du$

of ${H_0}$; one can view this as the solution to the backwards heat equation ${\partial_t H_t = -\partial_{zz} H_t}$ starting at ${H_0}$. From the work of de Bruijn and of Newman, it is known that there exists a real number ${\Lambda}$ – the de Bruijn-Newman constant – such that ${H_t}$ has all zeroes real for ${t \geq \Lambda}$ and has at least one non-real zero for ${t < \Lambda}$. In particular, the Riemann hypothesis is equivalent to the assertion ${\Lambda \leq 0}$. Prior to this paper, the best known bounds for this constant were

$\displaystyle 0 \leq \Lambda < 1/2$

with the lower bound due to Rodgers and myself, and the upper bound due to Ki, Kim, and Lee. One of the main results of the paper is to improve the upper bound to

$\displaystyle \Lambda \leq 0.22. \ \ \ \ \ (1)$

At a purely numerical level this gets “closer” to proving the Riemann hypothesis, but the methods of proof take as input a finite numerical verification of the Riemann hypothesis up to some given height ${T}$ (in our paper we take ${T \sim 3 \times 10^{10}}$) and convert this (and some other numerical verification) to an upper bound on ${\Lambda}$ that is of order ${O(1/\log T)}$. As discussed in the final section of the paper, further improvement of the numerical verification of RH would thus lead to modest improvements in the upper bound on ${\Lambda}$, although it does not seem likely that our methods could for instance improve the bound to below ${0.1}$ without an infeasible amount of computation.
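As a quick sanity check on the definitions above, one can evaluate the Fourier integral for ${H_t}$ directly by numerical quadrature. The sketch below is purely illustrative (the truncations of the series for ${\Phi}$ and of the integral, and the tightened tolerances, are ad hoc choices); it recovers ${H_0(0) = \xi(1/2)/8 \approx 0.0621}$ and exhibits the sign change of ${H_0}$ near ${z \approx 28.27}$, which is twice the imaginary part of the first zero of ${\zeta}$.

```python
import math

from scipy.integrate import quad

def Phi(u, nmax=10):
    """The super-exponentially decaying function Phi(u); the series
    converges so quickly that ten terms are already overkill."""
    return sum((2 * math.pi ** 2 * n ** 4 * math.exp(9 * u)
                - 3 * math.pi * n ** 2 * math.exp(5 * u))
               * math.exp(-math.pi * n ** 2 * math.exp(4 * u))
               for n in range(1, nmax + 1))

def H(t, z, umax=3.0):
    """H_t(z) for real z, by quadrature of the heat-deformed Fourier
    integral; the integrand is negligible beyond u ~ 2."""
    val, _ = quad(lambda u: math.exp(t * u * u) * Phi(u) * math.cos(z * u),
                  0, umax, epsabs=1e-14, limit=200)
    return val

print(H(0, 0))          # xi(1/2)/8, approximately 0.0621
for z in (28.0, 28.2695, 28.5):
    print(z, H(0, z))   # sign change near z = 2 * 14.1347...
print(H(0.2, 28.2695))  # the heat-deformed function at the same point
```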

We now discuss the methods of proof. An existing result of de Bruijn shows that if all the zeroes of ${H_{t_0}(z)}$ lie in the strip ${\{ x+iy: |y| \leq y_0\}}$, then ${\Lambda \leq t_0 + \frac{1}{2} y_0^2}$; we will verify this hypothesis with ${t_0=y_0=0.2}$, thus giving (1). Using the symmetries and the known zero-free regions, it suffices to show that

$\displaystyle H_{0.2}(x+iy) \neq 0 \ \ \ \ \ (2)$

whenever ${x \geq 0}$ and ${0.2 \leq y \leq 1}$.

For large ${x}$ (specifically, ${x \geq 6 \times 10^{10}}$), we use effective numerical approximation to ${H_t(x+iy)}$ to establish (2), as discussed in a bit more detail below. For smaller values of ${x}$, the existing numerical verification of the Riemann hypothesis (we use the results of Platt) shows that

$\displaystyle H_0(x+iy) \neq 0$

for ${0 \leq x \leq 6 \times 10^{10}}$ and ${0.2 \leq y \leq 1}$. The problem though is that this result only controls ${H_t}$ at time ${t=0}$ rather than the desired time ${t = 0.2}$. To bridge the gap we need to erect a “barrier” that, roughly speaking, verifies that

$\displaystyle H_t(x+iy) \neq 0 \ \ \ \ \ (3)$

for ${0 \leq t \leq 0.2}$, ${x = 6 \times 10^{10} + O(1)}$, and ${0.2 \leq y \leq 1}$; with a little bit of work this barrier shows that zeroes cannot sneak in from the right of the barrier to the left in order to produce counterexamples to (2) for small ${x}$.

To enforce this barrier, and to verify (2) for large ${x}$, we need to approximate ${H_t(x+iy)}$ for positive ${t}$. Our starting point is the Riemann-Siegel formula, which roughly speaking is of the shape

$\displaystyle H_0(x+iy) \approx B_0(x+iy) ( \sum_{n=1}^N \frac{1}{n^{\frac{1+y-ix}{2}}} + \gamma_0(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\frac{1+y+ix}{2}}} )$

where ${N := \sqrt{x/4\pi}}$, ${B_0(x+iy)}$ is an explicit “gamma factor” that decays exponentially in ${x}$, and ${\gamma_0(x+iy)}$ is a ratio of gamma functions that is roughly of size ${(x/4\pi)^{-y/2}}$. Deforming this by the heat flow gives rise to an approximation roughly of the form

$\displaystyle H_t(x+iy) \approx B_t(x+iy) ( \sum_{n=1}^N \frac{b_n^t}{n^{s_*}} + \gamma_t(x+iy) \sum_{n=1}^N \frac{n^y}{n^{\overline{s_*}}} ) \ \ \ \ \ (4)$

where ${B_t(x+iy)}$ and ${\gamma_t(x+iy)}$ are variants of ${B_0(x+iy)}$ and ${\gamma_0(x+iy)}$, ${b_n^t := \exp( \frac{t}{4} \log^2 n )}$, and ${s_*}$ is an exponent which is roughly ${\frac{1+y-ix}{2} + \frac{t}{4} \log \frac{x}{4\pi}}$. In particular, for positive values of ${t}$, ${s_*}$ increases (logarithmically) as ${x}$ increases, and the two sums in the Riemann-Siegel formula become increasingly convergent (even in the face of the slowly increasing coefficients ${b_n^t}$). For very large values of ${x}$ (in the range ${x \geq \exp(C/t)}$ for a large absolute constant ${C}$), the ${n=1}$ terms of both sums dominate, and ${H_t(x+iy)}$ begins to behave in a sinusoidal fashion, with the zeroes “freezing” into an approximate arithmetic progression on the real line much like the zeroes of the sine or cosine functions (we give some asymptotic theorems that formalise this “freezing” effect). This lets one verify (2) for extremely large values of ${x}$ (e.g., ${x \geq 10^{12}}$). For slightly less large values of ${x}$, we first multiply the Riemann-Siegel formula by an “Euler product mollifier” to reduce some of the oscillation in the sum and make the series converge better; we also use a technical variant of the triangle inequality to improve the bounds slightly. These are sufficient to establish (2) for moderately large ${x}$ (say ${x \geq 6 \times 10^{10}}$) with only a modest amount of computational effort (a few seconds after all the optimisations; on my own laptop with very crude code I was able to verify all the computations in a matter of minutes).
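To see this improving convergence numerically, one can tabulate the total magnitude of the ${n \geq 2}$ terms of the first sum in (4), using only the rough formula for ${s_*}$ given above (the factors ${B_t}$ and ${\gamma_t}$ are omitted, so this is a sketch of the shape of the sum rather than of ${H_t}$ itself):

```python
import math

# Shape of the first sum in (4): with b_n^t = exp((t/4) log^2 n) and
# the rough exponent Re(s_*) ~ (1+y)/2 + (t/4) log(x/(4 pi)), the n = 1
# term has magnitude exactly 1, while the combined magnitude of the
# remaining terms shrinks as x grows, since the effective exponent
# increases logarithmically in x.

def tail_of_first_sum(t, x, y):
    N = int(math.sqrt(x / (4 * math.pi)))
    re_s = (1 + y) / 2 + (t / 4) * math.log(x / (4 * math.pi))
    return sum(math.exp((t / 4) * math.log(n) ** 2 - re_s * math.log(n))
               for n in range(2, N + 1))

for x in (1e7, 1e9, 1e11, 1e12):
    print(f"x = {x:.0e}: sum of |n>=2 terms| = {tail_of_first_sum(0.2, x, 0.4):.3f}")
```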

The most difficult computational task is the verification of the barrier (3), particularly when ${t}$ is close to zero where the series in (4) converge quite slowly. We first use an Euler product heuristic approximation to ${H_t(x+iy)}$ to decide where to place the barrier in order to make our numerical approximation to ${H_t(x+iy)}$ as large in magnitude as possible (so that we can afford to work with a sparser set of mesh points for the numerical verification). In order to efficiently evaluate the sums in (4) for many different values of ${x+iy}$, we perform a Taylor expansion of the coefficients to factor the sums as combinations of other sums that do not actually depend on ${x}$ and ${y}$ and so can be re-used for multiple choices of ${x+iy}$ after a one-time computation. At the scales we work in, this computation is still quite feasible (a handful of minutes after software and hardware optimisations); if one assumes larger numerical verifications of RH and lowers ${t_0}$ and ${y_0}$ to optimise the value of ${\Lambda}$ accordingly, one could get down to an upper bound of ${\Lambda \leq 0.1}$ assuming an enormous numerical verification of RH (up to height about ${4 \times 10^{21}}$) and a very large distributed computing project to perform the other numerical verifications.
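The Taylor expansion trick mentioned in the previous paragraph can be sketched in a few lines. To evaluate a sum ${\sum_{n=1}^N c_n n^{-s}}$ at many points ${s}$ near a base point ${s_0}$, one writes ${n^{-s} = n^{-s_0} e^{-(s-s_0) \log n}}$ and Taylor-expands the exponential, so that the sum factors through moments that do not depend on the evaluation point. The parameters and coefficients below are illustrative placeholders, not the ones used in the paper.

```python
import math

# One-time-computation trick: the dependence on the evaluation point s
# of F(s) = sum_{n<=N} c_n n^{-s} is pushed into the scalar factors
# (-(s-s0))^k / k!, while the moments
#   S_k = sum_{n<=N} c_n (log n)^k n^{-s0}
# are computed once and reused for every s near s0.

N, K, t = 10000, 25, 0.2                          # illustrative sizes
c = [math.exp((t / 4) * math.log(n) ** 2) for n in range(1, N + 1)]  # b_n^t
s0 = 1.5 - 500j                                   # base point of the expansion

S = [sum(cn * math.log(n) ** k * n ** (-s0)       # the one-time computation
         for n, cn in enumerate(c, start=1)) for k in range(K)]

def fast_eval(s):
    """F(s) via the precomputed moments; accurate when |s - s0| is small."""
    return sum((-(s - s0)) ** k / math.factorial(k) * S[k] for k in range(K))

def direct_eval(s):
    return sum(cn * n ** (-s) for n, cn in enumerate(c, start=1))

s = s0 + 0.2 + 0.3j                               # a nearby evaluation point
print(abs(fast_eval(s) - direct_eval(s)))         # should be tiny
```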

This post can serve as the (presumably final) thread for the Polymath15 project (continuing this post), to handle any remaining discussion topics for that project.

[This post is collectively authored by the ICM structure committee, whom I am currently chairing – T.]

The International Congress of Mathematicians (ICM) is widely considered to be the premier conference for mathematicians.  It is held every four years; for instance, the 2018 ICM was held in Rio de Janeiro, Brazil, and the 2022 ICM is to be held in Saint Petersburg, Russia.  The most high-profile event at the ICM is the awarding of the 10 or so prizes of the International Mathematical Union (IMU) such as the Fields Medal, and the lectures by the prize laureates; but there are also approximately twenty plenary lectures from leading experts across all mathematical disciplines, several public lectures of a less technical nature, about 180 more specialised invited lectures divided into about twenty section panels, each corresponding to a mathematical field (or range of fields), as well as various outreach and social activities, exhibits and satellite programs, and meetings of the IMU General Assembly; see for instance the program for the 2018 ICM for a sample schedule.  In addition to these official events, the ICM also provides more informal networking opportunities, in particular allowing mathematicians at all stages of career, and from all backgrounds and nationalities, to interact with each other.

For each Congress, a Program Committee (together with subcommittees for each section) is entrusted with the task of selecting who will give the lectures of the ICM (excluding the lectures by prize laureates, which are selected by separate prize committees); it also decides how to appropriately subdivide the entire field of mathematics into sections.   Given the prestigious nature of invitations from the ICM to present a lecture, this has been an important and challenging task, but one which past Program Committees have managed to fulfill in a largely satisfactory fashion.

Nevertheless, in the last few years there has been substantial discussion regarding ways in which the process for structuring the ICM and inviting lecturers could be further improved, for instance to reflect the fact that the distribution of mathematics across various fields has evolved over time.   At the 2018 ICM General Assembly meeting in Rio de Janeiro, a resolution was adopted to create a new Structure Committee to take on some of the responsibilities previously delegated to the Program Committee, focusing specifically on the structure of the scientific program.  On the other hand, the Structure Committee is not involved with the format for prize lectures, the selection of prize laureates, or the selection of plenary and sectional lecturers; these tasks are instead the responsibilities of other committees (the local Organizing Committee, the prize committees, and the Program Committee respectively).

The first Structure Committee was constituted on 1 Jan 2019, with the following members:

As one of our first actions, we on the committee are using this blog post to solicit input from the mathematical community regarding the topics within our remit.  Among the specific questions (in no particular order) for which we seek comments are the following:

1. Are there suggestions to change the format of the ICM that would increase its value to the mathematical community?
2. Are there suggestions to change the format of the ICM that would encourage greater participation and interest in attending, particularly with regards to junior researchers and mathematicians from developing countries?
3. What is the correct balance between research and exposition in the lectures?  For instance, how strongly should one emphasize the importance of good exposition when selecting plenary and sectional speakers?  Should there be “Bourbaki style” expository talks presenting work not necessarily authored by the speaker?
4. Is the balance between plenary talks, sectional talks, and public talks at an optimal level?  There is only a finite amount of space in the calendar, so any increase in the number or length of one of these types of talks will come at the expense of another.
5. The ICM is generally perceived to be more important to pure mathematics than to applied mathematics.  In what ways can the ICM be made more relevant and attractive to applied mathematicians, or should one not try to do so?
6. Are there structural barriers that cause certain areas or styles of mathematics (such as applied or interdisciplinary mathematics) or certain groups of mathematicians to be under-represented at the ICM?  What, if anything, can be done to mitigate these barriers?

Of course, we do not expect these complex and difficult questions to be resolved within this blog post, and debating these and other issues would likely be a major component of our internal committee discussions.  Nevertheless, we would value constructive comments towards the above questions (or on other topics within the scope of our committee) to help inform these subsequent discussions.  We therefore welcome and invite such commentary, either as responses to this blog post, or sent privately to one of the members of our committee.  We would also be interested in having readers share their personal experiences at past congresses, and how they compare with other major conferences of this type.   (But in order to keep the discussion focused and constructive, we request that comments here refrain from discussing topics that are out of the scope of this committee, such as suggesting specific potential speakers for the next congress, which is a task instead for the 2022 ICM Program Committee.)

While talking mathematics with a postdoc here at UCLA (March Boedihardjo) we came across the following matrix problem which we managed to solve, but the proof was cute and the process of discovering it was fun, so I thought I would present the problem here as a puzzle without revealing the solution for now.

The problem involves word maps on a matrix group, which for the sake of discussion we will take to be the special orthogonal group $SO(3)$ of real $3 \times 3$ matrices (one of the smallest matrix groups that contains a copy of the free group, which incidentally is the key observation powering the Banach-Tarski paradox).  Given any abstract word $w$ of two generators $x,y$ and their inverses (i.e., an element of the free group ${\bf F}_2$), one can define the word map $w: SO(3) \times SO(3) \to SO(3)$ simply by substituting a pair of matrices in $SO(3)$ into these generators.  For instance, if one has the word $w = x y x^{-2} y^2 x$, then the corresponding word map $w: SO(3) \times SO(3) \to SO(3)$ is given by

$\displaystyle w(A,B) := ABA^{-2} B^2 A$

for $A,B \in SO(3)$.  Because $SO(3)$ contains a copy of the free group, we see that the word map is non-trivial (not equal to the identity) if and only if the word itself is non-trivial.

Anyway, here is the problem:

Problem. Does there exist a sequence $w_1, w_2, \dots$ of non-trivial word maps $w_n: SO(3) \times SO(3) \to SO(3)$ that converge uniformly to the identity map?

To put it another way, given any $\varepsilon > 0$, does there exist a non-trivial word $w$ such that $\|w(A,B) - 1 \| \leq \varepsilon$ for all $A,B \in SO(3)$, where $\|\cdot\|$ denotes (say) the operator norm, and $1$ denotes the identity matrix in $SO(3)$?
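For readers who would like to experiment numerically before searching for a proof, here is a sketch that evaluates a word map at Haar-random pairs of rotations and records the largest observed value of $\|w(A,B) - 1\|$; of course, sampling can only give empirical lower bounds on the supremum. The word encoding and the sampling scheme are illustrative choices of mine.

```python
import numpy as np
from scipy.stats import special_ortho_group

def word_map(word, A, B):
    """Evaluate a word in x, y (capital letters denoting inverses) at
    the pair of rotations (A, B); in SO(3) the inverse is the transpose."""
    gens = {'x': A, 'X': A.T, 'y': B, 'Y': B.T}
    result = np.eye(3)
    for letter in word:
        result = result @ gens[letter]
    return result

word = 'xyXXyyx'   # the word x y x^{-2} y^2 x from the example above
worst = 0.0
for _ in range(10000):
    A = special_ortho_group.rvs(3)   # Haar-random rotation matrices
    B = special_ortho_group.rvs(3)
    dist = np.linalg.norm(word_map(word, A, B) - np.eye(3), 2)  # operator norm
    worst = max(worst, dist)
print("largest observed ||w(A,B) - 1||:", worst)
```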

As I said, I don’t want to spoil the fun of working out this problem, so I will leave it as a challenge. Readers are welcome to share their thoughts, partial solutions, or full solutions in the comments below.

This is the eleventh research thread of the Polymath15 project to upper bound the de Bruijn-Newman constant ${\Lambda}$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

There are currently two strands of activity.  One is writing up the paper describing the combination of theoretical and numerical results needed to obtain the new bound $\Lambda \leq 0.22$.  The latest version of the writeup may be found here, in this directory.  The theoretical side of things has mostly been written up; the main remaining tasks right now are

1. giving a more detailed description and illustration of the two major numerical verifications, namely the barrier verification that establishes that $H_t(x+iy) \neq 0$ for $0 \leq t \leq 0.2$, $0.2 \leq y \leq 1$, $|x - 6 \times 10^{10} - 83952| \leq 0.5$, and the Dirichlet series bound that establishes that $H_t(x+iy) \neq 0$ for $t = 0.2$, $0.2 \leq y \leq 1$, $x \geq 6 \times 10^{10} + 83952$; and
2. giving more detail on the conditional results assuming more numerical verification of RH.

Meanwhile, several of us have been exploring the behaviour of the zeroes of $H_t$ for negative $t$; this does not directly lead to any new progress on bounding $\Lambda$ (though there is a good chance that it may simplify the proof of $\Lambda \geq 0$), but there have been some interesting numerical phenomena uncovered, as summarised in this set of slides.  One phenomenon is that for large negative $t$, many of the complex zeroes begin to organise themselves near the curves

$\displaystyle y = -\frac{t}{2} \log \frac{x}{4\pi n(n+1)} - 1.$

(An example of the agreement between the zeroes and these curves may be found here.)  We now have a (heuristic) theoretical explanation for this; we should have an approximation

$\displaystyle H_t(x+iy) \approx B_t(x+iy) \sum_{n=1}^\infty \frac{b_n^t}{n^{s_*}}$

in this region (where $B_t, b_n^t, n^{s_*}$ are defined in equations (11), (15), (17) of the writeup); the above curves arise from (an approximation of) those locations where two adjacent terms $\frac{b_n^t}{n^{s_*}}$, $\frac{b_{n+1}^t}{(n+1)^{s_*}}$ in this series have equal magnitude (with the other terms being of lower order).
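To spell out where these curves come from (a back-of-the-envelope derivation, using the rough approximation $\mathrm{Re}(s_*) \approx \frac{1+y}{2} + \frac{t}{4} \log \frac{x}{4\pi}$ for the exponent): the magnitude of the term $\frac{b_n^t}{n^{s_*}}$ is

$\displaystyle \left| \frac{b_n^t}{n^{s_*}} \right| = \exp\left( \frac{t}{4} \log^2 n - \mathrm{Re}(s_*) \log n \right),$

so equating the magnitudes of the $n$ and $n+1$ terms and dividing by $\log(n+1) - \log n$ gives

$\displaystyle \mathrm{Re}(s_*) = \frac{t}{4} \cdot \frac{\log^2(n+1) - \log^2 n}{\log(n+1) - \log n} = \frac{t}{4} \log( n(n+1) ),$

and substituting the approximation for $\mathrm{Re}(s_*)$ and solving for $y$ recovers the curves displayed above.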

However, we only have a partial explanation at present of the interesting behaviour of the real zeroes at negative $t$; for instance, the surviving zeroes at extremely negative values of $t$ appear to lie on the curve where the quantity $N$ is close to a half-integer, where

$\displaystyle \tilde x := x + \frac{\pi t}{4}$

$\displaystyle N := \sqrt{\frac{\tilde x}{4\pi}}.$

The remaining zeroes exhibit a pattern in $(N,u)$ coordinates that is approximately 1-periodic in $N$, where

$\displaystyle u := \frac{4\pi |t|}{\tilde x}.$

A plot of the zeroes in these coordinates (somewhat truncated due to the numerical range) may be found here.

We do not yet have a total explanation of the phenomena seen in this picture.  It appears that we have an approximation

$\displaystyle H_t(x) \approx A_t(x) \sum_{n=1}^\infty \exp( -\frac{|t| \log^2(n/N)}{4(1-\frac{iu}{8\pi})} - \frac{1+i\tilde x}{2} \log(n/N) )$

where $A_t(x)$ is the non-zero multiplier

$\displaystyle A_t(x) := e^{\pi^2 t/64} M_0(\frac{1+i\tilde x}{2}) N^{-\frac{1+i\tilde x}{2}} \sqrt{\frac{\pi}{1-\frac{iu}{8\pi}}}$

and

$\displaystyle M_0(s) := \frac{1}{8}\frac{s(s-1)}{2}\pi^{-s/2} \sqrt{2\pi} \exp( (\frac{s}{2}-\frac{1}{2}) \log \frac{s}{2} - \frac{s}{2} ).$

The derivation of this formula may be found in this wiki page.  However our initial attempts to simplify the above approximation further have proven to be somewhat inaccurate numerically (in particular giving an incorrect prediction for the location of zeroes, as seen in this picture).  We are in the process of using numerics to try to resolve the discrepancies (see this page for some code and discussion).
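As a small numerical consistency check (mine, not part of the project writeup): $M_0(s)$ should simply be the Stirling approximation to the factor $\frac{1}{8} \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(\frac{s}{2})$, so the ratio of the two quantities should tend to $1$ as $|s|$ grows.

```python
import mpmath as mp

# M_0(s) versus the exact "gamma factor" (1/8) s(s-1)/2 pi^{-s/2} Gamma(s/2):
# the relative error of the Stirling approximation should decay as |s| grows.
mp.mp.dps = 30

def M0(s):
    return (mp.mpf(1) / 8 * s * (s - 1) / 2 * mp.pi ** (-s / 2)
            * mp.sqrt(2 * mp.pi)
            * mp.exp((s / 2 - mp.mpf(1) / 2) * mp.log(s / 2) - s / 2))

def gamma_factor(s):
    return mp.mpf(1) / 8 * s * (s - 1) / 2 * mp.pi ** (-s / 2) * mp.gamma(s / 2)

for x in (100, 1000, 10000):
    s = mp.mpc(1, x) / 2      # s = (1 + ix)/2, as in the formula for A_t
    print(x, abs(M0(s) / gamma_factor(s) - 1))
```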

This is the tenth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant ${\Lambda}$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

Most of the progress since the last thread has been on the numerical side, in which the various techniques to numerically establish zero-free regions for the equation $H_t(x+iy)=0$ have been streamlined, made faster, and extended to larger heights than were previously possible.  The best bound for $\Lambda$ now depends on the height to which one is willing to assume the Riemann hypothesis.  Using the conservative verification up to height (slightly larger than) $3 \times 10^{10}$, which has been confirmed by independent work of Platt et al. and Gourdon-Demichel, the best bound remains at $\Lambda \leq 0.22$.  Using the verification up to height $2.5 \times 10^{12}$ claimed by Gourdon-Demichel, this improves slightly to $\Lambda \leq 0.19$, and if one assumes the Riemann hypothesis up to height $5 \times 10^{19}$ the bound improves to $\Lambda \leq 0.11$, contingent on a numerical computation that is still underway.   (See the table below the fold for more data of this form.)  This is broadly consistent with the expectation that the bound on $\Lambda$ should be inversely proportional to the logarithm of the height at which the Riemann hypothesis is verified.

As progress seems to have stabilised, it may be time to transition to the writing phase of the Polymath15 project.  (There are still some interesting research questions to pursue, such as numerically investigating the zeroes of $H_t$ for negative values of $t$, but the writeup does not necessarily have to contain every single direction pursued in the project. If enough additional interesting findings are unearthed then one could always consider writing a second paper, for instance.)

Below the fold is the detailed progress report on the numerics by Rudolph Dwars and Kalpesh Muchhal.

This is the ninth “research” thread of the Polymath15 project to upper bound the de Bruijn-Newman constant ${\Lambda}$, continuing this post. Discussion of the project of a non-research nature can continue for now in the existing proposal thread. Progress will be summarised at this Polymath wiki page.

We have now tentatively improved the upper bound of the de Bruijn-Newman constant to ${\Lambda \leq 0.22}$. Among the technical improvements in our approach, we are now able to use Taylor expansions to efficiently compute the approximation ${A+B}$ to ${H_t(x+iy)}$ for many values of ${x,y}$ in a given region, thus speeding up the computations in the barrier considerably. Also, by using the heuristic that ${H_t(x+iy)}$ behaves somewhat like the partial Euler product ${\prod_p (1 - \frac{1}{p^{\frac{1+y-ix}{2}}})^{-1}}$, we were able to find a good location to place the barrier in which ${H_t(x+iy)}$ is larger than average, hence easier to keep away from zero.
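Here is a sketch of that Euler product heuristic in use (the prime cutoff, the integer-offset grid, and the scan range below are illustrative choices, not those used in the project): one scans candidate locations ${x}$ and keeps the one where the truncated product is largest in magnitude.

```python
from sympy import primerange

# Heuristic for placing the barrier: approximate the size of H_t(x+iy)
# by the magnitude of the truncated Euler product
#   prod_p (1 - p^{-(1+y-ix)/2})^{-1}
# and prefer locations x where this magnitude is large.

def euler_product_size(x, y, pmax=100):
    s = (1 + y - 1j * x) / 2
    size = 1.0
    for p in primerange(2, pmax):
        size /= abs(1 - p ** (-s))
    return size

y = 0.2
best = max(range(1000), key=lambda k: euler_product_size(6e10 + k, y))
print("best offset:", best, "heuristic size:", euler_product_size(6e10 + best, y))
```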

The main remaining bottleneck is that of computing the Euler mollifier bounds that keep ${A+B}$ bounded away from zero for larger values of ${x}$ beyond the barrier. In going below ${0.22}$ we are beginning to need quite complicated mollifiers with somewhat poor tail behaviour; we may be reaching the point where none of our bounds will succeed in keeping ${A+B}$ bounded away from zero, so we may be close to the natural limits of our methods.

Participants are also welcome to add any further summaries of the situation in the comments below.