You are currently browsing the monthly archive for November 2007.

In the previous post, I discussed how an induction on dimension approach could establish Hilbert’s nullstellensatz, which we interpreted as a result describing all the obstructions to solving a system of polynomial equations and inequations over an algebraically closed field. Today, I want to point out that exactly the same approach also gives the Hahn-Banach theorem (at least in finite dimensions), which we interpret as a result describing all the obstructions to solving a system of linear inequalities over the reals (or in other words, a linear programming problem); this formulation of the Hahn-Banach theorem is sometimes known as Farkas’ lemma. Then I would like to discuss some standard applications of the Hahn-Banach theorem, such as the separation theorem of Dieudonné, the minimax theorem of von Neumann, Menger’s theorem, and Helly’s theorem (which was mentioned recently in an earlier post).
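For readers who want the statement up front, one standard textbook formulation of Farkas' lemma (not necessarily the exact formulation used behind the cut) is the following:

```latex
\textbf{Farkas' lemma.} Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$.
Then exactly one of the following holds:
\begin{enumerate}
  \item there exists $x \in \mathbb{R}^n$ with $Ax = b$ and $x \geq 0$;
  \item there exists $y \in \mathbb{R}^m$ with $A^{T} y \geq 0$ and $b^{T} y < 0$.
\end{enumerate}
```

As with the nullstellensatz, the point is the "exactly one": the vector y in the second alternative is a certificate that exhibits the obstruction to solvability in the first.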

Read the rest of this entry »

I had occasion recently to look up the proof of Hilbert’s nullstellensatz, which I haven’t studied since cramming for my algebra qualifying exam as a graduate student. I was a little unsatisfied with the proofs I was able to locate – they were fairly abstract and used a certain amount of algebraic machinery, which I was terribly rusty on – so, as an exercise, I tried to find a more computational proof that avoided as much abstract machinery as possible. I found a proof which used only the extended Euclidean algorithm and high school algebra, together with an induction on dimension and the obvious observation that any non-zero polynomial of one variable on an algebraically closed field has at least one non-root. It probably isn’t new (in particular, it might be related to the standard model-theoretic proof of the nullstellensatz, with the Euclidean algorithm and high school algebra taking the place of quantifier elimination), but I thought I’d share it here anyway.

Throughout this post, F is going to be a fixed algebraically closed field (e.g. the complex numbers {\Bbb C}). I’d like to phrase the nullstellensatz in a fairly concrete fashion, in terms of the problem of solving a set of simultaneous polynomial equations P_1(x) = \ldots = P_m(x) = 0 in several variables x = (x_1,\ldots,x_d) \in F^d over F, thus P_1,\ldots,P_m \in F[x] are polynomials in d variables. One obvious obstruction to solvability of this system is if the equations one is trying to solve are inconsistent in the sense that they can be used to imply 1=0. In particular, if one can find polynomials Q_1,\ldots,Q_m \in F[x] such that P_1 Q_1 + \ldots + P_m Q_m = 1, then clearly one cannot solve P_1(x)=\ldots=P_m(x)=0. The weak nullstellensatz asserts that this is, in fact, the only obstruction:

Weak nullstellensatz. Let P_1,\ldots,P_m \in F[x] be polynomials. Then exactly one of the following statements holds:

  1. The system of equations P_1(x)=\ldots=P_m(x)=0 has a solution x \in F^d.
  2. There exist polynomials Q_1,\ldots,Q_m \in F[x] such that P_1 Q_1 + \ldots + P_m Q_m = 1.

Note that the hypothesis that F is algebraically closed is crucial; for instance, if F is the reals, then the equation x^2+1=0 has no solution, but there is no polynomial Q(x) such that (x^2+1) Q(x) = 1.
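The proof discussed behind the cut runs on the extended Euclidean algorithm, and in the one-variable case with m=2 the weak nullstellensatz is exactly the Bezout identity for coprime polynomials: if P_1, P_2 share no root, their gcd is a nonzero constant, and the extended Euclidean algorithm produces the certificate Q_1, Q_2 explicitly. Here is a minimal sketch (my own illustration, not code from the post), working over the rationals for concreteness, with polynomials represented as coefficient lists, lowest degree first:

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (lists are lowest-degree-first)."""
    p = list(p)
    while p and p[-1] == 0:
        p.pop()
    return p

def add(a, b):
    n = max(len(a), len(b))
    return trim([(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                 for i in range(n)])

def scale(a, c):
    return trim([c * coeff for coeff in a])

def mul(a, b):
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def divmod_poly(a, b):
    """Polynomial long division: returns (q, r) with a = q*b + r, deg r < deg b."""
    q, r = [], trim(a)
    while r and len(r) >= len(b):
        c = Fraction(r[-1]) / b[-1]
        term = [Fraction(0)] * (len(r) - len(b)) + [c]
        q = add(q, term)
        r = trim(add(r, scale(mul(term, b), -1)))
    return q, r

def ext_gcd(a, b):
    """Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b = g."""
    if not trim(b):
        return trim(a), [Fraction(1)], []
    quot, rem = divmod_poly(a, b)
    g, s, t = ext_gcd(b, rem)
    # g = s*b + t*rem = s*b + t*(a - quot*b) = t*a + (s - t*quot)*b
    return g, t, add(s, scale(mul(t, quot), -1))

# P1 = x^2 + 1 and P2 = x^2 - 1 have no common root, so their gcd is a
# nonzero constant; normalising it to 1 gives the certificate P1*Q1 + P2*Q2 = 1.
P1 = [Fraction(1), Fraction(0), Fraction(1)]   # x^2 + 1
P2 = [Fraction(-1), Fraction(0), Fraction(1)]  # x^2 - 1
g, Q1, Q2 = ext_gcd(P1, P2)
Q1, Q2 = scale(Q1, 1 / g[0]), scale(Q2, 1 / g[0])
assert add(mul(P1, Q1), mul(P2, Q2)) == [Fraction(1)]
```

In this example the algorithm finds Q_1 = 1/2 and Q_2 = -1/2, i.e. the identity \frac{1}{2}(x^2+1) - \frac{1}{2}(x^2-1) = 1. The multivariable case is of course where the real work lies.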

Like many results of the “The only obstructions are the obvious obstructions” type, the power of the nullstellensatz lies in the ability to take a hypothesis about non-existence (in this case, non-existence of solutions to P_1(x)=\ldots=P_m(x)=0) and deduce a conclusion about existence (in this case, existence of Q_1,\ldots,Q_m such that P_1 Q_1 + \ldots + P_m Q_m = 1). The ability to get “something from nothing” is clearly going to be both non-trivial and useful. In particular, the nullstellensatz offers an important correspondence between algebraic geometry (conclusion 1 is the assertion that a certain algebraic variety is non-empty) and commutative algebra (conclusion 2 is the assertion that a certain ideal is non-proper).

Now suppose one is trying to solve the more complicated system P_1(x)=\ldots=P_m(x)=0; R(x) \neq 0 for some polynomials P_1,\ldots,P_m, R. Again, any identity of the form P_1 Q_1 + \ldots + P_m Q_m = 1 will be an obstruction to solvability, but now more obstructions are possible: any identity of the form P_1 Q_1 + \ldots + P_m Q_m = R^r for some non-negative integer r will also obstruct solvability. The strong nullstellensatz asserts that such identities are the only obstructions:

Strong nullstellensatz. Let P_1,\ldots,P_m, R \in F[x] be polynomials. Then exactly one of the following statements holds:

  1. The system of equations P_1(x)=\ldots=P_m(x)=0, R(x) \neq 0 has a solution x \in F^d.
  2. There exist polynomials Q_1,\ldots,Q_m \in F[x] and a non-negative integer r such that P_1 Q_1 + \ldots + P_m Q_m = R^r.

Of course, the weak nullstellensatz corresponds to the special case in which R=1. The strong nullstellensatz is usually phrased instead in terms of ideals and radicals, but the above formulation is easily shown to be equivalent to the usual version (modulo Hilbert’s basis theorem).
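A toy example (my own illustration, not from the post): take m = 1, one variable, P_1 = x^2 and R = x. The system x^2 = 0, x \neq 0 clearly has no solution, and conclusion 2 holds with Q_1 = 1 and r = 2:

```latex
P_1 Q_1 = x^2 \cdot 1 = x^2 = R^2.
```

Note that no such identity with r = 1 is possible here, since x is not a multiple of x^2; this is exactly the phenomenon that the exponent r in conclusion 2 has to accommodate.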

One could consider generalising the nullstellensatz a little further by considering systems of the form P_1(x)=\ldots=P_m(x)=0, R_1(x),\ldots,R_n(x) \neq 0, but this is not a significant generalisation, since all the inequations R_1(x) \neq 0, \ldots, R_n(x) \neq 0 can be concatenated into a single inequation R_1(x) \ldots R_n(x) \neq 0. The presence of the exponent r in conclusion (2) is a little annoying; to get rid of it, one needs to generalise the notion of an algebraic variety to that of a scheme (which is worth doing for several other reasons too, in particular one can now work over much more general objects than just algebraically closed fields), but that is a whole story in itself (and one that I am not really qualified to tell).

[Update, Nov 26: It turns out that my approach is more complicated than I first thought, and so I had to revise the proof quite a bit to fix a certain gap, in particular making it significantly messier than my first version. On the plus side, I was able to at least eliminate any appeal to Hilbert’s basis theorem, so in particular the proof is now manifestly effective (but with terrible bounds). In any case, I am keeping the argument here in case it has some interest.]

Read the rest of this entry »

Ben Green and I have just uploaded our joint paper, “The distribution of polynomials over finite fields, with applications to the Gowers norms”, to the arXiv, and submitted it to Contributions to Discrete Mathematics. This paper, which we first announced at the recent FOCS meeting, and then gave an update on two weeks ago on this blog, is now in final form. It is being made available simultaneously with a closely related paper of Lovett, Meshulam, and Samorodnitsky.

In the previous post on this topic, I focused on the negative results in the paper, and in particular the fact that the inverse conjecture for the Gowers norm fails for certain degrees in low characteristic. Today, I’d like to focus instead on the positive results, which assert that for polynomials in many variables over finite fields whose degree is less than the characteristic of the field, one has a satisfactory theory for the distribution of these polynomials. Very roughly speaking, the main technical results are:

  • A regularity lemma: Any polynomial can be expressed as a combination of a bounded number of other polynomials which are regular, in the sense that no non-trivial linear combination of these polynomials can be expressed efficiently in terms of lower degree polynomials.
  • A counting lemma: A regular collection of polynomials behaves as if the polynomials were selected randomly. In particular, the polynomials are jointly equidistributed.
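As a toy illustration of the kind of equidistribution at stake (my own example, vastly simpler than the regime of the paper): a single nonzero linear polynomial over {\Bbb F}_p^n attains each value in {\Bbb F}_p equally often, namely p^{n-1} times.

```python
import itertools
from collections import Counter

p, n = 5, 2
P = lambda x: (x[0] + 2 * x[1]) % p   # a fixed nonzero linear form on F_p^n

# Tally how often each value of F_p is attained as x ranges over F_p^n.
counts = Counter(P(x) for x in itertools.product(range(p), repeat=n))
assert all(counts[v] == p ** (n - 1) for v in range(p))  # perfect equidistribution
```

The substance of the counting lemma is that a *regular* family of higher-degree polynomials enjoys joint equidistribution of this type, as if the polynomials had been chosen independently at random.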

Read the rest of this entry »

I’m continuing my series of articles for the Princeton Companion to Mathematics with my article on compactness and compactification. This is a fairly recent article for the PCM, which is now at the stage where most of the specialised articles have been written and the general articles on topics such as compactness are being finished up. The topic of this article is self-explanatory; it is a brief and non-technical introduction to the incredibly useful concept of compactness in topology, analysis, geometry, and other areas of mathematics, and to the closely related concept of a compactification, which allows one to rigorously take limits of what would otherwise be divergent sequences.

The PCM has an extremely broad scope, covering not just mathematics itself, but the context that mathematics is placed in. To illustrate this, I will mention Michael Harris’s essay for the Companion, ““Why mathematics?”, you may ask”.

Today, Charlie wrapped up several loose ends in his lectures, including the connection with the classical Whitney extension theorem, the role of convex bodies and Whitney convexity, and a glimpse of how one obtains the remarkably fast (almost linear time) algorithms with which one actually computes interpolations of functions from finite amounts of data.

Read the rest of this entry »

On Thursday, Charlie Fefferman continued his lecture series on interpolation of functions. Here, he stated the main technical theorem about bundles that underlies all the results, answering the “cliffhanger” question from the last lecture, and broadly outlined the proof, except for a major technical wrinkle about “Whitney convexity” which he will discuss on Friday. Read the rest of this entry »

The first Distinguished Lecture Series at UCLA of this academic year is being given this week by my good friend and fellow Fields Medalist Charlie Fefferman, who also happens to be my “older brother” (we were both students of Elias Stein). The theme of Charlie’s lectures is “Interpolation of functions on {\Bbb R}^n”, in the spirit of the classical Whitney extension theorem, except that now one is considering much more quantitative and computational extension problems (in particular, viewing the problem from a theoretical computer science perspective). Today Charlie introduced the basic problems in this subject, and stated some of the results of his joint work with Bo’az Klartag; he will continue the lectures on Thursday and Friday.

The general topic of extracting quantitative bounds from classical qualitative theorems is a subject that I am personally very fond of, and Charlie gave a wonderfully accessible presentation of the main results, though the actual details of the proofs were left to the next two lectures.

As usual, all errors and omissions here are my responsibility, and are not due to Charlie.

Read the rest of this entry »

I’ve joined the inaugural editorial board for a new mathematical journal, Analysis & PDE. This journal is owned by Mathematical Sciences Publishers, a non-profit organisation dedicated to high-quality, low-cost, and broad-availability mathematical publishing, and run primarily by professional mathematicians. The scope of the journal is, of course, self-explanatory; MSP’s other journals have titles such as Geometry & Topology, Algebra & Number Theory, and Algebraic & Geometric Topology.

We’re just starting out (and haven’t even filled up our first issue yet), so we are looking for strong and significant submissions in all areas of analysis and PDE (broadly defined). If you have a good paper in these areas and are deciding on which journal to submit to, you might want to take a look at the submission guidelines for our journal. Of course, the papers are subject to the usual peer review process and will be held to high standards in order to be accepted.

[Update, Nov 11: Link fixed.]

Recently, I had tentatively announced a forthcoming result with Ben Green establishing the “Gowers inverse conjecture” (or more accurately, the “inverse conjecture for the Gowers uniformity norm”) for vector spaces {\Bbb F}_p^n over a finite field {\Bbb F}_p, in the special case when p=2 and when the function f: {\Bbb F}_p^n \to {\Bbb C} to which the inverse conjecture is to be applied is assumed to be a polynomial phase of bounded degree (thus f= e^{2\pi i P/|{\Bbb F}|}, where P: {\Bbb F}_p^n \to {\Bbb F}_p is a polynomial of some degree d=O(1)). See my FOCS article for some further discussion of this conjecture, which has applications to both polynomiality testing and to various structural decompositions involving the Gowers norm.

This conjecture can be informally stated as follows. By iterating the obvious fact that the derivative of a polynomial of degree at most d is a polynomial of degree at most d-1, we see that a function P: {\Bbb F}_p^n \to {\Bbb F}_p is a polynomial of degree at most d if and only if

\sum_{\omega_1,\ldots,\omega_{d+1} \in \{0,1\}} (-1)^{\omega_1+\ldots+\omega_{d+1}} P(x +\omega_1 h_1 + \ldots + \omega_{d+1} h_{d+1}) = 0

for all x,h_1,\ldots,h_{d+1} \in {\Bbb F}_p^n. From this one can deduce that a function f: {\Bbb F}_p^n \to {\Bbb C} bounded in magnitude by 1 is a polynomial phase of degree at most d if and only if the Gowers norm

\|f\|_{U^{d+1}({\Bbb F}_p^n)} := \bigl( {\Bbb E}_{x,h_1,\ldots,h_{d+1} \in {\Bbb F}_p^n} \prod_{\omega_1,\ldots,\omega_{d+1} \in \{0,1\}} {\mathcal C}^{\omega_1+\ldots+\omega_{d+1}} f(x + \omega_1 h_1 + \ldots + \omega_{d+1} h_{d+1}) \bigr)^{1/2^{d+1}}

is equal to its maximal value of 1. The inverse conjecture for the Gowers norm, in its usual formulation, says that, more generally, if a function f: {\Bbb F}_p^n \to {\Bbb C} bounded in magnitude by 1 has large Gowers norm (e.g. \|f\|_{U^{d+1}} \geq \varepsilon) then f has some non-trivial correlation with some polynomial phase g (e.g. |\langle f, g \rangle| \geq c(\varepsilon) for some c(\varepsilon) > 0). Informally, this conjecture asserts that if a function has biased (d+1)^{th} derivatives, then one should be able to “integrate” this bias and conclude that the function is biased relative to a polynomial of degree d. The conjecture has already been proven for d \leq 2. There are analogues of this conjecture for cyclic groups which are of relevance to Szemerédi’s theorem and to counting linear patterns in primes, but I will not discuss those here.
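The finite-field derivative characterisation described above is easy to check numerically. Here is a small sketch (my own illustration): for a polynomial of degree d < p, every (d+1)-fold alternating difference vanishes identically mod p, while a polynomial of degree d+1 is detected by some choice of shifts.

```python
import itertools
import random

p, d = 5, 2                       # work in F_p with degree parameter d < p

def alt_sum(P, x, hs):
    """The len(hs)-fold alternating difference of P at base point x, mod p."""
    s = 0
    for omega in itertools.product((0, 1), repeat=len(hs)):
        point = (x + sum(w * h for w, h in zip(omega, hs))) % p
        s += (-1) ** sum(omega) * P(point)
    return s % p

P = lambda x: (x * x) % p         # degree 2 <= d: all (d+1)-fold differences vanish
random.seed(0)
for _ in range(100):
    x = random.randrange(p)
    hs = [random.randrange(p) for _ in range(d + 1)]
    assert alt_sum(P, x, hs) == 0

Q = lambda x: pow(x, 3, p)        # degree 3 > d: some triple of shifts detects this
assert alt_sum(Q, 0, [1, 1, 1]) != 0
```

For Q(x) = x^3 the triple difference is \pm 6 h_1 h_2 h_3, which is nonzero mod 5 whenever the shifts are; this is also where the restriction d < p announced in the correction below makes itself felt, since factors like 6 = 3! can vanish in small characteristic.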

At the time of the announcement, our paper had not quite been fully written up. This turned out to be a little unfortunate, because soon afterwards we discovered that our arguments at one point had to go through a version of Newton’s interpolation formula, which involves a factor of d! in the denominator and so is only valid when the characteristic p of the field exceeds the degree. So our arguments in fact are only valid in the range p > d, and in particular are rather trivial in the important case p=2; my previous announcement should thus be amended accordingly.

Read the rest of this entry »
