Emmanuel Breuillard, Ben Green, and I have just uploaded to the arXiv the paper “Suzuki groups as expanders“, to be submitted. The purpose of this paper is to finish off the last case of the following theorem:
Theorem 1 (All finite simple groups have expanders) For every finite simple non-abelian group $G$, there exists a set $S$ of generators of cardinality bounded uniformly in $G$, such that the Cayley graph $\mathrm{Cay}(G,S)$ on $G$ generated by $S$ (i.e. the graph that connects $g$ with $sg$ for all $g \in G$ and $s \in S$) has expansion constant bounded away from zero uniformly in $G$, or equivalently that $|AS \setminus A| \geq \varepsilon |A|$ for all $A \subset G$ with $|A| \leq |G|/2$ and some $\varepsilon > 0$ independent of $G$.
To put it in an essentially equivalent way, one can quickly generate a random element of a finite simple group with a near-uniform distribution by multiplying together a few ($O(\log |G|)$, to be more precise) randomly chosen elements of a fixed set $S$. (The most well-known instance of this phenomenon is the famous result of Bayer and Diaconis that one can shuffle a 52-card deck reasonably well after seven riffle shuffles, and almost perfectly after ten.) Note that the abelian simple groups do not support expanders due to the slow mixing time of random walks in the abelian setting.
The first step in proving this theorem is, naturally enough, the classification of finite simple groups. The sporadic groups have bounded cardinality and are a trivial case of this theorem, so one only has to deal with the seventeen infinite families of finite non-abelian simple groups. With one exception, the groups in all of these families contain a copy of $SL_2(\mathbb{F}_q)$ for some $q$ that goes to infinity as $|G| \to \infty$. Using this and several other non-trivial tools (such as Kazhdan’s property (T) and a deep model-theoretic result of Hrushovski and Pillay), the above theorem was proven for all groups outside of this exceptional family by a series of works culminating in the paper of Kassabov, Lubotzky, and Nikolov.
The exceptional family is the family of Suzuki groups $Sz(q)$, where $q = 2^{2n+1}$ is an odd power of $2$. The Suzuki group can be viewed explicitly as a subgroup of the symplectic group $Sp_4(\mathbb{F}_q)$ and has cardinality $q^2(q^2+1)(q-1)$. This cardinality is not divisible by $3$, whereas all groups of the form $SL_2(\mathbb{F}_q)$ have cardinality divisible by $3$; thus Suzuki groups do not contain copies of $SL_2(\mathbb{F}_q)$ and the Kassabov-Lubotzky-Nikolov argument does not apply.
Our main result is that the Suzuki groups also support expanders, thus completing the last case of the above theorem. In fact we can pick just two random elements $a, b$ of the Suzuki group, and with probability $1-o(1)$, the Cayley graph generated by $\{a, b\}$ will be an expander uniformly in $q$. (As stated in the paper of Kassabov-Lubotzky-Nikolov, the methods in that paper should give an upper bound on the number of generators required, which they conservatively estimate.)
Our methods are different, instead following closely the arguments of Bourgain and Gamburd, which established the analogue of our result (i.e. that two random elements generate an expander graph) for the family of groups $SL_2(\mathbb{F}_p)$ ($p$ a large prime); the arguments there have since been generalised to several other major families of groups, and our result here can thus be viewed as one further such generalisation. Roughly speaking, the strategy is as follows. Let $\mu$ be the uniform probability measure on the randomly chosen set of generators $\{a, b, a^{-1}, b^{-1}\}$, and let $\mu^{(n)} := \mu * \cdots * \mu$ be the $n$-fold convolution. We need $\mu^{(n)}$ to converge rapidly to the uniform measure on $G$ (with a mixing time of $O(\log |G|)$). There are three steps to obtain this mixing:
- (Early period) When $n \sim \varepsilon \log |G|$ for some small $\varepsilon$, one wants $\mu^{(n)}$ to spread out a little bit in the sense that no individual element of $G$ is assigned a mass of more than $|G|^{-\gamma}$ for some fixed $\gamma > 0$. More generally, no proper subgroup of $G$ should be assigned a mass of more than $|G|^{-\gamma}$.
- (Middle period) Once $\mu^{(n)}$ is somewhat spread out, one should be able to convolve this measure with itself a bounded number of times and conclude that the measure $\mu^{(Cn)}$ for some suitable $C$ is reasonably spread out in the sense that its $\ell^2$ norm is comparable (up to powers of $|G|^{\varepsilon}$ for some small $\varepsilon$) to the $\ell^2$ norm of the uniform distribution.
- (Late period) Once $\mu^{(n)}$ is reasonably spread out, a few more convolutions should make it extremely close to uniform (e.g. within a large negative power of $|G|$ in the $\ell^\infty$ norm).
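The three-period scheme can be illustrated with a toy simulation, on the tiny symmetric group $S_3$ rather than a Suzuki group; all parameters here are purely illustrative, and the point is just to watch the convolution powers flatten towards the uniform measure:

```python
from itertools import permutations

# Toy illustration of the mixing strategy (on S_3, not a Suzuki group!):
# convolve the uniform measure mu on a generating set S with itself and watch
# the walk flatten towards the uniform measure on the group.
def compose(p, q):                      # permutation composition p∘q
    return tuple(p[i] for i in q)

def convolve(mu, nu):                   # (mu*nu)(g) = sum_{hk=g} mu(h)nu(k)
    out = {}
    for h, a in mu.items():
        for k, b in nu.items():
            g = compose(h, k)
            out[g] = out.get(g, 0.0) + a * b
    return out

G = list(permutations(range(3)))        # the symmetric group S_3
S = [(1, 0, 2), (1, 2, 0), (2, 0, 1)]   # a transposition, a 3-cycle, its inverse
mu = {s: 1.0 / len(S) for s in S}

walk, dists = mu, []
for _ in range(20):
    walk = convolve(walk, mu)
    # total variation distance to the uniform measure on S_3
    dists.append(0.5 * sum(abs(walk.get(g, 0.0) - 1.0 / len(G)) for g in G))

print(round(dists[0], 3), round(dists[-1], 6))
```

The distances decay geometrically, which on a group of bounded size corresponds to the $O(\log |G|)$ mixing time claimed above.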
The late period claim is easy to establish from Gowers’ theory of quasirandom groups, the key point being that (like all other finite simple nonabelian groups), the Suzuki groups do not admit any non-trivial low-dimensional irreducible representations (we can for instance use a precise lower bound on the dimension of such representations due to Landazuri and Seitz). (One can also proceed here using a trace formula argument of Sarnak and Xue; the two approaches are basically equivalent.) The middle period reduces, by a variant of the Balog-Szemerédi-Gowers lemma, to a product estimate in $Sz(q)$ which was recently established by Pyber-Szabó and can also be obtained by the methods of proof of the results announced by ourselves. (These arguments are in turn based on an earlier result of Helfgott establishing the analogous claim for $SL_2(\mathbb{F}_p)$.) This requires checking that $Sz(q)$ is a “sufficiently Zariski dense” subgroup of the finite Lie group $Sp_4(\mathbb{F}_q)$, but this can be done using an explicit description of the Suzuki group and the Schwartz-Zippel lemma.
The main difficulty is then to deal with the early period, obtaining some initial non-concentration in the random walk associated to $\{a, b\}$ away from proper subgroups of $Sz(q)$. These subgroups have been classified for some time (see e.g. the book of Wilson); they split into two families, the algebraic subgroups, which in the Suzuki case turn out to be solvable of derived length at most three, and the arithmetic subgroups, which are conjugate to $Sz(q')$, where $\mathbb{F}_{q'}$ is a subfield of $\mathbb{F}_q$.
In the algebraic case, one can prevent concentration using a lower bound on the girth of random Cayley graphs due to Gamburd, Hoory, Shahshahani, Shalev, and Virág (and we also provide an independent proof of this fact for completeness, which fortunately is able to avoid any really deep technology, such as Lang-Weil estimates); this follows an analogous argument of Bourgain-Gamburd in the $SL_2$ case fairly closely, and is ultimately based on the fact that all the algebraic subgroups obey a fixed law (in this case, the law arises from the solvability). In the arithmetic case, the main task is to show that the coefficients of the characteristic polynomial of a typical word in $a, b$ do not fall into a proper subfield of $\mathbb{F}_q$, but this can be accomplished by a variant of the Schwartz-Zippel lemma.
Ben Green and I have just uploaded to the arXiv a short (six-page) paper “Yet another proof of Szemeredi’s theorem“, submitted to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we put in print a folklore observation, namely that the inverse conjecture for the Gowers norm, together with the density increment argument, easily implies Szemerédi’s famous theorem on arithmetic progressions. This is unsurprising, given that Gowers’ proof of Szemerédi’s theorem proceeds through a weaker version of the inverse conjecture and a density increment argument, and also given that it is possible to derive Szemerédi’s theorem from knowledge of the characteristic factor for multiple recurrence (the ergodic theory analogue of the inverse conjecture, first established by Host and Kra), as was done by Bergelson, Leibman, and Lesigne (and also implicitly in the earlier paper of Bergelson, Host, and Kra); but to our knowledge the exact derivation of Szemerédi’s theorem from the inverse conjecture was not in the literature. Ordinarily this type of folklore might be considered too trifling (and too well known among experts in the field) to publish; but we felt that the Szemerédi birthday conference provided a natural venue for this particular observation.
The key point is that one can show (by an elementary argument relying primarily on an induction-on-dimension argument and the Weyl recurrence theorem, i.e. that given any real $\alpha$ and any integer $s \geq 1$, the expression $\alpha n^s$ gets arbitrarily close to an integer for infinitely many $n$) that given a (polynomial) nilsequence $n \mapsto F(g(n)\Gamma)$, one can subdivide any long arithmetic progression (such as $[N] = \{1,\ldots,N\}$) into a number of medium-sized progressions, where the nilsequence is nearly constant on each progression. As a consequence of this and the inverse conjecture for the Gowers norm, if a set has no arithmetic progressions, then it must have an elevated density on a subprogression; iterating this observation as per the usual density-increment argument as introduced long ago by Roth, one obtains the claim. (This is very close to the scheme of Gowers’ proof.)
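Schematically, the density-increment engine invoked here runs as follows (a standard sketch of the strategy, not the precise statement from the paper):

```latex
% If A \subset P has density \delta and no k-term progressions, locate a
% subprogression P' (of length a small power of |P|) on which the density jumps:
\frac{|A \cap P'|}{|P'|} \;\geq\; \delta + c(\delta), \qquad c(\delta) > 0.
% Since density can never exceed 1, this step can repeat at most
% O(1/c(\delta)) times, so a progression must appear once N is large enough.
```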
Technically, one might call this the shortest proof of Szemerédi’s theorem in the literature (and would be something like the sixteenth such genuinely distinct proof, by our count), but that would be cheating quite a bit, primarily due to the fact that it assumes the inverse conjecture for the Gowers norm, our current proof of which is checking in at about 100 pages…
Ben Green and I have just uploaded to the arXiv our paper “An arithmetic regularity lemma, an associated counting lemma, and applications“, submitted (a little behind schedule) to the 70th birthday conference proceedings for Endre Szemerédi. In this paper we describe the general-degree version of the arithmetic regularity lemma, which can be viewed as the counterpart of the Szemerédi regularity lemma, in which the object being regularised is a function on a discrete interval $[N]$ rather than a graph, and the type of patterns one wishes to count are additive patterns (such as arithmetic progressions $n, n+d, \ldots, n+(k-1)d$) rather than subgraphs. Very roughly speaking, this regularity lemma asserts that all such functions can be decomposed as a degree $\leq s$ nilsequence (or more precisely, a variant of a nilsequence that we call a virtual irrational nilsequence), plus a small error, plus a third error which is extremely tiny in the Gowers uniformity norm $U^{s+1}[N]$. In principle, at least, the latter two errors can be readily discarded in applications, so that the regularity lemma reduces many questions in additive combinatorics to questions concerning (virtual irrational) nilsequences. To work with these nilsequences, we also establish an arithmetic counting lemma that gives an integral formula for counting additive patterns weighted by such nilsequences.
The regularity lemma is a manifestation of the “dichotomy between structure and randomness”, as discussed for instance in my ICM article or FOCS article. In the degree $1$ case, this result is essentially due to Green. It is powered by the inverse conjecture for the Gowers norms, which we and Tamar Ziegler have recently established (paper to be forthcoming shortly; the $U^4$ case of our argument is discussed here). The counting lemma is established through the quantitative equidistribution theory of nilmanifolds, which Ben and I set out in this paper.
The regularity and counting lemmas are designed to be used together, and in the paper we give three applications of this combination. Firstly, we give a new proof of Szemerédi’s theorem, which proceeds via an energy increment argument rather than a density increment one. Secondly, we establish a conjecture of Bergelson, Host, and Kra, namely that if $A \subset \mathbb{Z}/N\mathbb{Z}$ has density $\alpha$, and $\varepsilon > 0$ and $k = 3$ or $4$, then there exist $\gg_{\alpha,\varepsilon} N$ shifts $r$ for which $A$ contains at least $(\alpha^k - \varepsilon)N$ arithmetic progressions of length $k$ of spacing $r$. (The $k = 3$ case of this conjecture was established earlier by Green; the $k \geq 5$ case is false, as was shown by Ruzsa in an appendix to the Bergelson-Host-Kra paper.) Thirdly, we establish a variant of a recent result of Gowers-Wolf, showing that the true complexity of a system of linear forms indeed matches the conjectured value predicted in their first paper.
In all three applications, the scheme of proof can be described as follows:
- Apply the arithmetic regularity lemma, and decompose a relevant function $f$ into three pieces, $f = f_{nil} + f_{sml} + f_{unf}$.
- The uniform part $f_{unf}$ is so tiny in the Gowers uniformity norm that its contribution can be easily dealt with by an appropriate “generalised von Neumann theorem”.
- The contribution of the (virtual, irrational) nilsequence $f_{nil}$ can be controlled using the arithmetic counting lemma.
- Finally, one needs to check that the contribution of the small error $f_{sml}$ does not overwhelm the main term $f_{nil}$. This is the trickiest bit; one often needs to use the counting lemma again to show that one can find a set of arithmetic patterns for $f_{nil}$ that is sufficiently “equidistributed” that it is not impacted by the small error.
To illustrate the last point, let us give the following example. Suppose we have a set $A \subset \mathbb{Z}/N\mathbb{Z}$ of some positive density (say $1/2$) and we have managed to prove that $A$ contains a reasonable number of arithmetic progressions of length three (say), e.g. at least $cN^2$ such progressions for some $c > 0$. Now we perturb $A$ by deleting a small number, say $\varepsilon N$, of elements from $A$ to create a new set $A'$. Can we still conclude that the new set $A'$ contains any arithmetic progressions of length three?
Unfortunately, the answer could conceivably be no: each deleted element of $A$ could a priori be associated with as many as about $N$ arithmetic progressions in $A$, so if $\varepsilon$ is not too small compared with $c$, the deletions could in principle wipe out all $cN^2$ of the progressions.
But suppose we knew that the arithmetic progressions in $A$ were equidistributed, in the sense that each element in $A$ belonged to roughly the same number of such arithmetic progressions, namely about $cN$ up to constants. Then each element deleted from $A$ only removes $O(cN)$ progressions, and so one can safely remove $\varepsilon N$ elements from $A$ and still retain most of the arithmetic progressions, provided $\varepsilon$ is small enough. The same argument works if the arithmetic progressions are only approximately equidistributed, in the sense that the number of progressions that a given element belongs to concentrates sharply around its mean (for instance, by having a small variance), provided that the equidistribution is sufficiently strong. Fortunately, the arithmetic regularity and counting lemmas are designed to give precisely such a strong equidistribution result.
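A toy numerical version of this deletion argument (all parameters illustrative; the densest possible set stands in for $A$ so that the progression counts are exact):

```python
# Toy form of the deletion argument in Z_N: when 3-term progressions are
# equidistributed (each element lies in the same number of them), removing a
# few elements cannot destroy them all.
N = 101                                   # a prime modulus (illustrative)
A = set(range(N))                         # the densest possible set: all of Z_N

def count_aps(S):
    """Count pairs (a, d), d != 0, with a, a+d, a+2d all in S (mod N)."""
    return sum(1 for a in S for d in range(1, N)
               if (a + d) % N in S and (a + 2 * d) % N in S)

total = count_aps(A)                      # = N*(N-1): every (a, d) works here
per_elt = 3 * (N - 1)                     # each element lies in exactly 3(N-1) APs

m = 10                                    # now delete m elements...
B = A - set(range(m))
remaining = count_aps(B)                  # ...losing at most per_elt per deletion
assert remaining >= total - per_elt * m
print(total, remaining)
```

Since `per_elt * m` is much smaller than `total`, plenty of progressions survive the deletion, which is exactly the mechanism the equidistribution hypothesis buys.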
A succinct (but slightly inaccurate) summary of the regularity+counting lemma strategy would be that in order to solve a problem in additive combinatorics, it “suffices to check it for nilsequences”. But this should come with a caveat, due to the issue of the small error above; in addition to checking it for nilsequences, the answer in the nilsequence case must be sufficiently “dispersed” in a suitable sense, so that it can survive the addition of a small (but not completely negligible) perturbation.
One last “production note”. Like our previous paper with Emmanuel Breuillard, we used Subversion to write this paper, which turned out to be a significant efficiency boost as we could work on different parts of the paper simultaneously (this was particularly important this time round as the paper was somewhat lengthy and complicated, and there was a submission deadline). When doing so, we found it convenient to split the paper into a dozen or so pieces (one for each section of the paper, basically) in order to avoid conflicts, and to help coordinate the writing process. I’m also looking into git (a more advanced version control system), and am planning to use it for another of my joint projects; I hope to be able to comment on the relative strengths of these systems (and with plain old email) in the future.
The main result is a step towards the classification of $K$-approximate groups, in the specific setting of simple and semisimple Lie groups (with some partial results for more general Lie groups). For $K \geq 1$, define a $K$-approximate group to be a finite subset $A$ of a group $G$ which is a symmetric neighbourhood of the origin (thus $1 \in A$ and $A^{-1}$ is equal to $A$), and such that the product set $A \cdot A$ is covered by $K$ left-translates (or equivalently, $K$ right-translates) of $A$. For $K = 1$, this is the same concept as a finite subgroup of $G$, but for larger values of $K$, one also gets some interesting objects which are close to, but not exactly groups, such as geometric progressions $\{r^n : -N \leq n \leq N\}$ for some $r \in G$ and $N \geq 1$.
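As a quick sanity check of the definition, a symmetric interval of integers is a 2-approximate group; a minimal sketch (with illustrative parameters):

```python
# The interval A = {-N,...,N} in the integers is a 2-approximate group:
# it is symmetric, contains 0, and A+A = {-2N,...,2N} is covered by the two
# translates A + (-N) and A + N.
N = 50
A = set(range(-N, N + 1))
assert A == {-a for a in A} and 0 in A           # symmetric neighbourhood of 0
AA = {a + b for a in A for b in A}               # the product set, written additively
cover = {a - N for a in A} | {a + N for a in A}  # two translates of A
assert AA <= cover
print(len(A), len(AA))
```

The multiplicative analogue, a geometric progression $\{r^n : -N \leq n \leq N\}$, works the same way when $r$ generates an infinite cyclic group.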
The expectation is that $K$-approximate groups are $K^{O(1)}$-controlled by “structured” objects, such as actual groups and progressions, though the precise formulation of this has not yet been finalised. (We say that one finite set $A$ $K$-controls another $B$ if $A$ is at most $K$ times larger than $B$ in cardinality, and $B$ can be covered by at most $K$ left translates or right translates of $A$.) The task of stating and proving this statement is the noncommutative Freiman theorem problem, discussed in these earlier blog posts.
While this problem remains unsolved for general groups, significant progress has been made in special groups, notably abelian, nilpotent, and solvable groups. Furthermore, the work of Chang (over $\mathbb{C}$) and Helfgott (over $\mathbb{F}_p$) has established the important special cases of the special linear groups $SL_2$ and $SL_3$:
Theorem 1 (Helfgott’s theorem) Let $d = 2, 3$ and let $F$ be either $\mathbb{C}$ or $\mathbb{F}_p$ for some prime $p$. Let $A$ be a $K$-approximate subgroup of $SL_d(F)$.
- If $A$ generates the entire group (which is only possible in the finite case $F = \mathbb{F}_p$), then $A$ is either controlled by the trivial group or the whole group.
- If $F = \mathbb{F}_p$, then $A$ is $K^{O(1)}$-controlled by a solvable $K^{O(1)}$-approximate subgroup of $SL_d(F)$, or by $SL_d(F)$ itself. If $F = \mathbb{C}$, the latter possibility cannot occur, and the solvable approximate subgroup must be abelian.
Our main result is an extension of Helfgott’s theorem to $SL_d$ for general $d$. In fact, we obtain an analogous result for any simple (or almost simple) Chevalley group over an arbitrary finite field (not necessarily of prime order), or over $\mathbb{C}$. (Standard embedding arguments then allow us to in fact handle arbitrary fields.) The results from simple groups can also be extended to (almost) semisimple Lie groups by an approximate version of Goursat’s lemma. Given that general Lie groups are known to split as extensions of (almost) semisimple Lie groups by solvable Lie groups, and Freiman-type theorems are known for solvable groups also, this in principle gives a Freiman-type theorem for arbitrary Lie groups; we have already established this in the characteristic zero case, but there are some technical issues in the finite characteristic case that we are currently in the process of resolving.
We remark that a qualitative version of this result (with the polynomial bounds $K^{O(1)}$ replaced by an ineffective bound $O_K(1)$) was also recently obtained by Hrushovski.
Our arguments are based in part on Helfgott’s arguments, in particular maximal tori play a major role in our arguments for much the same reason they do in Helfgott’s arguments. Our main new ingredient is a surprisingly simple argument, which we call the pivot argument, which is an analogue of a corresponding argument of Konyagin and Bourgain-Glibichuk-Konyagin that was used to prove a sum-product estimate. Indeed, it seems that Helfgott-type results in these groups can be viewed as a manifestation of a product-conjugation phenomenon analogous to the sum-product phenomenon. Namely, the sum-product phenomenon asserts that it is difficult for a subset of a field to be simultaneously approximately closed under sums and products, without being close to an actual field; similarly, the product-conjugation phenomenon asserts that it is difficult for a union of (subsets of) tori to be simultaneously approximately closed under products and conjugations, unless it is coming from a genuine group. In both cases, the key is to exploit a sizeable gap between the behaviour of two types of “pivots” (which are scaling parameters in the sum-product case, and tori in the product-conjugation case): ones which interact strongly with the underlying set $A$, and ones which do not interact at all. The point is that there is no middle ground of pivots which only interact weakly with the set. This separation between interacting (or “involved”) and non-interacting (or “non-involved”) pivots can then be exploited to bootstrap approximate algebraic structure into exact algebraic structure. (Curiously, a similar argument is used all the time in PDE, where it goes under the name of the “bootstrap argument”.)
Below the fold we give more details of this crucial pivot argument.
One piece of trivia about the writing of this paper: this was the first time any of us had used
modern version control software to collaboratively write a paper; specifically, we used Subversion, with the repository being hosted online by xp-dev. (See this post at the Secret Blogging Seminar for how to get started with this software.) There were a certain number of technical glitches in getting everything to install and run smoothly, but once it was set up, it was significantly easier to use than our traditional system of emailing draft versions of the paper back and forth, as one could simply download and upload the most recent versions whenever one wished, with all changes merged successfully. I had a positive impression of this software and am likely to try it again in future collaborations, particularly those involving at least three people. (It would also work well for polymath projects, modulo the technical barrier of every participant having to install some software.)
This is an adaptation of a talk I gave recently for a program at IPAM. In this talk, I gave a (very informal and non-rigorous) overview of Hrushovski’s use of model-theoretic techniques to establish new Freiman-type theorems in non-commutative groups, and some recent work in progress of Ben Green, Tom Sanders and myself to establish combinatorial proofs of some of Hrushovski’s results.
Ben Green, Tamar Ziegler and I have just uploaded to the arXiv our paper “An inverse theorem for the Gowers $U^4$ norm“. This paper establishes the next case of the inverse conjecture for the Gowers norm for the integers (after the $U^3$ case, which was done by Ben and myself a few years ago). This conjecture has a number of combinatorial and number-theoretic consequences; for instance, by combining this new inverse theorem with previous results, one can now get the correct asymptotic for the number of arithmetic progressions of primes of length five in any large interval $\{1, \ldots, N\}$.
To state the inverse conjecture properly requires a certain amount of notation. Given a function $f : \mathbb{Z} \to \mathbb{C}$ and a shift $h \in \mathbb{Z}$, define the multiplicative derivative
$\Delta_h f(x) := f(x+h) \overline{f(x)}$
and then define the Gowers norm $\|f\|_{U^{s+1}[N]}$ of a function $f : [N] \to \mathbb{C}$ to (essentially) be the quantity
$\|f\|_{U^{s+1}[N]} := \left( \mathbb{E}_{x, h_1, \ldots, h_{s+1}} \Delta_{h_1} \cdots \Delta_{h_{s+1}} f(x) \right)^{1/2^{s+1}},$
where we extend $f$ by zero outside of $[N]$. (Actually, we use a slightly different normalisation to ensure that the function $1$ has a norm of $1$, but never mind this for now.)
Informally, the Gowers norm measures the amount of bias present in the $(s+1)$-fold multiplicative derivatives of $f$. In particular, if $f = e(P)$ for some polynomial $P$ of degree at most $s$, then the $(s+1)$-fold derivative of $f$ is identically 1, and so is the Gowers norm.
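For the simplest case $s = 1$ this is easy to check numerically; the following minimal sketch uses a convenient cyclic-group (wraparound) variant of the $U^2$ norm, with purely illustrative parameters, and confirms that a linear phase has full norm while random phases do not:

```python
import cmath
import random

# U^2 norm on the cyclic group Z_N (a wraparound variant of the definition):
#   ||f||_{U^2}^4 = E_{n,h,k} f(n) conj(f(n+h)) conj(f(n+k)) f(n+h+k).
def u2_norm(f, N):
    s = 0
    for n in range(N):
        for h in range(N):
            for k in range(N):
                s += (f[n] * f[(n + h) % N].conjugate()
                      * f[(n + k) % N].conjugate() * f[(n + h + k) % N])
    return abs(s / N ** 3) ** 0.25

N = 32
linear = [cmath.exp(2j * cmath.pi * 5 * n / N) for n in range(N)]   # e(5n/N)
random.seed(0)
noise = [cmath.exp(2j * cmath.pi * random.random()) for _ in range(N)]

u2_lin, u2_rand = u2_norm(linear, N), u2_norm(noise, N)
print(round(u2_lin, 6), round(u2_rand, 3))
```

For the linear phase every summand telescopes to $1$, so the norm is exactly $1$; the random phases exhibit substantial cancellation and land well below it.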
However, polynomial phases are not the only functions with large Gowers norm. For instance, consider the function $f(n) = e(\alpha n \lfloor \beta n \rfloor)$, which is what we call a quadratic bracket polynomial phase. This function isn’t quite quadratic, but it is close enough to being quadratic (because one has the approximate linearity relationship $\lfloor x + y \rfloor = \lfloor x \rfloor + \lfloor y \rfloor$ holding a good fraction of the time) that the third derivative is trivial fairly often, and the $U^3$ Gowers norm is comparable to 1. This bracket polynomial phase can be modeled as a nilsequence $n \mapsto F(g(n)\Gamma)$, where $n \mapsto g(n)\Gamma$ is a polynomial orbit on a nilmanifold $G/\Gamma$, which in this case has step 2. (The function $F$ is only piecewise smooth, due to the discontinuity in the floor function $\lfloor \cdot \rfloor$, so strictly speaking we would classify this as an almost nilsequence rather than a nilsequence, but let us ignore this technical issue here.) In fact, there is a very close relationship between nilsequences and bracket polynomial phases, but I will detail this in a later post.
The inverse conjecture for the Gowers norm, GI(s), asserts that such nilsequences are the only obstruction to the Gowers norm being small. Roughly speaking, it goes like this:
Inverse conjecture, GI(s). (Informal statement) Suppose that $f : [N] \to \mathbb{C}$ is bounded but has large $U^{s+1}[N]$ norm. Then there is an $s$-step nilsequence of “bounded complexity” that correlates with $f$.
This conjecture is trivial for $s=0$, is a short consequence of Fourier analysis when $s=1$, and was proven for $s=2$ by Ben and myself. In this paper we establish the $s=3$ case. An equivalent formulation in this case is that any bounded function of large $U^4$ norm must correlate with a “bracket cubic phase”, which is the product of a bounded number of phases from the following list
for various real numbers .
It appears that our methods also work in higher step, though for technical reasons it is convenient to make a number of adjustments to our arguments to do so, most notably a switch from standard analysis to non-standard analysis, about which I hope to say more later. But there are a number of simplifications available on the s=3 case which make the argument significantly shorter, and so we will be writing the higher s argument in a separate paper.
The arguments largely follow those for the s=2 case (which in turn are based on this paper of Gowers). Two major new ingredients are a deployment of a normal form and equidistribution theory for bracket quadratic phases, and a combinatorial decomposition of frequency space which we call the sunflower decomposition. I will sketch these ideas below the fold.
For a number of reasons, including the start of the summer break for me and my coauthors, a number of papers that we have been working on are being released this week. For instance, Ben Green and I have just uploaded to the arXiv our paper “An equivalence between inverse sumset theorems and inverse conjectures for the $U^3$ norm“, submitted to Math. Proc. Camb. Phil. Soc.. The main result of this short paper (which was briefly announced in this earlier post) is a connection between two types of inverse theorems in additive combinatorics, namely the inverse sumset theorems of Freiman type, and inverse theorems for the Gowers uniformity norms, and more specifically, for the $U^3$ norm
on a finite additive group $G$, where $f : G \to \mathbb{C}$ is a complex-valued function.
As usual, the connection is easiest to state in a finite field model such as $G = \mathbb{F}_2^n$. In this case, we have the following inverse sumset theorem of Ruzsa:
Theorem 1. If $A \subset \mathbb{F}_2^n$ is such that $|A+A| \leq K|A|$, then $A$ can be covered by a translate of a subspace of $\mathbb{F}_2^n$ of cardinality at most $K^2 2^{K^4} |A|$.
The bound here has been improved for large $K$ in a sequence of papers, by Ruzsa, by Green-Ruzsa, by Sanders, by Green and myself, and finally by Konyagin (private communication), whose bound of the shape $2^{2K+O(\log K)}|A|$ is sharp except for the precise value of the $O()$ implied constant (as can be seen by considering the example when $A$ consists of about $2K$ independent elements). However, it is conjectured that the exponential loss can be removed entirely if one modifies the conclusion slightly:
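The sharpness example mentioned above is easy to verify directly; here is a small sketch with $k$ independent elements of $\mathbb{F}_2^n$ encoded as bit-masks (parameters illustrative):

```python
# Sharpness example for Theorem 1: k "independent" vectors in F_2^n have small
# doubling (|A+A| = 1 + k(k-1)/2, so K is about k/2) yet span a subspace of
# size 2^k, so exponential loss in the covering is unavoidable.
n, k = 12, 8                                # vectors live in F_2^n, n >= k
A = [1 << i for i in range(k)]              # the basis vectors e_1,...,e_k
AA = {a ^ b for a in A for b in A}          # sumset: addition in F_2^n is XOR
span = {0}
for a in A:                                 # the subspace generated by A
    span |= {s ^ a for s in span}
print(len(A), len(AA), len(span))
```

The doubling constant is roughly $k/2$, while any subspace whose translate covers $A$ must contain the full $2^k$-element span, which is the exponential behaviour in $K$ that the theorem's bound must accommodate.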
Conjecture 1. (Polynomial Freiman-Ruzsa conjecture for $\mathbb{F}_2^n$.) If $A \subset \mathbb{F}_2^n$ is such that $|A+A| \leq K|A|$, then $A$ can be covered by $O(K^{O(1)})$ translates of subspaces of $\mathbb{F}_2^n$ of cardinality at most $|A|$.
This conjecture was verified for downsets by Green and myself, but is open in general. This conjecture has a number of equivalent formulations; see this paper of Green for more discussion. In this previous post we show that a stronger version of this conjecture fails.
Meanwhile, for the Gowers norm $U^3(\mathbb{F}_2^n)$, we have the following inverse theorem, due to Samorodnitsky:
Theorem 2. Let $f : \mathbb{F}_2^n \to [-1,1]$ be a function whose $U^3$ norm is at least $1/K$. Then there exists a quadratic polynomial $Q : \mathbb{F}_2^n \to \mathbb{F}_2$ such that $|\mathbb{E}_{x \in \mathbb{F}_2^n} f(x) (-1)^{Q(x)}| \geq \exp(-O(K^{O(1)}))$.
Note that the quadratic phases $(-1)^{Q(x)}$ are the only functions taking values in $[-1,1]$ whose $U^3$ norm attains its maximal value of 1.
It is conjectured that the exponentially weak correlation here can be strengthened to a polynomial one:
Conjecture 2. (Polynomial inverse conjecture for the $U^3(\mathbb{F}_2^n)$ norm). Let $f : \mathbb{F}_2^n \to [-1,1]$ be a function whose $U^3$ norm is at least $1/K$. Then there exists a quadratic polynomial $Q : \mathbb{F}_2^n \to \mathbb{F}_2$ such that $|\mathbb{E}_{x \in \mathbb{F}_2^n} f(x)(-1)^{Q(x)}| \geq K^{-O(1)}$.
The first main result of this paper is
Theorem 3. Conjecture 1 and Conjecture 2 are equivalent.
This result was also independently observed by Shachar Lovett (private communication). We also establish an analogous result for the cyclic group $\mathbb{Z}/N\mathbb{Z}$, in which the polynomial bounds are replaced by subexponential ones, and in which the notion of a quadratic polynomial is replaced by a 2-step nilsequence; the precise statement is a bit technical and will not be given here. We also observe a partial analogue of the correspondence between inverse sumset theorems and Gowers norms in the higher order case, in particular observing that inverse theorems imply a certain rigidity result for “Freiman-quadratic polynomials” (a quadratic version of Conjecture 3 below).
Below the fold, we sketch the proof of Theorem 3.
This week I am at Penn State University, giving this year’s Marker lectures. My chosen theme for my four lectures here is “recent developments in additive prime number theory”. My first lecture, “Long arithmetic progressions in primes”, is similar to my AMS lecture on the same topic and so I am not reposting it here. The second lecture, the notes for which begin after the fold, is on “Linear equations in primes”. These two lectures focus primarily on work of myself and Ben Green. The third and fourth lectures, entitled “Small gaps between primes” and “Sieving for almost primes and expander graphs”, will instead be focused on the work of Goldston-Yildirim-Pintz and Bourgain-Gamburd-Sarnak respectively.
One of my favourite open problems in additive combinatorics is the polynomial Freiman-Ruzsa conjecture, which Ben Green guest blogged about here some time ago. It has many equivalent formulations (which is always a healthy sign when considering a conjecture), but here is one involving “approximate homomorphisms”:
Polynomial Freiman-Ruzsa conjecture. Let $f : \mathbb{F}_2^n \to \mathbb{F}_2^m$ be a function which is an approximate homomorphism in the sense that $f(x+y) - f(x) - f(y) \in S$ for all $x, y \in \mathbb{F}_2^n$ and some set $S \subset \mathbb{F}_2^m$. Then there exists a genuine homomorphism $g : \mathbb{F}_2^n \to \mathbb{F}_2^m$ such that $f - g$ takes at most $O(|S|^{O(1)})$ values.
Remark 1. The key point here is that the bound on the range of $f - g$ is at most polynomial in $|S|$. An exponential bound of $2^{|S|}$ can be trivially established by splitting $\mathbb{F}_2^m$ into the subspace spanned by $S$ (which has size at most $2^{|S|}$) and some complementary subspace, and then letting $g$ be the projection of $f$ to that complementary subspace.
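The trivial construction in the remark is concrete enough to code; here is a minimal sketch over small toy dimensions, with vectors in $\mathbb{F}_2^m$ stored as bit-masks (all parameters hypothetical):

```python
import random

# The trivial exponential bound from Remark 1: for ANY f, the error set
# S = {f(x+y)+f(x)+f(y)} spans a subspace V of F_2^m; projecting V away turns
# f into a genuine homomorphism g, with f-g taking values in V, i.e. at most
# 2^|S| values.  Addition in F_2^m is XOR on bit-masks.
n, m = 3, 6
random.seed(1)
f = {x: random.randrange(1 << m) for x in range(1 << n)}

S = {f[x ^ y] ^ f[x] ^ f[y] for x in range(1 << n) for y in range(1 << n)}

lead = {}                                # echelon basis of span(S): top bit -> vector
for v in S:
    while v:
        t = v.bit_length() - 1
        if t in lead:
            v ^= lead[t]
        else:
            lead[t] = v
            break

def project(v):                          # linear projection killing span(S)
    for t in sorted(lead, reverse=True):
        if v >> t & 1:
            v ^= lead[t]
    return v

g = {x: project(f[x]) for x in f}        # the genuine homomorphism
assert all(g[x ^ y] == g[x] ^ g[y] for x in g for y in g)
diff = {f[x] ^ g[x] for x in f}          # f - g takes values in span(S)...
print(len(S), len(diff), 2 ** len(S))    # ...so at most 2^|S| of them
```

The homomorphism check passes because the errors all lie in `span(S)`, which `project` annihilates; the conjecture asks to beat the $2^{|S|}$ count down to a polynomial in $|S|$.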
Recently, Ben Green and I have shown that this conjecture is equivalent to a certain polynomially quantitative strengthening of the inverse conjecture for the Gowers norm $U^3$; I hope to talk about this in a future post. For this (somewhat technical) post, I want to comment on a possible further strengthening of this conjecture, namely
Strong Polynomial Freiman-Ruzsa conjecture. Let $f : \mathbb{F}_2^n \to \mathbb{F}_2^m$ be a function which is an approximate homomorphism in the sense that $f(x+y) - f(x) - f(y) \in S$ for all $x, y \in \mathbb{F}_2^n$ and some set $S \subset \mathbb{F}_2^m$. Then there exists a genuine homomorphism $g : \mathbb{F}_2^n \to \mathbb{F}_2^m$ such that $f - g$ takes values in the sumset $kS := S + \cdots + S$ for some fixed $k$.
This conjecture is known to be true for certain types of set $S$ (e.g. for Hamming balls, this is a result of Farah). Unfortunately, it is false in general; the purpose of this post is to describe one counterexample (related to the failure of the inverse conjecture for the Gowers $U^4$ norm over $\mathbb{F}_2$ for classical polynomials; in particular, the arguments here have several features in common with those in the papers of Lovett-Meshulam-Samorodnitsky and Green-Tao). [A somewhat different counterexample also appears in the paper of Farah.] The verification of the counterexample is surprisingly involved, ultimately relying on the multidimensional Szemerédi theorem of Furstenberg and Katznelson.
(The results here are derived from forthcoming joint work with Ben Green.)
[This post is authored by Ben Green, who has kindly "guest blogged" this week's "open problem of the week". - T.]
In an earlier blog post Terry discussed Freiman’s theorem. The name of Freiman is attached to a growing body of theorems which take some rather “combinatorial” hypothesis, such as that the sumset $A+A$ of some set $A$ is small, and deduce from it rather “algebraic” information (such as that $A$ is contained in a subspace or a grid).
The easiest place to talk about Freiman’s theorem is in the finite field model $\mathbb{F}_2^n$ (see my survey article on this subject for a full discussion). Here it was shown by Ruzsa that if $|A+A|$ is at most $K|A|$ then $A$ is contained in a subspace of size no more than about $2^{K^{O(1)}}|A|$. The exponent has been improved a few times since Ruzsa’s paper, the best result currently in print being due to Sanders. Terry and I are in the process of writing a paper which obtains the bound $2^{2K+o(K)}|A|$, which is best possible in view of the example where $A$ consists of about $2K$ independent elements; this set has doubling roughly $K$ but is not contained in a subspace of dimension smaller than $2K$.
This result has an air of finality (except for the true nature of the o(K) term, which represents an interesting open problem). This is something of an illusion, however. Even using this theorem, one loses an exponential every time one tries to transition between “combinatorial” structure and “algebraic” structure and back again. Indeed if one knows that $A$ is contained in a subspace of size $2^{2K}|A|$ then the strongest assertion one can make about the doubling of $A$ is that it is at most about $2^{2K}$.
The Polynomial Freiman-Ruzsa conjecture (PFR), in $\mathbb{F}_2^n$, hypothesises a more precise structure theorem for sets with small doubling. Using this conjecture, one may flit back and forth between combinatorial and algebraic structure with only polynomial losses. Ruzsa attributes the conjecture to Marton: it states that if $A$ has doubling at most $K$ then $A$ is contained in the union of $K^{O(1)}$ translates of some subspace $H$ of size at most $|A|$.
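The structured objects in Marton's statement behave as advertised; a quick sketch in $\mathbb{F}_2^6$ (bit-mask encoding, parameters illustrative) checks that a union of a few translates of a subspace $H$ automatically has small doubling:

```python
# The structured objects in Marton's conjecture: a union of a few translates of
# a subspace H of F_2^n has small doubling, and is tautologically covered by a
# few translates of H with |H| <= |A|.  Vectors in F_2^6 as bit-masks.
H = {0b000, 0b001, 0b010, 0b011}             # the subspace spanned by e1, e2
xs = [0b000000, 0b000100, 0b010000]          # three coset representatives
A = {x ^ h for x in xs for h in H}           # the union of 3 translates of H
assert len(A) == 3 * len(H)                  # the three cosets are distinct

AA = {a ^ b for a in A for b in A}           # the sumset A+A
K = len(AA) / len(A)                         # the doubling constant
print(len(A), len(AA), K)
```

Here $A+A$ lands in the cosets of $H$ indexed by pairwise sums of the representatives, so the doubling constant stays bounded by the number of translates; the content of PFR is the converse direction, with only polynomial losses.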