The Polymath14 online collaboration has uploaded to the arXiv its paper “Homogeneous length functions on groups”, submitted to Algebra & Number Theory. The paper completely classifies *homogeneous length functions* $\ell: G \to [0,+\infty)$ on an arbitrary group $G$, that is to say non-negative functions that obey the symmetry condition $\ell(g^{-1}) = \ell(g)$, the non-degeneracy condition $\ell(g) = 0 \iff g = 1$, the triangle inequality $\ell(gh) \leq \ell(g) + \ell(h)$, and the homogeneity condition $\ell(g^n) = n\,\ell(g)$ for all natural numbers $n$. It turns out that these norms can only arise from pulling back the norm of a Banach space by an isometric embedding of the group. Among other things, this shows that a group $G$ can support a homogeneous length function if and only if it is abelian and torsion free, thus giving a metric description of this property.
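In additive notation, the simplest example illustrating the classification is $G = \mathbb{Z}$ with $\ell(n) = |n|$, which is the pull-back of the standard norm under the embedding $\mathbb{Z} \hookrightarrow \mathbb{R}$:

```latex
% The four axioms for \ell(n) = |n| on G = \mathbb{Z} (written additively):
\ell(-n) = |n| = \ell(n), \qquad
\ell(n) = 0 \iff n = 0, \qquad
\ell(m + n) \le \ell(m) + \ell(n), \qquad
\ell(kn) = k\,\ell(n) \ \ (k \in \mathbb{N}).
```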

The proof is based on repeated use of the homogeneous length function axioms, combined with elementary identities of commutators, to obtain increasingly good upper bounds on quantities such as the commutator length $\ell([x,y])$, until one can show that such norms have to vanish on commutators. See the previous post for a full proof. The result is robust in that it tolerates some loss in the triangle inequality and the homogeneity condition, yielding some new results on “quasinorms” on groups that relate to quasihomomorphisms.

As there are now a large number of comments on the previous post on this project, this post will also serve as the new thread for any final discussion of this project as it winds down.


## 13 comments


11 January, 2018 at 9:06 pm

Terence Tao

I’ve also submitted the paper to Algebra & Number Theory. Thanks to everyone for their participation in what I found to be an enjoyable and productive project!

12 January, 2018 at 2:33 am

Tobias Fritz

I also want to say thanks to everyone! It has been a lot of fun and also a great learning experience.

I feel like our problem has been just about right for such a project: not too hard and not too easy, simple to grasp, and approachable from a lot of different angles. Especially concerning the level of difficulty, this problem was more rewarding than Polymath11 on the union-closed sets conjecture, where we ran out of steam after two months. I’m looking forward to possible future Polymath projects, whether as a participant or as a curious spectator.

12 January, 2018 at 2:34 am

Anonymous

It seems that the numbering of results is not consecutive (e.g. results 1.1 and 1.2 are missing).

12 January, 2018 at 2:40 am

Tobias Fritz

We seem to be using an enumeration style in which equations and theorem environments share a counter. So (1.1) and (1.3) are equations, and similarly in Section 2.
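A minimal LaTeX illustration of such a shared-counter setup (my sketch, not necessarily the paper’s actual preamble):

```latex
% Equations and theorem-like environments share one counter per section,
% so (1.1) can be an equation while 1.2 is a theorem, with no number repeated.
\usepackage{amsmath,amsthm}
\numberwithin{equation}{section}
\newtheorem{theorem}[equation]{Theorem} % theorems reuse the equation counter
\newtheorem{lemma}[equation]{Lemma}
```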

12 January, 2018 at 5:14 am

Siddhartha Gadgil

Thanks to everyone from me too. It was truly enjoyable and I learnt a lot.

Mainly as an exercise for my own benefit, I have formalized (in the sense of computer-verified, but with idiosyncratic foundations) the internal repetition lemma at http://siddhartha-gadgil.github.io/ProvingGround/tuts/LengthFunctions.html

13 January, 2018 at 4:58 pm

nicodean

Terence Tao wrote on 22 Dec 2017: “Secondly, again as with some previous Polymath projects, computer assistance was quite important, even if the final proof is not visibly computer-assisted in any way. […]. We’re still some way off from the dream of computers routinely generating large chunks of proofs and/or conjectures for us, but nevertheless they are playing an increasingly essential role in mathematics.”

Would it be possible to summarize the computer assistance in more detail (by any of the participants, and perhaps at a more abstract level; I am not an expert in this field)? And maybe also for the other Polymath projects? Is there a class of mathematical problems that is particularly well suited to computer-assisted proofs? What were the insights: was it “just” the calculation of difficult formulas, i.e. standard Mathematica problems? This is super interesting!

13 January, 2018 at 7:09 pm

Siddhartha Gadgil

The role of the computer (used by me) was to find a proof of a concrete bound on the length of the commutator of generators (the best bound at that time), and to output this in a (barely) human-readable form, as [posted](https://github.com/siddhartha-gadgil/Superficial/wiki/A-commutator-bound). Pace Nielsen read through the proof and saw a pattern, with the “same” conjugacy and the “same” pair of triangle inequalities being repeatedly applied throughout. He used this to get strong bounds, and then he and others refined and abstracted it into the _splitting lemma_ in the paper.

There were two limitations of the way the computer proof was done:

* While the use of conjugacy invariance and the triangle inequality was optimal and algorithmic, which elements to take powers of was manually specified by me. This should have been automated as well, and would have been soon enough, except that the extreme smartness of the people in this Polymath group made it redundant (the problem was solved within 24 hours of the first posted computer proof).

* More importantly, I used [domain specific foundations](https://github.com/siddhartha-gadgil/Superficial/blob/master/src/main/scala/freegroups/LinNormBound.scala), which could encode only one kind of proof: that a specific word has length bounded by a specific number. This rules out, in particular, both formulas for bounds that are quantified (and so must involve variables) and recursion/induction. To show that such results can at least be _encoded_, I formalized the [internal repetition trick](http://siddhartha-gadgil.github.io/ProvingGround/tuts/LengthFunctions.html).

More generally, where a computer helped was in following instructions of the form “try these methods in lots of cases in lots of ways and give me the best proof for these cases (or the cases where we got a strong result)”. Obviously, “lots of cases” and “lots of ways” are much bigger numbers for a computer than by hand. The question is how general one can be with “these methods”. I do think that even in practice a lot of methods can be encoded, and that this is underutilized because people underestimate it. In principle, in the era of homotopy type theory and deep learning, presumably every method can be encoded.
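To give a concrete (if much simplified) flavour of this kind of search, here is a minimal Python sketch — my own illustration, not the Scala code linked above — that upper-bounds the length of a reduced word in the free group on $a, b$ (with $\ell(a) = \ell(b) = 1$), using only the triangle inequality and conjugacy invariance:

```python
from functools import lru_cache

# Letters of the free group F(a, b); capitals denote inverses.
INV = {"a": "A", "A": "a", "b": "B", "B": "b"}

@lru_cache(maxsize=None)
def bound(w: str) -> int:
    """Upper bound on ell(w) for a reduced word w, assuming
    ell(a) = ell(b) = 1, via triangle inequality + conjugacy invariance."""
    if not w:
        return 0
    # Peel off the first letter: ell(x w') <= ell(x) + ell(w').
    best = 1 + bound(w[1:])
    # If w = x u x^{-1} v, then by conjugacy invariance
    # ell(w) <= ell(x u x^{-1}) + ell(v) = ell(u) + ell(v).
    for i in range(1, len(w)):
        if w[i] == INV[w[0]]:
            best = min(best, bound(w[1:i]) + bound(w[i + 1:]))
    return best

print(bound("abAB"))  # commutator [a,b]: prints 2, not the naive 4
```

For the commutator $aba^{-1}b^{-1}$ this already returns 2 rather than the naive 4, by recognizing $aba^{-1}$ as a conjugate of $b$; the project’s actual searches added, among other things, the taking of powers described above.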

16 January, 2018 at 4:49 am

J Button

Hi – I didn’t see the blog until the arXiv paper was posted, and now this feels rather late in the day (or sometime the next day), but still: why can’t we argue thus…

Given $G$ with a homogeneous (pseudo-)length function $\ell$, let $C$ be your favourite upper bound that holds for all commutators, thus $\ell([a,b]) \leq C$ for all $a, b \in G$. By Culler’s identity, which writes $[a,b]^3$ as a product of two commutators (sometimes seen in introductions to stable commutator length), we have $3\,\ell([a,b]) = \ell([a,b]^3) \leq 2C$, thus we now know that $\ell([a,b]) \leq \tfrac{2}{3}C$ holds for any commutator. So now use the argument recursively to get $\ell([a,b]) \leq (\tfrac{2}{3})^n C$ for all $n$ and hence $\ell([a,b]) = 0$.

16 January, 2018 at 5:05 am

Tobias Fritz

That’s a nice idea, but it doesn’t quite work like that: since no finite bound holds uniformly over all commutators, the right-hand side also should be linear in $\ell(a)$ and $\ell(b)$, e.g. of the form $C(\ell(a) + \ell(b))$. Using conjugation invariance, we can deduce from Culler’s identity a bound of the same shape, assuming that $\ell$ satisfies homogeneity and the triangle inequality on the nose; but the entries of the two commutators are longer than $a$ and $b$, so the constant does not improve and the recursion does not close.

Doing a more refined analysis along the lines of Proposition 1 in the second blog post could be interesting, but I haven’t done this yet and find it hard to say whether it would have the potential to lead to an alternative proof or not.

16 January, 2018 at 8:26 am

J Button

Ah, yes – thanks! We don’t have a uniform upper bound for commutators (I was going back and forth between $a, b$ as a free basis and $a, b$ as arbitrary elements).

16 January, 2018 at 11:13 am

Lior Silberman

This is to record an erratum pointed out to us by email, and its correction.

Let $G$ be a group equipped with a homogeneous norm $\ell$ (so $G$ is necessarily abelian and torsion-free). Then this norm extends to a norm on the $\mathbb{Q}$-vector space $A_\mathbb{Q} = G \otimes \mathbb{Q}$. In detail, every element of $A_\mathbb{Q}$ has a representative of the form $\frac{1}{n}g$ with $g \in G$ and $n \geq 1$, and we set $\ell(\frac{1}{n}g) = \frac{1}{n}\,\ell(g)$.

Now let $\overline{A_\mathbb{Q}}$ be the metric completion of $A_\mathbb{Q}$. This is still a normed group into which $G$ embeds, and is in fact an $\mathbb{R}$-vector space: if $(x_i)$ is a Cauchy sequence in $A_\mathbb{Q}$ and $(\lambda_i)$ is a sequence of rationals converging to $\lambda \in \mathbb{R}$, then $(\lambda_i x_i)$ is also Cauchy, and it is easy to check that its equivalence class depends only on the equivalence classes of the original sequences. Since also $\ell(\lambda_i x_i) = |\lambda_i|\,\ell(x_i)$, it follows that the norm is $\mathbb{R}$-homogeneous. In summary, $\overline{A_\mathbb{Q}}$ is naturally a complete normed $\mathbb{R}$-vector space, that is, a Banach space.

Finally, there is a natural map of $\mathbb{Q}$-vector spaces $A_\mathbb{Q} \to \overline{A_\mathbb{Q}}$. This map is injective, since our extension of the norm was still a norm. Since the target is compatibly an $\mathbb{R}$-vector space, this induces a further map of $\mathbb{R}$-vector spaces $A_\mathbb{R} = G \otimes \mathbb{R} \to \overline{A_\mathbb{Q}}$.

However, the latter map need not be injective! In other words, when we pull back the norm from $\overline{A_\mathbb{Q}}$ to $A_\mathbb{R}$, the result need only be a seminorm.

For example, if we start with a norm of the form $\ell(a,b) = |a + \alpha b|$ (for some irrational $\alpha$) on $G = \mathbb{Z}^2$, then the same formula defines a norm on $A_\mathbb{Q} = \mathbb{Q}^2$ but only a seminorm on $A_\mathbb{R} = \mathbb{R}^2$. To get a norm we need to divide by the subspace $\{(x,y) : x + \alpha y = 0\}$, which is compatible with everything we’ve said, since this subspace is disjoint from the image of $A_\mathbb{Q}$ here, so $G$ and $A_\mathbb{Q}$ still inject into the quotient (isometrically, as they must).
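As a toy numerical check of this phenomenon — the specific choice $\ell(x,y) = |x + \sqrt{2}\,y|$ on $\mathbb{Z}^2$ is my illustrative assumption, picked because an irrational slope makes the null line miss every nonzero rational point:

```python
import math

ALPHA = math.sqrt(2)  # any irrational slope works for this illustration

def ell(x: float, y: float) -> float:
    """|x + alpha*y|: a norm on Z^2 (and Q^2), but only a seminorm on R^2,
    where the line x + alpha*y = 0 is the kernel."""
    return abs(x + ALPHA * y)

# Nonzero lattice points all have strictly positive length ...
assert all(ell(a, b) > 0
           for a in range(-20, 21) for b in range(-20, 21)
           if (a, b) != (0, 0))

# ... but the nonzero real vector (-alpha, 1) is a null vector,
# so on R^2 we must quotient by its span to recover a genuine norm.
print(ell(-ALPHA, 1))  # 0.0
```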

16 January, 2018 at 12:04 pm

Lior Silberman

Adding: this is all already in the paper; but we could better clarify the distinction between $A_\mathbb{R}$ and $\overline{A_\mathbb{Q}}$.

25 January, 2018 at 10:28 pm

Spontaneous Polymath 14 – A success! | The polymath blog

[…] This post reports an unplanned Polymath project, now called Polymath 14, that took place on Terry Tao’s blog. A problem posed by Apoorva Khare was presented, discussed, and openly and collectively solved. (And the paper arXived.) […]