An expert is a man who has made all the mistakes, which can be made, in a very narrow field. (Niels Bohr)
If you unexpectedly find a problem solving itself almost effortlessly, and you can’t quite see why, you should try to analyse your solution more sceptically.
In particular, the method may also be able to prove much stronger statements which are known to be false, which would imply that there is a flaw in the method.
In a related spirit, if you are trying to prove some ambitious claim, you might try to first look for a counterexample; either you find one, which saves you a lot of time and may well be publishable in its own right, or else you encounter some obstruction, which should give some clue as to what one has to do in order to establish the claim positively (in particular, it can "identify the enemy" that has to be neutralised in order to conclude the proof).
Actually, it’s not a bad idea to apply this type of scepticism to other mathematicians’ claims also; if nothing else, such scrutiny can give you a sense of why a claim is true and how powerful it is.
A sceptical attitude towards your own work is especially important when dealing with a problem which is known to be difficult (and this includes most "famous problems"), or one which is outside your usual area of expertise. In particular, if your solution to that problem resembled the following process:
1. Transform the difficult problem to another difficult problem.
2. Transform the problem again to yet another difficult problem.
3. …
4. Transform the problem again to yet another difficult problem.
5. Transform the problem again. Suddenly the problem becomes much simpler!
6. Transform the simple problem to another simple problem.
7. …
8. Transform the simple problem again to another simple problem.
9. Solve the last simple problem. Done!
then there is almost certainly a major error in your argument in Step 5. (This is especially true if the difficulty of the transformed problem had been steadily increasing through steps 1-4.) At a bare minimum, this suspicious step should be thoroughly checked and rechecked, any hand-waving arguments near this step should be written out in full, and some analysis should be undertaken to understand exactly which step in the argument was the decisive one that dramatically simplified the problem, and how that step could be powerful enough to achieve such a simplification.
Here is another common type of suspicious argument:
1. To prove Famous Conjecture X, use reductio ad absurdum, and assume for sake of contradiction that X is false.
2. Do some random computations of tangential relevance to X.
3. Do some more random computations of this type.
4. …
5. Do another random computation, but this time unwittingly make a sign error, division by zero, or similar mistake.
6. Do yet more random computations.
7. …
8. Notice that two of your computations are inconsistent with each other.
9. Congratulations – you’ve obtained the desired contradiction. Declare victory!
A good way to stress-test this sort of false argument is to try to run the same argument without the initial assumption that X is false. If one can easily modify the argument to again lead to a contradiction, it shows the problem wasn’t with X – it was with the argument. A classic example here would be a "proof" that the existence of non-trivial natural number solutions to the equation $a^n + b^n = c^n$ leads to a contradiction, which mysteriously fails to use in any significant way the hypothesis that $n \geq 3$, and would in fact also work (perhaps after some small modification) for $n = 2$. Another good practice is to take an argument that you first found using reductio ad absurdum, and try to untangle it to create a more direct proof that avoids reductio ad absurdum. If the argument is fairly "linear" in nature, this should end up with a simpler and more streamlined argument (which is part of the rewriting process in any event), and can often also catch errors of the type in Step 5 above.
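To make this stress test concrete, here is a deliberately broken toy argument of exactly this shape (an illustration constructed for this point, not an argument anyone has seriously proposed). Suppose $a^n + b^n = c^n$ had a non-trivial natural number solution with $n \geq 3$, and set $x = y = a$. Then

$$x^2 = xy, \qquad x^2 - y^2 = xy - y^2, \qquad (x+y)(x-y) = y(x-y), \qquad x + y = y,$$

so $2a = a$ and hence $2 = 1$, a contradiction; "therefore" no such solution exists. Running exactly the same steps without the hypothesis $n \geq 3$ (say, starting from the genuine solution $3^2 + 4^2 = 5^2$) produces the very same contradiction, which immediately shows that the flaw lies in the argument (the division by $x - y = 0$ in the last step) rather than in the assumed solution.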
Another warning sign is if the computations lead you further and further away from the mathematical topics and connections that X is supposed to be addressing (e.g. a proposed proof of the Riemann hypothesis that proceeds almost entirely using the theory of meromorphic functions, with almost no reference to integers, primes, or other basic number-theoretic concepts; or, conversely, an argument that proceeds entirely by working with the integers, with barely any reference to the zeta function).
A final type of warning sign is if your approach is based on adapting a method that has been used frequently in the past by many experts in the field, but with an additional twist of your own that all the previous experts somehow "missed". In such situations, there is a high probability that this twist contains a serious technical flaw that the previous experts were aware of and avoided, and a particularly high level of scepticism is warranted.
For comparison, actual solutions to a major problem tend to be arrived at by a process more like the following (often involving several mathematicians over a period of years or decades, with many of the intermediate steps described here being significant publishable papers in their own right):
1. Isolate a toy model case x of major problem X.
2. Solve model case x using method A.
3. Try using method A to solve the full problem X.
4. This does not succeed, but method A can be extended to handle a few more model cases of X, such as x' and x''.
5. Eventually, it is realised that method A relies crucially on a property P being true; this property is known for x, x', and x'', thus explaining the progress so far.
6. Conjecture that P is true for all instances of problem X.
7. Discover a family of counterexamples y, y', y'', … to this conjecture. This shows that either method A has to be adapted to avoid reliance on P, or that a new method is needed.
8. Take the simplest counterexample y in this family, and try to prove X for this special case. Meanwhile, try to see whether method A can work in the absence of P.
9. Discover several counterexamples in which method A fails, in which the cause of failure can be definitively traced back to P. Abandon efforts to modify method A.
10. Realise that special case y is related to (or at least analogous to) a problem z in another field of mathematics. Look up the literature on z, and ask experts in that field for the latest perspectives on that problem.
11. Learn that z has been successfully attacked in that field by use of method B. Attempt to adapt method B to solve y.
12. After much effort, an adapted method B' is developed to solve y.
13. Repeat the above steps 1-12 with A replaced by B' (the outcome will of course probably be a little different from the sample storyline presented above). Continue doing this for a few years, until all model special cases can be solved by one method or another.
14. Eventually, one possesses an array of methods that can give partial results on X, each with its own strengths and weaknesses. Considerable intuition is gained as to the circumstances in which a given method is likely to yield something non-trivial or not.
15. Begin combining the methods together, simplifying the execution of these methods, locating new model problems, and/or finding a unified and clarifying framework in which many previous methods, insights, results, etc. become special cases.
16. Eventually, one realises that there is a family of methods A^* (of which A was the first to be discovered) which, roughly speaking, can handle all cases in which property P^* (a modern generalisation of property P) occurs. There is also a rather different family of methods B^* which can handle all cases in which property Q^* occurs.
17. From all the prior work on this problem, all known model examples are known to obey either P^* or Q^*. Formulate Conjecture C: all cases of problem X obey either P^* or Q^*.
18. Verify that Conjecture C in fact implies the problem. This is a major reduction!
19. Repeat steps 1-18, but with problem X replaced by Conjecture C. (Again, the storyline may be different from that presented above.) This procedure itself may iterate a few times.
20. Finally, the problem has been boiled down to its most purified essence: a key conjecture K which (morally, at least) provides the decisive input into the known methods A^*, B^*, etc. which will settle Conjecture C and hence problem X.
21. A breakthrough: a new method Z is introduced to solve an important special case of K.
22. The endgame: method Z is rapidly developed and extended, using the full power of all the intuition, experience, and past results, to fully settle K, then C, and then at last X.
23. The technology developed to solve major problem X is adapted to solve other related problems in the field. But now a natural successor question X' to X arises, which lies just outside of the reach of the newly developed tools… and we go back to Step 1.
See also "Learn the limitations of your tools", "Ask yourself dumb questions", "Think ahead", and "Use the wastebasket".
23 comments
10 May, 2009 at 11:53 am
Qiaochu Yuan
I’m quite curious – can you give an example where that process actually occurred, preferably multiple times? I’m not really aware of any resources for finding out the full story behind how a conjecture – especially a not particularly well-known, but still important, one – was proven, so I (and I hope others as well) would find such an example quite valuable.
Really the only good example I know of is the writeup of the Polymath project!
10 May, 2009 at 4:26 pm
Terence Tao
Dear Qiaochu,
Actually most solutions to major problems (e.g. Poincare conjecture, Fermat’s last theorem, etc.) have this sort of history – a painstaking series of explorations, conjectures, setbacks, steady accumulation of intuition and insights, realisation of the naivete of earlier approaches, etc. It is indeed difficult though to find a coherent narrative for any given one of these problems (except perhaps at conference talks, or an advanced survey article); usually the story is spread throughout a dozen papers, and one has to have a fair amount of familiarity with the problem to really appreciate the unfolding of progress.
For some problems, though, there are some good surveys that focus on exactly this sort of narrative, e.g. Goldston-Pintz-Yildirim’s “The path to recent progress on small gaps between primes”. Milnor’s Clay article on the Poincare conjecture covers the topological phase of the attacks on that problem fairly well, and the important discovery of the geometrisation conjecture, though it is rather light on the Ricci flow approach which is of course at the heart of Perelman’s solution (but it is fair to say that this geometric approach would only have been seriously attempted after it became abundantly clear that the topological methods had fallen quite short of what was needed to settle the conjecture). Morgan’s BAMS survey article on the subject covers the latter quite thoroughly, though the focus is more on the technical details than on the development of ideas. Finally, I discuss the narrative surrounding Szemeredi’s theorem in my article “What is good mathematics?” (linked to on the sidebar of this blog).
In a few days I will post my talk on the Kakeya conjecture, where the process has definitely been of the type described above, though of course we have not yet reached the end of that particular story.
10 May, 2009 at 7:54 pm
Greg Kuperberg
There is a famous unpublished paper by John Stallings entitled "How Not to Prove the Poincare Conjecture". It was written in 1965, and although Stallings died not long ago, the paper is still there on his home page.
(PDF: notPC.pdf)
It is true that combinatorial topology dominated people’s attention to the Poincare conjecture until the work of Thurston, Hamilton, and Perelman. Moreover Thurston’s work, which was more on the complementary, hyperbolic part of the geometrization conjecture, can be called half combinatorial topology and half differential geometry. It is also true that these traditional methods never really worked. However, I don’t know of any strong reasons, Stallings’ paper notwithstanding, that combinatorial topology can’t work for this conjecture. I don’t think that it’s especially true that anyone waited to see combinatorial topology methods fail before attempting differential geometry methods. On the contrary, relatively few people tried to burnish Hamilton’s program. Many more 3-manifold topologists looked for geometrization results “under the combinatorial street lamp”, to use the proverb.
Anyway, in general when you work on an open problem, there must be some reason lurking that it’s open. Unlike homework or contest problems, open problems survive by natural selection; the well-known ones feel like they already have been picked clean by other people. You should expect Murphy’s Law to hold.
An example: Until 1998 it was an open problem whether the smallest possible single Voronoi region of a sphere packing in 3D is the regular dodecahedron. Even now, the proof by Hales and McLaughlin is quite complicated and I don’t know how well it has been checked. (But let me be clear that I do not mean to cast doubt on their work in saying that.)
Here is how you might trick yourself and fail to prove this conjecture, in a way that more than one professional mathematician was tricked: (1) Notice, by making examples, that all of the Voronoi regions with 13 or more sides look terrible. (2) Infer that 12 sides is the most competitive case and therefore the hardest case. (3) Find an elegant proof that the smallest 12-sided polyhedron that contains a unit sphere is the regular dodecahedron, whether or not it can be realized as a Voronoi region. (4) Congratulations! You "almost" solved the problem! The fact that you solved the most competitive case, and the fact that the solution is elegant, both give strength to your position.
Except that actually, you didn’t almost solve the problem. The theorem about 12-sided polyhedra that contain the sphere certainly is elegant, but it was published in 1948 by Laszlo Fejes Toth in a paper that can’t easily be Googled. Fejes Toth was not famous the way that Perelman is famous, but he was a good mathematician. If this line of attack were close to a full solution of the Voronoi problem, he probably would have found it. Nor was he the only capable mathematician between 1948 and 1998 to get stuck here. It turns out that even though 12 sides does seem to be the competitive case, it is not the hardest case. The hard cases are messy cases with 13 and 14 sides. The fact that the 12-sided case has an elegant solution is also a red herring.
11 May, 2009 at 8:43 am
harrison
Greg, Of course the very title of Stallings’ paper has been parodied and paid homage to multiple times — although I don’t know how many of these are actually a similar type of paper.
With a little background in number theory, it’s not hard to get the narrative of how Fermat’s Last Theorem progressed at least through the nineteenth century — the n = 4 case is very simple and was proved by Fermat. Euler introduced the idea of considering the equation over an integral extension of Z in his (flawed) proof of the n = 3 case. Germain was the first to separate the conjecture into two cases (based on whether or not n divides abc) and this idea — along with others introduced by Germain and Euler (such as to consider classes of exponents rather than individual ones) — dominated the research throughout the 19th and well into the 20th century. The approach to the problem that looked like it was going to work in the mid-19th century was to consider the equation over the ring of integers of the nth cyclotomic field. Kummer pointed out that this ring is not always a unique factorization domain (as had been assumed); although he could extend the method to most primes, it failed for the so-called “irregular primes.” That’s about the extent of my knowledge on the subject, though; if anyone knows more (or knows I’m wrong), please fill in the details!
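As a brief sketch of the algebraic identity behind the cyclotomic approach just described (standard background, added here for concreteness rather than taken from the comment): for an odd prime $p$ and $\zeta_p = e^{2\pi i/p}$, one has

$$a^p + b^p = \prod_{k=0}^{p-1} \left(a + \zeta_p^k b\right)$$

in the ring $\mathbb{Z}[\zeta_p]$. If this ring had unique factorisation, the factors on the right could (after handling common divisors) each be forced to be a unit times a $p$-th power, setting up a descent; Kummer’s discovery that unique factorisation can fail, and his ideal-theoretic repair of the argument for regular primes, is exactly the turn in the story described above.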
11 May, 2009 at 10:32 am
Greg Kuperberg
"With a little background in number theory, it’s not hard to get the narrative of how Fermat’s Last Theorem progressed at least through the nineteenth century"
This sort of larger-than-life example — and the Poincare conjecture is another one — is certainly interesting, but it doesn’t entirely convey Terry’s point about personal fallibility in the face of open problems. If you think that you have solved an open problem, you should keep trying to understand it better for a while, because there is a good chance that you will find a mistake. This is still good advice even if it’s a run-of-the-mill open problem, not an incredibly hard open problem like Fermat’s Last Theorem.
Here is the sort of intuitive fallacy that can come up in a geometry argument: "If I lengthen all three sides of a triangle, its area will go up." It sounds believable, and it even works in a few examples — but it’s not always true for obtuse triangles. Of course if you view this example in isolation, it seems boneheaded. But when working on an open problem, most people have to make intuitive leaps of this sort. If you think that you have a new result, that’s a great time to go back and debug your intuition at each step.
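A minimal worked counterexample in this vein (with numbers chosen here purely for illustration): by Heron’s formula, a triangle with side lengths $a$, $b$, $c$ and semiperimeter $s = (a+b+c)/2$ has area

$$A = \sqrt{s(s-a)(s-b)(s-c)}.$$

The equilateral triangle with sides $(1, 1, 1)$ has area $\sqrt{3}/4 \approx 0.43$, while the very obtuse triangle with strictly longer sides $(1.5, 1.5, 2.99)$ has $s = 2.995$ and area $\sqrt{2.995 \cdot 1.495 \cdot 1.495 \cdot 0.005} \approx 0.18$; lengthening every side has decreased the area.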
Terry also makes the more specific point that you should know the difference between climbing the mountain and marching around it. Restating a problem several times, or reducing it to another problem, is actually very good practice. But solving an open problem usually takes more than that. A few problems can be solved that way. Also, a few people have at times brilliantly transformed a broad set of problems by generalizing and restating them — this was Grothendieck’s work philosophy. But a more common outcome is that if you restate a question too many times, eventually you will make it “easier” by restating it erroneously.
An example problem that can be solved almost entirely with restatement: Counting problems where the answer is the Catalan numbers, for example the number of lists of 2n balanced parentheses. However, this is only medium-hard as counting problems go, and the restatements have to be somewhat clever to work.
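For concreteness (a standard formula, added as an aside): the number of balanced sequences of $2n$ parentheses is the Catalan number

$$C_n = \frac{1}{n+1}\binom{2n}{n},$$

so for example $C_3 = 5$, corresponding to the five sequences ((())), (()()), (())(), ()(()), ()()().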
An example where restatements are not known to work by themselves: Counting alternating-sign matrices, or ASMs. There are a dozen or so interesting-looking encodings of alternating-sign matrices. You can spend ages rummaging through them in search of a simple counting principle to count ASMs. In fact, restatements of the ASM problem have been important, for instance the description of ASMs in terms of square ice. But even the important restatements do not solve this problem; rather they only give you access to substantive methods such as the Yang-Baxter equation.
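For comparison (again a standard fact, stated here as an aside rather than as part of the comment): the number of $n \times n$ alternating-sign matrices is

$$A(n) = \prod_{k=0}^{n-1} \frac{(3k+1)!}{(n+k)!} = 1,\ 2,\ 7,\ 42,\ 429,\ \dots,$$

a formula whose known proofs all go through substantive machinery of the kind mentioned above rather than through restatement alone.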
11 May, 2009 at 7:01 pm
Richard
I like this comment in the Conclusion of the paper by Stallings cited by Greg above:
“The second point is that I was unable to find flaws in my ‘proof’ for quite a while, even though the error was very obvious. It was a psychological problem, a blindness, an excitement, an inhibition of reasoning by an underlying fear of being wrong. Techniques leading to the abandonment of such inhibitions should be cultivated by every honest mathematician.”
I recently fell into this trap myself. In the midst of a long paper in progress was a theorem with an assertion that did not seem entirely unreasonable, and I even made a second pass of rewriting in that section and didn’t see a problem. But eventually, it made me more and more nervous, and I started digging into some old literature and found a counterexample. I eventually traced the problem to a lemma that was only partly true, and I finally realized "of course that’s not true, it’s obvious it’s not true!" I could attribute the error to the dense complexity of the theory, or reassure myself that I must have written that up late at night in a state of brain fog (which is often the case), but in fact it was the kind of blindness that Stallings refers to.
I was shaken, because other results also depended on that lemma. Fortunately, I was able to rewrite the section (again) and recover everything but that particular theorem using different techniques, and after that exercise, I had fresh ideas about how to attack other questions in the next section.
11 May, 2009 at 8:31 pm
Greg Kuperberg
"It was a psychological problem, a blindness, an excitement, an inhibition of reasoning by an underlying fear of being wrong."
Yes, it’s the tremendous elation that you’ve solved a really hard open problem. After you’ve been through this a few times, it is a thrill that is also nauseating, because you realize that it is your worst enemy.
Again, to pick an example, I have been through this with the problem of finding a quantum algorithm for graph isomorphism, among other problems that I never solved. If Gamma is a graph and if you create a quantum superposition which is the sum of |f(Gamma)> for all permutations f of the vertices, then you’d have a quantum algorithm for graph isomorphism. Unfortunately, creating that superposition looks as hard as the original problem.
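In symbols (a sketch of the folklore observation being described here, with notation chosen purely for illustration): for an $n$-vertex graph $\Gamma$ one would like to prepare the state

$$|\psi_\Gamma\rangle \;\propto\; \sum_{\sigma \in S_n} |\sigma(\Gamma)\rangle.$$

Two such states coincide when the underlying graphs are isomorphic and are orthogonal when they are not, so a swap test on $|\psi_{\Gamma_1}\rangle$ and $|\psi_{\Gamma_2}\rangle$ would decide isomorphism. The catch, as noted, is that no efficient way to prepare $|\psi_\Gamma\rangle$ is known; doing so appears to be as hard as the original problem.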
10 February, 2014 at 8:04 pm
Jarod Benowitz
You don’t need a quantum computer to do it :)
13 May, 2009 at 4:46 pm
john mangual
Is there a rigorous notion of "conservation of difficulty" in mathematics?
Some proofs from "the book" can violate this conservation law. What is compensating for their simplicity and insight?
13 May, 2009 at 7:30 pm
Terence Tao
Dear John,
It would be wonderful if there was indeed a rigorous notion of "conservation of difficulty" in mathematics – this would undoubtedly make problems such as $P \neq NP$ much more tractable! Unfortunately, we don’t yet have a precise and usable notion of difficulty that is this rigorous.
There are various heuristic arguments, on the other hand, that do suggest a lower bound to the amount of effort needed to prove certain theorems. For instance, if a very slight perturbation of the hypotheses of a problem X leads to a modified statement X’ which has a subtle counterexample, then this implies that any positive proof of X must be quite delicate; it cannot be so crude or elementary an argument that it cannot distinguish between the true statement X and the false statement X’. For instance, if a statement is known to be true for the reals $\mathbb{R}$ but false for the algebraic reals $\overline{\mathbb{Q}} \cap \mathbb{R}$, then one cannot hope to prove or disprove either statement by elementary algebra, since such tools cannot distinguish between the two fields; it is likely that one will need analytic tools, such as limits and integrals, instead.
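A concrete statement of this kind (an illustration added here, not from the original comment) is the least upper bound property: the bounded set

$$\{x \in \overline{\mathbb{Q}} \cap \mathbb{R} : x < \pi\}$$

has a supremum in $\mathbb{R}$ (namely $\pi$) but no least upper bound within the algebraic reals, even though the two fields satisfy exactly the same elementary (first-order) algebraic statements; any proof that detects the difference must therefore use some genuinely analytic ingredient.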
Another measure of difficulty of a problem can come from lower bounds on the quantitative dependencies of the constants associated to the problem. For instance, it is known that the number of cells needed to regularise a graph to accuracy $\epsilon$ grows tower-exponentially fast in $1/\epsilon$. This already rules out a number of overly simple approaches to proving the graph regularity lemma, since many such approaches would necessarily give polynomial or exponential bounds rather than tower-exponential ones.
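To spell out the tower-type growth mentioned here (standard bounds, quoted approximately): define the tower function by

$$T(1) = 2, \qquad T(k+1) = 2^{T(k)}.$$

The usual proof of the regularity lemma produces a partition into at most roughly $T(O(\epsilon^{-5}))$ cells, while Gowers constructed graphs for which on the order of $T(\epsilon^{-c})$ cells (for some small constant $c > 0$) are genuinely necessary; any argument that could only ever yield polynomial or exponential bounds therefore cannot prove the full lemma.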
A third way to gauge difficulty is to see how many independent applications the result would have. If a result X would easily imply as corollaries three very different-looking results A, B, and C, each of which is non-trivial in its own right (and the three are proven by completely different methods), then this makes it less likely that there is going to be a quick proof of X, as this would mean that there would be a "one-size-fits-all" proof of A, B, and C. (Instead, what normally happens is that the proof of X will be some amalgam of the proofs used for A, B, and C.)
Of course, not all of the perceived difficulty of a problem is genuinely present, and by cleverly reformulating the problem (or by choosing good notation) one can sometimes get rid of artificial obstacles to proving a result: many examples of “proofs from the book” are of this form, boiling down the difficulties to the bare essentials. But many results, particularly the deeper facts which touch upon particularly subtle, delicate, and broad mathematical phenomena, do seem to have a certain irreducible amount of difficulty to them, even after reformulating away all non-essential obstructions. In some cases, though, this difficulty can be absorbed into some larger theoretical framework which takes some effort to set up, but then can be used easily and efficiently for many further applications. (The construction of the Lebesgue measure and Lebesgue integral is a typical such example; remarkably tedious to set up initially, but extremely useful once in place.)
14 May, 2009 at 8:29 am
Anonymous
Prof. Tao,
Thanks for the above remarks on difficulty. They are great!
24 April, 2010 at 5:29 am
Anonymous
Would it be fair to say that for difficult problems, an effective approach would have to involve lots of concrete examples and their specific details, and only then generalize and abstract from these concrete examples? Are approaches which always remain abstract, general and universal far less likely to be effective?
24 April, 2010 at 5:31 am
Anonymous
This reminds me, one common method of solving math problems, which also happens to be looked down upon a lot, is a case by case analysis. Another is trial and error.
24 December, 2011 at 9:23 pm
Warren
Step 10: searching the literature. This is a VERY big difficulty when one is unemployed and has no formal association with any university or corporation. Moreover, how does one search for a particular identity one might need? How does one enter a mathematical formula into a search engine to see if there exist any counterexamples or proofs of this desired formula? I am dying for library access to mathematical literature!
Contacting paid mathematicians is not sufficient. Often, they do not know the answer and, rightfully so, are not going to do literature searches for me for free.
27 January, 2015 at 4:25 pm
Alex
What about coordinate changes? Algebraic/geometric machinery seems to find the right coordinate changes that reduce the problems to a simpler computation. Although I agree with you: what you said amounts to saying that finding the right coordinate change, or at least knowing that a simplifying coordinate change exists, can be very hard; still, there are many instances where people seem to stumble straight away upon the right sequence of computations to get the simplifications.
18 May, 2015 at 2:25 pm
Steve Witham
"Read over your compositions, and wherever you meet with a passage which you think is particularly fine, strike it out."
–Samuel Johnson, quoted by Boswell