I’ve been focusing my blog posts recently on the mathematics around Hilbert’s fifth problem (is every locally Euclidean group a Lie group?), but today, I will be discussing another of Hilbert’s problems, namely Hilbert’s seventh problem, on the transcendence of powers {a^b} of two algebraic numbers {a,b}. (I am not randomly going through Hilbert’s list, by the way; I hope to explain my interest in the seventh problem in a later post.) This problem was famously solved by Gelfond and Schneider in the 1930s:

Theorem 1 (Gelfond-Schneider theorem) Let {a, b} be algebraic numbers, with {a \neq 0,1} and {b} irrational. Then (any of the values of the possibly multi-valued expression) {a^b} is transcendental.

For sake of simplifying the discussion, let us focus on just one specific consequence of this theorem:

Corollary 2 {\frac{\log 2}{\log 3}} is transcendental.

Proof: If not, one could obtain a contradiction to the Gelfond-Schneider theorem by setting {a := 3} and {b := \frac{\log 2}{\log 3}}, since then {a^b = 2} would be a transcendental number. (Note that {\frac{\log 2}{\log 3}} is clearly irrational, since {3^p \neq 2^q} for any integers {p,q} with {q} positive.) \Box

In the 1960s, Alan Baker established a major generalisation of the Gelfond-Schneider theorem known as Baker’s theorem, as part of his work in transcendence theory that later earned him a Fields Medal. Among other things, this theorem provided explicit quantitative bounds on exactly how transcendental quantities such as {\frac{\log 2}{\log 3}} were. In particular, it gave a strong bound on how irrational such quantities were (i.e. how easily they were approximable by rationals). Here, in particular, is one special case of Baker’s theorem:

Proposition 3 (Special case of Baker’s theorem) For any integers {p, q} with {q} positive, one has

\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq c \frac{1}{q^C}

for some absolute (and effectively computable) constants {c, C > 0}.

This theorem may be compared with (the easily proved) Liouville’s theorem on diophantine approximation, which asserts that if {\alpha} is an irrational algebraic number of degree {d}, then

\displaystyle |\alpha - \frac{p}{q}| \geq c \frac{1}{q^d}

for all {p,q} with {q} positive, and some effectively computable {c>0}, and (the significantly more difficult) Thue-Siegel-Roth theorem, which under the same hypotheses gives the bound

\displaystyle |\alpha - \frac{p}{q}| \geq c_\epsilon \frac{1}{q^{2+\epsilon}}

for all {\epsilon>0}, all {p,q} with {q} positive and an ineffective constant {c_\epsilon>0}. Finally, one should compare these results against Dirichlet’s theorem on Diophantine approximation, which asserts that for any real number {\alpha} one has

\displaystyle |\alpha - \frac{p}{q}| < \frac{1}{q^2}

for infinitely many {p,q} with {q} positive. (The reason the Thue-Siegel-Roth theorem is ineffective is because it relies heavily on the dueling conspiracies argument, i.e. playing off multiple “conspiracies” {\alpha \approx \frac{p}{q}} against each other; the other results however only focus on one approximation at a time and thus avoid ineffectivity.)
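To see Dirichlet’s theorem in action for the specific number {\frac{\log 2}{\log 3}}, one can compute its continued fraction convergents, each of which achieves the {1/q^2} approximation quality. Here is a quick numerical illustration in Python (a rough sketch only; the convergents are extracted from a double-precision value of {\frac{\log 2}{\log 3}}, which is accurate enough for the first several terms):

```python
import math

# alpha = log 2 / log 3, the number appearing in Corollary 2
alpha = math.log(2) / math.log(3)

def convergents(x, k):
    """First k continued fraction convergents p/q of x."""
    p0, q0, p1, q1 = 1, 0, int(x), 1
    out = [(p1, q1)]
    for _ in range(k - 1):
        x = 1.0 / (x - int(x))       # shift to the next partial quotient
        a = int(x)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append((p1, q1))
    return out

# Each convergent p/q satisfies the Dirichlet-quality bound |alpha - p/q| < 1/q^2.
for p, q in convergents(alpha, 8):
    err = abs(alpha - p / q)
    print(f"p/q = {p}/{q}, error = {err:.3e}, 1/q^2 = {1 / q**2:.3e}")
    assert err < 1 / q**2
```

The convergents {1/2, 2/3, 5/8, 12/19, 41/65, \ldots} produced this way correspond to the familiar near-coincidences between powers of {2} and {3} (for instance {2^{19} \approx 3^{12}}, the Pythagorean comma underlying the twelve-tone musical scale).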

Proposition 3 easily implies the following separation property between powers of {2} and powers of {3}:

Corollary 4 (Separation between powers of {2} and powers of {3}) For any positive integers {p, q} one has

\displaystyle |3^p - 2^q| \geq \frac{c}{q^C} 3^p

for some effectively computable constants {c, C > 0} (which may be slightly different from those in Proposition 3).

Indeed, this follows quickly from Proposition 3, the identity

\displaystyle 3^p - 2^q = 3^p ( 1 - 3^{q (\frac{\log 2}{\log 3} - \frac{p}{q})}) \ \ \ \ \ (1)


and some elementary estimates.
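For instance, one can check the identity (1) numerically; the following Python snippet (an illustrative sanity check only) compares both sides for a few sample pairs {(p,q)}:

```python
import math

# Sanity-check the identity
#   3^p - 2^q = 3^p * (1 - 3^{q (log2/log3 - p/q)})
# for a few sample pairs (p, q).  The right-hand side is computed in
# floating point, so we only ask for agreement up to rounding error.
alpha = math.log(2) / math.log(3)

for p, q in [(2, 3), (12, 19), (41, 65)]:
    lhs = 3**p - 2**q
    rhs = 3**p * (1 - 3**(q * (alpha - p / q)))
    print(p, q, lhs, rhs)
    assert abs(lhs - rhs) <= 1e-6 * abs(lhs)
```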

In particular, the gap between powers of three {3^p} and powers of two {2^q} grows exponentially in the exponents {p,q}. I do not know of any way to establish this fact other than by essentially going through some version of Baker’s argument (which will be given below).

For comparison, by exploiting the trivial (yet fundamental) integrality gap – the obvious fact that if an integer {n} is non-zero, then its magnitude is at least {1} – we have the trivial bound

\displaystyle |3^p - 2^q| \geq 1

for all positive integers {p, q} (since, from the fundamental theorem of arithmetic, {3^p-2^q} cannot vanish). Putting this into (1) we obtain a very weak version of Proposition 3, that only gives exponential bounds instead of polynomial ones:

Proposition 5 (Trivial bound) For any integers {p, q} with {q} positive, one has

\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq c \frac{1}{q 2^q}

for some absolute (and effectively computable) constant {c > 0}.
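The weakness of Proposition 5 is easy to see numerically: for each denominator {q}, the best approximation error {\min_p |\frac{\log 2}{\log 3} - \frac{p}{q}|} decays only like a power of {1/q} (consistent with Baker’s theorem), vastly slower than any exponentially small bound. A quick Python illustration (the comparator {2^{-q}/8} is an arbitrary illustrative stand-in for the exponentially small trivial bound):

```python
import math

# For each denominator q, compare the best approximation error
#   min_p |alpha - p/q|,   alpha = log2/log3,
# against an exponentially small quantity of the shape appearing in the
# trivial bound.  The actual error decays far more slowly.
alpha = math.log(2) / math.log(3)

for q in range(2, 26):
    p = round(alpha * q)             # best numerator for this q
    err = abs(alpha - p / q)
    print(f"q={q:2d}  best error={err:.3e}  2^-q={2.0**-q:.3e}")
    assert err > 2.0**-q / 8         # comfortably above the exponential bound
```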

The proof of Baker’s theorem (or even of the simpler special case in Proposition 3) is largely elementary (except for some very basic complex analysis), but is quite intricate and lengthy, as a lot of careful book-keeping is necessary in order to get a bound as strong as that in Proposition 3. To illustrate the main ideas, I will prove a bound that is weaker than Proposition 3, but still significantly stronger than Proposition 5, and whose proof already captures many of the key ideas of Baker:

Proposition 6 (Weak special case of Baker’s theorem) For any integers {p, q} with {q > 1}, one has

\displaystyle |\frac{\log 2}{\log 3} - \frac{p}{q}| \geq \exp( - C \log^{C'} q )

for some absolute constants {C, C' > 0}.

Note that Proposition 3 is equivalent to the assertion that one can take {C'=1} (and {C} effective) in the above proposition.

The proof of Proposition 6 can be made effective (for instance, it is not too difficult to make the {C'} close to {2}); however, in order to simplify the exposition (and in particular, to use some nonstandard analysis terminology to reduce the epsilon management), I will establish Proposition 6 with ineffective constants {C, C'}.

Like many other results in transcendence theory, the proof of Baker’s theorem (and Proposition 6) relies on what we would nowadays call the polynomial method – to play off upper and lower bounds on the complexity of polynomials that vanish (or nearly vanish) to high order on a specified set of points. (I have discussed the polynomial method in relation to problems in incidence geometry in several previous blog posts.) In the specific case of Proposition 6, the points in question are of the form

\displaystyle \Gamma_N := \{ (2^n, 3^n): n = 1,\ldots,N \} \subset {\bf R}^2

for some large integer {N}. On the one hand, the irrationality of {\frac{\log 2}{\log 3}} ensures that the curve

\displaystyle \gamma := \{ (2^t, 3^t): t \in {\bf R} \}

is not algebraic, and so it is difficult for a polynomial {P} of controlled complexity to vanish (or nearly vanish) to high order at all the points of {\Gamma_N}; the trivial bound in Proposition 5 allows one to make this statement more precise. (Here, “complexity” of a polynomial is an informal term referring both to the degree of the polynomial, and the height of the coefficients, which in our application will essentially be integers up to some normalisation factors.) On the other hand, if Proposition 6 failed, then {\frac{\log 2}{\log 3}} would be close to a rational, which by Taylor expansion makes {\gamma} close to an algebraic curve over the rationals (up to some rescaling by factors such as {\log 2} and {\log 3}) at each point of {\Gamma_N}. This, together with a pigeonholing argument, allows one to find a polynomial {P} of reasonably controlled complexity to (nearly) vanish to high order at every point of {\Gamma_N}.

These observations, by themselves, are not sufficient to get beyond the trivial bound in Proposition 5. However, Baker’s key insight was to exploit the integrality gap to bootstrap the (near) vanishing of {P} on a set {\Gamma_N} to imply near-vanishing of {P} on a larger set {\Gamma_{N'}} with {N'>N}. The point is that if a polynomial {P} of controlled degree and size (nearly) vanishes to higher order on a lot of points on an analytic curve such as {\gamma}, then it will also be fairly small on many other points in {\gamma} as well. (To quantify this statement efficiently, it is convenient to use the tools of complex analysis, which are particularly well suited to understand zeroes (or small values) of polynomials.) But then, thanks to the integrality gap (and the controlled complexity of {P}), we can amplify “fairly small” to “very small”.

Using this observation and an iteration argument, Baker was able to take a polynomial of controlled complexity {P} that nearly vanished to high order on a relatively small set {\Gamma_{N_0}}, and bootstrap that to show near-vanishing on a much larger set {\Gamma_{N_k}}. This bootstrap allows one to dramatically bridge the gap between the upper and lower bounds on the complexity of polynomials that nearly vanish to a specified order on a given {\Gamma_N}, and eventually leads to Proposition 6 (and, with much more care and effort, to Proposition 3).

Below the fold, I give the details of this argument. My treatment here is inspired by this exposé of Serre, and these lecture notes of Soundararajan (as transcribed by Ian Petrow).

— 1. Nonstandard formulation —

The proof of Baker’s theorem requires a lot of “epsilon management” in that one has to carefully choose a lot of parameters such as {C} and {\epsilon} in order to make the argument work properly. This is particularly the case if one wants a good value of exponents in the final result, such as the quantity {C'} in Proposition 6. To simplify matters, we will abandon all attempts to get good values of constants anywhere, which allows one to retreat to the nonstandard analysis setting where the notation is much cleaner, and much (though not all) of the epsilon management is eliminated. This is a relatively mild use of nonstandard analysis, though, and it is not difficult to turn all the arguments below into standard effective arguments (but at the cost of explicitly tracking all the constants {C}). See for instance the notes of Soundararajan for such an effective treatment.

We turn to the details. We will assume some basic familiarity with nonstandard analysis, as covered for instance in this previous blog post (but one should be able to follow this argument using only non-rigorous intuition of what terms such as “unbounded” or “infinitesimal” mean).

Let {H} be an unbounded (nonstandard) positive real number. Relative to this {H}, we can define various notions of size:

  • A nonstandard number {z} is said to be of polynomial size if one has {|z| \leq C H^C} for some standard {C > 0}.
  • A nonstandard number {z} is said to be of polylogarithmic size if one has {|z| \leq C \log^C H} for some standard {C > 0}.
  • A nonstandard number {z} is said to be of quasipolynomial size if one has {|z| \leq \exp( C \log^C H) } for some standard {C > 0}.
  • A nonstandard number {z} is said to be quasiexponentially small if one has {|z| \leq \exp( - C \log^C H)} for every standard {C > 0}.
  • Given two nonstandard numbers {X, Y} with {Y} non-negative, we write {X \ll Y} or {X=O(Y)} if {|X| \leq CY} for some standard {C > 0}. We write {X=o(Y)} or {X \lll Y} if we have {|X| \leq cY} for all standard {c>0}.

As a general rule of thumb, in our analysis all exponents will be of polylogarithmic size, all coefficients will be of quasipolynomial size, and all error terms will be quasiexponentially small.

In this nonstandard analysis setting, there is a clean calculus (analogous to the calculus of the asymptotic notations {O()} and {o()}) to manipulate these sorts of quantities without having to explicitly track the constants {C}. For instance:

  • The sum, product, or difference of two quantities of a given size (polynomial, polylogarithmic, quasipolynomial, or quasiexponentially small) remains of that given size (i.e. each size range forms a ring).
  • If {X \ll Y}, and {Y} is of a given size, then {X} is also of that size.
  • If {X} is of quasipolynomial size and {Y} is of polylogarithmic size, then {X^Y} is of quasipolynomial size, and (if {Y} is a natural number) {Y!} is also of quasipolynomial size.
  • If {\epsilon} is quasiexponentially small, and {X} is of quasipolynomial size, then {X\epsilon} is also quasiexponentially small. (Thus, the quasiexponentially small numbers form an ideal in the ring of quasipolynomial numbers.)
  • Any quantity of polylogarithmic size is of polynomial size, and any quantity of polynomial size is of quasipolynomial size.

We will refer to these sorts of facts as asymptotic calculus, and rely upon them heavily to simplify a lot of computations (particularly regarding error terms).

Proposition 6 is then equivalent to the following assertion:

Proposition 7 (Nonstandard weak special case of Baker) Let {H} be an unbounded nonstandard natural number, and let {\frac{p}{q}} be a rational of height at most {H} (i.e. {|p|, |q| \leq H}). Then {\frac{\log 2}{\log 3} - \frac{p}{q}} is not quasiexponentially small (relative to {H}, of course).

Let us quickly see why Proposition 7 implies Proposition 6 (the converse is easy and is left to the reader). This is the usual “compactness and contradiction” argument. Suppose for contradiction that Proposition 6 failed. Carefully negating the quantifiers, we may then find a sequence {\frac{p_n}{q_n}} of (standard) rationals with {q_n > 1}, such that

\displaystyle |\frac{\log 2}{\log 3} - \frac{p_n}{q_n}| \leq \exp( - n \log^n q_n )

for all natural numbers {n}. As {\frac{\log 2}{\log 3}} is irrational, {q_n} must go to infinity. Taking the ultralimit {\frac{p}{q}} of the {\frac{p_n}{q_n}}, and setting {H} to be (say) {q}, we contradict Proposition 7.

It remains to prove Proposition 7. We fix the unbounded nonstandard natural number {H}, and assume for contradiction that {\frac{\log 2}{\log 3}} is quasiexponentially close to a nonstandard rational {\frac{p}{q}} of height at most {H}. We will write {X \approx Y} for the assertion that {X-Y} is quasiexponentially small, thus

\displaystyle \frac{\log 2}{\log 3} \approx \frac{p}{q}. \ \ \ \ \ (2)


The objective is to show that (2) leads to a contradiction.

— 2. The polynomial method —

Now it is time to introduce the polynomial method. We will be working with the following class of polynomials:

Definition 8 A good polynomial is a nonstandard polynomial {P: {}^* {\bf C}^2 \rightarrow {}^* {\bf C}} of the form

\displaystyle P(x, y) = \sum_{0 \leq a, b \leq D} c_{a,b} x^a y^b \ \ \ \ \ (3)


of two nonstandard variables of some (nonstandard) degree at most {D} (in each variable), where {D} is a nonstandard natural number of polylogarithmic size, and whose coefficients {c_{a,b}} are (nonstandard) integers of quasipolynomial size. (A technical point: we require the {c_{a,b}} to depend in an internal fashion on the indices {a,b}, in order for the nonstandard summation here to be well-defined.) Define the height {M} of the polynomial to be the maximum magnitude of the coefficients in {P}; thus, by hypothesis, {M} is of quasipolynomial size.

We have a key definition:

Definition 9 Let {N, J} be two (nonstandard) positive numbers of polylogarithmic size. A good polynomial {P} is said to nearly vanish to order {J} on {\Gamma_N} if one has

\displaystyle \frac{d^j}{dz^j} P(2^z,3^z)|_{z=n} \approx 0 \ \ \ \ \ (4)


for all nonstandard natural numbers {0 \leq j \leq J} and {1 \leq n \leq N}.

The derivatives in (4) can be easily computed. Indeed, if we expand the good polynomial {P} out as (3), then the left-hand side of (4) is

\displaystyle \sum_{0 \leq a, b \leq D} c_{a,b} (a \log 2 + b \log 3)^j 2^{an} 3^{bn}.

Now, from (2) we have

\displaystyle a \log 2 + b \log 3 \approx \frac{\log 3}{q} (ap+bq).

Using the asymptotic calculus (and the hypotheses that {D, j} are of polylogarithmic size, and the {c_{a,b}} are of quasipolynomial size) we conclude that the left-hand side of (4) is

\displaystyle \approx (\frac{\log 3}{q})^j \sum_{0 \leq a, b \leq D} c_{a,b} (ap+bq)^j 2^{an} 3^{bn}. \ \ \ \ \ (5)


The quantity {(\frac{\log 3}{q})^j} (and its reciprocal) is of quasipolynomial size. Thus, the condition (4) is equivalent to the assertion that

\displaystyle \sum_{0 \leq a, b \leq D} c_{a,b} (ap+bq)^j 2^{an} 3^{bn} \approx 0

for all {0 \leq j \leq J} and {1 \leq n \leq N}; as the left-hand side is a nonstandard integer, we see from the integrality gap that the condition is in fact equivalent to the exact constraint

\displaystyle \sum_{0 \leq a, b \leq D} c_{a,b} (ap+bq)^j 2^{an} 3^{bn} = 0 \ \ \ \ \ (6)


for all {0 \leq j \leq J} and {1 \leq n \leq N}.
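As a quick sanity check on the above computation of the derivatives of {P(2^z,3^z)}, one can compare the closed-form expression {\sum_{a,b} c_{a,b} (a \log 2 + b \log 3)^j 2^{az} 3^{bz}} against finite differences for a small sample polynomial (the coefficients below are arbitrary illustrative choices, not ones arising in the actual argument):

```python
import math

# Check (d/dz)^j P(2^z, 3^z) = sum_{a,b} c_{a,b} (a log2 + b log3)^j 2^{az} 3^{bz}
# against central finite differences, for a small arbitrary polynomial P.
log2, log3 = math.log(2), math.log(3)
coeffs = {(0, 0): 1, (1, 0): -2, (0, 1): 3, (1, 1): 5}   # sample c_{a,b}

def f(z):
    return sum(c * 2**(a * z) * 3**(b * z) for (a, b), c in coeffs.items())

def deriv_formula(j, z):
    return sum(c * (a * log2 + b * log3)**j * 2**(a * z) * 3**(b * z)
               for (a, b), c in coeffs.items())

h, z0 = 1e-4, 1.0
d1 = (f(z0 + h) - f(z0 - h)) / (2 * h)               # central difference, j = 1
d2 = (f(z0 + h) - 2 * f(z0) + f(z0 - h)) / h**2      # central difference, j = 2
assert abs(d1 - deriv_formula(1, z0)) < 1e-3
assert abs(d2 - deriv_formula(2, z0)) < 1e-3
```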

Using this reformulation of (4), we can now give some upper and lower bounds on the complexity of good polynomials that nearly vanish to a high order on a set {\Gamma_N}. We first give a lower bound, which prevents the degree {D} from being smaller than {N^{1/2}}:

Proposition 10 (Lower bound) Let {P} be a non-trivial good polynomial of degree {D} that nearly vanishes to order at least {0} on {\Gamma_N}. Then {(D+1)^2 > N}.

Proof: Suppose for contradiction that {(D+1)^2 \leq N}. Then from (6) we have

\displaystyle \sum_{0 \leq a, b \leq D} c_{a,b} (2^{a} 3^{b})^n = 0

for {1 \leq n \leq (D+1)^2}; thus there is a non-trivial linear dependence between the {(D+1)^2} (nonstandard) vectors {((2^a 3^b)^n)_{1 \leq n \leq (D+1)^2} \in {}^* {\bf R}^{(D+1)^2}} for {0 \leq a,b \leq D}. But, from the formula for the Vandermonde determinant, this would imply that two of the {2^a 3^b} are equal, which is absurd. \Box
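The Vandermonde mechanism in the above proof can be made concrete in the smallest case {D=1}, {N=4}: the four nodes {2^a 3^b} with {0 \leq a,b \leq 1} are distinct by unique factorisation, so the relevant determinant is a nonzero integer. A short Python verification (illustrative only; the Leibniz-expansion determinant used here is only suitable for tiny matrices):

```python
from itertools import permutations

# Proposition 10's independence argument for D = 1: the matrix with rows
# (x^1, ..., x^4) over the distinct nodes x = 2^a 3^b has nonzero
# (generalised Vandermonde) determinant (prod of nodes) * (prod of differences).
D = 1
nodes = sorted(2**a * 3**b for a in range(D + 1) for b in range(D + 1))
assert len(set(nodes)) == len(nodes)        # distinct by unique factorisation

n = len(nodes)
M = [[x**k for k in range(1, n + 1)] for x in nodes]

def det(m):
    """Exact integer determinant by Leibniz expansion (fine for tiny n)."""
    size, total = len(m), 0
    for perm in permutations(range(size)):
        inv = sum(1 for i in range(size) for j in range(i + 1, size)
                  if perm[i] > perm[j])
        prod = 1
        for i in range(size):
            prod *= m[i][perm[i]]
        total += (-1) ** inv * prod
    return total

d = det(M)
expected = 1
for x in nodes:
    expected *= x                           # factor x from each row
for i in range(n):
    for j in range(i + 1, n):
        expected *= nodes[j] - nodes[i]     # Vandermonde factor
print(d)
assert d == expected and d != 0
```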

In the converse direction, we can obtain polynomials that nearly vanish to a high order {J} on {\Gamma_N}, but with degree {D} larger than {N^{1/2} J^{1/2}}:

Proposition 11 (Upper bound) Let {D, J, N} be positive quantities of polylogarithmic size such that

\displaystyle D^2 \ggg N J.

Then there exists a non-trivial good polynomial {P} of degree at most {D} that nearly vanishes to order {J} on {\Gamma_N}. Furthermore, {P} has height at most

\displaystyle \exp( O( \frac{NJ^2 \log H}{D^2} + \frac{N^2 J}{D} ) ).

Proof: We use the pigeonholing argument of Thue and Siegel. Let {M} be a positive quantity of quasipolynomial size to be chosen later, and choose coefficients {c_{a,b}} for {0 \leq a,b \leq D} that are nonstandard natural numbers between {1} and {M}. There are {M^{(D+1)^2} \geq \exp( D^2 \log M)} possible ways to make such a selection. For each such selection, we consider the {N(J+1)} expressions arising as the left-hand side of (6) with {0 \leq j \leq J} and {1 \leq n \leq N}. These expressions are nonstandard integers whose magnitude is bounded by

\displaystyle O( (D+1)^2 M O( DH )^J \exp( O( N D ) ) )

which by asymptotic calculus simplifies to be bounded by

\displaystyle \exp( \log M + O( J \log H ) + O( N D ) ).

The number of possible values of these {N(J+1)} expressions is thus at most

\displaystyle \exp( N(J+1) \log M + O( N J^2 \log H ) + O( N^2 J D ) ).

By the hypothesis {D^2 \ggg NJ} and asymptotic calculus, we can make this quantity less than {\exp(D^2 \log M)} for some {M} of size

\displaystyle M \ll \exp( O( \frac{NJ^2 \log H}{D^2} + \frac{N^2 J}{D} ) ).

In particular, {M} can be taken to be of quasipolynomial size. Thus, by the pigeonhole principle, one can find two choices for the coefficients {c_{a,b}} which give equal values for the expressions in the left-hand side of (6). Subtracting those two choices, we obtain the result. \Box
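The pigeonholing in the above proof can be carried out explicitly in a toy case. Take {D=2}, {N=1}, {J=0}, and coefficients in {\{1,2\}}: there are {2^9 = 512} selections but at most {92} possible values of the single expression in (6), so two selections must collide, and their difference is a non-trivial integer polynomial with coefficients in {\{-1,0,1\}} vanishing at {(2,3)}. (This is of course far too small a case to be useful in the actual argument; it just illustrates the counting.)

```python
from itertools import product

# Toy Thue-Siegel pigeonhole (D = 2, N = 1, J = 0): coefficients c_{a,b}
# in {1, 2} give 2^9 = 512 selections, but
#   E(c) = sum_{a,b} c_{a,b} 2^a 3^b
# lies in [91, 182], i.e. takes at most 92 values, so two selections collide.
weights = [(a, b, 2**a * 3**b) for a in range(3) for b in range(3)]

seen, collision = {}, None
for c in product([1, 2], repeat=9):
    E = sum(ci * w for ci, (_, _, w) in zip(c, weights))
    if E in seen:
        collision = (seen[E], c)
        break
    seen[E] = c

assert collision is not None
c1, c2 = collision
v = [x - y for x, y in zip(c1, c2)]          # coefficients in {-1, 0, 1}
assert any(v) and sum(vi * w for vi, (_, _, w) in zip(v, weights)) == 0
# v gives a non-trivial P(x,y) = sum v_{a,b} x^a y^b with P(2,3) = 0.
```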

— 3. The bootstrap —

At present, there is no contradiction between the lower bound in Proposition 10 and the upper bound in Proposition 11, because there is plenty of room between the two bounds. To bridge the gap between the bounds, we need a bootstrap argument that uses vanishing on one {\Gamma_N} to imply vanishing (to slightly lower order) on a larger {\Gamma_{N'}}. The key bootstrap in this regard is:

Proposition 12 (Bootstrap) Let {D, J, N} be unbounded polylogarithmic quantities, such that

\displaystyle N \ggg \log H.

Let {P} be a good polynomial of degree at most {D} and height {\exp( O(NJ) )}, that nearly vanishes to order {2J} on {\Gamma_N}. Then {P} also nearly vanishes to order {J} on {\Gamma_{N'}} for any {N' = o( \frac{J}{D} N )}.

Proof: It is convenient to use complex analysis methods. We consider the entire function

\displaystyle f(z) := P( 2^z, 3^z ),

thus by (3)

\displaystyle f(z) = \sum_{0 \leq a, b \leq D} c_{a,b} 2^{az} 3^{bz}.

By hypothesis, we have

\displaystyle f^{(j)}(n) \approx 0

for all {0 \leq j \leq 2J} and {1 \leq n \leq N}. We wish to show that

\displaystyle f^{(j)}(n') \approx 0

for {0 \leq j \leq J} and {1 \leq n' \leq N'}. Clearly we may assume that {N' \geq n' > N}.

Fix {0 \leq j \leq J} and {1 \leq n' \leq N'}. To estimate {f^{(j)}(n')}, we consider the contour integral

\displaystyle \frac{1}{2\pi i} \int_{|z|=R} \frac{f^{(j)}(z)}{\prod_{n=1}^N (z-n)^J} \frac{dz}{z-n'} \ \ \ \ \ (7)


(oriented anticlockwise), where {R \geq 2N'} is to be chosen later, and estimate it in two different ways. Firstly, we have

\displaystyle f^{(j)}(z) = \sum_{0 \leq a, b \leq D} c_{a,b} (a \log 2 + b \log 3)^j 2^{az} 3^{bz},

so on the contour {|z|=R} we have the bound

\displaystyle |f^{(j)}(z)| \ll D^2 \exp(O(NJ)) O( D )^J \exp( O( D R ) ),

which by the hypotheses and asymptotic calculus, simplifies to

\displaystyle |f^{(j)}(z)| \ll \exp( O( NJ + DR) ).

Also, when {|z|=R} we have

\displaystyle |\prod_{n=1}^N (z-n)^J| \geq (R/2)^{NJ}.

We conclude the upper bound

\displaystyle \exp( O(NJ + DR) - NJ \log R )

for the magnitude of (7). On the other hand, we can evaluate (7) using the residue theorem. The integrand has poles at {1,\ldots,N} and at {n'}. The simple pole at {n'} has residue

\displaystyle \frac{f^{(j)}(n')}{\prod_{n=1}^N (n'-n)^J}.

Now we consider the poles at {n=1,\ldots,N}. For each such {n}, we see that the first {J} derivatives of {f^{(j)}} are quasiexponentially small at {n}. Thus, by Taylor expansion (and asymptotic calculus), one can express {f^{(j)}(z)} as the sum of a polynomial of degree {J} with quasiexponentially small coefficients, plus an entire function that vanishes to order {J} at {n}. The latter term contributes nothing to the residue at {n}, while from the Cauchy integral formula (applied, for instance, to a circle of radius {1/2} around {n}) and asymptotic calculus, we see that the former term contributes a residue that is quasiexponentially small. In particular, it is less than {\exp( O(NJ) - NJ \log R )}. We conclude that

\displaystyle |\frac{f^{(j)}(n')}{\prod_{n=1}^N (n'-n)^J}| \ll \exp( O(NJ + DR) - NJ \log R ).

We have

\displaystyle |\prod_{n=1}^N (n'-n)^J| \leq (N')^{NJ}

and thus

\displaystyle |f^{(j)}(n')| \ll \exp( O(NJ+DR) - NJ \log \frac{R}{N'} );

choosing {R} to be a large standard multiple of {N'} and using the hypothesis {N' = o( \frac{J}{D} N)}, we can simplify this to

\displaystyle |f^{(j)}(n')| \ll \exp( - NJ ).

To improve this bound, we use the integrality gap. Recall from (5) that

\displaystyle |f^{(j)}(n')| \approx (\frac{\log 3}{q})^j \sum_{0 \leq a, b \leq D} c_{a,b} (ap+bq)^j 2^{an'} 3^{bn'};

in particular, {(\frac{q}{\log 3})^j f^{(j)}(n')} is quasiexponentially close to a (nonstandard) integer. Since

\displaystyle (\frac{q}{\log 3})^j = \exp( O( J \log H ) ),

and {|f^{(j)}(n')| \ll \exp( - NJ )} with {N \ggg \log H}, we have

\displaystyle |(\frac{q}{\log 3})^j f^{(j)}(n')| \leq \frac{1}{2}

(say). Using the integrality gap, we conclude that

\displaystyle (\frac{q}{\log 3})^j f^{(j)}(n') \approx 0

which (after dividing by the quasipolynomial-size quantity {(\frac{q}{\log 3})^j}) implies that {f^{(j)}(n') \approx 0}; thus {P} nearly vanishes to order {J} on {\Gamma_{N'}}, as required. \Box
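The double evaluation of a contour integral like (7) – an upper bound on the circle on one hand, the residue theorem on the other – can be illustrated in a toy case: take {f(z) = 2^z + 3^z} with simple poles at {z=1} and {z=2} (playing the roles of a point of {\Gamma_N} and of {n'} respectively, with {N=J=1} and far smaller parameters than the actual argument uses). The trapezoid rule on {|z|=R} then reproduces the residue sum {f(2)-f(1) = 13 - 5 = 8}:

```python
import cmath

# Numerically evaluate (1/2*pi*i) * integral over |z|=R of
#   f(z) / ((z-1)(z-2)) dz,   f(z) = 2^z + 3^z,
# by the trapezoid rule, and compare with the residue sum f(2) - f(1).
def f(z):
    return 2**z + 3**z

R, M = 5.0, 4096
total = 0j
for k in range(M):
    z = R * cmath.exp(2j * cmath.pi * k / M)
    # dz = i z dtheta, so the 1/(2*pi*i) prefactor and i z combine to z/(2*pi);
    # averaging over M equally spaced points gives the trapezoid rule.
    total += f(z) * z / ((z - 1) * (z - 2))
integral = total / M

print(abs(integral - 8))        # residue sum: f(2) - f(1) = 13 - 5 = 8
assert abs(integral - 8) < 1e-6
```

(The trapezoid rule converges extremely quickly here because the integrand is analytic in a neighbourhood of the contour.)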

Now we can finish the proof of Proposition 7 (and hence Proposition 6). We select quantities {D, J, N_0} of polylogarithmic size obeying the bounds

\displaystyle \log H \lll N_0 \lll D \lll J


\displaystyle N_0 J \lll D^2,

with a gap of a positive power of {\log H} between each such inequality. For instance, one could take

\displaystyle N_0 := \log^2 H

\displaystyle D := \log^{4} H

\displaystyle J := \log^{5} H;

many other choices are possible (and one can optimise these choices eventually to get a good value of exponent {C'} in Proposition 6).

Using Proposition 11, we can find a good polynomial {P} which nearly vanishes to order {J} on {\Gamma_{N_0}}, of height {\exp( O( \frac{N_0 J^2 \log H}{D^2} + \frac{N_0^2 J}{D} ) )}, and hence (by the assumptions on {N_0,D,J}) of height {\exp( O( N_0 J ) )}.

Applying Proposition 12, {P} nearly vanishes to order {J/2} on {\Gamma_{N_1}} for any {N_1 = o( \frac{J}{D} N_0)}. Iterating this, an easy induction shows that for any standard {k\geq 1}, {P} nearly vanishes to order {J/2^k} on {\Gamma_{N_k}} for any {N_k = o( (\frac{J}{D})^k N_0)}. As {J/D} was chosen to be larger than a positive power of {\log H}, we conclude that {P} nearly vanishes to order at least {0} on {\Gamma_N} for any {N} of polylogarithmic size. But for {N} large enough, this contradicts Proposition 10.

Remark 1 The above argument places a lower bound on quantities such as

\displaystyle q \log 2 - p \log 3

for integer {p, q}. Baker’s theorem, in its full generality, gives a lower bound on quantities such as

\displaystyle \beta_0 + \beta_1 \log \alpha_1 + \ldots + \beta_n \log \alpha_n

for algebraic numbers {\beta_0,\ldots,\beta_n, \alpha_1,\ldots,\alpha_n}, which is polynomial in the height of the quantities involved, assuming of course that {1, \alpha_1,\ldots,\alpha_n} are multiplicatively independent, and that all quantities are of bounded degree. The proof is more intricate than the one given above, but follows a broadly similar strategy, and the constants are completely effective.