
[Note: the idea for this post originated before the recent preprint of Mochizuki on the abc conjecture was released, and is not intended as a commentary on that work, which offers a much more non-trivial perspective on scheme theory. -T.]

In classical algebraic geometry, the central object of study is an algebraic variety {V} over a field {k} (and the theory works best when this field {k} is algebraically closed). One can talk about either affine or projective varieties; for sake of discussion, let us restrict attention to affine varieties. Such varieties can be viewed in at least four different ways:

  • (Algebraic geometry) One can view a variety through the set {V(k)} of points (over {k}) in that variety.
  • (Commutative algebra) One can view a variety through the field of rational functions {k(V)} on that variety, or the subring {k[V]} of polynomial functions in that field.
  • (Dual algebraic geometry) One can view a variety through a collection of polynomials {P_1,\ldots,P_m} that cut out that variety.
  • (Dual commutative algebra) One can view a variety through the ideal {I(V)} of polynomials that vanish on that variety.

For instance, the unit circle over the reals can be thought of in each of these four different ways:

  • (Algebraic geometry) The set of points {\{ (x,y) \in {\bf R}^2: x^2+y^2 = 1 \}}.
  • (Commutative algebra) The quotient {{\bf R}[x,y] / (x^2+y^2-1)} of the polynomial ring {{\bf R}[x,y]} by the ideal generated by {x^2+y^2-1} (or equivalently, the algebra generated by {x,y} subject to the constraint {x^2+y^2=1}), or the fraction field of that quotient.
  • (Dual algebraic geometry) The polynomial {x^2+y^2-1}.
  • (Dual commutative algebra) The ideal {(x^2+y^2-1)} generated by {x^2+y^2-1}.

The four viewpoints are almost equivalent to each other (particularly if the underlying field {k} is algebraically closed), as there are obvious ways to pass from one viewpoint to another. For instance, starting with the set of points on a variety, one can form the space of rational functions on that variety, or the ideal of polynomials that vanish on that variety. Given a set of polynomials, one can cut out their zero locus, or form the ideal that they generate. Given an ideal in a polynomial ring, one can quotient out the ring by the ideal and then form the fraction field. Finally, given the ring of polynomials on a variety, one can form its spectrum (the space of prime ideals in the ring) to recover the set of points on that variety (together with the Zariski topology on that variety).

Because of the connections between these viewpoints, there are extensive “dictionaries” (most notably the ideal-variety dictionary) that convert basic concepts in one of these four perspectives into any of the other three. For instance, passing from a variety to a subvariety shrinks the set of points and the function field, but enlarges the set of polynomials needed to cut out the variety, as well as the associated ideal. Taking the intersection or union of two varieties corresponds to adding or multiplying together the two ideals respectively. The dimension of an (irreducible) algebraic variety can be defined as the transcendence degree of the function field, the maximal length of chains of subvarieties, or the Krull dimension of the ring of polynomials. And so on and so forth. Thanks to these dictionaries, it is now commonplace to think of commutative algebras geometrically, or conversely to approach algebraic geometry from the perspective of abstract algebra. There are however some very well known defects to these dictionaries, at least when viewed in the classical setting of algebraic varieties. The main one is that two different ideals (or two inequivalent sets of polynomials) can cut out the same set of points, particularly if the underlying field {k} is not algebraically closed. For instance, if the underlying field is the real line {{\bf R}}, then the polynomial equations {x^2+1=0} and {1=0} cut out the same set of points, namely the empty set, but the ideal generated by {x^2+1} in {{\bf R}[x]} is certainly different from the ideal generated by {1}. This particular example does not work in an algebraically closed field such as {{\bf C}}, but in that case the polynomial equations {x^2=0} and {x=0} also cut out the same set of points (namely the origin), but again {x^2} and {x} generate different ideals in {{\bf C}[x]}. Thanks to Hilbert’s nullstellensatz, we can get around this problem (in the case when {k} is algebraically closed) by always passing from an ideal to its radical, but this causes many aspects of the theory of algebraic varieties to become more complicated when the varieties involved develop singularities or multiplicities, as can already be seen with the simple example of Bezout’s theorem.
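Both of these failures are easy to verify by machine. Here is a quick computational check, as a minimal sketch in Python using the (third-party) sympy library, with ideal membership tested via Gröbner bases; the library calls are incidental to the mathematics:

    from sympy import symbols, groebner, solveset, S

    x = symbols('x')

    # Over R, both x^2+1 = 0 and 1 = 0 cut out the empty set...
    print(solveset(x**2 + 1, x, domain=S.Reals))    # EmptySet

    # ...but the ideals (x^2+1) and (1) are different: 1 is not in (x^2+1).
    print(groebner([x**2 + 1], x).contains(S.One))  # False

    # Over C, x = 0 and x^2 = 0 both cut out the origin, yet x is not in (x^2).
    print(groebner([x**2], x).contains(x))          # False

(The membership tests here run over the rationals, but the polynomial remainders involved are identical over {{\bf R}} or {{\bf C}}.)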

Nowadays, the standard way to deal with these issues is to replace the notion of an algebraic variety with the more general notion of a scheme. Roughly speaking, the way schemes are defined is to focus on the commutative algebra perspective as the primary one, and to allow the base field {k} to be not algebraically closed, or even to just be a commutative ring instead of a field. (One could even consider non-commutative rings, leading to non-commutative geometry, but we will not discuss this extension of scheme theory further here.) Once one generalises to these more abstract rings, the notion of a rational function becomes more complicated (one has to work locally instead of globally, cutting out the points where the function becomes singular), but as a first approximation one can think of a scheme as basically being the same concept as a commutative ring. (In actuality, due to the need to localise, a scheme is defined as a sheaf of rings rather than a single ring, but these technicalities will not be important for the purposes of this discussion.) All the other concepts from algebraic geometry that might previously have been defined using one of the other three perspectives, are then redefined in terms of this ring (or sheaf of rings) in order to generalise them to schemes.

Thus, for instance, in scheme theory the rings {{\bf R}[x]/(x^2)} and {{\bf R}[x]/(x)} describe different schemes; from the classical perspective, they cut out the same locus, namely the point {\{0\}}, but the former scheme makes this point “fatter” than the latter scheme, giving it a degree (or multiplicity) of {2} rather than {1}.
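Concretely, the extra “fatness” can be seen in the residues: reduction modulo {(x^2)} retains the coefficient of {x} (first-order information) that reduction modulo {(x)} discards, matching the degrees {2} and {1}. A short sympy sketch:

    from sympy import symbols, rem, expand

    x = symbols('x')
    f = expand((3*x + 2)**3)   # an arbitrary test polynomial

    # Residues mod x^2 keep two coefficients (a point of degree 2)...
    print(rem(f, x**2))        # 36*x + 8
    # ...while residues mod x keep only the constant term (degree 1).
    print(rem(f, x))           # 8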

Because of this, it seems that the link between the commutative algebra perspective and the algebraic geometry perspective is still not quite perfect in scheme theory, unless one is willing to start “fattening” various varieties to correctly model multiplicity or singularity. But – and this is the trivial remark I wanted to make in this blog post – one can recover a tight connection between the two perspectives as long as one allows the freedom to arbitrarily extend the underlying base ring.

Here’s what I mean by this. Consider classical algebraic geometry over some commutative ring {R} (not necessarily a field). Any set of polynomials {P_1,\ldots,P_m \in R[x_1,\ldots,x_d]} in {d} indeterminate variables {x_1,\ldots,x_d} with coefficients in {R} determines, on the one hand, an ideal

\displaystyle  I := (P_1,\ldots,P_m) = \{P_1Q_1+\ldots+P_mQ_m: Q_1,\ldots,Q_m \in R[x_1,\ldots,x_d]\}

in {R[x_1,\ldots,x_d]}, and also cuts out a zero locus

\displaystyle  V[R] := \{ (y_1,\ldots,y_d) \in R^d: P_1(y_1,\ldots,y_d) = \ldots = P_m(y_1,\ldots,y_d) = 0 \},

since each of the polynomials {P_1,\ldots,P_m} clearly makes sense as a map from {R^d} to {R}. Of course, one can also write {V[R]} in terms of {I}:

\displaystyle  V[R] := \{ (y_1,\ldots,y_d) \in R^d: P(y_1,\ldots,y_d) = 0 \hbox{ for all } P \in I \}.

Thus the ideal {I} uniquely determines the zero locus {V[R]}, and we will emphasise this by writing {V[R]} as {V_I[R]}. As the previous counterexamples illustrate, the converse is not true. However, whenever we have any extension {R'} of the ring {R} (i.e. a commutative ring {R'} that contains {R} as a subring), then we can also view the polynomials {P_1,\ldots,P_m} as maps from {(R')^d} to {R'}, and so one can also define the zero locus for all the extensions:

\displaystyle  V[R'] := \{ (y_1,\ldots,y_d) \in (R')^d: P_1(y_1,\ldots,y_d) = \ldots = P_m(y_1,\ldots,y_d) = 0 \}.

As before, {V[R']} is determined by the ideal {I}:

\displaystyle  V[R'] = V_I[R'] := \{ (y_1,\ldots,y_d) \in (R')^d: P(y_1,\ldots,y_d) = 0 \hbox{ for all } P \in I \}.

The trivial remark is then that while a single zero locus {V_I[R]} is insufficient to recover {I}, the collection of zero loci {V_I[R']} for all extensions {R'} of {R} (or more precisely, the assignment map {R' \mapsto V_I[R']}, known as the functor of points of {V_I}) is sufficient to recover {I}, as long as at least one zero locus, say {V_I[R_0]}, is non-empty. Indeed, suppose we have two ideals {I, I'} of {R[x_1,\ldots,x_d]} that cut out the same non-empty zero locus for all extensions {R'} of {R}, thus

\displaystyle  V_I[R'] = V_{I'}[R'] \neq \emptyset

for all extensions {R'} of {R}. We apply this with the extension {R'} of {R} given by {R' := R_0[x_1,\ldots,x_d]/I}. Note that the embedding of {R} in {R_0[x_1,\ldots,x_d]/I} is injective, since otherwise {I} would cut out the empty set as the zero locus over {R_0}, and so {R'} is indeed an extension of {R}. Tautologically, the point {(x_1 \hbox{ mod } I, \ldots, x_d \hbox{ mod } I)} lies in {V_I[R']}, and thus necessarily lies in {V_{I'}[R']} as well. Unpacking what this means, we conclude that {P \in I} whenever {P \in I'}, that is to say that {I' \subset I}. By a symmetric argument, we also have {I \subset I'}, and thus {I=I'} as claimed. (As pointed out in comments, this fact (and its proof) is essentially a special case of the Yoneda lemma. The connection is tighter if one allows {R'} to be any ring with a (not necessarily injective) map from {R} into it, rather than an extension of {R}, in which case one can also drop the hypothesis that {V_I[R_0]} is non-empty for at least one {R_0}. For instance, {V_{(2)}[R'] = V_{(3)}[R'] = \emptyset} for every extension {R'} of the integers, but if one also allows quotients such as {{\bf Z}/(2)} or {{\bf Z}/(3)} instead, then {V_{(2)}[R']} and {V_{(3)}[R']} are no longer necessarily equal.)

Thus, as long as one thinks of a variety or scheme as cutting out points not just in the original base ring or field, but in all extensions of that base ring or field, one recovers an exact correspondence between the algebraic geometry perspective and the commutative algebra perspective. This is similar to the classical algebraic geometry position of viewing an algebraic variety as being defined simultaneously over all fields that contain the coefficients of the defining polynomials, but the crucial difference between scheme theory and classical algebraic geometry is that one also allows definition over commutative rings, and not just fields. In particular, one needs to allow extensions to rings that may contain nilpotent elements, otherwise one cannot distinguish an ideal from its radical.
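The smallest such extension of {{\bf R}} is the ring of dual numbers {{\bf R}[\varepsilon]/(\varepsilon^2)}, whose nilpotent element {\varepsilon} already separates {(x)} from {(x^2)}. Here is a tiny Python sketch (the Dual class is illustrative code written for this discussion, not a library API):

    class Dual:
        """Numbers a + b*eps with eps^2 = 0, modelling R[eps]/(eps^2)."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b
        def __add__(self, other):
            return Dual(self.a + other.a, self.b + other.b)
        def __mul__(self, other):
            # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
        def __repr__(self):
            return f"{self.a} + {self.b}*eps"

    eps = Dual(0.0, 1.0)
    print(eps * eps)   # 0.0 + 0.0*eps: eps is a nonzero solution of x^2 = 0,
    print(eps)         # 0.0 + 1.0*eps: but not a solution of x = 0

Over this extension, {V_{(x^2)}} acquires points (such as {\varepsilon}) that {V_{(x)}} does not, and so the two ideals are separated.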

There are of course many ways to extend a field into a ring, but as an analyst, one way to do so that appeals particularly to me is to introduce an epsilon parameter and work modulo errors of {O(\varepsilon)}. To formalise this algebraically, let’s say for sake of concreteness that the base field is the real line {{\bf R}}. Consider the ring {\tilde R} of real-valued quantities {x = x_\varepsilon} that depend on a parameter {\varepsilon > 0} (i.e. functions from {{\bf R}^+} to {{\bf R}}), which are locally bounded in the sense that {x} is bounded whenever {\varepsilon} is bounded. (One can, if one wishes, impose some further continuity or smoothness hypotheses on how {x} depends on {\varepsilon}, but this turns out not to be relevant for the following discussion. Algebraists often prefer to use the ring of Puiseux series here in place of {\tilde R}, and a nonstandard analyst might instead use the hyperreals, but again this will not make too much difference for our purposes.) Inside this commutative ring, we can form the ideal {I} of quantities {x = x_\varepsilon} that are of size {O(\varepsilon)} as {\varepsilon \rightarrow 0}, i.e. there exists a quantity {C>0} independent of {\varepsilon} such that {|x| \leq C\varepsilon} for all sufficiently small {\varepsilon}. This can easily be seen to indeed be an ideal in {\tilde R}. We then form the quotient ring {R' := \tilde R/I := \{ x \hbox{ mod } I: x \in \tilde R \}}. Note that {x = y \hbox{ mod } I} is equivalent to the assertion that {x = y + O(\varepsilon)}, so we are encoding the analyst’s notion of “equal up to errors of {O(\varepsilon)}” into algebraic terms.
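One can caricature this ring on a computer: model an element of {\tilde R} as a function of {\varepsilon}, and estimate membership in {I} by sampling small values of {\varepsilon}. This is of course a heuristic test rather than a proof; the Python sketch below is purely illustrative:

    def eq_mod_I(x, y, C=10.0, grid=(1e-3, 1e-6, 1e-9)):
        # Estimate whether x = y mod I, i.e. whether |x(eps) - y(eps)| <= C*eps.
        return all(abs(x(e) - y(e)) <= C * e for e in grid)

    x1 = lambda e: 1 + 5 * e      # "1, up to an O(eps) error"
    x2 = lambda e: 1 - 2 * e      # also "1, up to an O(eps) error"
    x3 = lambda e: 1 + e**0.5     # differs from 1 by O(eps^(1/2)) only

    print(eq_mod_I(x1, x2))       # True:  x1 = x2 mod I
    print(eq_mod_I(x1, x3))       # False: x1 != x3 mod I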

Clearly, {R'} is a commutative ring extending {{\bf R}}. Hence, any algebraic variety

\displaystyle  V[{\bf R}] = \{ (y_1,\ldots,y_d) \in {\bf R}^d: P_1(y_1,\ldots,y_d) = \ldots = P_m(y_1,\ldots,y_d) = 0 \}

defined over the reals {{\bf R}} (so the polynomials {P_1,\ldots,P_m} have coefficients in {{\bf R}}), also is defined over {R'}:

\displaystyle  V[R'] = \{ (y_1,\ldots,y_d) \in (R')^d: P_1(y_1,\ldots,y_d) = \ldots = P_m(y_1,\ldots,y_d) = 0 \}.

In language that more closely resembles analysis, we have

\displaystyle  V[R'] = \{ (y_1,\ldots,y_d) \in \tilde R^d: P_1(y_1,\ldots,y_d), \ldots, P_m(y_1,\ldots,y_d) = O(\varepsilon) \} \hbox{ mod } I^d.

Thus we see that {V[R']} is in some sense an “{\varepsilon}-thickening” of {V[{\bf R}]}, and is thus one way to give rigorous meaning to the intuition that schemes can “thicken” varieties. For instance, the scheme associated to the ideal {(x)}, when interpreted over {R'}, becomes an {O(\varepsilon)}-neighbourhood of the origin

\displaystyle  V_{(x)}[R'] = \{ y \in \tilde R: y = O(\varepsilon) \} \hbox{ mod } I,

but the scheme associated to the smaller ideal {(x^2)}, when interpreted over {R'}, becomes an {O(\varepsilon^{1/2})}-neighbourhood of the origin, thus being a much “fatter” point:

\displaystyle  V_{(x^2)}[R'] = \{ y \in \tilde R: y^2 = O(\varepsilon) \} \hbox{ mod } I = \{ y \in \tilde R: y = O(\varepsilon^{1/2}) \} \hbox{ mod } I.

Once one introduces the analyst’s epsilon, one can see quite clearly that {V_{(x^2)}[R']} is coming from a larger scheme than {V_{(x)}[R']}, with fewer polynomials vanishing on it; in particular, the polynomial {x} vanishes to order {O(\varepsilon)} on {V_{(x)}[R']} but does not vanish to order {O(\varepsilon)} on {V_{(x^2)}[R']}.
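Continuing the illustrative Python caricature above (with the same caveat that sampling small {\varepsilon} is a heuristic, not a proof), the point {y = \sqrt{\varepsilon}} witnesses the difference between the two neighbourhoods:

    import math

    def is_O_eps(x, C=10.0, grid=(1e-3, 1e-6, 1e-9)):
        # Heuristic test for membership in the ideal I of O(eps) quantities.
        return all(abs(x(e)) <= C * e for e in grid)

    y = lambda e: math.sqrt(e)
    print(is_O_eps(lambda e: y(e)**2))  # True:  y^2 = O(eps), so y lies in V_{(x^2)}[R']
    print(is_O_eps(y))                  # False: y is not O(eps), so y avoids V_{(x)}[R']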

By working with this analyst’s extension of {{\bf R}}, one can already get a reasonably good first approximation of what schemes over {{\bf R}} look like, which I found particularly helpful for getting some intuition on these objects. However, since this is only one extension of {{\bf R}}, and not a “universal” such extension, it cannot quite distinguish any two schemes from each other, although it does a better job of this than classical algebraic geometry. For instance, consider the scheme cut out by the polynomials {x^2, y^2} in two dimensions. Over {R'}, this becomes

\displaystyle  V_{(x^2,y^2)}[R'] = \{ (x,y) \in \tilde R^2: x^2, y^2 = O(\varepsilon) \} \hbox{ mod } I^2 = \{ (x,y) \in \tilde R^2: x, y = O(\varepsilon^{1/2}) \} \hbox{ mod } I^2.

Note that the polynomial {xy} vanishes to order {O(\varepsilon)} on this locus, but {xy} fails to lie in the ideal {(x^2,y^2)}. Equivalently, we have {V_{(x^2,y^2)}[R'] = V_{(x^2,y^2,xy)}[R']}, despite {(x^2,y^2)} and {(x^2,y^2,xy)} being distinct ideals. Basically, the analogue of the nullstellensatz for {R'} does not completely remove the need for performing a closure operation on the ideal; it is less severe than taking the radical, but is instead more like taking a “convex hull”, in that one needs to be able to “interpolate” between two polynomials in the ideal (such as {x^2} and {y^2}) to arrive at intermediate polynomials (such as {xy}) that one then places in the ideal.
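Both halves of this example can be checked directly: sympy’s Gröbner machinery confirms that {xy} lies outside {(x^2,y^2)}, while the arithmetic mean-geometric mean inequality supplies the “interpolation” that forces {xy = O(\varepsilon)} on the locus. A sketch:

    from sympy import symbols, groebner, factor

    x, y = symbols('x y')
    G = groebner([x**2, y**2], x, y)

    print(G.contains(x*y))                 # False: xy is outside (x^2, y^2)
    print(G.contains(x**2 * y))            # True (for comparison)

    # Yet xy = O(eps) whenever x^2, y^2 = O(eps), by AM-GM:
    print(factor((x**2 + y**2)/2 - x*y))   # (x - y)**2/2, which is >= 0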

One can also view ideals (and hence, schemes), from a model-theoretic perspective. Let {I} be an ideal of a polynomial ring {R[x_1,\ldots,x_d]} generated by some polynomials {P_1,\ldots,P_m \in R[x_1,\ldots,x_d]}. Then, clearly, if {Q} is another polynomial in the ideal {I}, then we can use the axioms of commutative algebra (which are basically the axioms of high school algebra) to obtain the syntactic deduction

\displaystyle  P_1(x_1,\ldots,x_d) = \ldots = P_m(x_1,\ldots,x_d) = 0 \vdash Q(x_1,\ldots,x_d) = 0

(since {Q} is just a sum of multiples of {P_1,\ldots,P_m}). In particular, we have the semantic deduction

\displaystyle  P_1(y_1,\ldots,y_d) = \ldots = P_m(y_1,\ldots,y_d) = 0 \implies Q(y_1,\ldots,y_d) = 0 \ \ \ \ \ (1)

for any assignment of indeterminates {y_1,\ldots,y_d} in {R} (or in any extension {R'} of {R}). If we restrict {y_1,\ldots,y_d} to lie in {R} only, then (even if {R} is an algebraically closed field), the converse of the above statement is false; there can exist polynomials {Q} outside of {I} for which (1) holds for all assignments {y_1,\ldots,y_d} in {R}. For instance, we have

\displaystyle  y^2 = 0 \implies y = 0

for all {y} in an algebraically closed field, despite {x} not lying in the ideal {(x^2)}. Of course, the nullstellensatz again explains what is going on here; (1) holds whenever {Q} lies in the radical of {I}, which can be larger than {I} itself. But if one allows the indeterminates {y_1,\ldots,y_d} to take values in arbitrary extensions {R'} of {R}, then the truth of the converse is restored, thus giving a “completeness theorem” relating the syntactic deductions of commutative algebra to the semantic interpretations of such algebras over the extensions {R'}. For instance, since

\displaystyle  y^2 = O(\varepsilon) \not \implies y = O(\varepsilon)

we no longer have a counterexample to the converse coming from {x} and {(x^2)} once we work in {R'} instead of {{\bf R}}. On the other hand, we still have

\displaystyle  x^2, y^2 = O(\varepsilon) \implies xy = O(\varepsilon)

so the extension {R'} is not powerful enough to detect that {xy} does not actually lie in {(x^2,y^2)}; a larger ring (which is less easy to assign an analytic interpretation to) is needed to achieve this.
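For comparison, the classical closure operation (the radical) absorbs both of the examples above: {x} lies in the radical of {(x^2)}, and {xy} lies in the radical of {(x^2,y^2)}. Radical membership can be tested mechanically via the standard Rabinowitsch trick, which says that {Q} lies in the radical of an ideal precisely when {1} lies in the ideal generated by that ideal and {1-tQ} for a fresh variable {t}. A sympy sketch:

    from sympy import symbols, groebner, S

    x, y, t = symbols('x y t')

    def in_radical(Q, gens, *xs):
        # Rabinowitsch trick: Q is in rad((gens)) iff 1 is in (gens, 1 - t*Q).
        return groebner(list(gens) + [1 - t*Q], *xs, t).contains(S.One)

    print(in_radical(x, [x**2], x))               # True:  x lies in rad((x^2))
    print(in_radical(x*y, [x**2, y**2], x, y))    # True:  xy lies in rad((x^2, y^2))
    print(in_radical(x + 1, [x**2], x))           # False: x+1 does not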

This will be a more frivolous post than usual, in part due to the holiday season.

I recently happened across the following video, which exploits a simple rhetorical trick that I had not seen before:

If nothing else, it’s a convincing (albeit unsubtle) demonstration that the English language is non-commutative (or perhaps non-associative); a linguistic analogue of the swindle, if you will.

Of course, the trick relies heavily on sentence fragments that negate or compare; I wonder if it is possible to achieve a comparable effect without using such fragments.

A related trick which I have seen (though I cannot recall any explicit examples right now; perhaps some readers know of some?) is to set up the verses of a song so that the last verse is identical to the first, but now has a completely distinct meaning (e.g. an ironic interpretation rather than a literal one) due to the context of the preceding verses.  The ultimate challenge would be to set up a Möbius song, in which each iteration of the song completely reverses the meaning of the next iterate (cf. this xkcd strip), but this may be beyond the capability of the English language.

On a related note: when I was a graduate student in Princeton, I recall John Conway (and another author whose name I forget) producing another light-hearted demonstration that the English language was highly non-commutative, by showing that if one takes the free group with 26 generators a,b,\ldots,z and quotients out by all relations given by anagrams (e.g. cat=act) then the resulting group was commutative.    Unfortunately I was not able to locate this recreational mathematics paper of Conway (which also treated the French language, if I recall correctly); perhaps one of the readers knows of it?

Jean-Pierre Serre (whose papers are, of course, always worth reading) recently posted a lovely lecture on the arXiv entitled “How to use finite fields for problems concerning infinite fields”. In it, he describes several ways in which algebraic statements over fields of zero characteristic, such as {{\mathbb C}}, can be deduced from their positive characteristic counterparts such as {F_{p^m}}, despite the fact that there is no non-trivial field homomorphism between the two types of fields. In particular finitary tools, including such basic concepts as cardinality, can now be deployed to establish infinitary results. This leads to some simple and elegant proofs of non-trivial algebraic results which are not easy to establish by other means.

One deduction of this type is based on the idea that positive characteristic fields can partially model zero characteristic fields, and proceeds like this: if a certain algebraic statement failed over (say) {{\mathbb C}}, then there should be a “finitary algebraic” obstruction that “witnesses” this failure over {{\mathbb C}}. Because this obstruction is both finitary and algebraic, it must also be definable in some (large) finite characteristic, thus leading to a comparable failure over a finite characteristic field. Taking contrapositives, one obtains the claim.

Algebra is definitely not my own field of expertise, but it is interesting to note that similar themes have also come up in my own area of additive combinatorics (and more generally arithmetic combinatorics), because the combinatorics of addition and multiplication on finite sets is definitely of a “finitary algebraic” nature. For instance, a recent paper of Vu, Wood, and Wood establishes a finitary “Freiman-type” homomorphism from (finite subsets of) the complex numbers to large finite fields that allows them to pull back many results in arithmetic combinatorics in finite fields (e.g. the sum-product theorem) to the complex plane. (Van Vu and I also used a similar trick to control the singularity property of random sign matrices by first mapping them into finite fields in which cardinality arguments became available.) And I have a particular fondness for correspondences between finitary and infinitary mathematics; the correspondence Serre discusses is slightly different from the one I discuss for instance in here or here, although there seems to be a common theme of “compactness” (or of model theory) tying these correspondences together.

As one of his examples, Serre cites one of my own favourite results in algebra, discovered independently by Ax and by Grothendieck (and then rediscovered many times since). Here is a special case of that theorem:

Theorem 1 (Ax-Grothendieck theorem, special case) Let {P: {\mathbb C}^n \rightarrow {\mathbb C}^n} be a polynomial map from a complex vector space to itself. If {P} is injective, then {P} is bijective.

The full version of the theorem allows one to replace {{\mathbb C}^n} by an algebraic variety {X} over any algebraically closed field, and for {P} to be a morphism from the algebraic variety {X} to itself, but for simplicity I will just discuss the above special case. This theorem is not at all obvious; it is not too difficult (see Lemma 4 below) to show that the Jacobian of {P} is non-degenerate, but this does not come close to solving the problem since one would then be faced with the notorious Jacobian conjecture. Also, the claim fails if “polynomial” is replaced by “holomorphic”, due to the existence of Fatou-Bieberbach domains.

In this post I would like to give the proof of Theorem 1 based on finite fields as mentioned by Serre, as well as another elegant proof of Rudin that combines algebra with some elementary complex variable methods. (There are several other proofs of this theorem and its generalisations, for instance a topological proof by Borel, which I will not discuss here.)
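Before turning to the details, it is worth isolating the engine of the finite field argument, which is just the pigeonhole principle: a map from a finite set to itself is injective if and only if it is surjective. One can watch this happen for polynomial maps over a small field; the following brute-force Python sketch checks all quadratic maps on {F_7}:

    # Every injective polynomial map x -> a + b*x + c*x^2 on F_7 is
    # automatically surjective, by the pigeonhole principle.
    p = 7

    def poly_map(a, b, c):
        return [(a + b*x + c*x*x) % p for x in range(p)]

    for a in range(p):
        for b in range(p):
            for c in range(p):
                image = poly_map(a, b, c)
                if len(set(image)) == p:                    # injective...
                    assert sorted(image) == list(range(p))  # ...hence bijective
    print("all injective quadratic maps on F_7 are bijective")

The real work in the proof, of course, lies in transferring this trivial observation from finite fields to {{\mathbb C}}.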

Update, March 8: Some corrections to the finite field proof. Thanks to Matthias Aschenbrenner also for clarifying the relationship with Tarski’s theorem and some further references.


I had occasion recently to look up the proof of Hilbert’s nullstellensatz, which I haven’t studied since cramming for my algebra qualifying exam as a graduate student. I was a little unsatisfied with the proofs I was able to locate – they were fairly abstract and used a certain amount of algebraic machinery, which I was terribly rusty on – so, as an exercise, I tried to find a more computational proof that avoided as much abstract machinery as possible. I found a proof which used only the extended Euclidean algorithm and high school algebra, together with an induction on dimension and the obvious observation that any non-zero polynomial of one variable over an algebraically closed field has at least one non-root. It probably isn’t new (in particular, it might be related to the standard model-theoretic proof of the nullstellensatz, with the Euclidean algorithm and high school algebra taking the place of quantifier elimination), but I thought I’d share it here anyway.

Throughout this post, F is going to be a fixed algebraically closed field (e.g. the complex numbers {\Bbb C}). I’d like to phrase the nullstellensatz in a fairly concrete fashion, in terms of the problem of solving a set of simultaneous polynomial equations P_1(x) = \ldots = P_m(x) = 0 in several variables x = (x_1,\ldots,x_d) \in F^d over F, thus P_1,\ldots,P_m \in F[x] are polynomials in d variables. One obvious obstruction to solvability of this system is if the equations one is trying to solve are inconsistent in the sense that they can be used to imply 1=0. In particular, if one can find polynomials Q_1,\ldots,Q_m \in F[x] such that P_1 Q_1 + \ldots + P_m Q_m = 1, then clearly one cannot solve P_1(x)=\ldots=P_m(x)=0. The weak nullstellensatz asserts that this is, in fact, the only obstruction:

Weak nullstellensatz. Let P_1,\ldots,P_m \in F[x] be polynomials. Then exactly one of the following statements holds:

  1. The system of equations P_1(x)=\ldots=P_m(x)=0 has a solution x \in F^d.
  2. There exist polynomials Q_1,\ldots,Q_m \in F[x] such that P_1 Q_1 + \ldots + P_m Q_m = 1.

Note that the hypothesis that F is algebraically closed is crucial; for instance, if F is the reals, then the equation x^2+1=0 has no solution, but there is no polynomial Q(x) such that (x^2+1) Q(x) = 1.
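As a worked example of this dichotomy, the two concentric circles {x^2+y^2-1=0} and {x^2+y^2-4=0} have no common solution, and conclusion 2 holds with the explicit certificate {\frac{1}{3}(x^2+y^2-1) - \frac{1}{3}(x^2+y^2-4) = 1}. One can verify such certificates with sympy (a sketch; a reduced Gröbner basis of {[1]} signals that {1} lies in the ideal):

    from sympy import symbols, groebner, expand, Rational

    x, y = symbols('x y')
    P1 = x**2 + y**2 - 1
    P2 = x**2 + y**2 - 4

    print(groebner([P1, P2], x, y))                       # GroebnerBasis([1], ...)
    print(expand(Rational(1, 3)*P1 - Rational(1, 3)*P2))  # 1: the certificate

    # A solvable system, by contrast, has a basis other than [1]:
    print(groebner([P1, x - 1], x, y))                    # cuts out the point (1, 0)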

Like many results of the “The only obstructions are the obvious obstructions” type, the power of the nullstellensatz lies in the ability to take a hypothesis about non-existence (in this case, non-existence of solutions to P_1(x)=\ldots=P_m(x)=0) and deduce a conclusion about existence (in this case, existence of Q_1,\ldots,Q_m such that P_1 Q_1 + \ldots + P_m Q_m = 1). The ability to get “something from nothing” is clearly going to be both non-trivial and useful. In particular, the nullstellensatz offers an important correspondence between algebraic geometry (conclusion 1 is an assertion that a certain algebraic variety is non-empty) and commutative algebra (conclusion 2 is an assertion that a certain ideal is non-proper).

Now suppose one is trying to solve the more complicated system P_1(x)=\ldots=P_m(x)=0; R(x) \neq 0 for some polynomials P_1,\ldots,P_m, R. Again, any identity of the form P_1 Q_1 + \ldots + P_m Q_m = 1 will be an obstruction to solvability, but now more obstructions are possible: any identity of the form P_1 Q_1 + \ldots + P_m Q_m = R^r for some non-negative integer r will also obstruct solvability. The strong nullstellensatz asserts that this is the only obstruction:

Strong nullstellensatz. Let P_1,\ldots,P_m, R \in F[x] be polynomials. Then exactly one of the following statements holds:

  1. The system of equations P_1(x)=\ldots=P_m(x)=0, R(x) \neq 0 has a solution x \in F^d.
  2. There exist polynomials Q_1,\ldots,Q_m \in F[x] and a non-negative integer r such that P_1 Q_1 + \ldots + P_m Q_m = R^r.

Of course, the weak nullstellensatz corresponds to the special case in which R=1. The strong nullstellensatz is usually phrased instead in terms of ideals and radicals, but the above formulation is easily shown to be equivalent to the usual version (modulo Hilbert’s basis theorem).
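To see the exponent {r} in action, take {P_1 = x^2}, {P_2 = y^2} and {R = x+y}: the system {x^2 = y^2 = 0}, {x+y \neq 0} has no solution, and a short search (a sympy sketch, testing membership of {R^r} in the ideal) finds that the smallest admissible exponent is {r=3}:

    from sympy import symbols, groebner

    x, y = symbols('x y')
    G = groebner([x**2, y**2], x, y)

    for r in range(5):
        print(r, G.contains((x + y)**r))
    # False for r = 0, 1, 2 and True from r = 3 on, since every term of
    # (x+y)^3 = x^3 + 3*x^2*y + 3*x*y^2 + y^3 is divisible by x^2 or y^2.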

One could consider generalising the nullstellensatz a little further by considering systems of the form P_1(x)=\ldots=P_m(x)=0, R_1(x),\ldots,R_n(x) \neq 0, but this is not a significant generalisation, since all the inequations R_1(x) \neq 0, \ldots, R_n(x) \neq 0 can be concatenated into a single inequation R_1(x) \ldots R_n(x) \neq 0. The presence of the exponent r in conclusion (2) is a little annoying; to get rid of it, one needs to generalise the notion of an algebraic variety to that of a scheme (which is worth doing for several other reasons too, in particular one can now work over much more general objects than just algebraically closed fields), but that is a whole story in itself (and one that I am not really qualified to tell).

[Update, Nov 26: It turns out that my approach is more complicated than I first thought, and so I had to revise the proof quite a bit to fix a certain gap, in particular making it significantly messier than my first version. On the plus side, I was able to at least eliminate any appeal to Hilbert’s basis theorem, so in particular the proof is now manifestly effective (but with terrible bounds). In any case, I am keeping the argument here in case it has some interest.]

