Last week, we had Peter Scholze give an interesting distinguished lecture series here at UCLA on "Prismatic Cohomology", which is a new type of cohomology theory worked out by Scholze and Bhargav Bhatt. (Video of the talks will be available shortly; for now we have some notes taken by two note-takers in the audience on that web page.) My understanding of this (speaking as someone who is rather far removed from this area) is that it is progress towards the "motivic" dream of being able to define cohomology for varieties $X$ (or similar objects) defined over arbitrary commutative rings $\overline{A}$, and with coefficients in another arbitrary commutative ring $A$. Currently, we have various flavours of cohomology that only work for certain types of domain rings $\overline{A}$ and coefficient rings $A$:

- Singular cohomology, which roughly speaking works when the domain ring $\overline{A}$ is a characteristic zero field such as ${\bf Q}$ or ${\bf C}$, but can allow for arbitrary coefficients $A$;
- de Rham cohomology, which roughly speaking works as long as the coefficient ring $A$ is the same as the domain ring $\overline{A}$ (or a homomorphic image thereof), as one can only talk about $A$-valued differential forms if the underlying space is also defined over $A$;
- $\ell$-adic cohomology, which is a remarkably powerful application of étale cohomology, but only works well when the coefficient ring $A$ is localised around a prime $\ell$ that is different from the characteristic $p$ of the domain ring $\overline{A}$; and
- Crystalline cohomology, in which the domain ring is a field $k$ of some finite characteristic $p$, but the coefficient ring $A$ can be a slight deformation of $k$, such as the ring of Witt vectors $W(k)$ of $k$.

There are various relationships between the cohomology theories, for instance de Rham cohomology coincides with singular cohomology for smooth varieties in the limiting case of characteristic zero. The following picture Scholze drew in his first lecture captures these sorts of relationships nicely:
The new prismatic cohomology of Bhatt and Scholze unifies many of these cohomologies in the "neighbourhood" of the point $(p,p)$ in the above diagram, in which the domain ring $\overline{A}$ and the coefficient ring $A$ are both thought of as being "close to characteristic $p$" in some sense, so that the dilates $p\overline{A}, pA$ of these rings are either zero, or "small". For instance, the $p$-adic ring ${\bf Z}_p$ is technically of characteristic $0$, but $p {\bf Z}_p$ is a "small" ideal of ${\bf Z}_p$ (it consists of those elements of ${\bf Z}_p$ of $p$-adic norm at most $1/p$), so one can think of ${\bf Z}_p$ as being "close to characteristic $p$" in some sense. Scholze drew a "zoomed in" version of the previous diagram to informally describe the types of rings $A, \overline{A}$ for which prismatic cohomology is effective:
To define prismatic cohomology rings one needs a "prism": a ring homomorphism from $A$ to $\overline{A}$ equipped with a "Frobenius-like" endomorphism $\phi$ on $A$ obeying some axioms. By tuning these homomorphisms one can recover existing cohomology theories like crystalline or de Rham cohomology as special cases of prismatic cohomology. These specialisations are analogous to how a prism splits white light into various individual colours, giving rise to the terminology "prismatic", and depicted by this further diagram of Scholze:
(And yes, Peter confirmed that he and Bhargav were inspired by the Dark Side of the Moon album cover in selecting the terminology.)
There was an abstract definition of prismatic cohomology (as being the essentially unique cohomology arising from prisms that obeyed certain natural axioms), but there was also a more concrete way to view them in terms of coordinates, as a "$q$-deformation" of de Rham cohomology. Whereas in de Rham cohomology one worked with derivative operators $\frac{d}{dx}$ that for instance applied to monomials $x^n$ by the usual formula

$\displaystyle \frac{d}{dx} x^n = n x^{n-1},$

prismatic cohomology in coordinates can be computed using a "$q$-derivative" operator $\left(\frac{d}{dx}\right)_q$ that for instance applies to monomials $x^n$ by the formula

$\displaystyle \left(\frac{d}{dx}\right)_q x^n = [n]_q x^{n-1},$

where

$\displaystyle [n]_q := \frac{q^n - 1}{q - 1} = 1 + q + \dots + q^{n-1}$

is the "$q$-analogue" of $n$ (a polynomial in $q$ that equals $n$ in the limit $q = 1$). (The $q$-analogues become more complicated for more general forms than these.) In this more concrete setting, the fact that prismatic cohomology is independent of the choice of coordinates apparently becomes quite a non-trivial theorem.
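These $q$-derivative formulae are easy to sanity-check numerically. The following short Python sketch (my own illustration, not from the lectures) verifies that the difference quotient $(f(qx) - f(x))/(qx - x)$ applied to $f(x) = x^n$ gives $[n]_q x^{n-1}$, and that $[n]_q$ recovers $n$ at $q = 1$:

```python
from fractions import Fraction

def q_analogue(n, q):
    # [n]_q = 1 + q + ... + q^(n-1), which equals n when q = 1
    return sum(q ** k for k in range(n))

def q_derivative_monomial(n, x, q):
    # the q-derivative (f(qx) - f(x)) / (qx - x) applied to f(x) = x^n
    return ((q * x) ** n - x ** n) / ((q - 1) * x)

q, x = Fraction(3, 2), Fraction(5)
for n in range(1, 6):
    # q-derivative of x^n equals [n]_q x^(n-1), exactly in rational arithmetic
    assert q_derivative_monomial(n, x, q) == q_analogue(n, q) * x ** (n - 1)

assert q_analogue(7, 1) == 7   # the q -> 1 limit recovers the ordinary integer n
```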
Let $k$ be a field, and let $L$ be a finite extension of that field; in this post we will denote such a relationship by $L/k$. We say that $L$ is a Galois extension of $k$ if the cardinality of the automorphism group $\mathrm{Aut}(L/k)$ of $L$ fixing $k$ is as large as it can be, namely the degree $[L:k]$ of the extension. In that case, we call $\mathrm{Aut}(L/k)$ the Galois group of $L$ over $k$ and denote it also by $\mathrm{Gal}(L/k)$. The fundamental theorem of Galois theory then gives a one-to-one correspondence (also known as the Galois correspondence) between the intermediate extensions between $k$ and $L$ and the subgroups of $\mathrm{Gal}(L/k)$:
Theorem 1 (Fundamental theorem of Galois theory) Let $L$ be a Galois extension of $k$.

- (i) If $K$ is an intermediate field between $k$ and $L$, then $L$ is a Galois extension of $K$, and $\mathrm{Gal}(L/K)$ is a subgroup of $\mathrm{Gal}(L/k)$.
- (ii) Conversely, if $H$ is a subgroup of $\mathrm{Gal}(L/k)$, then there is a unique intermediate field $K$ such that $\mathrm{Gal}(L/K) = H$; namely $K$ is the set of elements of $L$ that are fixed by $H$.
- (iii) If $K_1$ corresponds to the subgroup $H_1$ and $K_2$ corresponds to the subgroup $H_2$, then $K_1 \subset K_2$ if and only if $H_2$ is a subgroup of $H_1$.
- (iv) If $K$ is an intermediate field between $k$ and $L$, then $K$ is a Galois extension of $k$ if and only if $\mathrm{Gal}(L/K)$ is a normal subgroup of $\mathrm{Gal}(L/k)$. In that case, $\mathrm{Gal}(K/k)$ is isomorphic to the quotient group $\mathrm{Gal}(L/k) / \mathrm{Gal}(L/K)$.
Example 2 Let $k = {\bf Q}$, and let $L = {\bf Q}(e^{2\pi i/n})$ be the degree $\phi(n)$ Galois extension formed by adjoining a primitive $n^{\mathrm{th}}$ root of unity (that is to say, $L$ is the cyclotomic field of order $n$). Then $\mathrm{Gal}(L/k)$ is isomorphic to the multiplicative group $({\bf Z}/n{\bf Z})^\times$ (the invertible elements of the ring ${\bf Z}/n{\bf Z}$). Amongst the intermediate fields, one has the cyclotomic fields of the form ${\bf Q}(e^{2\pi i/m})$ where $m$ divides $n$; they are also Galois extensions, with $\mathrm{Gal}({\bf Q}(e^{2\pi i/m})/{\bf Q})$ isomorphic to $({\bf Z}/m{\bf Z})^\times$ and $\mathrm{Gal}(L/{\bf Q}(e^{2\pi i/m}))$ isomorphic to the elements $a$ of $({\bf Z}/n{\bf Z})^\times$ such that $a = 1$ modulo $m$. (There can also be other intermediate fields, corresponding to other subgroups of $({\bf Z}/n{\bf Z})^\times$.)
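As a concrete illustration of the correspondence in this example (my own sketch, not from the post), a few lines of Python verify that for $n = 12$ the invertible residues form a group of order $\phi(12) = 4$, and that for each $m$ dividing $n$ the residues congruent to $1$ mod $m$ form a subgroup whose index is $\phi(m)$:

```python
from math import gcd

def units(n):
    # invertible residues mod n, i.e. the group (Z/nZ)^x
    return {a for a in range(1, n + 1) if gcd(a, n) == 1}

n = 12
G = units(n)                       # plays the role of Gal(Q(zeta_12)/Q)
assert len(G) == 4                 # the degree phi(12) = 4 of the extension

for m in (1, 2, 3, 4, 6, 12):
    # subgroup fixing the intermediate field Q(zeta_m): residues = 1 mod m
    H = {a for a in G if a % m == 1 % m}
    assert all(a * b % n in H for a in H for b in H)   # closed under multiplication
    assert len(G) == len(H) * len(units(m))            # index of H equals phi(m)
```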
Example 3 Let $k = {\bf C}(z)$ be the field of rational functions of one indeterminate $z$ with complex coefficients, and let $L$ be the field formed by adjoining an $n^{\mathrm{th}}$ root $w = z^{1/n}$ to $k$, thus $L = {\bf C}(z^{1/n})$. Then $L$ is a degree $n$ Galois extension of $k$ with Galois group isomorphic to ${\bf Z}/n{\bf Z}$ (with an element $j \in {\bf Z}/n{\bf Z}$ corresponding to the field automorphism of $L$ that sends $z^{1/n}$ to $e^{2\pi i j/n} z^{1/n}$). The intermediate fields are of the form ${\bf C}(z^{1/m})$ where $m$ divides $n$; they are also Galois extensions, with $\mathrm{Gal}({\bf C}(z^{1/m})/{\bf C}(z))$ isomorphic to ${\bf Z}/m{\bf Z}$ and $\mathrm{Gal}(L/{\bf C}(z^{1/m}))$ isomorphic to the multiples of $m$ in ${\bf Z}/n{\bf Z}$.
There is an analogous Galois correspondence in the covering theory of manifolds. For simplicity we restrict attention to finite covers. If $M$ is a connected manifold and $\pi: N \rightarrow M$ is a finite covering map of $M$ by another connected manifold $N$, we denote this relationship by $N/M$. (Later on we will change our function notations slightly and write $\pi_{N \rightarrow M}$ in place of the more traditional $\pi$, and similarly for the deck transformations below; more on this below the fold.) If $N/M$ is a finite cover, we can define $\mathrm{Aut}(N/M)$ to be the group of deck transformations: continuous maps $\phi: N \rightarrow N$ which preserve the fibres of $\pi$. We say that this covering map is a Galois cover if the cardinality of the group $\mathrm{Aut}(N/M)$ is as large as it can be. In that case we call $\mathrm{Aut}(N/M)$ the Galois group of $N$ over $M$ and denote it by $\mathrm{Gal}(N/M)$.
Suppose $N$ is a finite cover of $M$. An intermediate cover $O$ between $N$ and $M$ is a cover of $M$ by $O$, such that $O$ is in turn covered by $N$, in such a way that the covering maps are compatible, in the sense that $\pi_{N \rightarrow M}$ is the composition of $\pi_{N \rightarrow O}$ and $\pi_{O \rightarrow M}$. This sort of compatibility condition will be implicitly assumed whenever we chain together multiple instances of the covering notation. Two intermediate covers $O, O'$ are equivalent if they cover each other, in a fashion compatible with all the other covering maps, thus $O$ covers $O'$ and $O'$ covers $O$. We then have the analogous Galois correspondence:
Theorem 4 (Fundamental theorem of covering spaces) Let $N$ be a Galois covering of $M$.

- (i) If $O$ is an intermediate cover between $N$ and $M$, then $N$ is a Galois cover of $O$, and $\mathrm{Gal}(N/O)$ is a subgroup of $\mathrm{Gal}(N/M)$.
- (ii) Conversely, if $H$ is a subgroup of $\mathrm{Gal}(N/M)$, then there is an intermediate cover $O$, unique up to equivalence, such that $\mathrm{Gal}(N/O) = H$.
- (iii) If $O_1$ corresponds to the subgroup $H_1$ and $O_2$ corresponds to the subgroup $H_2$, then $O_1$ covers $O_2$ if and only if $H_1$ is a subgroup of $H_2$.
- (iv) If $O$ is an intermediate cover between $N$ and $M$, then $O$ is a Galois cover of $M$ if and only if $\mathrm{Gal}(N/O)$ is a normal subgroup of $\mathrm{Gal}(N/M)$. In that case, $\mathrm{Gal}(O/M)$ is isomorphic to the quotient group $\mathrm{Gal}(N/M) / \mathrm{Gal}(N/O)$.
Example 5 Let $M = {\bf C} \backslash \{0\}$ be the complex plane with the origin removed, and let $N = {\bf C} \backslash \{0\}$ be the $n$-fold cover of $M$ with covering map $\pi(z) = z^n$. Then $N$ is a Galois cover of $M$, and $\mathrm{Gal}(N/M)$ is isomorphic to the cyclic group ${\bf Z}/n{\bf Z}$. The intermediate covers are (up to equivalence) of the form ${\bf C} \backslash \{0\}$ with covering map $z \mapsto z^m$ where $m$ divides $n$; they are also Galois covers, with the Galois group over $M$ isomorphic to ${\bf Z}/m{\bf Z}$ and the Galois group of $N$ over this intermediate cover isomorphic to the multiples of $m$ in ${\bf Z}/n{\bf Z}$.
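The deck transformations in this example can be checked numerically: multiplication by an $n^{\mathrm{th}}$ root of unity preserves each fibre of the covering map $z \mapsto z^n$, and these transformations form a cyclic group of order $n$. A small Python sketch (the names are mine):

```python
import cmath

n = 6
def cover(z):
    # covering map of the n-fold cover of C \ {0} over itself
    return z ** n

# candidate deck transformations: multiplication by n-th roots of unity
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]

z = 0.7 + 1.3j
for w in roots:
    # each transformation z -> w z preserves the fibres of the covering map
    assert abs(cover(w * z) - cover(z)) < 1e-9

# the transformations form a cyclic group of order n, generated by roots[1]
g = roots[1]
powers = {complex(round((g ** k).real, 6), round((g ** k).imag, 6)) for k in range(n)}
assert len(powers) == n
```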
Given the strong similarity between the two theorems, it is natural to ask if there is some more concrete connection between Galois theory and the theory of finite covers.
In one direction, if the manifolds have an algebraic structure (or a complex structure), then one can relate covering spaces to field extensions by considering the field of rational functions (or meromorphic functions) on the space. For instance, if $M = {\bf C} \backslash \{0\}$ and $z$ is the coordinate on $M$, one can consider the field ${\bf C}(z)$ of rational functions on $M$; the $n$-fold cover $N = {\bf C} \backslash \{0\}$ with coordinate $w$ from Example 5 similarly has a field ${\bf C}(w)$ of rational functions. The covering $\pi: N \rightarrow M$ relates the two coordinates $z, w$ by the relation $z = w^n$, at which point one sees that the rational functions ${\bf C}(w)$ on $N$ are a degree $n$ extension of that of ${\bf C}(z)$ (formed by adjoining the $n^{\mathrm{th}}$ root $w = z^{1/n}$ to ${\bf C}(z)$). In this way we see that Example 5 is in fact closely related to Example 3.
Exercise 6 What happens if one uses meromorphic functions in place of rational functions in the above example? (To answer this question, I found it convenient to use a discrete Fourier transform associated to the multiplicative action of the $n^{\mathrm{th}}$ roots of unity on $N$ to decompose the meromorphic functions on $N$ as a linear combination of functions invariant under this action, times a power $w^j$ of the coordinate $w$ for $j = 0, \dots, n-1$.)
I was curious however about the reverse direction. Starting with some field extensions $L/k$, is it possible to create manifold-like spaces $M_k, M_L$ associated to these fields in such a fashion that (say) $M_L$ behaves like a "covering space" to $M_k$ with a group $\mathrm{Aut}(M_L/M_k)$ of deck transformations isomorphic to $\mathrm{Gal}(L/k)$, so that the Galois correspondences agree? Also, given how the notion of a path (and associated concepts such as loops, monodromy and the fundamental group) play a prominent role in the theory of covering spaces, can spaces such as $M_k$ or $M_L$ also come with a notion of a path that is somehow compatible with the Galois correspondence?
The standard answer from modern algebraic geometry (as articulated for instance in this nice MathOverflow answer by Minhyong Kim) is to set $M_k$ equal to the spectrum $\mathrm{Spec}(k)$ of the field $k$. As a set, the spectrum $\mathrm{Spec}(R)$ of a commutative ring $R$ is defined as the set of prime ideals of $R$. Generally speaking, the map $R \mapsto \mathrm{Spec}(R)$ that maps a commutative ring to its spectrum tends to act like an inverse of the operation that maps a space $X$ to a ring of functions on that space. For instance, if one considers the commutative ring ${\bf C}[z]$ of regular functions on ${\bf C}$, then each point $z_0$ in ${\bf C}$ gives rise to the prime ideal $(z - z_0)$, and one can check that these are the only such prime ideals (other than the zero ideal $(0)$), giving an almost one-to-one correspondence between $\mathrm{Spec}({\bf C}[z])$ and ${\bf C}$. (The zero ideal corresponds instead to the generic point of ${\bf C}$.)
Of course, the spectrum of a field such as $k$ is just a point, as the zero ideal $(0)$ is the only prime ideal. Naively, it would then seem that there is not enough space inside such a point to support a rich enough structure of paths to recover the Galois theory of this field. In modern algebraic geometry, one addresses this issue by considering not just the set-theoretic elements of $\mathrm{Spec}(k)$, but more general "base points" $\mathrm{Spec}(R) \rightarrow \mathrm{Spec}(k)$ that map from some other (affine) scheme $\mathrm{Spec}(R)$ to $\mathrm{Spec}(k)$ (one could also consider non-affine base points of course). One has to rework many of the fundamentals of the subject to accommodate this "relative point of view", for instance replacing the usual notion of topology with an étale topology, but once one does so one obtains a very satisfactory theory.
As an exercise, I set myself the task of trying to interpret Galois theory as an analogue of covering space theory in a more classical fashion, without explicit reference to more modern concepts such as schemes, spectra, or étale topology. After some experimentation, I found a reasonably satisfactory way to do so as follows. The space $M_k$ that one associates with $k$ in this classical perspective is not the single point $\mathrm{Spec}(k)$, but instead the much larger space consisting of ring homomorphisms $\phi: k \rightarrow R$ from $k$ to arbitrary integral domains $R$; informally, $M_k$ consists of all the "models" or "representations" of $k$ (in the spirit of this previous blog post). (There is a technical set-theoretic issue here because the class of integral domains $R$ is a proper class, so that $M_k$ will also be a proper class; I will completely ignore such technicalities in this post.) We view each such homomorphism $\phi: k \rightarrow R$ as a single point in $M_k$. The analogous notion of a path from one point $\phi_1: k \rightarrow R_1$ to another $\phi_2: k \rightarrow R_2$ is then a homomorphism $\psi: R_1 \rightarrow R_2$ of integral domains, such that $\phi_2$ is the composition of $\psi$ with $\phi_1$. Note that every prime ideal $\mathfrak{p}$ in the spectrum $\mathrm{Spec}(R)$ of a commutative ring $R$ gives rise to a point in the space $M_R$ defined here, namely the quotient map $R \rightarrow R/\mathfrak{p}$ to the ring $R/\mathfrak{p}$, which is an integral domain because $\mathfrak{p}$ is prime. So one can think of $\mathrm{Spec}(R)$ as being a distinguished subset of $M_R$; alternatively, one can think of $M_R$ as a sort of "penumbra" surrounding $\mathrm{Spec}(R)$. In particular, when $R = k$ is a field, the field $k$ defines a special point in $M_k$, namely the identity homomorphism $\mathrm{id}: k \rightarrow k$.
Below the fold I would like to record this interpretation of Galois theory, by first revisiting the theory of covering spaces using paths as the basic building block, and then adapting that theory to the theory of field extensions using the spaces $M_k$ indicated above. This is not too far from the usual scheme-theoretic way of phrasing the connection between the two topics (basically I have replaced étale-type points with more classical points $\phi: k \rightarrow R$), but I had not seen it explicitly articulated before, so I am recording it here for my own benefit and for any other readers who may be interested.
The complete homogeneous symmetric polynomial $h_d(x_1,\dots,x_n)$ of $n$ variables $x_1,\dots,x_n$ and degree $d$ can be defined as

$\displaystyle h_d(x_1,\dots,x_n) := \sum_{1 \leq i_1 \leq \dots \leq i_d \leq n} x_{i_1} \dots x_{i_d},$

thus for instance $h_0(x_1,\dots,x_n) = 1$,

$\displaystyle h_1(x_1,\dots,x_n) = x_1 + \dots + x_n,$

and

$\displaystyle h_2(x_1,\dots,x_n) = \sum_{1 \leq i \leq j \leq n} x_i x_j.$

One can also define all the complete homogeneous symmetric polynomials of $n$ variables simultaneously by means of the generating function

$\displaystyle \sum_{d=0}^\infty h_d(x_1,\dots,x_n) t^d = \prod_{i=1}^n \frac{1}{1 - x_i t}. \ \ \ \ \ (1)$

We will think of the variables $x_1,\dots,x_n$ as taking values in the real numbers. When one does so, one might observe that the degree two polynomial $h_2$ is a positive definite quadratic form, since it has the sum of squares representation

$\displaystyle h_2(x_1,\dots,x_n) = \frac{1}{2} \left(\sum_{i=1}^n x_i\right)^2 + \frac{1}{2} \sum_{i=1}^n x_i^2.$

In particular, $h_2(x_1,\dots,x_n) > 0$ unless $x_1 = \dots = x_n = 0$. This can be compared against the superficially similar quadratic form

$\displaystyle \sum_{i=1}^n x_i^2 + \sum_{1 \leq i < j \leq n} \epsilon_{ij} x_i x_j,$

where $\epsilon_{ij} = \pm 1$ are independent randomly chosen signs. The Wigner semicircle law says that for large $n$, the eigenvalues of this form will be mostly distributed in the interval $[-\sqrt{n}, \sqrt{n}]$ using the semicircle distribution, so in particular the form is quite far from being positive definite despite the presence of the first $n$ positive terms $x_i^2$. Thus the positive definiteness is coming from the finer algebraic structure of $h_2$, and not just from the magnitudes of its coefficients.
One could ask whether the same positivity holds for other degrees than two. For odd degrees, the answer is clearly no, since $h_d(-x_1,\dots,-x_n) = -h_d(x_1,\dots,x_n)$ in that case. But one could hope for instance that

$\displaystyle h_4(x_1,\dots,x_n) = \sum_{1 \leq i \leq j \leq k \leq l \leq n} x_i x_j x_k x_l$

also has a sum of squares representation that demonstrates positive definiteness. This turns out to be true, but is remarkably tedious to establish directly. Nevertheless, we have a nice result of Hunter that gives positive definiteness for all even degrees $d$. In fact, a modification of his argument gives a little bit more:
Theorem 1 Let $n \geq 1$, let $d \geq 0$ be even, and let $x_1,\dots,x_n$ be reals.

- (i) (Positive definiteness) One has $h_d(x_1,\dots,x_n) \geq 0$, with strict inequality unless $x_1 = \dots = x_n = 0$.
- (ii) (Schur convexity) One has $h_d(x_1,\dots,x_n) \geq h_d(y_1,\dots,y_n)$ whenever $(x_1,\dots,x_n)$ majorises $(y_1,\dots,y_n)$, with equality if and only if $(y_1,\dots,y_n)$ is a permutation of $(x_1,\dots,x_n)$.
- (iii) (Schur-Ostrowski criterion for Schur convexity) For any $1 \leq i < j \leq n$, one has $(x_i - x_j) \left( \frac{\partial h_d}{\partial x_i} - \frac{\partial h_d}{\partial x_j} \right)(x_1,\dots,x_n) \geq 0$, with strict inequality unless $x_i = x_j$.
Proof: We induct on $d$ (allowing $n$ to be arbitrary). The claim is trivially true for $d = 0$, and easily verified for $d = 2$, so suppose that $d \geq 4$ and the claims (i), (ii), (iii) have already been proven for $d - 2$ (and for arbitrary $n$).

If we apply the differential operator $(x_i - x_j) \left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right)$ to the right-hand side of (1) using the product rule, one obtains after a brief calculation

$\displaystyle (x_i - x_j) \left( \frac{\partial}{\partial x_i} - \frac{\partial}{\partial x_j} \right) \prod_{m=1}^n \frac{1}{1 - x_m t} = \frac{(x_i - x_j)^2 t^2}{(1 - x_i t)(1 - x_j t)} \prod_{m=1}^n \frac{1}{1 - x_m t}.$

Using (1) and extracting the $t^d$ coefficient, we obtain the identity

$\displaystyle (x_i - x_j) \left( \frac{\partial h_d}{\partial x_i} - \frac{\partial h_d}{\partial x_j} \right)(x_1,\dots,x_n) = (x_i - x_j)^2 h_{d-2}(x_1,\dots,x_n,x_i,x_j). \ \ \ \ \ (2)$

The claim (iii) then follows from (i) and the induction hypothesis.
To obtain (ii), we use the more general statement (known as the Schur-Ostrowski criterion) that (ii) is implied from (iii) if we replace $h_d$ by an arbitrary symmetric, continuously differentiable function. To establish this criterion, we induct on $n$ (this argument can be made independently of the existing induction on $d$). If $(y_1,\dots,y_n)$ is majorised by $(x_1,\dots,x_n)$, it lies in the permutahedron of $(x_1,\dots,x_n)$. If $(y_1,\dots,y_n)$ lies on a face of this permutahedron, then after permuting both the $x_i$ and $y_i$ we may assume that $(y_1,\dots,y_m)$ is majorised by $(x_1,\dots,x_m)$, and $(y_{m+1},\dots,y_n)$ is majorised by $(x_{m+1},\dots,x_n)$ for some $1 \leq m < n$, and the claim then follows from two applications of the induction hypothesis. If instead $(y_1,\dots,y_n)$ lies in the interior of the permutahedron, one can follow it to the boundary by using one of the vector fields $(y_i - y_j)(e_i - e_j)$, and the claim follows from the boundary case.
Finally, to obtain (i), we observe that $(x_1,\dots,x_n)$ majorises $(\overline{x},\dots,\overline{x})$, where $\overline{x} := \frac{x_1 + \dots + x_n}{n}$ is the arithmetic mean of $x_1,\dots,x_n$. But $h_d(\overline{x},\dots,\overline{x})$ is clearly a positive multiple of $\overline{x}^d$, and the claim now follows from (ii). $\Box$
If the variables are restricted to be nonnegative, the same argument gives Schur convexity for odd degrees also.
Hunter's proof of positive definiteness is arranged a little differently than the one above, but still relies ultimately on the identity (2). I wonder if there is a genuinely different way to establish positive definiteness that does not go through this identity.
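The identity $(x_i - x_j)(\frac{\partial h_d}{\partial x_i} - \frac{\partial h_d}{\partial x_j}) = (x_i - x_j)^2 h_{d-2}(x_1,\dots,x_n,x_i,x_j)$ that drives the induction, together with the even-degree positivity, can be sanity-checked with exact rational arithmetic. Here is a short Python sketch (the helper functions are my own, computing $h_d$ and its partial derivatives directly from the defining sum):

```python
from itertools import combinations_with_replacement
from fractions import Fraction

def h(d, xs):
    # complete homogeneous symmetric polynomial: sum of all monomials
    # x_{i1} ... x_{id} with 1 <= i1 <= ... <= id <= n
    total = Fraction(0)
    for idx in combinations_with_replacement(range(len(xs)), d):
        prod = Fraction(1)
        for i in idx:
            prod *= xs[i]
        total += prod
    return total

def dh(d, xs, i):
    # exact partial derivative of h_d with respect to x_i, monomial by monomial,
    # using d/dx_i (x_i^k * rest) = k x_i^(k-1) * rest (assumes x_i != 0 here)
    total = Fraction(0)
    for idx in combinations_with_replacement(range(len(xs)), d):
        k = idx.count(i)
        if k == 0:
            continue
        prod = Fraction(k)
        for j in idx:
            prod *= xs[j]
        total += prod / xs[i]
    return total

xs = [Fraction(2), Fraction(-3), Fraction(5)]
d, i, j = 4, 0, 1
lhs = (xs[i] - xs[j]) * (dh(d, xs, i) - dh(d, xs, j))
rhs = (xs[i] - xs[j]) ** 2 * h(d - 2, xs + [xs[i], xs[j]])
assert lhs == rhs           # the key identity, exactly

assert h(4, xs) > 0         # even-degree positivity at a sign-mixed point
```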
Analytic number theory is often concerned with the asymptotic behaviour of various arithmetic functions: functions $f$ or $g$ from the natural numbers ${\bf N} = \{1, 2, 3, \dots\}$ to the real numbers ${\bf R}$ or complex numbers ${\bf C}$. In this post, we will focus on the purely algebraic properties of these functions, and for reasons that will become clear later, it will be convenient to generalise the notion of an arithmetic function to functions $f: {\bf N} \rightarrow R$ taking values in some abstract commutative ring $R$. In this setting, we can add or multiply two arithmetic functions $f, g$ to obtain further arithmetic functions $f + g, fg$, and we can also form the Dirichlet convolution $f * g$ by the usual formula

$\displaystyle f * g(n) := \sum_{d | n} f(d) g\left(\frac{n}{d}\right).$

Regardless of what commutative ring $R$ is in use here, we observe that Dirichlet convolution is commutative, associative, and bilinear over $R$.
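For readers who like to experiment, Dirichlet convolution is straightforward to implement on initial segments; the following Python sketch (mine, not from the post) also verifies the basic identity $\mu * 1 = \delta$ along with commutativity:

```python
def dirichlet(f, g, N):
    # Dirichlet convolution (f*g)(n) = sum_{d | n} f(d) g(n/d) for 1 <= n <= N;
    # f and g are lists indexed from 1 (index 0 unused)
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

N = 60
one = [0] + [1] * N

# Mobius function via the recursion sum_{d | n} mu(d) = [n == 1]
mu = [0] * (N + 1)
mu[1] = 1
for n in range(1, N + 1):
    for m in range(2 * n, N + 1, n):
        mu[m] -= mu[n]

# mu * 1 = delta, and the convolution is commutative
delta = dirichlet(mu, one, N)
assert delta[1] == 1 and all(delta[n] == 0 for n in range(2, N + 1))
assert dirichlet(mu, one, N) == dirichlet(one, mu, N)
```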
An important class of arithmetic functions in analytic number theory are the multiplicative functions, that is to say the arithmetic functions $f$ such that $f(1) = 1$ and

$\displaystyle f(nm) = f(n) f(m)$

for all coprime $n, m$. A subclass of these functions are the completely multiplicative functions, in which the restriction that $n, m$ be coprime is dropped. Basic examples of completely multiplicative functions (in the classical setting $R = {\bf C}$) include

- the Kronecker delta $\delta$, defined by setting $\delta(n) = 1$ for $n = 1$ and $\delta(n) = 0$ otherwise;
- the constant function $1$ and the linear function $n \mapsto n$ (which by abuse of notation we denote by $n$);
- more generally monomials $n \mapsto n^s$ for any fixed complex number $s$ (in particular, the "Archimedean characters" $n \mapsto n^{it}$ for any fixed $t \in {\bf R}$), which by abuse of notation we denote by $n^s$;
- Dirichlet characters $\chi$;
- the Liouville function $\lambda$;
- the indicator function of the $z$-smooth numbers (numbers whose prime factors are all at most $z$), for some given $z$; and
- the indicator function of the $z$-rough numbers (numbers whose prime factors are all greater than $z$), for some given $z$.
Examples of multiplicative functions that are not completely multiplicative include

- the Möbius function $\mu$;
- the divisor function $\tau(n) := \sum_{d|n} 1$ (also referred to as $d(n)$ or $\sigma_0(n)$);
- more generally, the higher order divisor functions $\tau_k(n) := \sum_{d_1 \dots d_k = n} 1$ for $k \geq 1$;
- the Euler totient function $\phi$;
- the number of roots $n \mapsto |\{ x \in {\bf Z}/n{\bf Z}: P(x) = 0 \}|$ of a given polynomial $P$ defined over ${\bf Z}$;
- more generally, the point counting function $n \mapsto |V({\bf Z}/n{\bf Z})|$ of a given algebraic variety $V$ defined over ${\bf Z}$ (closely tied to the Hasse-Weil zeta function of $V$);
- the function $n \mapsto r(n)/4$, where $r(n)$ counts the number of representations of $n$ as the sum of two squares;
- more generally, the function that maps a natural number $n$ to the number of ideals in a given number field $K$ of absolute norm $n$ (closely tied to the Dedekind zeta function of $K$).
These multiplicative functions interact well with the multiplication and convolution operations: if $f, g$ are multiplicative, then so are $fg$ and $f * g$, and if $h$ is completely multiplicative, then we also have

$\displaystyle h (f * g) = (hf) * (hg). \ \ \ \ \ (1)$

Finally, the product of two completely multiplicative functions is again completely multiplicative. On the other hand, the sum of two multiplicative functions will never be multiplicative (just look at what happens at $n = 1$), and the convolution of two completely multiplicative functions will usually just be multiplicative rather than completely multiplicative.
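The distributivity identity $h(f*g) = (hf)*(hg)$ for completely multiplicative $h$ can also be tested numerically; here is a minimal Python sketch (my own) using $h(n) = n$ and $f = g = 1$, for which the identity reads $n\,\tau(n) = (n \cdot 1) * (n \cdot 1)$:

```python
def conv(f, g, N):
    # Dirichlet convolution on lists indexed from 1 (index 0 unused)
    h = [0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

N = 50
one = [0] + [1] * N
tau = conv(one, one, N)            # tau = 1 * 1
iden = list(range(N + 1))          # h(n) = n, completely multiplicative

lhs = [iden[n] * tau[n] for n in range(N + 1)]   # h . (1 * 1), pointwise product
rhs = conv(iden, iden, N)                        # (h . 1) * (h . 1)
assert lhs == rhs
```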
The specific multiplicative functions listed above are also related to each other by various important identities, for instance

$\displaystyle \delta * f = f; \quad 1 * \mu = \delta; \quad 1 * 1 = \tau; \quad 1 * \phi = n,$

where $f$ is an arbitrary arithmetic function.
On the other hand, analytic number theory is also very interested in certain arithmetic functions that are not exactly multiplicative (and certainly not completely multiplicative). One particularly important such function is the von Mangoldt function $\Lambda$. This function is certainly not multiplicative, but is clearly closely related to such functions via such identities as $\Lambda * 1 = L$ and $\Lambda = \mu * L$, where $L(n) := \log n$ is the natural logarithm function. The purpose of this post is to point out that functions such as the von Mangoldt function lie in a class closely related to multiplicative functions, which I will call the derived multiplicative functions. More precisely:
Definition 1 A derived multiplicative function $f$ is an arithmetic function that can be expressed as the formal derivative

$\displaystyle f(n) = \frac{d}{d\epsilon} f_\epsilon(n) \Big|_{\epsilon = 0}$

at the origin of a family $f_\epsilon$ of multiplicative functions parameterised by a formal parameter $\epsilon$. Equivalently, $f$ is a derived multiplicative function if it is the $\epsilon$ coefficient of a multiplicative function in the extension ${\bf C}[\epsilon]$ of ${\bf C}$ by a nilpotent infinitesimal $\epsilon$ (thus $\epsilon^2 = 0$); in other words, there exists an arithmetic function $g$ such that the arithmetic function $g + \epsilon f$ is multiplicative, or equivalently that $g$ is multiplicative and one has the Leibniz rule

$\displaystyle f(nm) = f(n) g(m) + g(n) f(m) \ \ \ \ \ (2)$

for all coprime $n, m$. More generally, for any $k \geq 1$, a $k$-derived multiplicative function $f$ is an arithmetic function that can be expressed as the formal derivative

$\displaystyle f(n) = \frac{d}{d\epsilon_1} \dots \frac{d}{d\epsilon_k} f_{\epsilon_1,\dots,\epsilon_k}(n) \Big|_{\epsilon_1,\dots,\epsilon_k = 0}$

at the origin of a family $f_{\epsilon_1,\dots,\epsilon_k}$ of multiplicative functions parameterised by formal parameters $\epsilon_1,\dots,\epsilon_k$. Equivalently, $f$ is the $\epsilon_1 \dots \epsilon_k$ coefficient of a multiplicative function in the extension ${\bf C}[\epsilon_1,\dots,\epsilon_k]$ of ${\bf C}$ by $k$ nilpotent infinitesimals $\epsilon_1,\dots,\epsilon_k$ (thus $\epsilon_i^2 = 0$ for each $i$).

We define the notion of a $k$-derived completely multiplicative function similarly by replacing "multiplicative" with "completely multiplicative" in the above discussion.
There are Leibniz rules similar to (2) but they are harder to state; for instance, a doubly derived multiplicative function $f$ comes with singly derived multiplicative functions $f_1, f_2$ and a multiplicative function $g$ such that

$\displaystyle f(nm) = f(n) g(m) + f_1(n) f_2(m) + f_2(n) f_1(m) + g(n) f(m)$

for all coprime $n, m$.
One can then check that the von Mangoldt function $\Lambda$ is a derived multiplicative function, because $\delta + \epsilon \Lambda$ is multiplicative in the ring ${\bf C}[\epsilon]$ with one infinitesimal $\epsilon$. Similarly, the logarithm function $L$ is derived completely multiplicative because $n^\epsilon = 1 + \epsilon L$ is completely multiplicative in ${\bf C}[\epsilon]$. More generally, any additive function $\omega$ is derived multiplicative because it is the top order coefficient of $1 + \epsilon \omega$.
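One can verify the multiplicativity of $\delta + \epsilon \Lambda$ directly by machine, representing elements $a + b\epsilon$ of ${\bf C}[\epsilon]$ as pairs; a small Python sketch (my own):

```python
from math import gcd, log

def mangoldt(n):
    # von Mangoldt function: log p if n is a power of a prime p, else 0
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return log(p) if n == 1 else 0.0
    return 0.0

def dual_mul(x, y):
    # multiplication in C[eps] with eps^2 = 0: (a + b eps)(c + d eps) = ac + (ad + bc) eps
    (a, b), (c, d) = x, y
    return (a * c, a * d + b * c)

def F(n):
    # the function delta + eps Lambda, as a dual number
    return (1.0 if n == 1 else 0.0, mangoldt(n))

# multiplicativity F(nm) = F(n) F(m) at coprime pairs
for n in range(1, 40):
    for m in range(1, 40):
        if gcd(n, m) == 1:
            lhs, rhs = F(n * m), dual_mul(F(n), F(m))
            assert abs(lhs[0] - rhs[0]) < 1e-12 and abs(lhs[1] - rhs[1]) < 1e-12
```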
Remark 1 One can also phrase these concepts in terms of the formal Dirichlet series $D_f(s) := \sum_{n=1}^\infty \frac{f(n)}{n^s}$ associated to an arithmetic function $f$. A function $f$ is multiplicative if $D_f$ admits a (formal) Euler product; $f$ is derived multiplicative if $D_f$ is the (formal) first logarithmic derivative of an Euler product with respect to some parameter (not necessarily $s$, although this is certainly an option); and so forth.
Using the definition of a $k$-derived multiplicative function as the top order coefficient of a multiplicative function of a ring with $k$ infinitesimals, it is easy to see that the product or convolution of a $j$-derived multiplicative function $f$ and a $k$-derived multiplicative function $g$ is necessarily a $(j+k)$-derived multiplicative function (again taking values in ${\bf C}$). Thus, for instance, the higher-order von Mangoldt functions $\Lambda_k := \mu * L^k$ are $k$-derived multiplicative functions, because $L^k$ is a $k$-derived completely multiplicative function. More explicitly, $L^k$ is the top order coefficient of the completely multiplicative function $n^{\epsilon_1 + \dots + \epsilon_k}$, and $\Lambda_k$ is the top order coefficient of the multiplicative function $\mu * n^{\epsilon_1 + \dots + \epsilon_k}$, with both functions taking values in the ring ${\bf C}[\epsilon_1,\dots,\epsilon_k]$ of complex numbers with $k$ infinitesimals $\epsilon_1,\dots,\epsilon_k$ attached.
It then turns out that most (if not all) of the basic identities used by analytic number theorists concerning derived multiplicative functions, can in fact be viewed as coefficients of identities involving purely multiplicative functions, with the latter identities being provable primarily from multiplicative identities, such as (1). This phenomenon is analogous to the one in linear algebra discussed in this previous blog post, in which many of the trace identities used there are derivatives of determinant identities. For instance, the Leibniz rule

$\displaystyle L (f * g) = (Lf) * g + f * (Lg)$

for any arithmetic functions $f, g$ can be viewed as the top order term in $\epsilon$ of

$\displaystyle n^\epsilon (f * g) = (n^\epsilon f) * (n^\epsilon g)$

in the ring with one infinitesimal $\epsilon$, and then we see that the Leibniz rule is a special case (or a derivative) of (1), since $n^\epsilon$ is completely multiplicative. Similarly, the formulae $\Lambda * 1 = L$ and $\Lambda = \mu * L$ are top order terms of

$\displaystyle (\delta + \epsilon \Lambda) * 1 = n^\epsilon \quad \hbox{and} \quad \delta + \epsilon \Lambda = \mu * n^\epsilon,$

and the variant formula $\Lambda = -(\mu L) * 1$ is the top order term of

$\displaystyle \delta + \epsilon \Lambda = (\mu n^{-\epsilon}) * 1,$

which can then be deduced from the previous identities by noting that the completely multiplicative function $n^{-\epsilon}$ inverts $n^\epsilon$ multiplicatively, and also noting that $\epsilon$ annihilates $\epsilon$. The Selberg symmetry formula

$\displaystyle \Lambda_2 = \Lambda L + \Lambda * \Lambda, \ \ \ \ \ (3)$

which plays a key role in the Erdös-Selberg elementary proof of the prime number theorem (as discussed in this previous blog post), is the top order term of the identity

$\displaystyle \mu * n^{\epsilon_1 + \epsilon_2} = ((\mu * n^{\epsilon_1}) n^{\epsilon_2}) * (\mu * n^{\epsilon_2})$

involving the multiplicative functions $\mu * n^{\epsilon_1+\epsilon_2}$, $(\mu * n^{\epsilon_1}) n^{\epsilon_2}$, $\mu * n^{\epsilon_2}$, $n^{\epsilon_2}$ with two infinitesimals $\epsilon_1, \epsilon_2$, and this identity can be proven while staying purely within the realm of multiplicative functions, by using the identities $n^{\epsilon_1} n^{\epsilon_2} = n^{\epsilon_1 + \epsilon_2}$, $\mu * 1 = \delta$ and (1). Similarly for higher identities such as

$\displaystyle \Lambda_3 = \Lambda L^2 + 3 (\Lambda L) * \Lambda + \Lambda * \Lambda * \Lambda,$

which arise from expanding out $\mu * n^{\epsilon_1 + \epsilon_2 + \epsilon_3}$ using (1) and the above identities; we leave this as an exercise to the interested reader.
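The Selberg symmetry formula, in the convolution form $\Lambda_2 = \Lambda L + \Lambda * \Lambda$ with $\Lambda_2 := \mu * L^2$, can be confirmed numerically on an initial segment; a short Python sketch (mine, using a simple sieve for $\mu$):

```python
from math import log

N = 200

def conv(f, g):
    # Dirichlet convolution on lists indexed from 1
    h = [0.0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

mu = [0.0] * (N + 1); mu[1] = 1.0
for n in range(1, N + 1):
    for m in range(2 * n, N + 1, n):
        mu[m] -= mu[n]

L = [0.0] + [log(n) for n in range(1, N + 1)]
Lam = conv(mu, L)                                   # Lambda = mu * L
Lam2 = conv(mu, [v * v for v in L])                 # Lambda_2 = mu * L^2

rhs = [Lam[n] * L[n] for n in range(N + 1)]         # Lambda L ...
for n, v in enumerate(conv(Lam, Lam)):              # ... + Lambda * Lambda
    rhs[n] += v
for n in range(1, N + 1):
    assert abs(Lam2[n] - rhs[n]) < 1e-8
```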
An analogous phenomenon arises for identities that are not purely multiplicative in nature due to the presence of truncations, such as the Vaughan identity

$\displaystyle \Lambda_{>V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1 \ \ \ \ \ (4)$

for any $U, V \geq 1$, where $\mu_{>U}$ is the restriction of a multiplicative function $\mu$ to the natural numbers greater than $U$, and similarly for $\mu_{\leq U}$, $\Lambda_{>V}$, $\Lambda_{\leq V}$. In this particular case, (4) is the top order coefficient of the identity

$\displaystyle (\delta + \epsilon \Lambda)_{>V} = \mu_{\leq U} * n^\epsilon - \mu_{\leq U} * (\delta + \epsilon \Lambda)_{\leq V} * 1 + \mu_{>U} * (\delta + \epsilon \Lambda)_{>V} * 1,$

which can be easily derived from the identities $n^\epsilon = (\delta + \epsilon \Lambda) * 1$ and $\delta + \epsilon \Lambda = \mu * n^\epsilon$. Similarly for the Heath-Brown identity

$\displaystyle \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*(j-1)} * L \ \ \ \ \ (5)$

valid for natural numbers up to $U^K$, where $U \geq 1$ and $K \geq 1$ are arbitrary parameters and $f^{*j}$ denotes the $j$-fold convolution of $f$, and discussed in this previous blog post; this is the top order coefficient of

$\displaystyle \delta + \epsilon \Lambda = \sum_{j=1}^K (-1)^{j-1} \binom{K}{j} \mu_{\leq U}^{*j} * 1^{*(j-1)} * n^\epsilon,$

and arises by first observing that $\mu_{>U}^{*K}$ vanishes up to $U^K$, and then expanding $\mu_{>U}^{*K} * 1^{*(K-1)} * n^\epsilon$ using the binomial formula and the identity $\mu * n^\epsilon = \delta + \epsilon \Lambda$.
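The Vaughan identity, in the form $\Lambda_{>V} = \mu_{\leq U} * L - \mu_{\leq U} * \Lambda_{\leq V} * 1 + \mu_{>U} * \Lambda_{>V} * 1$, can likewise be tested numerically; a Python sketch (my own; $N$, $U$, $V$ are arbitrary test parameters):

```python
from math import log

N, U, V = 300, 10, 12

def conv(f, g):
    # Dirichlet convolution on lists indexed from 1
    h = [0.0] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

def cut(f, cond):
    # restriction of f to the n satisfying cond(n)
    return [f[n] if cond(n) else 0.0 for n in range(N + 1)]

mu = [0.0] * (N + 1); mu[1] = 1.0
for n in range(1, N + 1):
    for m in range(2 * n, N + 1, n):
        mu[m] -= mu[n]
L = [0.0] + [log(n) for n in range(1, N + 1)]
Lam = conv(mu, L)
one = [0.0] + [1.0] * N

lhs = cut(Lam, lambda n: n > V)
t1 = conv(cut(mu, lambda n: n <= U), L)
t2 = conv(conv(cut(mu, lambda n: n <= U), cut(Lam, lambda n: n <= V)), one)
t3 = conv(conv(cut(mu, lambda n: n > U), cut(Lam, lambda n: n > V)), one)
for n in range(1, N + 1):
    assert abs(lhs[n] - (t1[n] - t2[n] + t3[n])) < 1e-8
```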
One consequence of this phenomenon is that identities involving derived multiplicative functions tend to have a dimensional consistency property: all terms in the identity have the same order of derivation in them. For instance, all the terms in the Selberg symmetry formula (3) are doubly derived functions, all the terms in the Vaughan identity (4) or the Heath-Brown identity (5) are singly derived functions, and so forth. One can then use dimensional analysis to help ensure that one has written down a key identity involving such functions correctly, much as is done in physics.
In addition to the dimensional analysis arising from the order of derivation, there is another dimensional analysis coming from the value of multiplicative functions at primes (which is more or less equivalent to the order of pole of the Dirichlet series at $s = 1$). Let us say that a multiplicative function $f$ has a pole of order $k$ if one has $f(p) = k$ on the average for primes $p$, where we will be a bit vague as to what "on the average" means as it usually does not matter in applications. Thus for instance, $1$ or $\mu^2$ has a pole of order $1$ (a simple pole), $\delta$ or the Archimedean character $n^{it}$ (for non-zero $t$) has a pole of order $0$ (i.e. neither a zero nor a pole), Dirichlet characters also have a pole of order $0$ (although this is slightly nontrivial, requiring Dirichlet's theorem), $\mu$ has a pole of order $-1$ (a simple zero), $\tau$ has a pole of order $2$, and so forth. Note that the convolution of a multiplicative function with a pole of order $k_1$ with a multiplicative function with a pole of order $k_2$ will be a multiplicative function with a pole of order $k_1 + k_2$. If there is no oscillation in the primes $p$ (e.g. if $f(p) = k_1$ for all primes $p$, rather than on the average), it is also true that the product of a multiplicative function with a pole of order $k_1$ with a multiplicative function with a pole of order $k_2$ will be a multiplicative function with a pole of order $k_1 k_2$. The situation is significantly different though in the presence of oscillation; for instance, if $\chi$ is a quadratic character then the principal character $\chi^2$ has a pole of order $1$ even though $\chi$ has a pole of order $0$.
A $k$-derived multiplicative function will then be said to have an underived pole of order $m$ if it is the top order coefficient of a multiplicative function with a pole of order $m$; in terms of Dirichlet series, this roughly means that the Dirichlet series has a pole of order $m + k$ at $s = 1$. For instance, the singly derived multiplicative function $\Lambda$ has an underived pole of order $0$, because it is the top order coefficient of $\delta + \epsilon \Lambda$, which has a pole of order $0$; similarly $L$ has an underived pole of order $1$, being the top order coefficient of $n^\epsilon$. More generally, $\Lambda_k$ and $L^k$ have underived poles of order $0$ and $1$ respectively for any $k$.
By taking top order coefficients, we then see that the convolution of a $j$-derived multiplicative function with underived pole of order $m_1$ and a $k$-derived multiplicative function with underived pole of order $m_2$ is a $(j+k)$-derived multiplicative function with underived pole of order $m_1 + m_2$. If there is no oscillation in the primes, the product of these functions will similarly have an underived pole of order $m_1 m_2$, for instance $\Lambda L$ has an underived pole of order $0$. We then have the dimensional consistency property that in any of the standard identities involving derived multiplicative functions, all terms not only have the same derived order, but also the same underived pole order. For instance, in (3), (4), (5) all terms have underived pole order $0$ (with any Möbius function terms, which have a pole of order $-1$, being counterbalanced by a matching term of $1$ or $L$). This gives a second way to use dimensional analysis as a consistency check. For instance, any identity that involves a linear combination of $\Lambda$ and $L$ is suspect because the underived pole orders do not match (being $0$ and $1$ respectively), even though the derived orders match (both are $1$).
One caveat, though: this latter dimensional consistency breaks down for identities that involve infinitely many terms, such as Linnik's identity

$\displaystyle \frac{\Lambda(n)}{\log n} = \sum_{j=1}^\infty \frac{(-1)^{j-1}}{j} (1 - \delta)^{*j}(n).$

In this case, one can still rewrite things in terms of multiplicative functions as

$\displaystyle \delta + \epsilon \frac{\Lambda}{L} = \sum_{j=0}^\infty \binom{\epsilon}{j} (1 - \delta)^{*j},$

so the former dimensional consistency is still maintained.
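Linnik's identity $\frac{\Lambda(n)}{\log n} = \sum_{j \geq 1} \frac{(-1)^{j-1}}{j} (1-\delta)^{*j}(n)$ is a finite sum for each fixed $n$ (the $j^{\mathrm{th}}$ term vanishes once $2^j > n$), so it can be checked exactly with rational arithmetic; a Python sketch (my own):

```python
from fractions import Fraction

N = 120

def conv(f, g):
    # Dirichlet convolution on lists indexed from 1, exact rationals
    h = [Fraction(0)] * (N + 1)
    for d in range(1, N + 1):
        for m in range(d, N + 1, d):
            h[m] += f[d] * g[m // d]
    return h

# t = 1 - delta: indicator of the integers greater than 1
t = [Fraction(0), Fraction(0)] + [Fraction(1)] * (N - 1)

linnik = [Fraction(0)] * (N + 1)
power = [Fraction(0)] * (N + 1); power[1] = Fraction(1)   # t^{*0} = delta
j = 1
while 2 ** j <= N:          # t^{*j}(n) = 0 once 2^j > n, so this suffices
    power = conv(power, t)                                # now t^{*j}
    for n in range(1, N + 1):
        linnik[n] += Fraction((-1) ** (j - 1), j) * power[n]
    j += 1

def target(n):
    # Lambda(n)/log n equals 1/k on prime powers p^k and 0 elsewhere
    for p in range(2, n + 1):
        if n % p == 0:
            k = 0
            while n % p == 0:
                n //= p; k += 1
            return Fraction(1, k) if n == 1 else Fraction(0)
    return Fraction(0)

for n in range(2, N + 1):
    assert linnik[n] == target(n)
```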
I thank Andrew Granville, Kannan Soundararajan, and Emmanuel Kowalski for helpful conversations on these topics.
[Note: the idea for this post originated before the recent preprint of Mochizuki on the abc conjecture was released, and is not intended as a commentary on that work, which offers a much more non-trivial perspective on scheme theory. -T.]
In classical algebraic geometry, the central object of study is an algebraic variety $V$ over a field $k$ (and the theory works best when this field $k$ is algebraically closed). One can talk about either affine or projective varieties; for sake of discussion, let us restrict attention to affine varieties. Such varieties can be viewed in at least four different ways:
- (Algebraic geometry) One can view a variety through the set $V(k)$ of points (over $k$) in that variety.
- (Commutative algebra) One can view a variety through the field of rational functions $k(V)$ on that variety, or the subring $k[V]$ of polynomial functions in that field.
- (Dual algebraic geometry) One can view a variety through a collection of polynomials $P_1, \dots, P_m$ that cut out that variety.
- (Dual commutative algebra) One can view a variety through the ideal $I(V)$ of polynomials that vanish on that variety.
For instance, the unit circle over the reals can be thought of in each of these four different ways:

- (Algebraic geometry) The set of points $\{ (x,y) \in {\bf R}^2: x^2 + y^2 = 1 \}$.
- (Commutative algebra) The quotient ${\bf R}[x,y] / (x^2 + y^2 - 1)$ of the polynomial ring ${\bf R}[x,y]$ by the ideal generated by $x^2 + y^2 - 1$ (or equivalently, the algebra generated by $x, y$ subject to the constraint $x^2 + y^2 = 1$), or the fraction field of that quotient.
- (Dual algebraic geometry) The polynomial $x^2 + y^2 - 1$.
- (Dual commutative algebra) The ideal $(x^2 + y^2 - 1)$ generated by $x^2 + y^2 - 1$.
The four viewpoints are almost equivalent to each other (particularly if the underlying field is algebraically closed), as there are obvious ways to pass from one viewpoint to another. For instance, starting with the set of points on a variety, one can form the space of rational functions on that variety, or the ideal of polynomials that vanish on that variety. Given a set of polynomials, one can cut out their zero locus, or form the ideal that they generate. Given an ideal in a polynomial ring, one can quotient out the ring by the ideal and then form the fraction field. Finally, given the ring of polynomials on a variety, one can form its spectrum (the space of prime ideals in the ring) to recover the set of points on that variety (together with the Zariski topology on that variety).
Because of the connections between these viewpoints, there are extensive “dictionaries” (most notably the ideal-variety dictionary) that convert basic concepts in one of these four perspectives into any of the other three. For instance, passing from a variety to a subvariety shrinks the set of points and the function field, but enlarges the set of polynomials needed to cut out the variety, as well as the associated ideal. Taking the intersection or union of two varieties corresponds to adding or multiplying together the two ideals respectively. The dimension of an (irreducible) algebraic variety can be defined as the transcendence degree of the function field, the maximal length of chains of subvarieties, or the Krull dimension of the ring of polynomials. And so on and so forth. Thanks to these dictionaries, it is now commonplace to think of commutative algebras geometrically, or conversely to approach algebraic geometry from the perspective of abstract algebra. There are however some very well known defects to these dictionaries, at least when viewed in the classical setting of algebraic varieties. The main one is that two different ideals (or two inequivalent sets of polynomials) can cut out the same set of points, particularly if the underlying field is not algebraically closed. For instance, if the underlying field is the real line $\mathbb{R}$, then the polynomial equations $x^2+1=0$ and $x^2+2=0$ cut out the same set of points, namely the empty set, but the ideal generated by $x^2+1$ in $\mathbb{R}[x]$ is certainly different from the ideal generated by $x^2+2$. This particular example does not work in an algebraically closed field such as $\mathbb{C}$, but in that case the polynomial equations $x=0$ and $x^2=0$ also cut out the same set of points (namely the origin), but again $x$ and $x^2$ generate different ideals in $\mathbb{C}[x]$. Thanks to Hilbert’s nullstellensatz, we can get around this problem (in the case when $k$ is algebraically closed) by always passing from an ideal to its radical, but this causes many aspects of the theory of algebraic varieties to become more complicated when the varieties involved develop singularities or multiplicities, as can already be seen with the simple example of Bezout’s theorem.
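In the univariate case, the distinction between an ideal and its radical can be made quite concrete: membership in a principal ideal $(f)$ of $\mathbb{C}[x]$ is just divisibility by $f$, which can be tested by long division. Here is a minimal sketch (the function names and the coefficient-list representation are ad hoc conventions of mine, not anything standard):

```python
from typing import List

def poly_divmod(num: List[float], den: List[float]):
    """Divide one univariate polynomial by another.

    Polynomials are coefficient lists, lowest degree first,
    e.g. x^2 + 1 is [1, 0, 1].  Returns (quotient, remainder).
    """
    rem = num[:]
    q = [0.0] * max(len(num) - len(den) + 1, 1)
    for i in range(len(num) - len(den), -1, -1):
        coeff = rem[i + len(den) - 1] / den[-1]
        q[i] = coeff
        for j, d in enumerate(den):
            rem[i + j] -= coeff * d
    # strip the (now zero) leading coefficients from the remainder
    while len(rem) > 1 and abs(rem[-1]) < 1e-12:
        rem.pop()
    return q, rem

def in_principal_ideal(p: List[float], f: List[float]) -> bool:
    """Membership in the principal ideal (f) is just divisibility by f."""
    _, rem = poly_divmod(p, f)
    return all(abs(c) < 1e-12 for c in rem)

x = [0.0, 1.0]        # the polynomial x
x2 = [0.0, 0.0, 1.0]  # the polynomial x^2

print(in_principal_ideal(x, x2))   # False: x is not in (x^2) ...
print(in_principal_ideal(x2, x))   # True: ... but x^2 is in (x), the radical of (x^2)
```

Both polynomials cut out the same single point over $\mathbb{C}$, yet the division test separates the ideals, which is exactly the defect in the classical dictionary described above.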
Nowadays, the standard way to deal with these issues is to replace the notion of an algebraic variety with the more general notion of a scheme. Roughly speaking, the way schemes are defined is to focus on the commutative algebra perspective as the primary one, and to allow the base field to be not algebraically closed, or even to just be a commutative ring instead of a field. (One could even consider non-commutative rings, leading to non-commutative geometry, but we will not discuss this extension of scheme theory further here.) Once one generalises to these more abstract rings, the notion of a rational function becomes more complicated (one has to work locally instead of globally, cutting out the points where the function becomes singular), but as a first approximation one can think of a scheme as basically being the same concept as a commutative ring. (In actuality, due to the need to localise, a scheme is defined as a sheaf of rings rather than a single ring, but these technicalities will not be important for the purposes of this discussion.) All the other concepts from algebraic geometry that might previously have been defined using one of the other three perspectives, are then redefined in terms of this ring (or sheaf of rings) in order to generalise them to schemes.
Thus, for instance, in scheme theory the rings $\mathbb{C}[x]/(x^2)$ and $\mathbb{C}[x]/(x)$ describe different schemes; from the classical perspective, they cut out the same locus, namely the point $\{0\}$, but the former scheme makes this point “fatter” than the latter scheme, giving it a degree (or multiplicity) of $2$ rather than $1$.
Because of this, it seems that the link between the commutative algebra perspective and the algebraic geometry perspective is still not quite perfect in scheme theory, unless one is willing to start “fattening” various varieties to correctly model multiplicity or singularity. But – and this is the trivial remark I wanted to make in this blog post – one can recover a tight connection between the two perspectives as long as one allows the freedom to arbitrarily extend the underlying base ring.
Here’s what I mean by this. Consider classical algebraic geometry over some commutative ring $R$ (not necessarily a field). Any set of polynomials $P_1,\dots,P_m$ in $d$ indeterminate variables $x_1,\dots,x_d$ with coefficients in $R$ determines, on the one hand, an ideal $I := (P_1,\dots,P_m)$ in $R[x_1,\dots,x_d]$, and also cuts out a zero locus
$$ V[R] := \{ x \in R^d : P_1(x) = \dots = P_m(x) = 0 \},$$
since each of the polynomials $P_1,\dots,P_m$ clearly make sense as maps from $R^d$ to $R$. Of course, one can also write $V[R]$ in terms of $I$:
$$ V[R] = \{ x \in R^d : P(x) = 0 \hbox{ for all } P \in I \}.$$
Thus the ideal $I$ uniquely determines the zero locus $V[R]$, and we will emphasise this by writing $V[R]$ as $V_I[R]$. As the previous counterexamples illustrate, the converse is not true. However, whenever we have any extension $R'$ of the ring $R$ (i.e. a commutative ring $R'$ that contains $R$ as a subring), then we can also view the polynomials $P_1,\dots,P_m$ as maps from $(R')^d$ to $R'$, and so one can also define the zero locus for all the extensions:
$$ V_I[R'] := \{ x \in (R')^d : P_1(x) = \dots = P_m(x) = 0 \}.$$
As before, $V_I[R']$ is determined by the ideal $I$:
$$ V_I[R'] = \{ x \in (R')^d : P(x) = 0 \hbox{ for all } P \in I \}.$$
The trivial remark is then that while a single zero locus $V_I[R']$ is insufficient to recover $I$, the collection of zero loci $V_I[R']$ for all extensions $R'$ of $R$ (or more precisely, the assignment map $R' \mapsto V_I[R']$, known as the functor of points of $V_I$) is sufficient to recover $I$, as long as at least one zero locus, say $V_I[R'_0]$
, is non-empty. Indeed, suppose we have two ideals $I, I'$ of $R[x_1,\dots,x_d]$ that cut out the same non-empty zero locus for all extensions $R'$ of $R$, thus $V_I[R'] = V_{I'}[R']$ for all extensions $R'$ of $R$. We apply this with the extension $R'$ of $R$ given by $R' := R[x_1,\dots,x_d]/I$. Note that the embedding of $R$ in $R'$ is injective, since otherwise $I$ would contain a non-zero element of $R$, and $V_I$ would then cut out the empty set as the zero locus over every extension of $R$, contrary to hypothesis; and so $R'$ is indeed an extension of $R$. Tautologically, the point $(x_1 \hbox{ mod } I, \dots, x_d \hbox{ mod } I)$ lies in $V_I[R']$, and thus necessarily lies in $V_{I'}[R']$ as well. Unpacking what this means, we conclude that $P \in I$ whenever $P \in I'$, that is to say that $I' \subseteq I$. By a symmetric argument, we also have $I \subseteq I'$, and thus $I = I'$ as claimed. (As pointed out in comments, this fact (and its proof) is essentially a special case of the Yoneda lemma. The connection is tighter if one allows $R'$ to be any ring with a (not necessarily injective) map from $R$ into it, rather than an extension of $R$, in which case one can also drop the hypothesis that $V_I[R']$ is non-empty for at least one $R'$. For instance, the constant polynomials $2$ and $4$ cut out the same empty zero locus $V_{(2)}[R'] = V_{(4)}[R'] = \emptyset$ for every extension $R'$ of the integers, but if one also allows quotients such as $\mathbb{Z}/2\mathbb{Z}$ or $\mathbb{Z}/4\mathbb{Z}$ instead, then $V_{(2)}$ and $V_{(4)}$ are no longer necessarily equal.)
Thus, as long as one thinks of a variety or scheme as cutting out points not just in the original base ring or field, but in all extensions of that base ring or field, one recovers an exact correspondence between the algebraic geometry perspective and the commutative algebra perspective. This is similar to the classical algebraic geometry position of viewing an algebraic variety as being defined simultaneously over all fields that contain the coefficients of the defining polynomials, but the crucial difference between scheme theory and classical algebraic geometry is that one also allows definition over commutative rings, and not just fields. In particular, one needs to allow extensions to rings that may contain nilpotent elements, otherwise one cannot distinguish an ideal from its radical.
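To see concretely how a nilpotent element can tell two ideals apart, one can compute in the ring of dual numbers $\mathbb{R}[\varepsilon]/(\varepsilon^2)$ — a sketch only, with ad hoc class and method names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Dual:
    """Element a + b*eps of the ring R[eps]/(eps^2), where eps^2 = 0."""
    a: float  # ordinary real part
    b: float  # coefficient of the nilpotent eps

    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    def is_zero(self):
        return self.a == 0 and self.b == 0

eps = Dual(0.0, 1.0)     # a nonzero nilpotent: eps != 0 but eps * eps = 0

P1 = lambda x: x         # the polynomial x
P2 = lambda x: x * x     # the polynomial x^2

# Over the reals both polynomials vanish only at the origin, but the
# point eps of the extension separates the two zero loci:
print(P2(eps).is_zero())  # True:  eps lies on the zero locus of (x^2)
print(P1(eps).is_zero())  # False: eps does not lie on the zero locus of (x)
```

The point $\varepsilon$ lies in the functor-of-points zero locus of $(x^2)$ but not of $(x)$, so this single nilpotent extension already distinguishes the two ideals that classical points could not.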
There are of course many ways to extend a field into a ring, but as an analyst, one way to do so that appeals particularly to me is to introduce an epsilon parameter and work modulo errors of $O(\varepsilon)$. To formalise this algebraically, let’s say for sake of concreteness that the base field is the real line $\mathbb{R}$. Consider the ring $\mathcal{O}$ of real-valued quantities $x = x_\varepsilon$ that depend on a parameter $\varepsilon \in (0,1)$ (i.e. functions from $(0,1)$ to $\mathbb{R}$), which are locally bounded in the sense that $x_\varepsilon$ is bounded whenever $\varepsilon$ is bounded away from zero. (One can, if one wishes, impose some further continuity or smoothness hypotheses on how $x_\varepsilon$ depends on $\varepsilon$, but this turns out not to be relevant for the following discussion. Algebraists often prefer to use the ring of Puiseux series here in place of $\mathcal{O}$, and a nonstandard analyst might instead use the hyperreals, but again this will not make too much difference for our purposes.) Inside this commutative ring, we can form the ideal $\mathcal{I}$ of quantities $x = x_\varepsilon$ that are of size $O(\varepsilon)$ as $\varepsilon \to 0$, i.e. there exists a quantity $C$ independent of $\varepsilon$ such that $|x_\varepsilon| \leq C\varepsilon$ for all sufficiently small $\varepsilon$. This can easily be seen to indeed be an ideal in $\mathcal{O}$. We then form the quotient ring $\mathcal{O}/\mathcal{I}$. Note that $x = y \hbox{ mod } \mathcal{I}$ is equivalent to the assertion that $x_\varepsilon = y_\varepsilon + O(\varepsilon)$, so we are encoding the analyst’s notion of “equal up to errors of $O(\varepsilon)$” into algebraic terms.
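As a rough numerical illustration of this quotient construction, one can model a quantity by a Python function of $\varepsilon$ and test “equality up to errors of $O(\varepsilon)$” by sampling a dyadic sequence $\varepsilon \to 0$. Of course a finite sample can only suggest, never certify, an asymptotic bound, so this is purely heuristic (all names here are my own):

```python
import math

def equal_mod_O_eps(x, y, C=100.0, k_max=60):
    """Heuristically test x_eps = y_eps + O(eps) by sampling eps = 2^-k.

    This mirrors the ideal of quantities of size O(eps): we accept if
    |x_eps - y_eps| <= C * eps along the sample.  A finite sample cannot
    prove an asymptotic bound; this is illustration only.
    """
    for k in range(1, k_max + 1):
        eps = 2.0 ** (-k)
        if abs(x(eps) - y(eps)) > C * eps:
            return False
    return True

# 3*eps and 0 agree up to O(eps) ...
print(equal_mod_O_eps(lambda e: 3 * e, lambda e: 0.0))
# ... but sqrt(eps) and 0 do not, since sqrt(eps)/eps -> infinity:
print(equal_mod_O_eps(lambda e: math.sqrt(e), lambda e: 0.0))
```

The second test fails precisely because $\varepsilon^{1/2}$ is a nonzero element of the quotient ring whose square $\varepsilon$ is zero there, which is the nilpotency phenomenon exploited below.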
Clearly, $\mathcal{O}/\mathcal{I}$ is a commutative ring extending $\mathbb{R}$. Hence, any algebraic variety $V_I[\mathbb{R}]$ defined over the reals (so the polynomials $P_1,\dots,P_m$ have coefficients in $\mathbb{R}$), also is defined over $\mathcal{O}/\mathcal{I}$:
$$ V_I[\mathcal{O}/\mathcal{I}] = \{ x \in (\mathcal{O}/\mathcal{I})^d : P_1(x) = \dots = P_m(x) = 0 \}.$$
In language that more closely resembles analysis, we have
$$ V_I[\mathcal{O}/\mathcal{I}] = \{ x \in \mathcal{O}^d : P_1(x), \dots, P_m(x) = O(\varepsilon) \} \hbox{ mod } \mathcal{I}.$$
Thus we see that $V_I[\mathcal{O}/\mathcal{I}]$ is in some sense an “$O(\varepsilon)$-thickening” of $V_I[\mathbb{R}]$, and is thus one way to give rigorous meaning to the intuition that schemes can “thicken” varieties. For instance, the scheme associated to the ideal $(x)$, when interpreted over $\mathcal{O}/\mathcal{I}$, becomes an $O(\varepsilon)$ neighbourhood of the origin
$$ V_{(x)}[\mathcal{O}/\mathcal{I}] = \{ x : x = O(\varepsilon) \},$$
but the scheme associated to the smaller ideal $(x^2)$, when interpreted over $\mathcal{O}/\mathcal{I}$, becomes an $O(\varepsilon^{1/2})$-neighbourhood of the origin, thus being a much “fatter” point:
$$ V_{(x^2)}[\mathcal{O}/\mathcal{I}] = \{ x : x = O(\varepsilon^{1/2}) \}.$$
Once one introduces the analyst’s epsilon, one can see quite clearly that $V_{(x^2)}[\mathcal{O}/\mathcal{I}]$ is coming from a larger scheme than $V_{(x)}[\mathcal{O}/\mathcal{I}]$, with fewer polynomials vanishing on it; in particular, the polynomial $x$ vanishes to order $O(\varepsilon)$ on $V_{(x)}$ but does not vanish to order $O(\varepsilon)$ on $V_{(x^2)}$.
By working with this analyst’s extension of $\mathbb{R}$, one can already get a reasonably good first approximation of what schemes over $\mathbb{R}$ look like, which I found particularly helpful for getting some intuition on these objects. However, since this is only one extension of $\mathbb{R}$, and not a “universal” such extension, it cannot quite distinguish any two schemes from each other, although it does a better job of this than classical algebraic geometry. For instance, consider the scheme cut out by the polynomials $x^2, y^2$ in two dimensions. Over $\mathcal{O}/\mathcal{I}$, this becomes
$$ V_{(x^2,y^2)}[\mathcal{O}/\mathcal{I}] = \{ (x,y) : x, y = O(\varepsilon^{1/2}) \}.$$
Note that the polynomial $xy$ vanishes to order $O(\varepsilon)$ on this locus, but $xy$ fails to lie in the ideal $(x^2,y^2)$. Equivalently, we have $V_{(x^2,y^2)}[\mathcal{O}/\mathcal{I}] = V_{(x^2,y^2,xy)}[\mathcal{O}/\mathcal{I}]$, despite $(x^2,y^2)$ and $(x^2,y^2,xy)$ being distinct ideals. Basically, the analogue of the nullstellensatz for $\mathcal{O}/\mathcal{I}$ does not completely remove the need for performing a closure operation on the ideal $I$; it is less severe than taking the radical, but is instead more like taking a “convex hull” in that one needs to be able to “interpolate” between two polynomials in the ideal (such as $x^2$ and $y^2$) to arrive at intermediate polynomials (such as $xy$) that one then places in the ideal.
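The “interpolation” here is nothing more than the arithmetic-geometric mean inequality $|xy| \leq (x^2+y^2)/2$, which forces $xy$ to be $O(\varepsilon)$ whenever $x^2$ and $y^2$ are. A quick numerical sanity check of both claims (illustrative only; the sampling scheme is my own):

```python
import math
import random

random.seed(0)

def on_locus(eps, trials=1000):
    """Sample points with x, y = O(eps^(1/2)), i.e. x^2, y^2 = O(eps),
    and check that xy = O(eps) there via the AM-GM bound
    |xy| <= (x^2 + y^2) / 2."""
    ok = True
    for _ in range(trials):
        x = random.uniform(-1, 1) * math.sqrt(eps)
        y = random.uniform(-1, 1) * math.sqrt(eps)
        # the "convex hull" interpolation between x^2 and y^2:
        assert abs(x * y) <= (x * x + y * y) / 2 + 1e-18
        # hence xy vanishes to order O(eps) on the thickened locus:
        ok = ok and abs(x * y) <= eps
    return ok

print(on_locus(1e-8))  # True: xy = O(eps) on the O(eps^(1/2)) locus
```

So $xy$ vanishes on the thickened locus for a purely analytic reason, even though it is not an algebraic combination of the generators $x^2$ and $y^2$.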
One can also view ideals (and hence, schemes), from a model-theoretic perspective. Let $I = (P_1,\dots,P_m)$ be an ideal of a polynomial ring $R[x_1,\dots,x_d]$ generated by some polynomials $P_1,\dots,P_m$. Then, clearly, if $Q$ is another polynomial in the ideal $I$, then we can use the axioms of commutative algebra (which are basically the axioms of high school algebra) to obtain the syntactic deduction
$$ P_1(x) = \dots = P_m(x) = 0 \vdash Q(x) = 0$$
(since $Q$ is just a sum of multiples of $P_1,\dots,P_m$). In particular, we have the semantic deduction
$$ P_1(x) = \dots = P_m(x) = 0 \implies Q(x) = 0 \ \ \ \ \ (1)$$
for any assignment of indeterminates $x$ in $R^d$ (or in $(R')^d$ for any extension $R'$ of $R$). If we restrict $x$ to lie in $R^d$ only, then (even if $R$ is an algebraically closed field), the converse of the above statement is false; there can exist polynomials $Q$ outside of $I$ for which (1) holds for all assignments $x$ in $R^d$. For instance, we have
$$ x^2 = 0 \implies x = 0$$
for all $x$ in an algebraically closed field, despite $x$ not lying in the ideal $(x^2)$. Of course, the nullstellensatz again explains what is going on here; (1) holds whenever $Q$ lies in the radical of $I$, which can be larger than $I$ itself. But if one allows the indeterminates $x$ to take values in arbitrary extensions $R'$ of $R$, then the truth of the converse is restored, thus giving a “completeness theorem” relating the syntactic deductions of commutative algebra to the semantic interpretations of such algebras over the extensions $R'$. For instance, since
$$ (\varepsilon^{1/2})^2 = \varepsilon = O(\varepsilon),$$
we no longer have a counterexample to the converse coming from $x^2$ and $x$ once we work in $\mathcal{O}/\mathcal{I}$ instead of $\mathbb{R}$. On the other hand, we still have
$$ x^2 = y^2 = 0 \implies xy = 0$$
in $\mathcal{O}/\mathcal{I}$, so this extension is not powerful enough to detect that $xy$ does not actually lie in $(x^2,y^2)$; a larger ring (which is less easy to assign an analytic interpretation to) is needed to achieve this.
This will be a more frivolous post than usual, in part due to the holiday season.
I recently happened across the following video, which exploits a simple rhetorical trick that I had not seen before:
If nothing else, it’s a convincing (albeit unsubtle) demonstration that the English language is non-commutative (or perhaps non-associative); a linguistic analogue of the swindle, if you will.
Of course, the trick relies heavily on sentence fragments that negate or compare; I wonder if it is possible to achieve a comparable effect without using such fragments.
A related trick which I have seen (though I cannot recall any explicit examples right now; perhaps some readers know of some?) is to set up the verses of a song so that the last verse is identical to the first, but now has a completely distinct meaning (e.g. an ironic interpretation rather than a literal one) due to the context of the preceding verses. The ultimate challenge would be to set up a Möbius song, in which each iteration of the song completely reverses the meaning of the next iterate (cf. this xkcd strip), but this may be beyond the capability of the English language.
On a related note: when I was a graduate student in Princeton, I recall John Conway (and another author whose name I forget) producing another light-hearted demonstration that the English language was highly non-commutative, by showing that if one takes the free group with 26 generators and quotients out by all relations given by anagrams (e.g. equating words such as “stop” and “pots” that are rearrangements of the same letters), then the resulting group was commutative. Unfortunately I was not able to locate this recreational mathematics paper of Conway (which also treated the French language, if I recall correctly); perhaps one of the readers knows of it?
Jean-Pierre Serre (whose papers are, of course, always worth reading) recently posted a lovely lecture on the arXiv entitled “How to use finite fields for problems concerning infinite fields”. In it, he describes several ways in which algebraic statements over fields of zero characteristic, such as $\mathbb{C}$, can be deduced from their positive characteristic counterparts, such as the algebraic closures $\overline{\mathbb{F}_p}$ of finite fields, despite the fact that there is no non-trivial field homomorphism between the two types of fields. In particular finitary tools, including such basic concepts as cardinality, can now be deployed to establish infinitary results. This leads to some simple and elegant proofs of non-trivial algebraic results which are not easy to establish by other means.
One deduction of this type is based on the idea that positive characteristic fields can partially model zero characteristic fields, and proceeds like this: if a certain algebraic statement failed over (say) $\mathbb{C}$, then there should be a “finitary algebraic” obstruction that “witnesses” this failure over $\mathbb{C}$. Because this obstruction is both finitary and algebraic, it must also be definable in some (large) finite characteristic, thus leading to a comparable failure over a finite characteristic field. Taking contrapositives, one obtains the claim.
Algebra is definitely not my own field of expertise, but it is interesting to note that similar themes have also come up in my own area of additive combinatorics (and more generally arithmetic combinatorics), because the combinatorics of addition and multiplication on finite sets is definitely of a “finitary algebraic” nature. For instance, a recent paper of Vu, Wood, and Wood establishes a finitary “Freiman-type” homomorphism from (finite subsets of) the complex numbers to large finite fields that allows them to pull back many results in arithmetic combinatorics in finite fields (e.g. the sum-product theorem) to the complex plane. (Van Vu and I also used a similar trick to control the singularity property of random sign matrices by first mapping them into finite fields in which cardinality arguments became available.) And I have a particular fondness for correspondences between finitary and infinitary mathematics; the correspondence Serre discusses is slightly different from the one I discuss for instance in here or here, although there seems to be a common theme of “compactness” (or of model theory) tying these correspondences together.
As one of his examples, Serre cites one of my own favourite results in algebra, discovered independently by Ax and by Grothendieck (and then rediscovered many times since). Here is a special case of that theorem:
Theorem 1 (Ax-Grothendieck theorem, special case) Let $P: \mathbb{C}^n \to \mathbb{C}^n$ be a polynomial map from a complex vector space to itself. If $P$ is injective, then $P$ is bijective.
The full version of the theorem allows one to replace $\mathbb{C}^n$ by an algebraic variety $V$ over any algebraically closed field, and for $P$ to be a morphism from the algebraic variety $V$ to itself, but for simplicity I will just discuss the above special case. This theorem is not at all obvious; it is not too difficult (see Lemma 5 below) to show that the Jacobian of $P$ is non-degenerate, but this does not come close to solving the problem since one would then be faced with the notorious Jacobian conjecture. Also, the claim fails if “polynomial” is replaced by “holomorphic”, due to the existence of Fatou-Bieberbach domains.
In this post I would like to give the proof of Theorem 1 based on finite fields as mentioned by Serre, as well as another elegant proof of Rudin that combines algebra with some elementary complex variable methods. (There are several other proofs of this theorem and its generalisations, for instance a topological proof by Borel, which I will not discuss here.)
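The engine of the finite field proof is the trivial pigeonhole fact that an injective map from a finite set to itself is automatically surjective. One can watch this happen for a concrete polynomial map over $\mathbb{F}_7^2$ (the particular map below is my own choice of example, a shear, which is injective because it has an explicit polynomial inverse):

```python
from itertools import product

p = 7  # work over the finite field F_p

def P(x, y):
    """A polynomial map on F_p^2: the shear (x, y) -> (x + y^2, y).

    It is injective, since (x, y) -> (x - y^2, y) inverts it; by
    pigeonhole on the finite set F_p^2 it must then be surjective.
    """
    return ((x + y * y) % p, y)

domain = list(product(range(p), repeat=2))
image = {P(x, y) for (x, y) in domain}

injective = len(image) == len(domain)
surjective = image == set(domain)
print(injective, surjective)  # True True: injectivity forces bijectivity
```

The entire difficulty of the theorem, of course, is transferring this finite pigeonhole argument from $\mathbb{F}_p$ up to $\mathbb{C}$, which is what the compactness-type argument in the proof accomplishes.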
Update, March 8: Some corrections to the finite field proof. Thanks to Matthias Aschenbrenner also for clarifying the relationship with Tarski’s theorem and some further references.
I had occasion recently to look up the proof of Hilbert’s nullstellensatz, which I haven’t studied since cramming for my algebra qualifying exam as a graduate student. I was a little unsatisfied with the proofs I was able to locate – they were fairly abstract and used a certain amount of algebraic machinery, which I was terribly rusty on – so, as an exercise, I tried to find a more computational proof that avoided as much abstract machinery as possible. I found a proof which used only the extended Euclidean algorithm and high school algebra, together with an induction on dimension and the obvious observation that any non-zero polynomial of one variable on an algebraically closed field has at least one non-root. It probably isn’t new (in particular, it might be related to the standard model-theoretic proof of the nullstellensatz, with the Euclidean algorithm and high school algebra taking the place of quantifier elimination), but I thought I’d share it here anyway.
Throughout this post, F is going to be a fixed algebraically closed field (e.g. the complex numbers $\mathbb{C}$). I’d like to phrase the nullstellensatz in a fairly concrete fashion, in terms of the problem of solving a set of simultaneous polynomial equations
$$ P_1(x) = \dots = P_m(x) = 0$$
in several variables $x = (x_1,\dots,x_d) \in F^d$ over F, thus $P_1,\dots,P_m: F^d \to F$ are polynomials in d variables. One obvious obstruction to solvability of this system is if the equations one is trying to solve are inconsistent in the sense that they can be used to imply 1=0. In particular, if one can find polynomials $Q_1,\dots,Q_m: F^d \to F$ such that $P_1 Q_1 + \dots + P_m Q_m = 1$, then clearly one cannot solve $P_1(x) = \dots = P_m(x) = 0$. The weak nullstellensatz asserts that this is, in fact, the only obstruction:
Weak nullstellensatz. Let $P_1,\dots,P_m: F^d \to F$ be polynomials. Then exactly one of the following statements holds:
- The system of equations $P_1(x) = \dots = P_m(x) = 0$ has a solution $x \in F^d$.
- There exist polynomials $Q_1,\dots,Q_m: F^d \to F$ such that $P_1 Q_1 + \dots + P_m Q_m = 1$.
Note that the hypothesis that F is algebraically closed is crucial; for instance, if F is the reals, then the equation $x^2 + 1 = 0$ has no solution, but there is no polynomial $Q(x)$ such that $(x^2+1) Q(x) = 1$.
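To make the second alternative concrete with a toy example of my own: the system $x = 0$, $x - 1 = 0$ obviously has no solution, and the polynomials $Q_1 = 1$, $Q_2 = -1$ give the certificate $P_1 Q_1 + P_2 Q_2 = x - (x-1) = 1$. A few lines of coefficient arithmetic suffice to verify such a certificate (the coefficient-list representation below is an ad hoc convention):

```python
def poly_mul(a, b):
    """Multiply univariate polynomials given as coefficient lists
    (lowest degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [ai + bi for ai, bi in zip(a, b)]

def trim(a):
    """Drop zero leading coefficients."""
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

# P1 = x and P2 = x - 1 have no common zero; the certificate
# Q1 = 1, Q2 = -1 witnesses this:  x * 1 + (x - 1) * (-1) = 1.
P1, P2 = [0, 1], [-1, 1]
Q1, Q2 = [1], [-1]
cert = trim(poly_add(poly_mul(P1, Q1), poly_mul(P2, Q2)))
print(cert)  # [1], i.e. the constant polynomial 1
```

Verifying a certificate is always this easy; the content of the nullstellensatz is the converse, that such a certificate must exist whenever the system is unsolvable.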
Like many results of the “The only obstructions are the obvious obstructions” type, the power of the nullstellensatz lies in the ability to take a hypothesis about non-existence (in this case, non-existence of solutions to $P_1(x) = \dots = P_m(x) = 0$) and deduce a conclusion about existence (in this case, existence of polynomials $Q_1,\dots,Q_m$ such that $P_1 Q_1 + \dots + P_m Q_m = 1$). The ability to get “something from nothing” is clearly going to be both non-trivial and useful. In particular, the nullstellensatz offers an important correspondence between algebraic geometry (the failure of conclusion 1 is an assertion that a certain algebraic variety is empty) and commutative algebra (conclusion 2 is an assertion that a certain ideal is non-proper).
Now suppose one is trying to solve the more complicated system $P_1(x) = \dots = P_m(x) = 0$, $R(x) \neq 0$ for some polynomials $P_1,\dots,P_m,R: F^d \to F$. Again, any identity of the form $P_1 Q_1 + \dots + P_m Q_m = 1$ will be an obstruction to solvability, but now more obstructions are possible: any identity of the form
$$ P_1 Q_1 + \dots + P_m Q_m = R^r$$
for some non-negative integer r will also obstruct solvability. The strong nullstellensatz asserts that this is the only obstruction:
Strong nullstellensatz. Let $P_1,\dots,P_m,R: F^d \to F$ be polynomials. Then exactly one of the following statements holds:
- The system of equations $P_1(x) = \dots = P_m(x) = 0$, $R(x) \neq 0$ has a solution $x \in F^d$.
- There exist polynomials $Q_1,\dots,Q_m: F^d \to F$ and a non-negative integer r such that $P_1 Q_1 + \dots + P_m Q_m = R^r$.
Of course, the weak nullstellensatz corresponds to the special case in which R=1. The strong nullstellensatz is usually phrased instead in terms of ideals and radicals, but the above formulation is easily shown to be equivalent to the usual version (modulo Hilbert’s basis theorem).
One could consider generalising the nullstellensatz a little further by considering systems of the form $P_1(x) = \dots = P_m(x) = 0$, $R_1(x), \dots, R_k(x) \neq 0$, but this is not a significant generalisation, since all the inequations $R_1(x) \neq 0, \dots, R_k(x) \neq 0$ can be concatenated into a single inequation $R_1(x) \cdots R_k(x) \neq 0$.
. The presence of the exponent r in conclusion (2) is a little annoying; to get rid of it, one needs to generalise the notion of an algebraic variety to that of a scheme (which is worth doing for several other reasons too, in particular one can now work over much more general objects than just algebraically closed fields), but that is a whole story in itself (and one that I am not really qualified to tell).
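As a concrete instance of the second alternative of the strong nullstellensatz (again a toy example of my own): the system $xy = 0$, $x + y = 0$, $x \neq 0$ has no solution, since the first two equations force $x^2 = 0$; the identity $(-1)(xy) + x(x+y) = x^2$ exhibits the certificate with $R = x$ and exponent $r = 2$. One can verify the identity mechanically with a tiny sparse-polynomial representation (the monomial-dictionary encoding is an ad hoc choice):

```python
from collections import defaultdict

def mul(f, g):
    """Multiply bivariate polynomials stored as {(i, j): coeff} maps,
    where (i, j) is the exponent pair of the monomial x^i * y^j."""
    out = defaultdict(int)
    for (i1, j1), c1 in f.items():
        for (i2, j2), c2 in g.items():
            out[(i1 + i2, j1 + j2)] += c1 * c2
    return {m: c for m, c in out.items() if c != 0}

def add(f, g):
    out = defaultdict(int, f)
    for m, c in g.items():
        out[m] += c
    return {m: c for m, c in out.items() if c != 0}

# The system x*y = 0, x + y = 0, x != 0 has no solution, and the
# strong nullstellensatz certificate is
#   (-1) * (x*y) + (x) * (x + y) = x^2,   i.e. R = x and r = 2.
P1 = {(1, 1): 1}             # x*y
P2 = {(1, 0): 1, (0, 1): 1}  # x + y
Q1 = {(0, 0): -1}            # -1
Q2 = {(1, 0): 1}             # x
lhs = add(mul(P1, Q1), mul(P2, Q2))
print(lhs)  # {(2, 0): 1}, i.e. the single monomial x^2 = R^r
```

Note that the exponent $r = 2$ is genuinely needed here: no identity of the form $P_1 Q_1 + P_2 Q_2 = x$ exists, since every term on the left has degree at least $2$.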
[Update, Nov 26: It turns out that my approach is more complicated than I first thought, and so I had to revise the proof quite a bit to fix a certain gap, in particular making it significantly messier than my first version. On the plus side, I was able to at least eliminate any appeal to Hilbert’s basis theorem, so in particular the proof is now manifestly effective (but with terrible bounds). In any case, I am keeping the argument here in case it has some interest.]