In the foundations of modern probability, as laid out by Kolmogorov, the basic objects of study are constructed in the following order:
- Firstly, one selects a sample space $\Omega$, whose elements $\omega$ represent all the possible states that one’s stochastic system could be in.
- Then, one selects a $\sigma$-algebra ${\mathcal B}$ of events $E$ (modeled by subsets of $\Omega$), and assigns each of these events a probability ${\bf P}(E) \in [0,1]$ in a countably additive manner, so that the entire sample space $\Omega$ has probability $1$.
- Finally, one builds (commutative) algebras of random variables $X$ (such as complex-valued random variables, modeled by measurable functions from $\Omega$ to ${\bf C}$), and (assuming suitable integrability or moment conditions) one can assign expectations ${\bf E} X$ to each such random variable.
In measure theory, the underlying measure space plays a prominent foundational role, with the measurable sets and measurable functions (the analogues of the events and the random variables) always being viewed as somehow being attached to that space. In probability theory, in contrast, it is the events and their probabilities that are viewed as being fundamental, with the sample space being abstracted away as much as possible, and with the random variables and expectations being viewed as derived concepts. See Notes 0 for further discussion of this philosophy.
However, it is possible to take the abstraction process one step further, and view the algebra of random variables and their expectations as being the foundational concept, ignoring the presence of the original sample space, the algebra of events, and the probability measure.
There are two reasons for wanting to shed (or abstract away) these previously foundational structures. Firstly, it allows one to more easily take certain types of limits, such as the large $n$ limit $n \rightarrow \infty$ when considering $n \times n$ random matrices, because quantities built from the algebra of random variables and their expectations, such as the normalised moments of random matrices, tend to be quite stable in the large $n$ limit (as we have seen in previous notes), even as the sample space and event space varies with $n$. (This theme of using abstraction to facilitate the taking of the large $n$ limit also shows up in the application of ergodic theory to combinatorics via the correspondence principle; see this previous blog post for further discussion.)
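This stability is easy to see numerically. The following sketch (using numpy, with a real Wigner matrix normalised so that its entries have variance $1/n$; the matrix sizes, seed, and trial counts are arbitrary choices of mine) estimates the normalised fourth moment $\frac{1}{n} {\bf E} \operatorname{tr} M_n^4$ at several values of $n$, and one sees it hover near the same limiting value (the Catalan number $C_2 = 2$) even though the underlying sample spaces are completely different:

```python
import numpy as np

def normalised_moment(n, k, trials, rng):
    """Monte Carlo estimate of (1/n) E tr M_n^k for a Wigner matrix
    whose entries have variance 1/n."""
    total = 0.0
    for _ in range(trials):
        g = rng.standard_normal((n, n))
        m = (g + g.T) / np.sqrt(2 * n)  # real symmetric, off-diagonal variance 1/n
        total += float(np.trace(np.linalg.matrix_power(m, k))) / n
    return total / trials

rng = np.random.default_rng(0)
moments = {n: normalised_moment(n, 4, 20, rng) for n in (50, 100, 200)}
# As n grows these estimates stabilise near the Catalan number C_2 = 2.
deviation = max(abs(m - 2.0) for m in moments.values())
```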
Secondly, this abstract formalism allows one to generalise the classical, commutative theory of probability to the more general theory of non-commutative probability theory, which does not have a classical underlying sample space or event space, but is instead built upon a (possibly) non-commutative algebra of random variables (or “observables”) and their expectations (or “traces”). This more general formalism not only encompasses classical probability, but also spectral theory (with matrices or operators taking the role of random variables, and the trace taking the role of expectation), random matrix theory (which can be viewed as a natural blend of classical probability and spectral theory), and quantum mechanics (with physical observables taking the role of random variables, and their expected value on a given quantum state being the expectation). It is also part of a more general “non-commutative way of thinking” (of which non-commutative geometry is the most prominent example), in which a space is understood primarily in terms of the ring or algebra of functions (or function-like objects, such as sections of bundles) placed on top of that space, and then the space itself is largely abstracted away in order to allow the algebraic structures to become less commutative. In short, the idea is to make algebra the foundation of the theory, as opposed to other possible choices of foundations such as sets, measures, categories, etc.
[Note that this foundational preference is to some extent a metamathematical one rather than a mathematical one; in many cases it is possible to rewrite the theory in a mathematically equivalent form so that some other mathematical structure becomes designated as the foundational one, much as probability theory can be equivalently formulated as the measure theory of probability measures. However, this does not negate the fact that a different choice of foundations can lead to a different way of thinking about the subject, and thus to asking a different set of questions and discovering a different set of proofs and solutions. Thus it is often of value to understand multiple foundational perspectives at once, to get a truly stereoscopic view of the subject.]
It turns out that non-commutative probability can be modeled using operator algebras such as $C^*$-algebras, von Neumann algebras, or algebras of bounded operators on a Hilbert space, with the latter being accomplished via the Gelfand-Naimark-Segal construction. We will discuss some of these models here, but just as probability theory seeks to abstract away its measure-theoretic models, the philosophy of non-commutative probability is also to downplay these operator algebraic models once some foundational issues are settled.
When one generalises the set of structures in one’s theory, for instance from the commutative setting to the non-commutative setting, the notion of what it means for a structure to be “universal”, “free”, or “independent” can change. The most familiar example of this comes from group theory. If one restricts attention to the category of abelian groups, then the “freest” object one can generate from two generators $e_1, e_2$ is the free abelian group of commutative words $e_1^{n_1} e_2^{n_2}$ with $n_1, n_2 \in {\bf Z}$, which is isomorphic to the group ${\bf Z}^2$. If however one generalises to the non-commutative setting of arbitrary groups, then the “freest” object that can now be generated from two generators $e_1, e_2$ is the free group $F_2$ of non-commutative words $e_1^{n_1} e_2^{m_1} e_1^{n_2} e_2^{m_2} \ldots$ with $n_1, m_1, n_2, m_2, \ldots \in {\bf Z}$, which is a significantly larger extension of the free abelian group ${\bf Z}^2$.
Similarly, when generalising classical probability theory to non-commutative probability theory, the notion of what it means for two or more random variables to be independent changes. In the classical (commutative) setting, two (bounded, real-valued) random variables $X, Y$ are independent if one has

$\displaystyle {\bf E} f(X) g(Y) = 0$

whenever $f, g$ are well-behaved functions (such as polynomials) such that all of ${\bf E} f(X)$, ${\bf E} g(Y)$ vanish. In the non-commutative setting, one can generalise the above definition to two commuting bounded self-adjoint variables; this concept is useful for instance in quantum probability, which is an abstraction of the theory of observables in quantum mechanics. But for two (bounded, self-adjoint) non-commutative random variables $X, Y$, the notion of classical independence no longer applies. As a substitute, one can instead consider the notion of being freely independent (or free for short), which means that

$\displaystyle {\bf E} f_1(X) g_1(Y) f_2(X) g_2(Y) \ldots = 0$

whenever $f_1, g_1, f_2, g_2, \ldots$ are well-behaved functions such that all of ${\bf E} f_1(X), {\bf E} g_1(Y), {\bf E} f_2(X), \ldots$ vanish.
The concept of free independence was introduced by Voiculescu, and its study is now known as the subject of free probability. We will not attempt a systematic survey of this subject here; for this, we refer the reader to the surveys of Speicher and of Biane. Instead, we shall just discuss a small number of topics in this area to give the flavour of the subject only.
The significance of free probability to random matrix theory lies in the fundamental observation that random matrices which are independent in the classical sense also tend to be independent in the free probability sense, in the large $n$ limit $n \rightarrow \infty$. (This is only possible because of the highly non-commutative nature of these matrices; as we shall see, it is not possible for non-trivial commuting independent random variables to be freely independent.) Because of this, many tedious computations in random matrix theory, particularly those of an algebraic or enumerative combinatorial nature, can be done more quickly and systematically by using the framework of free probability, which by design is optimised for algebraic tasks rather than analytical ones.
Much as free groups are in some sense “maximally non-commutative”, freely independent random variables are about as far from being commuting as possible. For instance, if $X, Y$ are freely independent and of expectation zero, then ${\bf E} XYXY$ vanishes, but ${\bf E} XXYY$ instead factors as ${\bf E} X^2 {\bf E} Y^2$. As a consequence, the behaviour of freely independent random variables can be quite different from the behaviour of their classically independent commuting counterparts. Nevertheless there is a remarkably strong analogy between the two types of independence, in that results which are true in the classically independent case often have an interesting analogue in the freely independent setting. For instance, the central limit theorem (Notes 2) for averages of classically independent random variables, which roughly speaking asserts that such averages become gaussian in the large $n$ limit, has an analogue for averages of freely independent variables, the free central limit theorem, which roughly speaking asserts that such averages become semicircular in the large $n$ limit. One can then use this theorem to provide yet another proof of Wigner’s semicircle law (Notes 4).
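The free central limit theorem can be watched in action numerically. In the sketch below (numpy; sizes, seed, and tolerances are my own arbitrary choices) I take $k$ independently Haar-rotated copies of a fixed $\pm 1$ spectrum — such conjugated matrices are asymptotically freely independent — and check that the normalised average has moments close to the semicircular values $m_2 = 1$, $m_4 = 2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 10

def haar_unitary(n, rng):
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))  # phase fix for Haar measure

tau = lambda m: float(np.trace(m).real) / n

# A fixed mean-zero, variance-one +-1 spectrum, rotated k independent times.
d = np.diag(np.repeat([1.0, -1.0], n // 2))
def rotated_copy():
    u = haar_unitary(n, rng)
    return u @ d @ u.conj().T

S = sum(rotated_copy() for _ in range(k)) / np.sqrt(k)
m2, m4 = tau(S @ S), tau(np.linalg.matrix_power(S, 4))
# Free CLT prediction: the semicircular law has moments m2 = 1 and m4 = 2.
```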
Another important (and closely related) analogy is that while the distribution of sums of independent commutative random variables can be quickly computed via the characteristic function (i.e. the Fourier transform of the distribution), the distribution of sums of freely independent non-commutative random variables can be quickly computed using the Stieltjes transform instead (or with closely related objects, such as the $R$-transform of Voiculescu). This is strongly reminiscent of the appearance of the Stieltjes transform in random matrix theory, and indeed we will see many parallels between the use of the Stieltjes transform here and in Notes 4.
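As a small numerical sanity check of the additivity of the $R$-transform (a sketch in numpy; the size, seed, and tolerances are arbitrary): the $R$-transform of a semicircular element of variance $v$ is $R(z) = vz$, and $R$-transforms add under free convolution, so the sum of two free standard semicircular elements should again be semicircular, now of variance $2$ (and hence fourth moment $2 v^2 = 8$):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

def wigner(n, rng):
    g = rng.standard_normal((n, n))
    return (g + g.T) / np.sqrt(2 * n)

tau = lambda m: float(np.trace(m).real) / n

A, B = wigner(n, rng), wigner(n, rng)  # asymptotically free semicirculars
S = A + B
# R_A(z) + R_B(z) = z + z = 2z, the R-transform of a variance-2 semicircular.
m2, m4 = tau(S @ S), tau(np.linalg.matrix_power(S, 4))
```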
As mentioned earlier, free probability is an excellent tool for computing various expressions of interest in random matrix theory, such as asymptotic values of normalised moments in the large $n$ limit $n \rightarrow \infty$. Nevertheless, as it only covers the asymptotic regime in which $n$ is sent to infinity while holding all other parameters fixed, there are some aspects of random matrix theory to which the tools of free probability are not sufficient by themselves to resolve (although it can be possible to combine free probability theory with other tools to then answer these questions). For instance, questions regarding the rate of convergence of normalised moments as $n \rightarrow \infty$ are not directly answered by free probability, though if free probability is combined with tools such as concentration of measure (Notes 1) then such rate information can often be recovered. For similar reasons, free probability lets one understand the behaviour of moments $\frac{1}{n} {\bf E} \operatorname{tr} M_n^k$ as $n \rightarrow \infty$ for fixed $k$, but has more difficulty dealing with the situation in which $k$ is allowed to grow slowly in $n$ (e.g. $k = O(\log n)$). Because of this, free probability methods are effective at controlling the bulk of the spectrum of a random matrix, but have more difficulty with the edges of that spectrum (as well as with related concepts such as the operator norm, Notes 3) as well as with fine-scale structure of the spectrum. Finally, free probability methods are most effective when dealing with matrices that are Hermitian with bounded operator norm, largely because the spectral theory of bounded self-adjoint operators in the infinite-dimensional setting of the large $n$ limit is non-pathological. (This is ultimately due to the stable nature of eigenvalues in the self-adjoint setting; see this previous blog post for discussion.)
For non-self-adjoint operators, free probability needs to be augmented with additional tools, most notably by bounds on least singular values, in order to recover the required stability for the various spectral data of random matrices to behave continuously with respect to the large limit. We will discuss this latter point in a later set of notes.
— 1. Abstract probability theory —
We will now slowly build up the foundations of non-commutative probability theory, which seeks to capture the abstract algebra of random variables and their expectations. The impatient reader who wants to move directly on to free probability theory may largely jump straight to the final definition at the end of this section, but it can be instructive to work with these foundations for a while to gain some intuition on how to handle non-commutative probability spaces.
To motivate the formalism of abstract (non-commutative) probability theory, let us first discuss the three key examples of non-commutative probability spaces, and then abstract away all features that are not shared in common by all three examples.
Example 1: Random scalar variables. We begin with classical probability theory – the study of scalar random variables. In order to use the powerful tools of complex analysis (such as the Stieltjes transform), it is very convenient to allow our random variables to be complex valued. In order to meaningfully take expectations, we would like to require all our random variables to also be absolutely integrable. But this requirement is not sufficient by itself to get good algebraic structure, because the product of two absolutely integrable random variables need not be absolutely integrable. As we want to have as much algebraic structure as possible, we will therefore restrict attention further, to the collection $L^{\infty-} := \bigcap_{k=1}^\infty L^k$ of random variables with all moments finite. This class is closed under multiplication, and all elements in this class have a finite trace (or expectation). One can of course restrict further, to the space $L^\infty$ of (essentially) bounded variables, but by doing so one loses important examples of random variables, most notably gaussians, so we will work instead with the space $L^{\infty-}$. (This will cost us some analytic structure – in particular, $L^{\infty-}$ will not be a Banach space, in contrast to $L^\infty$ – but as our focus is on the algebraic structure, this will be an acceptable price to pay.)
The space $L^{\infty-}$ of complex-valued random variables with all moments finite now becomes an algebra over the complex numbers ${\bf C}$; i.e. it is a vector space over ${\bf C}$ that is also equipped with a bilinear multiplication operation $(X, Y) \mapsto XY$ that obeys the associative and distributive laws. It is also commutative, but we will suppress this property, as it is not shared by the other two examples we will be discussing. The deterministic scalar $1$ then plays the role of the multiplicative unit in this algebra.
In addition to the usual algebraic operations, one can also take the complex conjugate or adjoint $X^* := \overline{X}$ of a complex-valued random variable $X$. This operation interacts well with the other algebraic operations: it is in fact an anti-automorphism on $L^{\infty-}$, which means that it preserves addition $(X+Y)^* = X^* + Y^*$, reverses multiplication $(XY)^* = Y^* X^*$, is anti-homogeneous ($(cX)^* = \overline{c} X^*$ for $c \in {\bf C}$), and is invertible. In fact, it is its own inverse ($(X^*)^* = X$), and is thus an involution.
This package of properties can be summarised succinctly by stating that the space $L^{\infty-}$ of complex-valued random variables with all moments finite is a (unital) $*$-algebra.
The expectation operator ${\bf E}$ can now be viewed as a map ${\bf E}: L^{\infty-} \rightarrow {\bf C}$. It obeys some obvious properties, such as being linear (i.e. ${\bf E}$ is a linear functional on $L^{\infty-}$). In fact it is $*$-linear, which means that it is linear and also that ${\bf E}(X^*) = \overline{{\bf E} X}$ for all $X$. We also clearly have ${\bf E} 1 = 1$. We will remark on some additional properties of expectation later.
Example 2: Deterministic matrix variables. A second key example is that of (finite-dimensional) spectral theory – the theory of $n \times n$ complex-valued matrices $X \in M_n({\bf C})$. (One can also consider infinite-dimensional spectral theory, of course, but for simplicity we only consider the finite-dimensional case in order to avoid having to deal with technicalities such as unbounded operators.) Like the space $L^{\infty-}$ considered in the previous example, $M_n({\bf C})$ is a $*$-algebra, where the multiplication operation is of course given by matrix multiplication, the identity is the matrix identity $1 = I_n$, and the involution $X \mapsto X^*$ is given by the matrix adjoint operation. On the other hand, as is well-known, this $*$-algebra is not commutative (for $n \geq 2$).
The analogue of the expectation operation here is the normalised trace $\tau(X) := \frac{1}{n} \operatorname{tr} X$. Thus $\tau: M_n({\bf C}) \rightarrow {\bf C}$ is a $*$-linear functional on $M_n({\bf C})$ that maps $1$ to $1$. The analogy between expectation and normalised trace is particularly evident when comparing the moment method for scalar random variables (based on computation of the moments ${\bf E} X^k$) with the moment method in spectral theory (based on a computation of the moments $\tau(X^k) = \frac{1}{n} \operatorname{tr} X^k$).
Example 3: Random matrix variables. Random matrix theory combines classical probability theory with finite-dimensional spectral theory, with the random variables of interest now being the random matrices $X \in L^{\infty-} \otimes M_n({\bf C})$, all of whose entries have all moments finite. It is not hard to see that this is also a $*$-algebra with identity $1 = I_n$, which again will be non-commutative for $n \geq 2$. The normalised trace $\tau$ here is given by

$\displaystyle \tau(X) := {\bf E} \frac{1}{n} \operatorname{tr} X,$

thus one takes both the normalised matrix trace and the probabilistic expectation, in order to arrive at a deterministic scalar (i.e. a complex number). As before, we see that $\tau$ is a $*$-linear functional that maps $1$ to $1$. As we saw in Notes 3, the moment method for random matrices is based on a computation of the moments $\tau(X^k) = {\bf E} \frac{1}{n} \operatorname{tr} X^k$.
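The three model traces are easy to line up side by side in code. The sketch below (numpy; the particular matrix, sample counts, and seed are arbitrary choices of mine) computes $\tau$ in each of the three examples and checks the common normalisation $\tau(1) = 1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3

# Example 1: scalar random variables, tau = E (estimated here by Monte Carlo);
# a standard gaussian has expectation 0.
tau_scalar = float(rng.standard_normal(100_000).mean())

# Example 2: a deterministic matrix, tau = (1/n) tr.
X = np.array([[2.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
tau_matrix = float(np.trace(X)) / n

# Example 3: random matrices, tau = E (1/n) tr, a deterministic scalar.
tau_random = float(np.mean([np.trace(rng.standard_normal((n, n))) / n
                            for _ in range(5000)]))

# In every model the identity element has trace exactly 1.
tau_identity = float(np.trace(np.eye(n))) / n
```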
Let us now simultaneously abstract the above three examples, but reserving the right to impose some additional axioms as needed:
Definition 1 (Non-commutative probability space, preliminary definition) A non-commutative probability space (or more accurately, a potentially non-commutative probability space) $({\mathcal A}, \tau)$ will consist of a (potentially non-commutative) $*$-algebra ${\mathcal A}$ of (potentially non-commutative) random variables (or observables) with identity $1$, together with a trace $\tau: {\mathcal A} \rightarrow {\bf C}$, which is a $*$-linear functional that maps $1$ to $1$. This trace will be required to obey a number of additional axioms which we will specify later in this set of notes.
This definition is not yet complete, because we have not fully decided on what axioms to enforce for these spaces, but for now let us just say that the three examples $(L^{\infty-}, {\bf E})$, $(M_n({\bf C}), \frac{1}{n} \operatorname{tr})$, and $(L^{\infty-} \otimes M_n({\bf C}), {\bf E} \frac{1}{n} \operatorname{tr})$ given above will obey these axioms and serve as model examples of non-commutative probability spaces. We mention that the requirement $\tau(1) = 1$ can be viewed as an abstraction of Kolmogorov’s axiom that the sample space has probability $1$.
To motivate the remaining axioms, let us try seeing how some basic concepts from the model examples carry over to the abstract setting.
Firstly, we recall that every scalar random variable $X \in L^{\infty-}$ has a probability distribution $\mu_X$, which is a probability measure on the complex plane ${\bf C}$; if $X$ is self-adjoint (i.e. real valued), so that $X = X^*$, then this distribution is supported on the real line ${\bf R}$. The condition that $X$ lie in $L^{\infty-}$ ensures that this measure is rapidly decreasing, in the sense that $\int_{\bf C} |z|^k\ d\mu_X(z) < \infty$ for all $k$. The measure $\mu_X$ is related to the moments by the formula

$\displaystyle {\bf E} X^k = \int_{\bf C} z^k\ d\mu_X(z) \ \ \ \ \ (1)$

for $k = 0, 1, 2, \ldots$; indeed, one has the more general formula

$\displaystyle {\bf E} X^k \overline{X}^l = \int_{\bf C} z^k \overline{z}^l\ d\mu_X(z) \ \ \ \ \ (2)$

for $k, l = 0, 1, 2, \ldots$.
Similarly, every deterministic matrix $X \in M_n({\bf C})$ has an empirical spectral distribution $\mu_X := \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(X)}$, which is a probability measure on the complex plane ${\bf C}$. Again, if $X$ is self-adjoint, then this distribution is supported on the real line ${\bf R}$. This measure is related to the moments by the same formula (1) as in the case of scalar random variables. Because $n$ is finite, this measure is finitely supported (and in particular is rapidly decreasing). As for (2), the spectral theorem tells us that this formula holds when $X$ is normal (i.e. $X X^* = X^* X$), and in particular if $X$ is self-adjoint (of course, in this case (2) collapses to (1)), but is not true in general. Note that this subtlety does not appear in the case of scalar random variables because in this commutative setting, all elements are automatically normal.
Finally, for random matrices $X \in L^{\infty-} \otimes M_n({\bf C})$, we can form the expected empirical spectral distribution $\mu_X := {\bf E} \frac{1}{n} \sum_{i=1}^n \delta_{\lambda_i(X)}$, which is again a rapidly decreasing probability measure on ${\bf C}$, which is supported on ${\bf R}$ if $X$ is self-adjoint. This measure is again related to the moments by the formula (1), and also by (2) if $X$ is normal.
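For a self-adjoint matrix, the moment formula (1) is an exact linear-algebra identity rather than an asymptotic statement, and it can be verified directly (a numpy sketch; the size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
g = rng.standard_normal((n, n))
X = (g + g.T) / 2  # self-adjoint, so mu_X is supported on the real line

eigs = np.linalg.eigvalsh(X)
moments_trace = [float(np.trace(np.linalg.matrix_power(X, k))) / n for k in range(6)]
moments_esd = [float(np.mean(eigs ** k)) for k in range(6)]
# Formula (1): (1/n) tr X^k equals the k-th moment of the empirical spectral
# distribution (1/n) sum_i delta_{lambda_i(X)} -- exact up to rounding.
max_gap = max(abs(a - b) for a, b in zip(moments_trace, moments_esd))
```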
Now let us see whether we can set up such a spectral measure $\mu_X$ for an element $X$ in an abstract non-commutative probability space $({\mathcal A}, \tau)$. From the above examples, it is natural to try to define this measure through the formula (1), or equivalently (by linearity) through the formula

$\displaystyle \tau(P(X)) = \int_{\bf C} P(z)\ d\mu_X(z) \ \ \ \ \ (3)$

whenever $P: {\bf C} \rightarrow {\bf C}$ is a polynomial of one complex variable $z$, or more generally (in view of (2)) through the formula

$\displaystyle \tau(P(X, X^*)) = \int_{\bf C} P(z, \overline{z})\ d\mu_X(z) \ \ \ \ \ (4)$

whenever $P$ is a polynomial of two complex variables (note that $P(X, X^*)$ can be defined unambiguously precisely when $X$ is normal).
It is tempting to apply the Riesz representation theorem to (3) to define the desired measure $\mu_X$, perhaps after first using the Weierstrass approximation theorem to pass from polynomials to continuous functions. However, there are multiple technical issues with this idea:
- In order for the polynomials to be dense in the continuous functions in the uniform topology on the support of $\mu_X$, one needs the intended support of $\mu_X$ to be on the real line ${\bf R}$, or else one needs to work with the formula (4) rather than (3). Also, one needs the intended support to be bounded for the Weierstrass approximation theorem to apply directly.
- In order for the Riesz representation theorem to apply, the functional $P \mapsto \tau(P(X))$ (or $P \mapsto \tau(P(X, X^*))$) needs to be continuous in the uniform topology, thus one must be able to obtain a bound of the form $|\tau(P(X))| \leq C \sup_{z \in K} |P(z)|$ for some (preferably compact) set $K$. (To get a probability measure, one in fact needs to have $C = 1$.)
- In order to get a probability measure rather than a signed measure, one also needs some non-negativity: $\tau(P(X))$ needs to be non-negative whenever $P(z) \geq 0$ for $z$ in the intended support $K$.
To resolve the non-negativity issue, we impose an additional axiom on the non-commutative probability space $({\mathcal A}, \tau)$:
- (Non-negativity) For any $X \in {\mathcal A}$, we have $\tau(X^* X) \geq 0$. (Note that $X^* X$ is self-adjoint and so its trace $\tau(X^* X)$ is necessarily a real number.)
In the language of von Neumann algebras, this axiom (together with the normalisation $\tau(1) = 1$) is essentially asserting that $\tau$ is a state. Note that this axiom is obeyed by all three model examples, and is also consistent with (4). It is the noncommutative analogue of the Kolmogorov axiom that all events have non-negative probability.
With this axiom, we can now define a positive semi-definite inner product $\langle, \rangle_{L^2(\tau)}$ on ${\mathcal A}$ by the formula

$\displaystyle \langle X, Y \rangle_{L^2(\tau)} := \tau(X^* Y).$

This obeys the usual axioms of an inner product, except that it is only positive semi-definite rather than positive definite. One can impose positive definiteness by adding an axiom that the trace $\tau$ is faithful, which means that $\tau(X^* X) = 0$ if and only if $X = 0$. However, we will not need the faithfulness axiom here.

Without faithfulness, ${\mathcal A}$ is a semi-definite inner product space with semi-norm

$\displaystyle \|X\|_{L^2(\tau)} := (\tau(X^* X))^{1/2}.$

In particular, we have the Cauchy-Schwarz inequality

$\displaystyle |\langle X, Y \rangle_{L^2(\tau)}| \leq \|X\|_{L^2(\tau)} \|Y\|_{L^2(\tau)}.$
This leads to an important monotonicity:
Exercise 1 (Monotonicity) Let $X$ be a self-adjoint element of a non-commutative probability space $({\mathcal A}, \tau)$. Show that we have the monotonicity relationships

$\displaystyle |\tau(X^{2k-1})|^{1/(2k-1)} \leq |\tau(X^{2k})|^{1/(2k)} \leq |\tau(X^{2k+2})|^{1/(2k+2)}$

for any $k \geq 1$. Conclude in particular that the spectral radius

$\displaystyle \rho(X) := \lim_{k \rightarrow \infty} |\tau(X^{2k})|^{1/(2k)} \ \ \ \ \ (5)$

exists, and that one has the bound

$\displaystyle |\tau(X^k)| \leq \rho(X)^k \ \ \ \ \ (6)$

for any $k \geq 0$. We then say that a self-adjoint element $X$ is bounded if its spectral radius $\rho(X)$ is finite.
Example 4 In the case of random variables, the spectral radius is the essential supremum $\|X\|_{L^\infty}$, while for deterministic matrices, the spectral radius is the operator norm $\|X\|_{op}$. For random matrices, the spectral radius is the essential supremum $\| \|X\|_{op} \|_{L^\infty}$ of the operator norm.
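For a deterministic self-adjoint matrix one can watch the limit (5) converge to the operator norm from below (a numpy sketch; the size, seed, and choice of exponents are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5
g = rng.standard_normal((n, n))
X = (g + g.T) / 2  # self-adjoint matrix

op_norm = float(np.max(np.abs(np.linalg.eigvalsh(X))))
# tau(X^{2k})^{1/(2k)} with tau = (1/n) tr increases (by the power mean
# inequality) towards the largest eigenvalue magnitude, i.e. the operator norm.
approx = [float((np.trace(np.linalg.matrix_power(X, 2 * k)) / n) ** (1.0 / (2 * k)))
          for k in (1, 5, 20, 50)]
```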
Guided by the model examples, we expect that a bounded self-adjoint element $X$ should have a spectral measure $\mu_X$ supported on the interval $[-\rho(X), \rho(X)]$. But how to show this? It turns out that one can proceed by tapping the power of complex analysis, and introducing the Stieltjes transform

$\displaystyle s(z) := \tau((X - z)^{-1}) \ \ \ \ \ (7)$

for complex numbers $z$. Now, this transform need not be defined for all $z$ at present, because we do not know that $X - z$ is invertible in ${\mathcal A}$. However, we can avoid this problem by working formally. Indeed, we have the formal Neumann series expansion

$\displaystyle s(z) = -\frac{1}{z} - \frac{\tau(X)}{z^2} - \frac{\tau(X^2)}{z^3} - \ldots. \ \ \ \ \ (8)$

If $X$ is bounded self-adjoint, then from (6) we see that this formal series actually converges in the region $\{z \in {\bf C} : |z| > \rho(X)\}$. We will thus define the Stieltjes transform $s(z)$ on the region $\{z \in {\bf C} : |z| > \rho(X)\}$ by this series expansion (8), and then extend to as much of the complex plane as we can by analytic continuation. (There could in principle be some topological obstructions to this continuation, but we will soon see that the only place where singularities can occur is on the real interval $[-\rho(X), \rho(X)]$, and so no topological obstructions will appear. One can also work with the original definition (7) of the Stieltjes transform, but this requires imposing some additional analytic axioms on the non-commutative probability space, such as requiring that ${\mathcal A}$ be a $C^*$-algebra or a von Neumann algebra, and I wish to avoid discussing these topics here as they are not the main focus of free probability theory.)
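In the matrix model, the formal series (8) and the resolvent definition (7) can be compared directly (a numpy sketch; the size, seed, evaluation point, and truncation length are arbitrary, chosen so that $|z| > \rho(X)$ and the series converges quickly):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5
g = rng.standard_normal((n, n))
X = (g + g.T) / 2
rho = float(np.max(np.abs(np.linalg.eigvalsh(X))))  # spectral radius of X

z = 2.0 * rho + 1.0j  # a point in the region |z| > rho(X)
s_exact = complex(np.trace(np.linalg.inv(X - z * np.eye(n))) / n)  # definition (7)
# Truncated Neumann series (8): s(z) = -sum_{k>=0} tau(X^k) / z^{k+1}.
s_series = sum(-float(np.trace(np.linalg.matrix_power(X, k))) / n / z ** (k + 1)
               for k in range(120))
gap = abs(s_exact - s_series)
```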
We now push the domain of definition of $s(z)$ into the disk $\{z \in {\bf C} : |z| \leq \rho(X)\}$. We need some preliminary lemmas.
From the previous two exercises we see that

$\displaystyle |\tau((X - iT)^{-k-1})| \leq T^{-k-1} \hbox{ for all } k \geq 0,$

and so the above Laurent series expansion of $s$ around $iT$ converges for $|z - iT| < T$.
We have thus extended $s$ analytically to the region $\{z \in {\bf C} : |z - iT| < T\}$. Letting $T \rightarrow \infty$, we obtain an extension of $s$ to the upper half-plane $\{z \in {\bf C} : \hbox{Im}(z) > 0\}$. A similar argument (shifting by $-iT$ instead of $iT$) gives an extension to the lower half-plane, thus defining $s$ analytically everywhere except on the interval $[-\rho(X), \rho(X)]$.
On the other hand, it is not possible to analytically extend $s$ to the region $\{z \in {\bf C} : |z| > \rho(X) - \epsilon\}$ for any $\epsilon > 0$. Indeed, if this were the case, then from the Cauchy integral formula (applied at infinity), we would have the identity

$\displaystyle \tau(X^k) = -\frac{1}{2\pi i} \int_{|z| = R} s(z) z^k\ dz$

for any $\rho(X) - \epsilon < R < \rho(X)$, which when combined with (5) implies that $\rho(X) \leq R$ for all such $R$, which is absurd. Thus the spectral radius $\rho(X)$ can also be interpreted as the radius of the smallest ball centred at the origin outside of which the Stieltjes transform can be analytically continued.
Now that we have the Stieltjes transform everywhere outside of $[-\rho(X), \rho(X)]$, we can use it to derive an important bound (which will soon be superseded by (3), but will play a key role in the proof of that stronger statement):
Proposition 2 (Boundedness) Let $X$ be a bounded self-adjoint element of a non-commutative probability space $({\mathcal A}, \tau)$, and let $P$ be a polynomial. Then $|\tau(P(X))| \leq \sup_{x \in [-\rho(X), \rho(X)]} |P(x)|$.
Proof: (Sketch) We can of course assume that $P$ is non-constant, as the claim is obvious otherwise. From Exercise 3 (replacing $P$ with $P \overline{P}$, where $\overline{P}$ is the polynomial whose coefficients are the complex conjugates of those of $P$) we may reduce to the case when $P$ has real coefficients, so that $P(X)$ is self-adjoint. Since $X$ is bounded, it is not difficult (using (5), (6)) to show that $P(X)$ is bounded also (Exercise!).
By the previous discussion, to establish the proposition it will suffice to show that the Stieltjes transform $s_{P(X)}$ of $P(X)$ can be continued to the domain

$\displaystyle \Omega := \{z \in {\bf C} : z \not\in P([-\rho(X), \rho(X)])\}.$
For this, we observe the partial fractions decomposition

$\displaystyle \frac{1}{P(w) - z} = \sum_{\zeta : P(\zeta) = z} \frac{1}{P'(\zeta)} \frac{1}{w - \zeta}$

of $\frac{1}{P(w) - z}$ into linear combinations of $\frac{1}{w - \zeta}$, at least when the roots $\zeta$ of $P - z$ are simple. Thus, formally, at least, we have the identity

$\displaystyle s_{P(X)}(z) = \sum_{\zeta : P(\zeta) = z} \frac{1}{P'(\zeta)} s(\zeta).$
One can verify that this identity is consistent with (11) for $z$ sufficiently large. (Exercise! Hint: First do the case when $X$ is a scalar, then expand in Taylor series and compare coefficients, then use the agreement of the Taylor series to do the general case.)
If $z$ is in the domain $\Omega$, then all the roots $\zeta$ of $P(\zeta) = z$ lie outside the interval $[-\rho(X), \rho(X)]$. So we can use the above formula as a definition of $s_{P(X)}(z)$, at least for those $z \in \Omega$ for which the roots of $P - z$ are simple; but there are only finitely many exceptional $z$ (arising from zeroes of $P'$) and one can check (Exercise! Hint: use the analytic nature of $s$ and the residue theorem to rewrite parts of $s_{P(X)}$ as a contour integral.) that the singularities here are removable. It is easy to see (Exercise!) that $s_{P(X)}$ is holomorphic outside of these removable singularities, and the claim follows. $\Box$
Exercise 5 Fill in the steps marked (Exercise!) in the above proof.
From Proposition 2 and the Weierstrass approximation theorem, we see that the linear functional $P \mapsto \tau(P(X))$ can be uniquely extended to a bounded linear functional on $C([-\rho(X), \rho(X)])$, with an operator norm of at most $1$. Applying the Riesz representation theorem, we thus can find a unique Radon measure (or equivalently, Borel measure) $\mu_X$ on $[-\rho(X), \rho(X)]$ of total variation at most $1$ obeying the identity (3) for all polynomials $P$. In particular, setting $P = 1$ we see that $\mu_X$ has total mass $1$; since it also has total variation at most $1$, it must be a probability measure. We have thus shown the fundamental
Theorem 3 (Spectral theorem for bounded self-adjoint elements) Let $X$ be a bounded self-adjoint element of a non-commutative probability space $({\mathcal A}, \tau)$. Then there exists a unique Borel probability measure $\mu_X$ on $[-\rho(X), \rho(X)]$ (known as the spectral measure of $X$) such that (3) holds for all polynomials $P$.
Remark 1 If one assumes some completeness properties of the non-commutative probability space, such as that ${\mathcal A}$ is a $C^*$-algebra or a von Neumann algebra, one can use this theorem to meaningfully define $f(X)$ for functions $f$ other than polynomials; specifically, one can do this for continuous functions $f$ if ${\mathcal A}$ is a $C^*$-algebra, and for $L^\infty(\mu_X)$ functions $f$ if ${\mathcal A}$ is a von Neumann algebra. Thus for instance we can then define absolute values $|X|$, or square roots $|X|^{1/2}$, etc. Such an assignment $f \mapsto f(X)$ is known as a functional calculus; it can be used for instance to go back and make rigorous sense of the formula (7). A functional calculus is a very convenient tool to have in operator algebra theory, and for that reason one often completes a non-commutative probability space into a $C^*$-algebra or von Neumann algebra, much as how it is often convenient to complete the rationals and work instead with the reals. However, we will proceed here instead by working with a (possibly incomplete) non-commutative probability space, and working primarily with formal expressions (e.g. formal power series in $\frac{1}{z}$) without trying to evaluate such expressions in some completed space. We can get away with this because we will be working exclusively in situations in which the spectrum of a random variable can be reconstructed exactly from its moments (which is in particular true in the case of bounded random variables). For unbounded random variables, one must usually instead use the full power of functional analysis, and work with the spectral theory of unbounded operators on Hilbert spaces.
Exercise 6 Let $X$ be a bounded self-adjoint element of a non-commutative probability space, and let $\mu_X$ be the spectral measure of $X$. Establish the formula

$\displaystyle s(z) = \int_{\bf R} \frac{1}{x - z}\ d\mu_X(x)$

for all $z \not\in [-\rho(X), \rho(X)]$. Conclude that the support of the spectral measure $\mu_X$ must contain at least one of the two points $-\rho(X), +\rho(X)$.
Exercise 7 Let $X$ be a bounded self-adjoint element of a non-commutative probability space with faithful trace. Show that $X = 0$ if and only if $\rho(X) = 0$.
Remark 2 It is possible to also obtain a spectral theorem for bounded normal elements $X$ (i.e. those with $X X^* = X^* X$) along the lines of the above theorem (with $\mu_X$ now supported in a disk rather than in an interval, and with (3) replaced by (4)), but this is somewhat more complicated to show (basically, one needs to extend the self-adjoint spectral theorem to a pair of commuting self-adjoint elements, which is a little tricky to show by complex-analytic methods, as one has to use several complex variables).
The spectral theorem more or less completely describes the behaviour of a single (bounded self-adjoint) element in a non-commutative probability space. As remarked above, it can also be extended to study multiple commuting self-adjoint elements. However, when one deals with multiple non-commuting elements, the spectral theorem becomes inadequate (and indeed, it appears that in general there is no usable substitute for this theorem). However, we can begin making a little bit of headway if we assume as a final (optional) axiom a very weak form of commutativity in the trace:
- (Trace) For any two elements $X, Y \in {\mathcal A}$, we have $\tau(XY) = \tau(YX)$.
Note that this axiom is obeyed by all three of our model examples. From this axiom, we can cyclically permute products in a trace, e.g. $\tau(XYZ) = \tau(YZX) = \tau(ZXY)$. However, we cannot take non-cyclic permutations; for instance, $\tau(XYZ)$ and $\tau(XZY)$ are distinct in general. This axiom is a trivial consequence of the commutative nature of the complex numbers in the classical setting, but can play a more non-trivial role in the non-commutative setting. It is however possible to develop a large part of free probability without this axiom, if one is willing instead to work in the category of von Neumann algebras. Thus, we shall leave it as an optional axiom:
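The distinction between cyclic and non-cyclic permutations is already visible for small matrices (a numpy sketch; the size and seed are arbitrary, and the non-cyclic gap is nonzero only generically, not for every triple of matrices):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 4
A, B, C = (rng.standard_normal((n, n)) for _ in range(3))
tau = lambda m: float(np.trace(m)) / n

cyclic_gap = abs(tau(A @ B @ C) - tau(B @ C @ A))  # cyclic permutation: always equal
swap_gap = abs(tau(A @ B @ C) - tau(A @ C @ B))    # non-cyclic: differs in general
```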
Definition 4 (Non-commutative probability space, final definition) A non-commutative probability space $({\mathcal A}, \tau)$ consists of a $*$-algebra ${\mathcal A}$ with identity $1$, together with a $*$-linear functional $\tau: {\mathcal A} \rightarrow {\bf C}$, that maps $1$ to $1$ and obeys the non-negativity axiom. If $\tau$ obeys the trace axiom, we say that the non-commutative probability space is tracial. If $\tau$ obeys the faithfulness axiom, we say that the non-commutative probability space is faithful.
From this new axiom and the Cauchy-Schwarz inequality we can now get control on products of several non-commuting elements:

Exercise 8 Let $X, Y$ be bounded self-adjoint elements of a tracial non-commutative probability space $({\mathcal A}, \tau)$. Show that

$\displaystyle |\tau(X^{n_1} Y^{m_1} \ldots X^{n_k} Y^{m_k})| \leq \rho(X)^{n_1 + \ldots + n_k} \rho(Y)^{m_1 + \ldots + m_k}$

for any non-negative integers $n_1, m_1, \ldots, n_k, m_k$. (Hint: Induct on $k$, and use Cauchy-Schwarz to split up the product as evenly as possible, using cyclic permutations to reduce the complexity of the resulting expressions.)
Exercise 9 Let ${\mathcal A}^b$ be those elements $X$ in a tracial non-commutative probability space $({\mathcal A}, \tau)$ whose real and imaginary parts $\hbox{Re}(X) := \frac{X + X^*}{2}$, $\hbox{Im}(X) := \frac{X - X^*}{2i}$ are bounded and self-adjoint; we refer to such elements simply as bounded elements. Show that this space is a sub-$*$-algebra of ${\mathcal A}$.
This allows one to perform the following Gelfand-Naimark-Segal (GNS) construction. Recall that the space $L^\infty(\tau)$ of bounded elements has a positive semi-definite inner product $\langle X, Y \rangle := \tau(X^* Y)$. We can perform the Hilbert space completion of this inner product space (quotienting out by the elements of zero norm), leading to a complex Hilbert space $L^2(\tau)$ into which $L^\infty(\tau)$ can be mapped as a dense subspace by an isometry $\iota: L^\infty(\tau) \to L^2(\tau)$. (This isometry is injective when $\tau$ is faithful, but will have a non-trivial kernel otherwise.) The space $L^\infty(\tau)$ acts on itself by multiplication, and thus also acts on the dense subspace $\iota(L^\infty(\tau))$ of $L^2(\tau)$. We would like to extend this action to all of $L^2(\tau)$, but this requires an additional estimate:
Lemma 5 Let $(\mathcal{A},\tau)$ be a tracial non-commutative probability space, let $X \in L^\infty(\tau)$ be self-adjoint, and let $Y \in L^\infty(\tau)$. Then $\|XY\|_{L^2(\tau)} \leq \rho(X) \|Y\|_{L^2(\tau)}$.
Proof: Squaring and cyclically permuting, it will suffice to show that
$$\tau( X^2 Y Y^* ) \leq \rho(X)^2 \tau( Y Y^* ).$$
Let $\varepsilon > 0$ be arbitrary. By Weierstrass approximation, we can find a polynomial $P$ with real coefficients such that $x^2 + P(x)^2 = \rho(X)^2 + O(\varepsilon)$ on the interval $[-\rho(X), \rho(X)]$. By Proposition 2, we can thus write
$$X^2 + P(X)^2 = \rho(X)^2 + E,$$
where $E$ is self-adjoint with $\rho(E) = O(\varepsilon)$. Multiplying on the left by $Y^*$ and on the right by $Y$, taking traces, and cyclically permuting, we obtain
$$\tau( X^2 Y Y^* ) + \tau( P(X)^2 Y Y^* ) = \rho(X)^2 \tau( Y Y^* ) + \tau( E Y Y^* ).$$
By non-negativity, $\tau( P(X)^2 Y Y^* ) = \tau( (P(X) Y)^* P(X) Y ) \geq 0$. By Exercise 8, we have $\tau( E Y Y^* ) = O(\varepsilon)$. Sending $\varepsilon \to 0$ we obtain the claim.
As a consequence, we see that the self-adjoint elements of $L^\infty(\tau)$ act in a bounded manner on all of $L^2(\tau)$, and so on taking real and imaginary parts, we see that the same is true for the non-self-adjoint elements too. Thus we can associate to each $X \in L^\infty(\tau)$ a bounded linear transformation $\overline{X} \in B(L^2(\tau))$ on the Hilbert space $L^2(\tau)$.
Exercise 10 (Gelfand-Naimark theorem) Show that the map $X \mapsto \overline{X}$ is a $*$-isomorphism from $L^\infty(\tau)$ to a $*$-subalgebra of $B(L^2(\tau))$, and that one has the representation
$$\tau(X) = \langle \overline{X} e, e \rangle_{L^2(\tau)}$$
for any $X \in L^\infty(\tau)$, where $e := \iota(1)$ is the unit vector associated to the identity.
Remark 3 The Gelfand-Naimark theorem required the tracial hypothesis only to deal with the error term $E$ in the proof of Lemma 5. One can also establish this theorem without this hypothesis, by assuming instead that the non-commutative space is a $C^*$-algebra; this provides a continuous functional calculus, so that we can replace the polynomial $P$ in the proof of Lemma 5 by the continuous function $x \mapsto (\rho(X)^2 - x^2)^{1/2}$ and dispense with $E$ altogether. This formulation of the Gelfand-Naimark theorem is the one which is usually seen in the literature.
The Gelfand-Naimark theorem identifies $L^\infty(\tau)$ with a $*$-subalgebra of $B(L^2(\tau))$. The closure of this $*$-subalgebra in the weak operator topology is then a von Neumann algebra. As a consequence, we see that non-commutative probability spaces are closely related to von Neumann algebras (equipped with a tracial state $\tau$). However, we refrain from identifying the former completely with the latter, in order to allow ourselves the freedom to work with such spaces as $L^{\infty-}$, which is almost but not quite a von Neumann algebra. Instead, we have used the looser (and more algebraic) formulation in Definition 4.
— 2. Limits of non-commutative random variables —
One benefit of working in an abstract setting is that it becomes easier to take certain types of limits. For instance, it is intuitively obvious that the cyclic groups $\mathbb{Z}/N\mathbb{Z}$ are “converging” in some sense to the integer group $\mathbb{Z}$ as $N \to \infty$. This convergence can be formalised by selecting a distinguished generator $e$ of all groups involved ($1 \bmod N$ in the case of $\mathbb{Z}/N\mathbb{Z}$, and $1$ in the case of the integers $\mathbb{Z}$), and noting that the set of relations involving this generator in $\mathbb{Z}/N\mathbb{Z}$ (i.e. the relations $ne = 0$ when $n$ is divisible by $N$) converge in a pointwise sense to the set of relations involving this generator in $\mathbb{Z}$ (i.e. the empty set). Here, to see the convergence, we viewed a group abstractly via the relations between its generators, rather than via a concrete realisation of a group as (say) residue classes modulo $N$. (For more discussion of this notion of convergence for finitely generated groups, see this earlier blog post.)
We can similarly define convergence of random variables in non-commutative probability spaces as follows.
Definition 6 (Convergence) Let $(\mathcal{A}_n, \tau_n)$ be a sequence of non-commutative probability spaces, and let $(\mathcal{A}_\infty, \tau_\infty)$ be an additional non-commutative probability space. For each $n$, let $X_{n,1},\dots,X_{n,k}$ be a sequence of random variables in $\mathcal{A}_n$, and let $X_{\infty,1},\dots,X_{\infty,k}$ be a sequence of random variables in $\mathcal{A}_\infty$. We say that $X_{n,1},\dots,X_{n,k}$ converges in the sense of moments to $X_{\infty,1},\dots,X_{\infty,k}$ if we have
$$\tau_n( X_{n,i_1} \cdots X_{n,i_m} ) \to \tau_\infty( X_{\infty,i_1} \cdots X_{\infty,i_m} )$$
as $n \to \infty$ for any sequence $i_1,\dots,i_m \in \{1,\dots,k\}$. We say that $X_{n,1},\dots,X_{n,k}$ converge in the sense of $*$-moments to $X_{\infty,1},\dots,X_{\infty,k}$ if $X_{n,1},\dots,X_{n,k},X_{n,1}^*,\dots,X_{n,k}^*$ converges in the sense of moments to $X_{\infty,1},\dots,X_{\infty,k},X_{\infty,1}^*,\dots,X_{\infty,k}^*$.
If $X_1,\dots,X_k$ (viewed as a constant $k$-tuple in $n$) converges in the sense of moments (resp. $*$-moments) to $Y_1,\dots,Y_k$, we say that $X_1,\dots,X_k$ and $Y_1,\dots,Y_k$ have matching joint moments (resp. matching joint $*$-moments).
Example 2 If $X_n$ converges in the sense of moments to $X_\infty$ then we have for instance that
$$\tau_n( X_n^m ) \to \tau_\infty( X_\infty^m )$$
as $n \to \infty$ for each fixed $m$, while if $X_n$ converges in the stronger sense of $*$-moments then we obtain more limits, such as
$$\tau_n( X_n^* X_n^m ) \to \tau_\infty( X_\infty^* X_\infty^m ).$$
Note however that no uniformity in $m$ is assumed for this convergence; in particular, if $m$ is allowed to vary with $n$, there is now no guarantee that one still has convergence.
Remark 4 When the underlying random variables are self-adjoint, then there is no distinction between convergence in moments and convergence in $*$-moments. However, for non-self-adjoint variables, the latter type of convergence is far stronger, and the former type is usually too weak to be of much use, even in the commutative setting. For instance, let $X$ be a classical random variable drawn uniformly at random from the unit circle $\{ z \in \mathbb{C} : |z| = 1 \}$. Then the constant sequence $X_n := X$ has all the same moments as the zero random variable $0$, and thus converges in the sense of moments to zero, but does not converge in the $*$-moment sense to zero (since $\mathbf{E} X X^* = 1 \neq 0$).
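To see this failure concretely, here is a quick numerical sanity check (my own numpy sketch, not part of the original argument): the plain moments $\mathbf{E} X^m$ of a uniform random point on the unit circle all essentially vanish, matching the zero variable, but the $*$-moment $\mathbf{E} X X^*$ equals $1$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 200000)
X = np.exp(1j * theta)          # uniform random points on the unit circle

# plain moments E[X^m] essentially vanish for m >= 1 ...
for m in range(1, 4):
    print(m, abs(np.mean(X**m)))
# ... but the *-moment E[X X*] is exactly 1
print(abs(np.mean(X * np.conj(X))))
```

Here the sample size and seed are arbitrary; the empirical moments are only zero up to the usual $O(1/\sqrt{N})$ sampling error.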
It is also clear that if we require that $\mathcal{A}_\infty$ be generated by $X_{\infty,1},\dots,X_{\infty,k}$ in the $*$-algebraic sense (i.e. every element of $\mathcal{A}_\infty$ is a polynomial combination of the $X_{\infty,i}$ and their adjoints) then a limit in the sense of $*$-moments, if it exists, is unique up to matching joint $*$-moments.
For a sequence of a single, uniformly bounded, self-adjoint element, convergence in moments is equivalent to convergence in distribution:
Exercise 11 Let $X_n$ be a sequence of self-adjoint elements in non-commutative probability spaces $(\mathcal{A}_n, \tau_n)$ with $\rho(X_n)$ uniformly bounded, and let $X_\infty$ be another bounded self-adjoint element in a non-commutative probability space $(\mathcal{A}_\infty, \tau_\infty)$. Show that $X_n$ converges in moments to $X_\infty$ if and only if the spectral measure $\mu_{X_n}$ converges in the vague topology to $\mu_{X_\infty}$.
Thus, for instance, one can rephrase the Wigner semi-circular law (in the convergence in expectation formulation) as the assertion that a sequence $M_n$ of Wigner random matrices with (say) subgaussian entries of mean zero and variance one, when the normalised matrices $M_n/\sqrt{n}$ are viewed as elements of the non-commutative probability space of random matrices (with trace $X \mapsto \mathbf{E} \frac{1}{n} \operatorname{tr} X$), will converge to any bounded self-adjoint element $u$ of a non-commutative probability space with spectral measure given by the semi-circular distribution
$$\mu_{sc} := \frac{1}{2\pi} (4 - x^2)_+^{1/2}\, dx.$$
Such elements are known as semi-circular elements. Here are some easy examples of semi-circular elements:
- A classical real random variable $u$ drawn using the probability measure $\mu_{sc}$.
- The identity function $x \mapsto x$ in the Lebesgue space $L^\infty(\mu_{sc})$, endowed with the trace $\tau(f) := \int_{\mathbb{R}} f\, d\mu_{sc}$.
- The function $\theta \mapsto 2 \cos \theta$ in the Lebesgue space $L^\infty([0,\pi])$, endowed with the trace $\tau(f) := \frac{2}{\pi} \int_0^\pi f(\theta) \sin^2 \theta\, d\theta$.
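As a quick check on these examples, one can verify numerically that the even moments of $\mu_{sc}$ are the Catalan numbers $1, 1, 2, 5, 14, \dots$, while the odd moments vanish. The following numpy sketch (my own, not part of the original text) does this by a simple Riemann sum:

```python
import numpy as np
from math import comb

# Moments of the semi-circular distribution (2*pi)^{-1} (4 - x^2)^{1/2} dx
x = np.linspace(-2.0, 2.0, 200001)
dx = x[1] - x[0]
density = np.sqrt(np.clip(4.0 - x**2, 0.0, None)) / (2.0 * np.pi)

def moment(n):
    # n-th moment of mu_sc via a Riemann sum
    return float(np.sum(x**n * density) * dx)

def catalan(k):
    return comb(2 * k, k) // (k + 1)

for k in range(5):
    print(2 * k, round(moment(2 * k), 4), catalan(k))   # 1, 1, 2, 5, 14
```

The grid size is an arbitrary choice; since the density vanishes at the endpoints, the Riemann sum converges quickly.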
Here is a more interesting example of a semi-circular element:
Exercise 12 Let $(\mathcal{A}, \tau)$ be the non-commutative probability space of bounded linear operators on $\ell^2(\mathbb{N})$ with trace $\tau(X) := \langle X e_0, e_0 \rangle$, where $(e_n)_{n \geq 0}$ is the standard basis of $\ell^2(\mathbb{N})$. Let $U: e_n \mapsto e_{n+1}$ be the right shift on $\ell^2(\mathbb{N})$. Show that $U + U^*$ is a semicircular operator. (Hint: one way to proceed here is to use Fourier analysis to identify $\ell^2(\mathbb{N})$ with the space of odd functions on $\mathbb{R}/2\pi\mathbb{Z}$, with $e_n$ identified with (a multiple of) $\sin((n+1)\theta)$; show that $U + U^*$ is then the operation of multiplication by $2\cos\theta$.) One can also interpret $U$ as a creation operator (compare with the Fock space construction in the proof of Lemma 8 below); in particular, $\tau((U+U^*)^k)$ vanishes when $k$ is odd. Note that this provides a (very) slightly different proof of the semi-circular law from that given from the moment method in Notes 4.
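The claim in this exercise can also be checked numerically. Truncating the shift $U$ to a finite matrix is exact for the low moments computed here, since a power $(U+U^*)^{2k}$ with $2k \leq 8$ never moves $e_0$ past position $8$; the moments $\langle (U+U^*)^{2k} e_0, e_0 \rangle$ then count Dyck paths, i.e. Catalan numbers. A small numpy sketch of mine:

```python
import numpy as np
from math import comb

# Truncate the right shift U on l^2(N) to a 32 x 32 matrix.
N = 32
U = np.zeros((N, N))
U[np.arange(1, N), np.arange(N - 1)] = 1.0   # U e_n = e_{n+1}
A = U + U.T                                  # U + U*

e0 = np.zeros(N)
e0[0] = 1.0
for k in range(5):
    # tau((U+U*)^{2k}) = <(U+U*)^{2k} e_0, e_0>, which counts Dyck paths
    m = e0 @ np.linalg.matrix_power(A, 2 * k) @ e0
    print(2 * k, int(round(m)), comb(2 * k, k) // (k + 1))
```

The truncation level 32 is an arbitrary safe margin for moments up to order 8.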
Because we are working in such an abstract setting with so few axioms, limits exist in abundance:
Exercise 13 For each $n$, let $X_{n,1},\dots,X_{n,k}$ be bounded self-adjoint elements of a tracial non-commutative probability space $(\mathcal{A}_n, \tau_n)$. Suppose that the spectral radii $\rho(X_{n,1}),\dots,\rho(X_{n,k})$ are uniformly bounded in $n$. Show that there exists a subsequence $n_j$ and bounded self-adjoint elements $X_1,\dots,X_k$ of a tracial non-commutative probability space $(\mathcal{A}, \tau)$ such that $X_{n_j,1},\dots,X_{n_j,k}$ converge in moments to $X_1,\dots,X_k$ as $j \to \infty$. (Hint: use the Bolzano-Weierstrass theorem and the Arzelà-Ascoli diagonalisation trick to obtain a subsequence in which each of the joint moments of $X_{n_j,1},\dots,X_{n_j,k}$ converge as $j \to \infty$. Use these moments to build a noncommutative probability space.)
— 3. Free independence —
We now come to the fundamental concept in free probability theory, namely that of free independence.
Definition 7 (Free independence) A collection $X_1,\dots,X_k$ of random variables in a non-commutative probability space $(\mathcal{A},\tau)$ is freely independent (or free for short) if one has
$$\tau\Big( \big( P_1(X_{i_1}) - \tau(P_1(X_{i_1})) \big) \cdots \big( P_m(X_{i_m}) - \tau(P_m(X_{i_m})) \big) \Big) = 0$$
whenever $P_1,\dots,P_m$ are polynomials and $i_1,\dots,i_m \in \{1,\dots,k\}$ are indices with no two adjacent $i_j$ equal.
A sequence $X_{n,1},\dots,X_{n,k}$ of random variables in non-commutative probability spaces $(\mathcal{A}_n,\tau_n)$ is asymptotically freely independent (or asymptotically free for short) if one has
$$\tau_n\Big( \big( P_1(X_{n,i_1}) - \tau_n(P_1(X_{n,i_1})) \big) \cdots \big( P_m(X_{n,i_m}) - \tau_n(P_m(X_{n,i_m})) \big) \Big) \to 0$$
as $n \to \infty$ whenever $P_1,\dots,P_m$ are polynomials and $i_1,\dots,i_m \in \{1,\dots,k\}$ are indices with no two adjacent $i_j$ equal.
Remark 5 The above definition describes freeness of collections of random variables $X_1,\dots,X_k$. One can more generally define freeness of collections of subalgebras of $\mathcal{A}$, which in some sense is the more natural concept from a category-theoretic perspective, but we will not need this concept here. (See e.g. this survey of Biane for more discussion.)
Thus, for instance, if $X, Y$ are freely independent, then $\tau( P(X) Q(Y) R(X) S(Y) )$ will vanish for any polynomials $P, Q, R, S$ for which $\tau(P(X)), \tau(Q(Y)), \tau(R(X)), \tau(S(Y))$ all vanish. This is in contrast to classical independence of classical (commutative) random variables, which would only assert that $\mathbf{E}( P(X) Q(Y) ) = 0$ whenever both $\mathbf{E} P(X), \mathbf{E} Q(Y)$ vanish.
To contrast free independence with classical independence, suppose that $\tau(X) = \tau(Y) = 0$. If $X, Y$ were freely independent, then $\tau(XYXY)$ would vanish. If instead $X, Y$ were commuting and classically independent, then we would instead have $\tau(XYXY) = \tau(X^2 Y^2) = \tau(X^2) \tau(Y^2)$, which would almost certainly be non-zero.
For a trivial example of free independence, $X$ and $Y$ automatically are freely independent if at least one of $X, Y$ is constant (i.e. a multiple of the identity $1$). In the commutative setting, this is basically the only way one can have free independence:
Exercise 14 Suppose that $X, Y$ are freely independent elements of a faithful non-commutative probability space which also commute. Show that at least one of $X, Y$ is equal to a scalar. (Hint: First normalise $X, Y$ to have trace zero, and consider $\tau(XYXY)$.)
A less trivial example of free independence comes from the free group, which provides a clue as to the original motivation of this concept:
Exercise 15 Let $F_2$ be the free group on two generators $g_1, g_2$. Let $(\mathcal{A}, \tau)$ be the non-commutative probability space of bounded linear operators on the Hilbert space $\ell^2(F_2)$, with trace $\tau(X) := \langle X \delta_e, \delta_e \rangle$, where $e$ is the identity element of $F_2$ and $\delta_e$ is the Kronecker delta function at $e$. Let $U_1, U_2 \in \mathcal{A}$ be the shift operators
$$U_1 f(g) := f(g_1^{-1} g); \qquad U_2 f(g) := f(g_2^{-1} g)$$
for $f \in \ell^2(F_2)$ and $g \in F_2$. Show that $U_1, U_2$ are freely independent.
For classically independent commuting random variables $X, Y$, knowledge of the individual moments $\tau(X^k)$, $\tau(Y^k)$ gave complete information on the joint moments: $\tau(X^k Y^l) = \tau(X^k) \tau(Y^l)$. The same fact is true for freely independent random variables, though the situation is more complicated. We begin with a simple case: computing $\tau(XY)$ in terms of the moments of $X, Y$. From free independence we have
$$\tau\big( (X - \tau(X)) (Y - \tau(Y)) \big) = 0;$$
expanding this out, we conclude that
$$\tau(XY) = \tau(X) \tau(Y). \qquad (12)$$
So far, this is just as with the classically independent case. Next, we consider a slightly more complicated moment, $\tau(XYX)$. If we split $Y = \tau(Y) + (Y - \tau(Y))$, we can write this as
$$\tau(XYX) = \tau(Y) \tau(X^2) + \tau\big( X (Y - \tau(Y)) X \big).$$
In the classically independent case, we could conclude that the latter term would vanish. We cannot immediately say that in the freely independent case, because only one of the factors has mean zero. But from (12) we know that $\tau\big( X (Y - \tau(Y)) \big) = \tau\big( (Y - \tau(Y)) X \big) = 0$. Because of this, we can expand
$$\tau\big( X (Y - \tau(Y)) X \big) = \tau\big( (X - \tau(X)) (Y - \tau(Y)) (X - \tau(X)) \big),$$
which vanishes by free independence. Thus
$$\tau(XYX) = \tau(X^2) \tau(Y). \qquad (13)$$
So again we have not yet deviated from the classically independent case. But now let us look at $\tau(XYXY)$. We split the second $X$ into $\tau(X)$ and $X - \tau(X)$. Using (12) to control the former term, we have
$$\tau(XYXY) = \tau(X)^2 \tau(Y^2) + \tau\big( X Y (X - \tau(X)) Y \big).$$
From (13) we have $\tau\big( Y (X - \tau(X)) Y \big) = 0$, so we have
$$\tau(XYXY) = \tau(X)^2 \tau(Y^2) + \tau\big( (X - \tau(X)) Y (X - \tau(X)) Y \big).$$
Now we split each $Y$ into $\tau(Y)$ and $Y - \tau(Y)$. Free independence eliminates all terms except
$$\tau(XYXY) = \tau(X)^2 \tau(Y^2) + \tau(Y)^2 \tau\big( (X - \tau(X)) (X - \tau(X)) \big),$$
which simplifies to
$$\tau(XYXY) = \tau(X)^2 \tau(Y^2) + \tau(X^2) \tau(Y)^2 - \tau(X)^2 \tau(Y)^2,$$
which differs from the classical independence prediction of $\tau(X^2) \tau(Y^2)$.
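This prediction can be tested against actual random matrices, in the spirit of Proposition 9 below. Taking $X$ and $Y$ to be independent shifted Wigner matrices with $\tau(X) = \tau(Y) = 1$ and $\tau(X^2) = \tau(Y^2) = 2$, the formula above gives the free prediction $1 \cdot 2 + 2 \cdot 1 - 1 \cdot 1 = 3$, whereas classical independence of commuting variables would predict $\tau(X^2)\tau(Y^2) = 4$. A numerical sketch of mine (the GOE-type matrices, dimension and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 500, 8

def wigner(n):
    # GOE-type Wigner matrix, normalised so that tau(W^2) -> 1
    G = rng.normal(size=(n, n))
    return (G + G.T) / np.sqrt(2 * n)

vals = []
for _ in range(trials):
    X = wigner(n) + np.eye(n)   # tau(X) -> 1, tau(X^2) -> 2
    Y = wigner(n) + np.eye(n)
    vals.append(np.trace(X @ Y @ X @ Y) / n)
print(np.mean(vals))            # close to the free prediction 3, not 4
```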
This process can be continued:
Exercise 16 Let $X_1,\dots,X_k$ be freely independent. Show that any joint moment of $X_1,\dots,X_k$ can be expressed as a polynomial combination of the individual moments $\tau(X_i^j)$ of the $X_i$. (Hint: induct on the complexity of the moment.)
The product measure construction allows us to generate classically independent random variables at will (after extending the underlying sample space): see Exercise 18 of Notes 0. There is an analogous construction, called the amalgamated free product, that allows one to generate families of freely independent random variables, each of which has a specified distribution. Let us give an illustrative special case of this construction:
Lemma 8 (Free products) For each $1 \leq i \leq k$, let $(\mathcal{A}_i, \tau_i)$ be a non-commutative probability space. Then there exists a non-commutative probability space $(\mathcal{A}, \tau)$ which contains embedded copies of each of the $(\mathcal{A}_i, \tau_i)$, such that whenever $X_i \in \mathcal{A}_i$ for $i = 1,\dots,k$, then $X_1,\dots,X_k$ are freely independent.
Proof: (Sketch) Recall that each $\mathcal{A}_i$ can be given an inner product $\langle X, Y \rangle := \tau_i(X^* Y)$. One can then orthogonally decompose each space $\mathcal{A}_i$ into the constants $\mathbb{C}$, plus the trace zero elements $\mathcal{A}_i^0 := \{ X \in \mathcal{A}_i : \tau_i(X) = 0 \}$. We then form the Fock space $\mathcal{F}$, defined as the span of the vacuum state $\emptyset$ together with the tensor products
$$\mathcal{A}_{i_1}^0 \otimes \cdots \otimes \mathcal{A}_{i_m}^0,$$
where $m \geq 1$, and $i_1,\dots,i_m \in \{1,\dots,k\}$ are such that no adjacent pair of the $i_j$ are equal. Each element $X \in \mathcal{A}_i$ then acts on this Fock space by defining
$$X \big( Y_{i_1} \otimes \cdots \otimes Y_{i_m} \big) := \tau_i(X)\, Y_{i_1} \otimes \cdots \otimes Y_{i_m} + (X - \tau_i(X)) \otimes Y_{i_1} \otimes \cdots \otimes Y_{i_m}$$
when $i \neq i_1$, and
$$X \big( Y_{i_1} \otimes \cdots \otimes Y_{i_m} \big) := \tau_i(X Y_{i_1})\, Y_{i_2} \otimes \cdots \otimes Y_{i_m} + (X Y_{i_1} - \tau_i(X Y_{i_1})) \otimes Y_{i_2} \otimes \cdots \otimes Y_{i_m}$$
when $i = i_1$. One can thus map each $\mathcal{A}_i$ into the space $\mathcal{A} := \mathrm{Hom}(\mathcal{F}, \mathcal{F})$ of linear maps from $\mathcal{F}$ to itself. The latter can be given the structure of a non-commutative probability space by defining the trace of an element $Z$ by the formula $\tau(Z) := \langle Z \emptyset, \emptyset \rangle$, where $\emptyset$ is the vacuum state of $\mathcal{F}$, being the unit of the empty tensor product. One can verify (Exercise!) that each $\mathcal{A}_i$ embeds into $\mathcal{A}$ and that elements from different $\mathcal{A}_i$ are freely independent.
Finally, we illustrate the fundamental connection between free probability and random matrices observed by Voiculescu, namely that (classically) independent families of random matrices are asymptotically free. The intuition here is that while a large random matrix $M$ will certainly correlate with itself (so that, for instance, $\operatorname{tr} M M^*$ will be large), once one interposes an independent random matrix $N$ of trace zero, the correlation is largely destroyed (thus, for instance, $\operatorname{tr} M N M^*$ will usually be quite small).
We give a typical instance of this phenomenon here:
Proposition 9 (Asymptotic freeness of Wigner matrices) Let $M_{n,1},\dots,M_{n,k}$ be a collection of independent $n \times n$ Wigner matrices, where the coefficients all have uniformly bounded $m$-th moments for each $m$. Then the random variables $\frac{1}{\sqrt{n}} M_{n,1},\dots,\frac{1}{\sqrt{n}} M_{n,k}$ are asymptotically free.
Proof: (Sketch) Let us abbreviate $\frac{1}{\sqrt{n}} M_{n,j}$ as $X_j$ (suppressing the dependence on $n$). It suffices to show that the traces
$$\tau\big( (X_{i_1}^{a_1} - \tau(X_{i_1}^{a_1})) \cdots (X_{i_m}^{a_m} - \tau(X_{i_m}^{a_m})) \big)$$
are of size $o(1)$ for each fixed choice of natural numbers $a_1,\dots,a_m$, where no two adjacent $i_j$ are equal.
Recall from Notes 3 that the moment $\tau(X_j^a)$ is (up to errors of $o(1)$) equal to a normalised count of paths of length $a$ in which each edge is traversed exactly twice, with the edges forming a tree. After normalisation, this count is equal to zero when $a$ is odd, and equal to the Catalan number $C_{a/2}$ when $a$ is even.
One can perform a similar computation to compute the joint moment $\tau\big( X_{i_1}^{a_1} \cdots X_{i_m}^{a_m} \big)$. Up to errors of $o(1)$, this is a normalised count of coloured paths of length $a_1 + \cdots + a_m$, where the first $a_1$ edges are coloured with colour $i_1$, the next $a_2$ edges with colour $i_2$, etc. Furthermore, each edge is traversed exactly twice (with the two traversals of each edge being assigned the same colour), and the edges form a tree. As a consequence, there must exist a $j$ for which the block of $a_j$ edges of colour $i_j$ forms its own sub-tree, which contributes a factor of zero or $C_{a_j/2}$ to the final trace. Because of this, when one instead computes the normalised expression with each factor $X_{i_j}^{a_j}$ replaced by its centred version $X_{i_j}^{a_j} - \tau(X_{i_j}^{a_j})$, all contributions that are not $o(1)$ cancel themselves out, and the claim follows.
Exercise 18 Expand the above sketch into a full proof of Proposition 9.
Remark 6 This is by no means the only way in which random matrices can become asymptotically free. For instance, if instead one considers random matrices of the form $A_i := U_i^* B_i U_i$, where the $B_i$ are deterministic Hermitian matrices with uniformly bounded eigenvalues, and the $U_i$ are iid unitary matrices drawn using Haar measure on the unitary group $U(n)$, one can also show that the $A_i$ are asymptotically free; again, see the paper of Voiculescu for details.
— 4. Free convolution —
When one is summing two classically independent (real-valued) random variables $X$ and $Y$, the distribution $\mu_{X+Y}$ of the sum $X + Y$ is the convolution $\mu_X * \mu_Y$ of the distributions $\mu_X$ and $\mu_Y$. This convolution can be computed by means of the characteristic function
$$F_X(t) := \mathbf{E} e^{itX} = \int_{\mathbb{R}} e^{itx}\, d\mu_X(x)$$
by means of the simple formula
$$F_{X+Y}(t) = F_X(t) F_Y(t).$$
As we saw in Notes 2, this can be used in particular to establish a short proof of the central limit theorem.
There is an analogous theory when summing two freely independent (self-adjoint) non-commutative random variables $X$ and $Y$; the distribution $\mu_{X+Y}$ turns out to be a certain combination $\mu_X \boxplus \mu_Y$ of $\mu_X$ and $\mu_Y$, known as the free convolution of $\mu_X$ and $\mu_Y$. To compute this free convolution, one does not use the characteristic function; instead, the correct tool is the Stieltjes transform
$$s_X(z) := \tau\big( (X - z)^{-1} \big) = \int_{\mathbb{R}} \frac{d\mu_X(x)}{x - z}, \qquad (8)$$
which has already been discussed earlier.
Here’s how to use this transform to compute free convolutions. If one wishes, one can assume that $X$ is bounded so that all series involved converge for $z$ large enough, though actually the entire argument here can be performed at a purely algebraic level, using formal power series, and so the boundedness hypothesis here is not actually necessary.
The trick (which we already saw in Notes 4) is not to view $s = s_X(z)$ as a function of $z$, but rather to view $z = z_X(s)$ as a function of $s$. Given that one asymptotically has $s \approx -1/z$ for large $z$, we expect to be able to perform this inversion for $z$ large and $s$ close to zero; and in any event one can easily invert (8) on the level of formal power series.
With this inversion, we have
$$\tau\big( (X - z_X(s))^{-1} \big) = s, \qquad (15)$$
and thus
$$(X - z_X(s))^{-1} = s (1 + E_X)$$
for some $E_X$ of trace zero. Now we do some (formal) algebraic sleight of hand. We rearrange the above identity as
$$X = z_X(s) + \frac{1}{s} (1 + E_X)^{-1}.$$
Similarly we have
$$Y = z_Y(s) + \frac{1}{s} (1 + E_Y)^{-1},$$
and hence
$$X + Y = z_X(s) + z_Y(s) + \frac{1}{s} \big[ (1 + E_X)^{-1} + (1 + E_Y)^{-1} \big].$$
We can combine the second two terms via the identity
$$(1 + E_X)^{-1} + (1 + E_Y)^{-1} = (1 + E_X)^{-1} \big( 2 + E_X + E_Y \big) (1 + E_Y)^{-1}.$$
We can rearrange this a little bit as
$$X + Y - z_X(s) - z_Y(s) - \frac{1}{s} = \frac{1}{s} (1 + E_X)^{-1} \big( 1 - E_X E_Y \big) (1 + E_Y)^{-1},$$
and hence
$$\Big( X + Y - z_X(s) - z_Y(s) - \frac{1}{s} \Big)^{-1} = s\, (1 + E_Y) \big( 1 - E_X E_Y \big)^{-1} (1 + E_X).$$
We expand out the middle factor as a (formal) Neumann series:
$$\big( 1 - E_X E_Y \big)^{-1} = 1 + E_X E_Y + E_X E_Y E_X E_Y + \cdots.$$
This expands out to equal plus a whole string of alternating products of and .
Now we use the hypothesis that $X$ and $Y$ are free. This easily implies that $E_X$ and $E_Y$ are also free. But they also have trace zero, thus by the definition of free independence, all alternating products of $E_X$ and $E_Y$ have zero trace. (In the case when there are an odd number of terms in the product, one can obtain this zero trace property using the cyclic property of trace and induction.) We conclude that
$$\tau\Big( \big( X + Y - z_X(s) - z_Y(s) - \tfrac{1}{s} \big)^{-1} \Big) = s.$$
Comparing this against (15) for $X + Y$ we conclude that
$$z_{X+Y}(s) = z_X(s) + z_Y(s) + \frac{1}{s}.$$
Thus, if we define the $R$-transform of $X$ to be (formally) given by the formula
$$R_X(s) := z_X(-s) - \frac{1}{s},$$
then we have the addition formula
$$R_{X+Y} = R_X + R_Y.$$
Since one can recover the Stieltjes transform $s_X$ (and hence the $R$-transform $R_X$) from the spectral measure $\mu_X$ and vice versa, this formula (in principle, at least) lets one compute the spectral measure $\mu_{X+Y}$ of $X + Y$ from the spectral measures $\mu_X, \mu_Y$, thus allowing one to define free convolution $\mu_X \boxplus \mu_Y := \mu_{X+Y}$.
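As a worked illustration of free convolution: a standard $R$-transform computation shows that the free convolution of two symmetric Bernoulli ($\pm 1$) laws is the arcsine law on $[-2, 2]$, whose moments are the central binomial coefficients, so $m_2 = 2$ and $m_4 = 6$. One can test this numerically by realising two asymptotically free $\pm 1$ matrices via Haar unitary conjugation, as in Remark 6; the following numpy sketch (my own, with arbitrary dimension and seed) does this:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

def haar_unitary(n):
    # Haar unitary via QR of a complex Ginibre matrix, with phase correction
    Z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    Q, R = np.linalg.qr(Z)
    d = np.diag(R)
    return Q * (d / np.abs(d))

A = np.diag(np.repeat([1.0, -1.0], n // 2))   # a +-1 symmetric Bernoulli spectrum
U = haar_unitary(n)
M = A + U @ A @ U.conj().T                    # sum of two asymptotically free copies
m2 = np.trace(M @ M).real / n
m4 = np.trace(np.linalg.matrix_power(M, 4)).real / n
print(round(m2, 2), round(m4, 2))             # arcsine law on [-2,2]: m2 = 2, m4 = 6
```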
For comparison, we have the (formal) addition formula
$$\log F_{X+Y} = \log F_X + \log F_Y$$
for classically independent real random variables $X, Y$. The following exercises carry this analogy a bit further.
Exercise 19 Let $X$ be a classical real random variable. Working formally, show that
$$\log F_X(t) = \sum_{k=1}^{\infty} \frac{\kappa_k}{k!} (it)^k,$$
where the cumulants $\kappa_k$ can be reconstructed from the moments $m_n := \mathbf{E} X^n$ by the recursive formula
$$m_n = \sum_{j=1}^{n} \binom{n-1}{j-1} \kappa_j m_{n-j}$$
for $n \geq 1$. (Hint: start with the identity $F_X'(t) = (\log F_X)'(t)\, F_X(t)$.) Thus for instance $\kappa_1$ is the expectation, $\kappa_2$ is the variance, and the third cumulant is given by the formula
$$\kappa_3 = \mathbf{E} X^3 - 3 (\mathbf{E} X^2)(\mathbf{E} X) + 2 (\mathbf{E} X)^3.$$
Establish the additional formula
$$\mathbf{E} X^n = \sum_{\pi} \prod_{V \in \pi} \kappa_{|V|},$$
where $\pi$ ranges over all partitions of $\{1,\dots,n\}$ into non-empty cells $V$.
Exercise 20 Let $X$ be a non-commutative random variable. Working formally, show that
$$R_X(s) = \sum_{k=1}^{\infty} C_k s^{k-1},$$
where the free cumulants $C_k$ can be reconstructed from the moments $m_n := \tau(X^n)$ by the recursive formula
$$m_n = \sum_{j=1}^{n} C_j \sum_{\substack{n_1,\dots,n_j \geq 0 \\ n_1 + \cdots + n_j = n - j}} m_{n_1} \cdots m_{n_j}$$
for $n \geq 1$. (Hint: start with the identity $z = R_X(-s_X(z)) - \frac{1}{s_X(z)}$.) Thus for instance $C_1$ is the expectation, $C_2$ is the variance, and the third free cumulant is given by the formula
$$C_3 = \tau(X^3) - 3 \tau(X^2) \tau(X) + 2 \tau(X)^3.$$
Establish the additional formula
$$\tau(X^n) = \sum_{\pi} \prod_{V \in \pi} C_{|V|},$$
where $\pi$ ranges over all partitions of $\{1,\dots,n\}$ into non-empty cells $V$ which are non-crossing, which means that whenever $a < b < c < d$ lie in $\{1,\dots,n\}$, it cannot be the case that $a, c$ lie in one cell $V$ while $b, d$ lie in a distinct cell $V'$.
Remark 7 These computations illustrate a more general principle in free probability, in that the combinatorics of free probability tend to be the “non-crossing” analogue of the combinatorics of classical probability; compare with Remark 7 of Notes 3.
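One way to see this principle concretely is to count: all partitions of $\{1,\dots,n\}$ are counted by the Bell numbers $1, 2, 5, 15, 52, \dots$, while the non-crossing ones are counted by the Catalan numbers $1, 2, 5, 14, 42, \dots$; the two counts first diverge at $n = 4$, where the single partition $\{1,3\}, \{2,4\}$ crosses. A brute-force sketch of mine:

```python
from math import comb

def set_partitions(elems):
    # enumerate all set partitions of a list, recursively
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):
            yield part[:i] + [part[i] + [first]] + part[i + 1:]
        yield part + [[first]]

def is_noncrossing(part):
    # cells V, W cross if some a < b < c < d has a, c in V and b, d in W
    def crosses(V, W):
        return any(a < b < c < d for a in V for c in V for b in W for d in W)
    return not any(crosses(V, W) or crosses(W, V)
                   for i, V in enumerate(part) for W in part[i + 1:])

for n in range(1, 6):
    parts = list(set_partitions(list(range(1, n + 1))))
    nc = sum(map(is_noncrossing, parts))
    print(n, len(parts), nc, comb(2 * n, n) // (n + 1))   # Bell vs Catalan
```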
Remark 8 The $R$-transform allows for efficient computation of the spectral behaviour of sums $X + Y$ of free random variables. There is an analogous transform, the $S$-transform, for computing the spectral behaviour (or more precisely, the joint moments) of products $XY$ of free random variables; see for instance these notes of Speicher.
The $R$-transform clarifies the privileged role of the semi-circular elements:
Exercise 21 Let $u$ be a semi-circular element. Show that $R_{\sqrt{t} u}(s) = ts$ for any $t > 0$. In particular, the free convolution of the laws of $\sqrt{t} u$ and $\sqrt{t'} u$ is the law of $\sqrt{t + t'} u$.
Exercise 22 From the above exercise, we see that the effect of adding a free copy of $\sqrt{t} u$ to a non-commutative random variable $X$ is to shift the $R$-transform $R_X(s)$ by $ts$. Explain how this is compatible with the Dyson Brownian motion computations in Notes 4.
It also gives a free analogue of the central limit theorem:
Exercise 23 (Free central limit theorem) Let $X$ be a self-adjoint random variable with mean zero and variance one (i.e. $\tau(X) = 0$ and $\tau(X^2) = 1$), and let $X_1, X_2, X_3, \dots$ be free copies of $X$. Let $S_n := (X_1 + \cdots + X_n)/\sqrt{n}$. Show that the coefficients of the formal power series $R_{S_n}(s)$ converge to those of the identity function $s \mapsto s$. Conclude that $S_n$ converges in the sense of moments to a semi-circular element $u$.
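This exercise can be carried out as an exact computation with the moment-cumulant recursion of Exercise 20. Starting from a $\pm 1$ Bernoulli variable (moments $1, 0, 1, 0, \dots$), the free cumulants of the normalised sum $S_k$ of $k$ free copies are additive and scale as $C_j(cX) = c^j C_j(X)$, hence $C_j(S_k) = k^{1 - j/2} C_j(X)$; the resulting moments approach the Catalan numbers $1, 2, 5$. The following sketch of mine implements the recursion in exact rational arithmetic:

```python
from fractions import Fraction

def comp_sum(m, total, parts):
    # sum over compositions i_1 + ... + i_parts = total of m[i_1]...m[i_parts]
    if parts == 0:
        return Fraction(int(total == 0))
    return sum(m[i] * comp_sum(m, total - i, parts - 1) for i in range(total + 1))

def free_cumulants(m, N):
    # invert the recursion m_n = sum_{s=1}^n kappa_s comp_sum(m, n-s, s)
    kappa = [Fraction(0)] * (N + 1)
    for n in range(1, N + 1):
        kappa[n] = m[n] - sum(kappa[s] * comp_sum(m, n - s, s) for s in range(1, n))
    return kappa

def moments(kappa, N):
    m = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        m[n] = sum(kappa[s] * comp_sum(m, n - s, s) for s in range(1, n + 1))
    return m

N = 6
bern = [Fraction(1 - n % 2) for n in range(N + 1)]   # Bernoulli moments 1,0,1,0,...
kap = free_cumulants(bern, N)                        # C_2 = 1, C_4 = -1, C_6 = 2

for k in [1, 10, 1000]:
    # free cumulants of the normalised sum: C_j(S_k) = k^{1-j/2} C_j
    kap_S = [Fraction(0)] * (N + 1)
    for j in range(2, N + 1, 2):
        kap_S[j] = kap[j] / k ** (j // 2 - 1)
    m = moments(kap_S, N)
    print(k, float(m[2]), float(m[4]), float(m[6]))  # -> 1, 2, 5 as k grows
```

At $k = 1$ the recursion reproduces the Bernoulli moments exactly, which serves as a consistency check.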
The free central limit theorem implies the Wigner semi-circular law, at least for the GUE ensemble and in the sense of expectation. Indeed, if $M_n$ is an $n \times n$ GUE matrix, then the matrices $M_n/\sqrt{n}$ are a.s. uniformly bounded (by the Bai-Yin theorem, Notes 3), and so (after passing to a subsequence, if necessary), they converge in the sense of moments to some limit $u$.
On the other hand, if $M_n'$ is an independent copy of $M_n$, then $(M_n + M_n')/\sqrt{2}$ has the same distribution as $M_n$, from the properties of gaussians. Taking limits, we conclude that $(u + u')/\sqrt{2}$ has the same moments as $u$, where (by Proposition 9) $u'$ is a free copy of $u$. Comparing this with the free central limit theorem (or just the additivity property of $R$-transforms) we see that $u$ must have the semi-circular distribution. Thus the semi-circular distribution is the only possible limit point of the $M_n/\sqrt{n}$, and the Wigner semi-circular law then holds (in expectation, and for GUE). Using concentration of measure, we can upgrade the convergence in expectation to a.s. convergence; using the Lindeberg replacement trick one can replace GUE with arbitrary Wigner matrices with (say) bounded coefficients; and then by using the truncation trick one can remove the boundedness hypothesis. (These latter few steps were also discussed in Notes 4.)