Define a partition of ${1}$ to be a finite or infinite multiset ${\Sigma}$ of real numbers in the interval ${I := (0,1]}$ (that is, an unordered set of real numbers in ${I}$, possibly with multiplicity) whose total sum is ${1}$: ${\sum_{t \in \Sigma}t = 1}$. For instance, ${\{1/2,1/4,1/8,1/16,\ldots\}}$ is a partition of ${1}$. Such partitions arise naturally when trying to decompose a large object into smaller ones, for instance:

1. (Prime factorisation) Given a natural number ${n}$, one can decompose it into prime factors ${n = p_1 \ldots p_k}$ (counting multiplicity), and then the multiset

$\displaystyle \Sigma_{PF}(n) := \{ \frac{\log p_1}{\log n}, \ldots,\frac{\log p_k}{\log n} \}$

is a partition of ${1}$.

2. (Cycle decomposition) Given a permutation ${\sigma \in S_n}$ on ${n}$ labels ${\{1,\ldots,n\}}$, one can decompose ${\sigma}$ into cycles ${C_1,\ldots,C_k}$, and then the multiset

$\displaystyle \Sigma_{CD}(\sigma) := \{ \frac{|C_1|}{n}, \ldots, \frac{|C_k|}{n} \}$

is a partition of ${1}$.

3. (Normalisation) Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in \Gamma}x}$ is finite and non-zero, the multiset

$\displaystyle \Sigma_N( \Gamma) := \frac{1}{S} \cdot \Gamma = \{ \frac{x}{S}: x \in \Gamma \}$

is a partition of ${1}$.
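For concreteness, here is a short Python sketch (an illustration of my own, not part of the original discussion; the helper names are arbitrary) computing all three partitions:

```python
import math
from fractions import Fraction

def prime_factor_partition(n):
    """Sigma_PF(n): the multiset {log p / log n} over the prime factors
    of n >= 2, counted with multiplicity."""
    factors, m, d = [], n, 2
    while d * d <= m:          # trial division
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return [math.log(p) / math.log(n) for p in factors]

def cycle_partition(perm):
    """Sigma_CD(sigma): the multiset {|C|/n} over the cycles of a
    permutation, given as a list perm with perm[i] = sigma(i)."""
    n, seen, parts = len(perm), [False] * len(perm), []
    for i in range(n):
        if not seen[i]:
            length, j = 0, i
            while not seen[j]:  # walk the cycle containing i
                seen[j] = True
                j = perm[j]
                length += 1
            parts.append(Fraction(length, n))
    return parts

def normalise(gamma):
    """Sigma_N(Gamma): divide each element of the multiset by the total sum."""
    s = sum(gamma)
    return [x / s for x in gamma]
```

In each case the output is a multiset summing to ${1}$, as it should be.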

In the spirit of the universality phenomenon, one can ask what a “typical” partition should look like; thus one seeks a natural probability distribution on the space of all partitions, analogous to (say) the gaussian distributions on the real line, or GUE distributions on point processes on the line, and so forth. It turns out that there is one natural such distribution which is related to all three examples above, known as the Poisson-Dirichlet distribution. To describe this distribution, we first have to deal with the problem that it is not immediately obvious how to cleanly parameterise the space of partitions, given that the cardinality of a partition can be finite or infinite, that multiplicity is allowed, and that we would like to identify two partitions that are permutations of each other.

One way to proceed is to view a random partition ${\Sigma}$ as a type of point process on the interval ${I}$, with the constraint that ${\sum_{x \in \Sigma} x = 1}$, in which case one can study statistics such as the counting functions

$\displaystyle N_{[a,b]} := |\Sigma \cap [a,b]| = \sum_{x \in\Sigma} 1_{[a,b]}(x)$

(where the cardinality here counts multiplicity). This can certainly be done, although in the case of the Poisson-Dirichlet process, the formulae for the joint distribution of such counting functions are moderately complicated. Another way to proceed is to order the elements of ${\Sigma}$ in decreasing order

$\displaystyle t_1 \geq t_2 \geq t_3 \geq \ldots \geq 0,$

with the convention that one pads the sequence ${t_n}$ by an infinite number of zeroes if ${\Sigma}$ is finite; this identifies the space of partitions with an infinite dimensional simplex

$\displaystyle \{ (t_1,t_2,\ldots) \in [0,1]^{\bf N}: t_1 \geq t_2 \geq \ldots; \sum_{n=1}^\infty t_n = 1 \}.$

However, it turns out that the process of ordering the elements is not “smooth” (basically because functions such as ${(x,y) \mapsto \max(x,y)}$ and ${(x,y) \mapsto \min(x,y)}$ are not smooth), and the formulae for the joint distribution in the case of the Poisson-Dirichlet process are again complicated.

It turns out that there is a better (or at least “smoother”) way to enumerate the elements ${u_1,(1-u_1)u_2,(1-u_1)(1-u_2)u_3,\ldots}$ of a partition ${\Sigma}$ than the ordered method, although it is random rather than deterministic. This procedure (which I learned from this paper of Donnelly and Grimmett) works as follows.

1. Given a partition ${\Sigma}$, let ${u_1}$ be an element of ${\Sigma}$ chosen at random, with each element ${t\in \Sigma}$ having a probability ${t}$ of being chosen as ${u_1}$ (so if ${t \in \Sigma}$ occurs with multiplicity ${m}$, the net probability that ${t}$ is chosen as ${u_1}$ is actually ${mt}$). Note that this is well-defined since the elements of ${\Sigma}$ sum to ${1}$.
2. Now suppose ${u_1}$ is chosen. If ${\Sigma \backslash \{u_1\}}$ is empty, we set ${u_2,u_3,\ldots}$ all equal to zero and stop. Otherwise, let ${u_2}$ be an element of ${\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ chosen at random, with each element ${t \in \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ having a probability ${t}$ of being chosen as ${u_2}$. (For instance, if ${u_1}$ occurred with some multiplicity ${m>1}$ in ${\Sigma}$, then ${u_2}$ can equal ${\frac{u_1}{1-u_1}}$ with probability ${(m-1)u_1/(1-u_1)}$.)
3. Now suppose ${u_1,u_2}$ are both chosen. If ${\Sigma \backslash \{u_1,(1-u_1)u_2\}}$ is empty, we set ${u_3, u_4, \ldots}$ all equal to zero and stop. Otherwise, let ${u_3}$ be an element of ${\frac{1}{(1-u_1)(1-u_2)} \cdot (\Sigma\backslash \{u_1,(1-u_1)u_2\})}$ chosen at random, with each element ${t \in \frac{1}{(1-u_1)(1-u_2)} \cdot (\Sigma\backslash \{u_1,(1-u_1)u_2\})}$ having a probability ${t}$ of being chosen as ${u_3}$. (Here ${(1-u_1)u_2}$ is the element of ${\Sigma}$ from which ${u_2}$ was obtained, and ${(1-u_1)(1-u_2) = 1 - u_1 - (1-u_1)u_2}$ is the sum of the remaining elements.)
4. We continue this process indefinitely to create elements ${u_1,u_2,u_3,\ldots \in [0,1]}$.
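In code, the above procedure is just repeated size-biased sampling without replacement, rescaling after each pick. Here is a minimal Python sketch for finite partitions (again my own illustration, with arbitrary names):

```python
import random

def enum(sigma, rng=random.Random(0)):
    """Random normalised enumeration of a finite partition sigma
    (a list of positive reals summing to 1). Returns (u_1, ..., u_k);
    pad with zeroes to view the result in [0,1]^N."""
    sigma = list(sigma)
    us, remaining = [], 1.0
    while sigma:
        # size-biased pick: element t is chosen with probability t / remaining
        r = rng.random() * remaining
        acc = 0.0
        for i, t in enumerate(sigma):
            acc += t
            if r <= acc:
                break
        t = sigma.pop(i)
        us.append(t / remaining)  # u_j is the chosen element, rescaled
        remaining -= t
    return us
```

Multiplying the ${u_j}$ back by the factors ${(1-u_1)\cdots(1-u_{j-1})}$ recovers the original multiset, in a random order.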

We call the random sequence ${Enum(\Sigma) := (u_1,u_2,\ldots) \in [0,1]^{\bf N}}$ formed from a partition ${\Sigma}$ in the above manner the random normalised enumeration of ${\Sigma}$; this is a random variable in the infinite unit cube ${[0,1]^{\bf N}}$, and can be defined recursively by the formula

$\displaystyle Enum(\Sigma) = (u_1, Enum(\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})))$

with ${u_1}$ drawn randomly from ${\Sigma}$, with each element ${t \in \Sigma}$ chosen with probability ${t}$, except when ${\Sigma =\{1\}}$ in which case we instead have

$\displaystyle Enum(\{1\}) = (1, 0,0,\ldots).$

Note that one can recover ${\Sigma}$ from any of its random normalised enumerations ${Enum(\Sigma) := (u_1,u_2,\ldots)}$ by the formula

$\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\} \ \ \ \ \ (1)$

with the convention that one discards any zero elements on the right-hand side. Thus ${Enum}$ can be viewed as a (stochastic) parameterisation of the space of partitions by the unit cube ${[0,1]^{\bf N}}$, which is a simpler domain to work with than the infinite-dimensional simplex mentioned earlier.
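Formula (1) is immediate to implement; the following Python fragment (my own illustration) recovers ${\Sigma}$ from a given enumeration:

```python
def partition_from_enumeration(us):
    """Formula (1): Sigma = {u_1, (1-u_1)u_2, (1-u_1)(1-u_2)u_3, ...},
    discarding any zero elements on the right-hand side."""
    sigma, stick = [], 1.0   # `stick` holds the product (1-u_1)...(1-u_{j-1})
    for u in us:
        x = stick * u
        if x > 0:
            sigma.append(x)
        stick *= 1.0 - u
    return sigma
```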

Note that this random enumeration procedure can also be adapted to the three models described earlier:

1. Given a natural number ${n}$, one can randomly enumerate its prime factors ${n =p'_1 p'_2 \ldots p'_k}$ by letting each prime factor ${p}$ of ${n}$ be equal to ${p'_1}$ with probability ${\frac{\log p}{\log n}}$, then once ${p'_1}$ is chosen, letting each remaining prime factor ${p}$ of ${n/p'_1}$ be equal to ${p'_2}$ with probability ${\frac{\log p}{\log(n/p'_1)}}$, and so forth.
2. Given a permutation ${\sigma\in S_n}$, one can randomly enumerate its cycles ${C'_1,\ldots,C'_k}$ by letting each cycle ${C}$ in ${\sigma}$ be equal to ${C'_1}$ with probability ${\frac{|C|}{n}}$, and once ${C'_1}$ is chosen, letting each remaining cycle ${C}$ be equal to ${C'_2}$ with probability ${\frac{|C|}{n-|C'_1|}}$, and so forth. Alternatively, one can traverse the elements of ${\{1,\ldots,n\}}$ in random order, then let ${C'_1}$ be the first cycle one encounters when performing this traversal, let ${C'_2}$ be the next cycle (not equal to ${C'_1}$) one encounters, and so forth.
3. Given a multiset ${\Gamma}$ of positive real numbers whose sum ${S := \sum_{x\in\Gamma} x}$ is finite and non-zero, we can randomly enumerate the elements ${x'_1,x'_2,\ldots}$ of this multiset by letting each ${x \in \Gamma}$ have a ${\frac{x}{S}}$ probability of being set equal to ${x'_1}$, and then once ${x'_1}$ is chosen, letting each remaining ${x \in \Gamma\backslash \{x'_1\}}$ have a ${\frac{x}{S-x'_1}}$ probability of being set equal to ${x'_2}$, and so forth.
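As a quick illustration of the first of these adapted procedures (again a sketch of my own, with arbitrary helper names), one can randomly enumerate the prime factors of a number as follows:

```python
import math
import random

def random_prime_enumeration(n, rng):
    """Randomly enumerate the prime factors of n >= 2, choosing each
    remaining factor p with probability log p / log(remaining part)."""
    factors, m, d = [], n, 2       # trial-division factorisation
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    order, remaining = [], n
    while remaining > 1:
        # the weights log p / log(remaining) sum to 1 over the remaining factors
        weights = [math.log(p) / math.log(remaining) for p in factors]
        p = rng.choices(factors, weights=weights)[0]
        order.append(p)
        factors.remove(p)
        remaining //= p
    return order
```

Whatever random order is produced, the multiset of factors is of course unchanged.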

We then have the following result:

Proposition 1 (Existence of the Poisson-Dirichlet process) There exists a random partition ${\Sigma}$ whose random enumeration ${Enum(\Sigma) = (u_1,u_2,\ldots)}$ has the uniform distribution on ${[0,1]^{\bf N}}$, thus ${u_1,u_2,\ldots}$ are independent and identically distributed copies of the uniform distribution on ${[0,1]}$.

A random partition ${\Sigma}$ with this property will be called the Poisson-Dirichlet process. This process, first introduced by Kingman, can be described explicitly using (1) as

$\displaystyle \Sigma = \{ u_1, (1-u_1) u_2,(1-u_1)(1-u_2)u_3,\ldots\},$

where ${u_1,u_2,\ldots}$ are iid copies of the uniform distribution on ${[0,1]}$, although it is not immediately obvious from this definition that ${Enum(\Sigma)}$ is indeed uniformly distributed on ${[0,1]^{\bf N}}$. We prove this proposition below the fold.
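Thus sampling (a truncation of) the Poisson-Dirichlet process is just “stick-breaking” with iid uniform variables. A Python sketch (my own illustration; the truncation length is arbitrary):

```python
import random

def sample_poisson_dirichlet(n_sticks=200, rng=random.Random(1)):
    """Stick-breaking: elements u_1, (1-u_1)u_2, (1-u_1)(1-u_2)u_3, ...
    with u_i iid uniform on [0,1], truncated after n_sticks elements."""
    sigma, stick = [], 1.0
    for _ in range(n_sticks):
        u = rng.random()
        sigma.append(stick * u)
        stick *= 1.0 - u   # leftover mass shrinks like 2^{-n} on average
    return sigma
```

The leftover mass after ${n}$ sticks is ${(1-u_1)\cdots(1-u_n)}$, which decays exponentially fast, so the truncation is harmless in practice.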

An equivalent definition of a Poisson-Dirichlet process is a random partition ${\Sigma}$ with the property that

$\displaystyle (u_1, \frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})) \equiv (U, \Sigma) \ \ \ \ \ (2)$

where ${u_1}$ is a random element of ${\Sigma}$ with each ${t \in\Sigma}$ having a probability ${t}$ of being equal to ${u_1}$, ${U}$ is a uniform variable on ${[0,1]}$ that is independent of ${\Sigma}$, and ${\equiv}$ denotes equality of distribution. This can be viewed as a sort of stochastic self-similarity property of ${\Sigma}$: if one randomly removes one element from ${\Sigma}$ and rescales, one gets a new copy of ${\Sigma}$.

It turns out that each of the three ways to generate partitions listed above can lead to the Poisson-Dirichlet process, either directly or in a suitable limit. We begin with the third way, namely by normalising a Poisson process to have sum ${1}$:

Proposition 2 (Poisson-Dirichlet processes via Poisson processes) Let ${a>0}$, and let ${\Gamma_a}$ be a Poisson process on ${(0,+\infty)}$ with intensity function ${t \mapsto \frac{1}{t} e^{-at}}$. Then the sum ${S :=\sum_{x \in \Gamma_a} x}$ is almost surely finite, and the normalisation ${\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a}$ is a Poisson-Dirichlet process.

Again, we prove this proposition below the fold. Now we turn to the second way (a topic, incidentally, that was briefly touched upon in this previous blog post):

Proposition 3 (Large cycles of a typical permutation) For each natural number ${n}$, let ${\sigma}$ be a permutation drawn uniformly at random from ${S_n}$. Then the random partition ${\Sigma_{CD}(\sigma)}$ converges in the limit ${n \rightarrow\infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the following sense: given any fixed sequence of intervals ${[a_1,b_1],\ldots,[a_k,b_k] \subset I}$ (independent of ${n}$), the joint discrete random variable ${(N_{[a_1,b_1]}(\Sigma_{CD}(\sigma)),\ldots,N_{[a_k,b_k]}(\Sigma_{CD}(\sigma)))}$ converges in distribution to ${(N_{[a_1,b_1]}(\Sigma),\ldots,N_{[a_k,b_k]}(\Sigma))}$.

Finally, we turn to the first way:

Proposition 4 (Large prime factors of a typical number) Let ${x > 0}$, and let ${N_x}$ be a random natural number chosen according to one of the following three rules:

1. (Uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[1,x]}$.
2. (Shifted uniform distribution) ${N_x}$ is drawn uniformly at random from the natural numbers in ${[x,2x]}$.
3. (Zeta distribution) Each natural number ${n}$ has a probability ${\frac{1}{\zeta(s)}\frac{1}{n^s}}$ of being equal to ${N_x}$, where ${s := 1 +\frac{1}{\log x}}$ and ${\zeta(s):=\sum_{n=1}^\infty \frac{1}{n^s}}$.

Then ${\Sigma_{PF}(N_x)}$ converges as ${x \rightarrow \infty}$ to a Poisson-Dirichlet process ${\Sigma}$ in the same fashion as in Proposition 3.

The process ${\Sigma_{PF}(N_x)}$ was first studied by Billingsley (and also later by Knuth-Trabb Pardo and by Vershik), but the formulae were initially rather complicated; the proposition above is due to Donnelly and Grimmett, although the third case of the proposition is substantially easier and appears in the earlier work of Lloyd. We prove the proposition below the fold.

The previous two propositions suggest an interesting analogy between large random integers and large random permutations; see this ICM article of Vershik and this non-technical article of Granville (which, incidentally, was once adapted into a play) for further discussion.

As a sample application, consider the problem of estimating the number ${\pi(x,x^{1/u})}$ of integers up to ${x}$ which are not divisible by any prime larger than ${x^{1/u}}$ (i.e. they are ${x^{1/u}}$-smooth), where ${u>0}$ is a fixed real number. This is essentially (modulo some inessential technicalities concerning the distinction between the intervals ${[x,2x]}$ and ${[1,x]}$) the probability that ${\Sigma_{PF}(N_x)}$ avoids ${[1/u,1]}$, which by the above proposition converges to the probability ${\rho(u)}$ that the Poisson-Dirichlet process ${\Sigma}$ avoids ${[1/u,1]}$. Below the fold we will show that this function is given by the Dickman function, defined by setting ${\rho(u)=1}$ for ${u < 1}$ and ${u\rho'(u) = -\rho(u-1)}$ for ${u \geq 1}$, thus recovering the classical result of Dickman that ${\pi(x,x^{1/u}) = (\rho(u)+o(1))x}$.
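The delay-differential equation for ${\rho}$ is easy to integrate numerically; the following Python sketch (my own illustration) does so with the trapezoid rule, and can be checked against the exact value ${\rho(u) = 1 - \log u}$ on ${[1,2]}$:

```python
import math

def dickman_table(u_max=3.0, h=1e-4):
    """Integrate the delay-differential equation u rho'(u) = -rho(u-1),
    with rho = 1 on [0,1], by the trapezoid rule on a grid of spacing h.
    Returns a list with rho[i] approximating rho(i*h)."""
    n = int(round(u_max / h))
    m = int(round(1.0 / h))          # grid index of u = 1
    rho = [1.0] * (n + 1)
    for i in range(m, n):
        u = i * h
        f_here = -rho[i - m] / u            # rho'(u)   = -rho(u-1)/u
        f_next = -rho[i + 1 - m] / (u + h)  # rho'(u+h); lagged value is already known
        rho[i + 1] = rho[i] + 0.5 * h * (f_here + f_next)
    return rho
```

For instance ${\rho(2) = 1-\log 2 \approx 0.3069}$: about ${31\%}$ of large integers have no prime factor exceeding their square root.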

I thank Andrew Granville and Anatoly Vershik for showing me the nice link between prime factors and the Poisson-Dirichlet process. The material here is standard, and (like many of the other notes on this blog) was primarily written for my own benefit, but it may be of interest to some readers. In preparing this article I found this exposition by Kingman to be helpful.

Note: this article will emphasise the computations rather than rigour, and in particular will rely on informal use of infinitesimals to avoid dealing with stochastic calculus or other technicalities. We adopt the convention that we will neglect higher order terms in infinitesimal calculations, e.g. if ${dt}$ is infinitesimal then we will abbreviate ${dt + o(dt)}$ simply as ${dt}$.

— 1. Constructing the Poisson-Dirichlet process —

We now prove Proposition 2, which of course also implies Proposition 1.

Let ${a>0}$, and let ${\Gamma_a}$ be as in Proposition 2. We note that ${\Gamma_a \equiv \frac{1}{a} \cdot \Gamma_1}$, so if one wished, one could normalise ${a=1}$. However, for minor technical reasons it will be convenient to allow ${a}$ to be arbitrary.

We first understand the law of the sum ${S :=\sum_{x \in\Gamma_a} x}$ of the Poisson process ${\Gamma_a}$.

Lemma 5 Let ${\Gamma_a}$ be as in Proposition 2. Then ${S := \sum_{x \in \Gamma_a} x}$ is an exponential variable with probability density function ${t \mapsto a e^{-at}}$. In particular, ${S}$ is almost surely finite.

Proof: Taking Laplace transforms, it suffices to show that

$\displaystyle {\bf E} \exp( - s S ) = \frac{a}{s+a} \ \ \ \ \ (3)$

for all ${s>0}$. We compute the left-hand side by an informal argument using infinitesimals, leaving the rigorous version of the argument as an exercise to the reader. Partitioning ${[0,+\infty)}$ into infinitesimal intervals ${[t,t+dt)}$, we expect each interval ${[t,t+dt)}$ to contain exactly one element of ${\Gamma_a}$ with probability about ${\frac{1}{t} e^{-at} dt}$, and to contain two or more elements with negligible probability. We can then write

$\displaystyle S = \sum_t 1_{N_{[t,t+dt)}=1} t$

(ignoring higher order terms) and thus

$\displaystyle {\bf E} \exp(-sS) = \prod_t (1 + (e^{-st}-1) \frac{1}{t} e^{-at} dt)$

$\displaystyle = \exp( \sum_t (e^{-st}-1) \frac{1}{t} e^{-at}\ dt )$

and thus on sending ${dt}$ to zero

$\displaystyle {\bf E} \exp(-sS) = \exp( \int_0^\infty (e^{-st}-1) \frac{1}{t} e^{-at}\ dt ).$

The quantity ${\int_0^\infty (e^{-st}-1) \frac{1}{t} e^{-at}\ dt}$ vanishes when ${s=0}$ and has an ${s}$-derivative of ${-\int_0^\infty e^{-st} e^{-at}\ dt = \frac{-1}{s+a}}$, so on integrating we have

$\displaystyle \int_0^\infty (e^{-st}-1) \frac{1}{t} e^{-at}\ dt = \log \frac{a}{s+a}$

and (3) follows. $\Box$
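One can also test Lemma 5 numerically by sampling ${\Gamma_a}$ directly. The intensity ${t \mapsto \frac{1}{t}e^{-at}}$ is not integrable at the origin, so the sketch below (my own illustration, not from the original discussion) truncates at a small ${\epsilon}$, which removes only about ${\epsilon}$ of mass from ${S}$; points are generated by dominating the intensity by ${\frac{1}{t}}$ on ${[\epsilon,1]}$ and by ${e^{-at}}$ on ${(1,\infty)}$, then thinning:

```python
import math
import random

def poisson_count(rng, lam):
    """Poisson(lam) sample via Knuth's method (fine for moderate lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_gamma_a(a=1.0, eps=1e-6, rng=random.Random(3)):
    """Sample the Poisson process with intensity t -> exp(-a t)/t, truncated
    below eps (the discarded small points contribute only ~eps to the sum)."""
    points = []
    # On [eps, 1]: dominate by 1/t (total mass log(1/eps)), thin by exp(-a t).
    for _ in range(poisson_count(rng, math.log(1.0 / eps))):
        t = math.exp(rng.uniform(math.log(eps), 0.0))  # density prop. to 1/t
        if rng.random() < math.exp(-a * t):
            points.append(t)
    # On (1, inf): dominate by exp(-a t) (total mass exp(-a)/a), thin by 1/t.
    for _ in range(poisson_count(rng, math.exp(-a) / a)):
        t = 1.0 - math.log(1.0 - rng.random()) / a     # 1 + Exp(a) sample
        if rng.random() < 1.0 / t:
            points.append(t)
    return points
```

Averaging ${S}$ over many samples with ${a=1}$ gives a value close to ${1/a = 1}$, the mean of the claimed exponential distribution.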

The same analysis can be performed for Poisson processes with more general intensity functions, leading to Campbell’s theorem. However, the exponential nature of ${S}$ leads to an important decoupling property:

Proposition 6 Let ${\Gamma_a}$ be as in Proposition 2. Then the random variable ${S}$ and the normalisation ${\Sigma_N(\Gamma_a) = \frac{1}{S} \cdot \Gamma_a}$ are independent.

Proof: Let ${[t_1,t_1+dt_1],\ldots,[t_k,t_k+dt_k]}$ be disjoint infinitesimal intervals in ${I}$, and let ${[s,s+ds]}$ be another infinitesimal interval in ${(0,+\infty)}$, with ${ds}$ infinitesimal compared with ${dt_1,\ldots,dt_k}$. We compute the probability of the (infinitesimal) event ${E_{s,t_1,\ldots,t_k}}$ that ${S}$ lies in ${[s,s+ds]}$ and each of ${[t_i,t_i+dt_i]}$ is occupied by an element of ${\Sigma_N(\Gamma_a)}$; our goal is to show that this probability decouples into the product of a function of ${s,ds}$ and a function of the ${t_i,dt_i}$. This event ${E_{s,t_1,\ldots,t_k}}$ is (up to higher order errors) empty if ${t_1+\ldots+t_k >1}$, and is otherwise equal to the event that each of the ${[st_i,st_i+sdt_i]}$ is occupied by an element ${x_i}$ of ${\Gamma_a}$, and that the sum ${S-x_1-\ldots-x_k}$ of the remaining elements lies in ${[s-x_1-\ldots-x_k,s-x_1-\ldots-x_k+ds]}$. By definition of a Poisson process, the former event occurs with probability

$\displaystyle \prod_{i=1}^k \frac{1}{st_i} \exp(- a st_i) s dt_i = \exp( - as \sum_{i=1}^k t_i) \prod_{i=1}^k \frac{dt_i}{t_i}.$

Once we condition on this event, ${\Gamma_a \backslash \{x_1,\ldots,x_k\}}$ is again a Poisson process of intensity ${t \mapsto \frac{1}{t} \exp(-at)}$, so by Lemma 5 ${S -x_1 - \ldots- x_k}$ is an exponential variable of parameter ${a}$, so it lies in ${[s-x_1-\ldots-x_k,s-x_1-\ldots-x_k+ds]}$ with probability

$\displaystyle a \exp( -a (s-x_1-\ldots-x_k) ) ds.$

Since ${x_i}$ is infinitesimally close to ${st_i}$, we conclude that the probability of ${E_{s,t_1,\ldots,t_k}}$ is

$\displaystyle a \exp( -as)\, ds\, (\prod_{i=1}^k \frac{dt_i}{t_i}) 1_{t_1+\ldots+t_k \leq 1}$

giving the claim. $\Box$

Note that the above calculation in fact computes the ${k}$-point intensity function of ${\Sigma = \Sigma_N(\Gamma_a)}$ as

$\displaystyle (t_1,\ldots,t_k)\mapsto (\prod_{i=1}^k \frac{1}{t_i}) 1_{t_1+\ldots+t_k \leq 1}. \ \ \ \ \ (4)$

We can now show the Poisson-Dirichlet property (2):

Proposition 7 Let ${\Sigma:= \Sigma_N(\Gamma_a)}$, and let ${u_1 \in \Sigma}$ be a random element, with each ${t \in\Sigma}$ having a probability ${t}$ of equalling ${u_1}$. Then ${u_1}$ is distributed uniformly in ${[0,1]}$, and after one conditions on the value of ${u_1}$, ${\frac{1}{1-u_1} \cdot (\Sigma\backslash \{u_1\})}$ has the same distribution as ${\Sigma}$.

Proof: We first establish that ${u_1}$ is distributed uniformly in ${[0,1]}$. Consider the event ${E_u}$ that ${u_1}$ lies in an infinitesimal interval ${[u,u+du]}$ for some ${0 < u \leq 1}$. This requires the interval ${[u,u+du]}$ to contain an element ${t}$ of ${\Sigma}$, which occurs with probability ${\frac{1}{u} du}$, and then there is a probability ${t \approx u}$ that this element will be selected as ${u_1}$. Thus ${E_u}$ has probability ${du}$, giving the claim of uniform distribution.

Now consider the probability of the event ${E_{u,t_1,\ldots,t_k}}$ that ${u_1}$ lies in ${[u,u+du]}$, and that ${\frac{1}{1-u_1} \cdot (\Sigma \backslash \{u_1\})}$ meets each of the disjoint infinitesimal intervals ${[t_1,t_1+dt_1],\ldots,[t_k,t_k+dt_k]}$, where we choose ${du}$ to be infinitesimal compared with ${dt_1,\ldots,dt_k}$. This event is empty if ${t_1+\ldots+t_k > 1}$, and is otherwise equal (up to higher order errors) to the event that ${u_1 \in [u,u+du]}$ and that ${\Sigma}$ meets each of the ${[(1-u)t_i, (1-u)t_i+(1-u)dt_i]}$. By (4) for ${k+1}$ applied at ${(u, (1-u)t_1,\ldots,(1-u)t_k)}$, we conclude that ${E_{u,t_1,\ldots,t_k}}$ occurs with probability

$\displaystyle u \times (\frac{du}{u} \times \frac{(1-u)dt_1}{(1-u)t_1} \times \ldots \times \frac{(1-u)dt_k}{(1-u)t_k}) 1_{t_1+\ldots+t_k \leq 1}$

which simplifies to

$\displaystyle du \times (\prod_{i=1}^k \frac{dt_i}{t_i}) 1_{t_1+\ldots+t_k \leq 1}.$

Conditioning on the event ${E_u}$, which has probability ${du}$, we recover the same ${k}$-point correlation function (4) as ${\Sigma}$, and the claim follows. $\Box$

Remark 1 As a crude zeroth approximation to the process ${\Sigma=\Sigma_N(\Gamma_a)}$, one can consider the Poisson process ${\Sigma'}$ on ${I}$ with intensity function ${t \mapsto \frac{1}{t} 1_{t\leq 1}}$. By (4), this process has the same ${1}$-point intensity function as the true Poisson-Dirichlet process ${\Sigma}$, but has the wrong higher point correlation functions (the cutoff ${1_{t_1+\ldots+t_k \leq 1}}$ is replaced by ${1_{t_1,\ldots,t_k \leq 1}}$). Related to this, ${\Sigma'}$ is not a partition in general as there is no reason why ${\sum_{x \in\Sigma'} x}$ should equal ${1}$. Nevertheless, this zeroth approximation can be used as a crude heuristic in some cases. In the number theoretic setting, it corresponds to a Cramer-type random model in which each prime ${p}$ less than a “typical” number ${N}$ has an independent probability of ${1/p}$ of dividing ${N}$.

Remark 2 If one starts with a Poisson process ${\Gamma_{\theta,a}}$ of intensity ${\theta \frac{1}{t} e^{-at}}$ for some parameter ${\theta>0}$ and normalises it as above, one arrives at a more general Poisson-Dirichlet process whose enumeration variables ${u_1,u_2,\ldots}$ are distributed according to a beta distribution rather than the uniform distribution.

— 2. Cycle decomposition of random permutations —

The proof of Proposition 3 is essentially immediate from the following claim:

Proposition 8 Let ${n,\sigma}$ and ${\Sigma_n := \Sigma_{CD}(\sigma)}$ be as in Proposition 3. Let ${u_1}$ be a random element of ${\Sigma_n}$, with each ${t \in \Sigma_n}$ chosen with probability ${t}$. Then ${u_1}$ is distributed uniformly among the multiples ${\{1/n,2/n,\ldots,1\}}$ of ${1/n}$, and after conditioning on a given value ${u_1=j/n}$ of ${u_1}$, ${\frac{1}{1-u_1} \cdot (\Sigma_n \backslash \{u_1\})}$ has the distribution of ${\Sigma_{n-j}}$.

Heuristically at least, in the limit ${n \rightarrow \infty}$, this proposition asserts that the limiting distribution of the ${\Sigma_n}$ obeys the defining relation (2) of the Poisson-Dirichlet process. It is not difficult to make this argument rigorous, but we leave this as an exercise to the reader.

Proof: We first compute the probability that ${u_1}$ is equal to ${j/n}$. Using the cycle interpretation of the enumeration of ${\Sigma_n}$, this is equal to the probability that a randomly chosen element of ${\{1,\ldots,n\}}$ lies in a cycle of ${\sigma}$ of length ${j}$. By permutation symmetry, it suffices to compute the probability that ${1}$ lies in a cycle of length ${j}$. To count the number of permutations ${\sigma}$ with this property, observe from direct counting that there are ${(n-1) \times \ldots \times (n-j+1)}$ ways to select the cycle of length ${j}$ containing ${1}$, and once this cycle is chosen there are ${(n-j)!}$ ways to select the rest of the permutation ${\sigma}$, leading to ${(n-1)!}$ ways in all. Dividing by the total number ${n!}$ of permutations, we see that the cycle length ${j}$ occurs with probability ${1/n}$, giving the required uniform distribution.

The same analysis shows that once one fixes the cycle of length ${j}$ that contains ${1}$, the partition associated to the remaining cycles has the distribution of ${\Sigma_{n-j}}$. Averaging over all the cycles of length ${j}$ containing ${1}$, we obtain the second claim of the proposition. $\Box$
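The first claim of Proposition 8 is easy to test empirically; the following Python sketch (my own illustration) samples uniform permutations and tabulates the length of the cycle containing a fixed element:

```python
import random
from collections import Counter

def cycle_length_containing(x, perm):
    """Length of the cycle of the permutation `perm` (a list) containing x."""
    j, length = perm[x], 1
    while j != x:
        j = perm[j]
        length += 1
    return length

rng = random.Random(42)
n, trials = 6, 60000
counts = Counter()
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)                             # uniform random permutation
    counts[cycle_length_containing(0, perm)] += 1
# each of the lengths 1..6 should occur with frequency close to 1/6
```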

— 3. Asymptotics of prime divisors —

We now prove Proposition 4. We first give a simple argument of Lloyd (as described by Kingman), which handles the much smoother (and hence much easier) third case of the theorem. In this case one can take advantage of the Euler product factorisation. Indeed, if we define ${\Omega_s}$ to be the point process on the primes formed by letting each prime ${p}$ occur with multiplicity ${j}$ in ${\Omega_s}$ with probability ${(1-p^{-s}) \frac{1}{p^{js}}}$, independently in ${p}$, then we see from the fundamental theorem of arithmetic and the Euler product ${\zeta(s)^{-1} = \prod_p (1-p^{-s})}$ that the random variable ${N_x}$ has the same distribution as ${\prod_{p \in \Omega_s} p}$, where the product is taken with multiplicity. Setting ${\Gamma_s := \{ \frac{\log p}{\log x}: p \in \Omega_s \}}$ (again counting multiplicity) and ${S := \sum_{t \in \Gamma_s} t}$, we thus see that the partition ${\Sigma_{PF}(N_x)}$ associated to the prime divisors of this random number ${N_x}$ is the same process as ${\frac{1}{S} \cdot \Gamma_s}$. On the other hand, it is easy to see (from the same Mertens' theorem calculation as in the proof of Proposition 9 below) that in the limit as ${s \rightarrow 1}$ (or ${x \rightarrow \infty}$), the point process ${\Gamma_s}$ converges (in the sense of Proposition 3) to the Poisson process with intensity ${t \mapsto \frac{1}{t} e^{-t}}$, and the claim follows in this case from Proposition 2.

Now we prove the theorem in the first and second cases. Here, the Euler product factorisation is not available (unless one allows ${s}$ to be complex and uses something like Perron’s formula, but this turns the probabilities involved into complex numbers, and it is difficult to see how this can be made at all rigorous). Instead, we use (a slightly informal version of) the argument of Donnelly and Grimmett. The key proposition is the following analogue of Proposition 8, stated slightly informally:

Proposition 9 Let ${N_x}$ be as in the first or second case of Proposition 4. Let ${p_1}$ be a random prime factor of ${N_x}$, with each prime factor ${p}$ of ${N_x}$ having a probability of ${\frac{\log p}{\log N_x}}$ of being chosen as ${p_1}$. Then the distribution of ${\frac{\log p_1}{\log N_x}}$ converges in the vague topology to the uniform distribution on ${[0,1]}$ as ${x \rightarrow \infty}$. Also, if one conditions on a specific value ${p_1=p}$ of ${p_1}$, then ${N_{x}/p}$ has the same distribution as ${N_{x/p}}$ (up to negligible errors).

Again, upon taking logarithms and sending ${x \rightarrow \infty}$ we conclude, heuristically at least, that the limiting distribution of ${\Sigma_{PF}(N_x)}$ converges to a process that obeys (2) and is thus a Poisson-Dirichlet process. Again, one can make this argument rigorous; see the paper of Donnelly and Grimmett for details.

Proof: (Informal) For sake of concreteness let us work with the second case of Proposition 4, as the first case is similar (and also follows from the second case by a dyadic decomposition). We compute the probability that ${p_1}$ is equal to a given prime ${p}$, and ${N_x}$ equal to a given multiple ${pm}$ of ${p}$. This requires ${pm}$ to lie in ${[x,2x]}$, and occurs with probability about

$\displaystyle \frac{1}{x} \times \frac{\log p}{\log N_x} 1_{[x,2x]}(pm) \approx\frac{1}{p} \frac{\log p}{\log x} \frac{1}{x/p} 1_{[x/p,2x/p]}(m).$

(This is slightly inaccurate in the case that ${m}$ is also divisible by ${p}$, but this event is negligible in the limit ${x \rightarrow \infty}$.)

Summing in ${m}$, we see that ${p_1}$ is equal to ${p}$ with probability about ${\frac{1}{p} \frac{\log p}{\log x}}$, and that once one conditions on the event ${p_1=p}$, one essentially recovers the distribution of ${N_{x/p}}$. Finally, the probability that ${\frac{\log p}{\log N_x}}$ lies in ${[a,b]}$ is essentially

$\displaystyle \sum_{x^a \leq p \leq x^b} \frac{1}{p} \frac{\log p}{\log x},$

which by Mertens’ theorems is asymptotic to ${b-a}$, and the claim follows. $\Box$

Remark 3 The same argument also works when ${N_x}$ is drawn from the zeta distribution. Comparing this with Lloyd’s argument, this gives another proof of Proposition 2 (although ultimately it is just a discrete version of the previous, continuous, proof).

Remark 4 A variant of Proposition 4 is of relevance to the polymath8 project: if instead of selecting ${N}$ uniformly at random from ${[x,2x]}$, one performs a weighted selection in which each ${n \in [x,2x]}$ has a probability ${\frac{k_0^{\Omega(n)}}{\sum_{n \in [x,2x]} k_0^{\Omega(n)}}}$ of being chosen as ${N}$ for some fixed ${k_0 \geq 1}$, where ${\Omega(n)}$ is the number of prime factors of ${n}$, then the same argument shows that ${\Sigma_{PF}(N)}$ is now asymptotic to a generalised Poisson-Dirichlet process in which the coordinates ${u_1,u_2,\ldots}$ of the random enumeration remain independent, but now have the probability density function ${t \mapsto k_0 (1-t)^{k_0-1}}$ rather than the uniform distribution. (This distribution is in fact of the form in Remark 2, with ${\theta}$ replaced by ${k_0}$.) In particular, when ${k_0}$ is large, the elements of ${\Sigma_{PF}(N)}$ tend to cluster near the origin (most elements will have magnitude ${O(1/k_0)}$), which helps explain the Motohashi-Pintz-Zhang phenomenon that when sieving for ${k_0}$-tuples, the majority of moduli that one encounters in the sieve are quite smooth (or at least densely divisible).

— 4. Dickman’s function —

Let ${\rho(u)}$ be the probability that the Poisson-Dirichlet process ${\Sigma}$ has no element larger than ${1/u}$. Clearly ${\rho(u)=1}$ for ${u<1}$; it is also not difficult to show that ${\rho}$ is continuous. It thus suffices to verify the delay-differential equation ${u \rho'(u) = -\rho(u-1)}$ for ${u>1}$.

Let ${u>1}$. If we let ${\Gamma_1}$ be the ${a=1}$ process from Proposition 2, and ${S :=\sum_{x\in \Gamma_1} x}$, we see that ${\rho(u)}$ is equal to the probability that ${ut \leq S}$ for all ${t \in \Gamma_1}$. Taking (backwards) Newton quotients, we then conclude that ${-du\, \rho'(u)}$ is equal to the probability that ${(u-du) t \leq S}$ for all ${t \in \Gamma_1}$, and that ${S \in [(u-du)T_*, uT_*]}$ for the largest element ${T_*}$ of ${\Gamma_1}$.

For any real number ${t_*>0}$, let ${E_{t_*}}$ denote the event that ${T_* \in [t_*, t_*+dt_*]}$, that ${(u-du) t \leq S}$ for all ${t \in \Gamma_1}$, and that ${S \in [(u-du)T_*, uT_*]}$, where ${dt_*}$ is infinitesimal compared with ${du}$. This is essentially the same as the event that ${\Gamma_1}$ contains an element ${t'_*}$ in ${[t_*, t_*+dt_*]}$, that ${(u-1)t \leq S-t'_*}$ for all ${t \in \Gamma_1 \backslash \{t'_*\}}$, and that ${S - t'_* \in [(u-du-1)t_*, (u-1)t_*]}$. The first event occurs with probability ${\frac{1}{t_*} e^{-t_*} dt_*}$. Conditioning on this event, ${\Gamma_1 \backslash \{t'_*\}}$ is still a Poisson process with intensity ${\frac{1}{t} e^{-t}}$ with sum ${S - t'_*}$. By Lemma 5, the random variable ${S-t'_*}$ is exponential with parameter ${1}$ and independent of the process ${\frac{1}{S-t'_*} \cdot (\Gamma_1 \backslash \{t'_*\})}$. Thus the remaining two events defining ${E_{t_*}}$ are conditionally independent, with the former occurring with probability ${\rho(u-1)}$ and the latter with probability ${e^{-(u-1)t_*} t_*\, du}$. Thus the total probability that ${E_{t_*}}$ occurs is

$\displaystyle (\frac{1}{t_*} e^{-t_*}\, dt_*) \times \rho(u-1) \times e^{-(u-1)t_*} t_*\, du$

which simplifies to ${\rho(u-1) du e^{-ut_*} dt_*}$; integrating in ${t_*}$, we conclude that ${-du \rho'(u)}$ is equal to ${\rho(u-1) du \frac{1}{u}}$, and the claim follows.
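One can confirm this derivation by Monte Carlo, using the stick-breaking description of ${\Sigma}$ from Proposition 1: the fraction of stick-breaking samples with no element exceeding ${1/u}$ should approach ${\rho(u)}$. A Python sketch (my own illustration):

```python
import math
import random

def rho_monte_carlo(u, n_samples=20000, n_sticks=200, rng=random.Random(7)):
    """Estimate rho(u) = P(Poisson-Dirichlet process has no element > 1/u)
    by stick-breaking with iid uniform u_i."""
    hits = 0
    for _ in range(n_samples):
        stick, ok = 1.0, True
        for _ in range(n_sticks):
            v = rng.random()
            if stick * v > 1.0 / u:     # an element exceeds 1/u: fail
                ok = False
                break
            stick *= 1.0 - v
            if stick <= 1.0 / u:        # no later element can exceed 1/u
                break
        hits += ok
    return hits / n_samples
```

For ${u=2}$ the estimate should be close to ${\rho(2) = 1 - \log 2 \approx 0.3069}$, consistent with the delay-differential equation.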

Remark 5 Using (4) and the inclusion-exclusion formula, one also has the integral expression

$\displaystyle \rho(u) = \sum_{k=0}^\infty \frac{(-1)^k}{k!} \int_{t_1,\ldots,t_k \geq 1/u: t_1+\ldots+t_k \leq 1} \frac{dt_1 \ldots dt_k}{t_1 \ldots t_k}$

for ${\rho}$, with the convention that the ${k=0}$ term is ${1}$; note that the summands vanish for ${k > u}$ (as the constraints ${t_1,\ldots,t_k \geq 1/u}$ and ${t_1+\ldots+t_k \leq 1}$ are then incompatible), and so only finitely many terms in this expression need to be evaluated. For instance, for ${1 < u \leq 2}$ only the ${k=0,1}$ terms survive, giving ${\rho(u) = 1 - \int_{1/u}^1 \frac{dt}{t} = 1 - \log u}$. This gives another way to verify the delay-differential equation ${u\rho'(u) = -\rho(u-1)}$.