
I’ve uploaded to the arXiv the polymath research paper “Deterministic methods to find primes”, which is the outcome of the Polymath4 collaborative mathematics project, and has been submitted to Mathematics of Computation.

The objective of this paper was to find fast deterministic algorithms to solve the following problem:

Given a (large) integer x, find a prime p larger than x.

Thanks to the AKS algorithm, a number of size O(x) can be deterministically tested for primality in time O( \log^{O(1)} x ) = O( x^{o(1)} ).   By Bertrand’s postulate, there is always at least one prime between x and 2x; by testing each one of these integers in turn for primality, one can thus obtain a deterministic algorithm to find primes in time O( x^{1+o(1)}).
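To make this concrete, here is a minimal Python sketch of that scan (my own illustration, not code from the paper); trial division stands in for a genuine AKS-style test, and the function names are hypothetical.

```python
def is_prime(n):
    """Deterministic primality test.  Trial division is used as a simple
    stand-in for the (far faster) AKS test; it is correct but much slower
    in the bit length of n."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def find_prime_deterministic(x):
    """Test x+1, x+2, ... in turn; by Bertrand's postulate a prime is reached
    after at most x steps, giving an O(x^(1+o(1))) algorithm when a genuine
    AKS test is used in place of trial division."""
    n = x + 1
    while not is_prime(n):
        n += 1
    return n

print(find_prime_deterministic(10**6))   # -> 1000003
```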

But one should be able to do much better.  For comparison, if probabilistic algorithms are allowed, then by randomly selecting integers between x and 2x to test for primality, it is easy to see from the prime number theorem that one will succeed in obtaining a prime with high probability in time O( \log^{O(1)} x ) = O( x^{o(1)} ).  However, after some effort we were not able to “derandomise” this algorithm to create any reasonable deterministic counterpart.  Nevertheless, we conjecture that a deterministic algorithm with run time O( x^{o(1)}) exists.  Such algorithms can be easily obtained if one assumes some standard conjectures regarding the primes (e.g. Cramér’s conjecture on prime gaps), but we do not know of any deterministic algorithms which can be unconditionally proved to run in time O( x^{o(1)}).
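For comparison, a sketch of the probabilistic approach (again my own illustration, reusing the `is_prime` stand-in from the previous sketch):

```python
import random

def find_prime_randomized(x, is_prime):
    """Sample uniformly from (x, 2x] until a prime is drawn.  By the prime
    number theorem roughly one integer in log x in this range is prime, so
    the expected number of rounds is O(log x)."""
    while True:
        n = random.randint(x + 1, 2 * x)
        if is_prime(n):
            return n

# e.g. find_prime_randomized(10**6, is_prime), with is_prime as in the previous sketch
```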

Currently, the best known deterministic algorithm is due to Lagarias and Odlyzko, and has a run time of O( x^{1/2+o(1)}).  Roughly speaking, it is based on the ability to compute the prime counting function \pi(x) in time O( x^{1/2+o(1)} ); once one has this function, one can detect which intervals contain primes or not, and then, starting from Bertrand’s postulate and performing a binary search, one can locate a prime.  The Lagarias-Odlyzko argument is based on approximating \pi(x) by a certain integral of the Riemann zeta function, which one then approximates in turn by quadrature.
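A toy version of this search is sketched below (my own code, with a naive sieve-based `prime_pi` standing in for the Lagarias-Odlyzko O( x^{1/2+o(1)} ) computation): Bertrand's postulate guarantees a prime in (x, 2x], and bisecting the interval with the counting oracle isolates one.

```python
import math

def prime_pi(x):
    """Naive sieve-based prime-counting function; a stand-in for the
    Lagarias-Odlyzko O(x^(1/2+o(1))) computation of pi(x)."""
    if x < 2:
        return 0
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, math.isqrt(x) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, x + 1, p)))
    return sum(sieve)

def find_prime_by_bisection(x):
    """Bertrand's postulate puts a prime in (x, 2x]; binary search with the
    pi oracle narrows the interval down to the least such prime."""
    lo, hi = x, 2 * x                    # invariant: a prime lies in (lo, hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if prime_pi(mid) - prime_pi(lo) >= 1:
            hi = mid                     # a prime already lies in (lo, mid]
        else:
            lo = mid                     # otherwise it lies in (mid, hi]
    return hi

print(find_prime_by_bisection(10**6))    # -> 1000003
```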

We conjecture that one should be able to compute \pi(x) in faster time, and in particular in time O(x^{1/2-c+o(1)}) for some c>0.  Unfortunately, we were not able to achieve this; however, we do have a non-trivial method to compute the parity \pi(x) \hbox{ mod } 2 of \pi(x) in such a time; a bit more generally (and oversimplifying a little bit), we can compute various projections \sum_{p \leq x} t^p \hbox{ mod } (2, g(t)) of the prime polynomial \sum_{p \leq x} t^p modulo some small polynomials g.  This seems close to being able to achieve the goal of detecting whether primes exist in a given interval, but unfortunately some additional work seems to be needed in that area.  Still, the methods used to get the parity seem to be interesting (involving the Dirichlet hyperbola method, a piecewise approximation of the hyperbola, and the Strassen fast multiplication algorithm), and these techniques might be useful for some other problems.

Roughly speaking, the idea to compute the parity \pi(x) \hbox{ mod } 2 is as follows.  The first observation is that, for squarefree n > 1, the number \tau(n) of divisors of n is equal to 2 when n is a prime, and a multiple of 4 otherwise.  So to compute the parity of \pi(x), it suffices to compute \sum_{n \leq x} \tau(n) modulo 4 (or more precisely, the restriction of this sum to squarefree n).
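As a sanity check of this observation (my own illustration, and of course far slower than the method in the paper, which never computes the individual values \tau(n)): over squarefree n \leq x, the term n=1 contributes 1, each prime contributes 2, and everything else contributes a multiple of 4, so the restricted divisor sum is 1 + 2\pi(x) modulo 4.

```python
def tau(n):
    """Number of divisors of n."""
    return sum(2 if d * d != n else 1
               for d in range(1, int(n**0.5) + 1) if n % d == 0)

def is_squarefree(n):
    return all(n % (d * d) for d in range(2, int(n**0.5) + 1))

def pi_parity_via_tau(x):
    """Parity of pi(x) via the divisor sum over squarefree n <= x, which
    equals 1 + 2*pi(x) (mod 4)."""
    s = sum(tau(n) for n in range(1, x + 1) if is_squarefree(n))
    return ((s - 1) % 4) // 2

def pi_parity_direct(x):
    return sum(1 for n in range(2, x + 1)
               if all(n % d for d in range(2, int(n**0.5) + 1))) % 2

assert pi_parity_via_tau(100) == pi_parity_direct(100) == 1   # pi(100) = 25 is odd
```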

The restriction to squarefree numbers turns out to be relatively easy to handle, so we ignore it.  Since \tau(n) = \sum_{a,b: ab=n} 1, we can rewrite

\sum_{n \leq x} \tau(n) = \sum_{a,b: ab \leq x} 1.

So it suffices to find an efficient way to count the number of lattice points below the hyperbola \{ (a,b): ab = x \}.

The classic way to do this is via the Dirichlet hyperbola method.  This lets one rewrite the above expression as

2 \sum_{a \leq \sqrt{x}} \lfloor \frac{x}{a} \rfloor - \lfloor \sqrt{x}\rfloor^2

(assuming x is not a perfect square, where \lfloor \cdot \rfloor denotes the greatest integer function).  One can then compute this quantity in time O(x^{1/2+o(1)}) without difficulty.
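Here is a quick numerical check of the identity (my own illustration): the direct divisor sum needs O(x) terms, while the hyperbola-method expression needs only O(\sqrt{x}).

```python
import math

def divisor_sum_direct(x):
    """sum_{n <= x} tau(n), computed as sum_{a <= x} floor(x/a): O(x) terms."""
    return sum(x // a for a in range(1, x + 1))

def divisor_sum_hyperbola(x):
    """Dirichlet hyperbola method:
    2 * sum_{a <= sqrt(x)} floor(x/a) - floor(sqrt(x))^2, only O(sqrt(x)) terms."""
    r = math.isqrt(x)
    return 2 * sum(x // a for a in range(1, r + 1)) - r * r

x = 999999                      # not a perfect square
assert divisor_sum_direct(x) == divisor_sum_hyperbola(x)
```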

To improve this to O(x^{1/2-c+o(1)}), we used some ideas of Vinogradov, based on using Diophantine approximation to find moderately long arithmetic progressions on which the map a \mapsto \lfloor \frac{x}{a} \rfloor is linear, and thus easily summable.
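The sketch below is only a toy illustration of the structure being exploited (it finds the arithmetic progressions by inspection rather than by Diophantine approximation, so it gives no actual speed-up): on each block it sums the progression in closed form, and the result is checked against the term-by-term sum.

```python
def floor_sum_by_progressions(x, lo, hi):
    """Sum floor(x/a) over lo <= a < hi by cutting [lo, hi) into maximal blocks
    on which the values form an exact arithmetic progression, then summing each
    block in closed form.  (Here the blocks are found by inspection; the paper
    locates long blocks via Diophantine approximation, which is where the
    saving over the term-by-term sum comes from.)"""
    total, a = 0, lo
    while a < hi:
        v0 = x // a
        if a + 1 >= hi:
            total += v0
            break
        d = x // (a + 1) - v0
        length = 2
        while a + length < hi and x // (a + length) - x // (a + length - 1) == d:
            length += 1
        total += length * v0 + d * length * (length - 1) // 2   # AP sum
        a += length
    return total

x = 10**8
assert floor_sum_by_progressions(x, 10**4, 2 * 10**4) == sum(x // a for a in range(10**4, 2 * 10**4))
```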

The same method extends to the polynomial setting, though now, instead of summing an arithmetic progression, one has to find an efficient way to sum quadratic sums such as \sum_{n=1}^N t^{an^2+bn+c}.  This turns out to be possible by expressing this sum as a matrix product and then using fast matrix multiplication algorithms.
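For concreteness, here is a naive reference computation of such a sum reduced modulo (2, g(t)), with polynomials over GF(2) stored as integer bitmasks; this merely spells out the object being computed and does not use the matrix-product/fast-multiplication trick. The choice of g and the function names are my own placeholders.

```python
def gf2_mulmod(a, b, g):
    """Multiply two GF(2)[t] polynomials (integer bitmasks) modulo g."""
    deg_g = g.bit_length() - 1
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if (a >> deg_g) & 1:
            a ^= g
    return result

def gf2_powmod_t(e, g):
    """Compute t^e mod g(t) over GF(2) by repeated squaring."""
    result, base = 1, 2                  # the polynomials 1 and t
    while e:
        if e & 1:
            result = gf2_mulmod(result, base, g)
        base = gf2_mulmod(base, base, g)
        e >>= 1
    return result

def quadratic_sum_mod(a, b, c, N, g):
    """sum_{n=1}^{N} t^(a n^2 + b n + c), reduced mod 2 and mod g(t)."""
    s = 0
    for n in range(1, N + 1):
        s ^= gf2_powmod_t(a * n * n + b * n + c, g)   # addition in GF(2) is XOR
    return s

# example: g(t) = t^8 + t^4 + t^3 + t + 1, encoded as the bitmask 0x11b
print(bin(quadratic_sum_mod(3, 1, 2, 10**4, 0x11b)))
```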

There is still some scope to improve the results and methods further; Ernie Croot is pursuing this with two undergraduate students, David Hollis and David Lowry.

This problem in compressed sensing is an example of a derandomisation problem: take an object which, currently, can only be constructed efficiently by a probabilistic method, and figure out a deterministic construction of comparable strength and practicality. (For a general comparison of probabilistic and deterministic algorithms, I can point you to these slides by Avi Wigderson).

I will define exactly what UUP matrices (the UUP stands for “uniform uncertainty principle”) are later in this post. For now, let us just say that they are a generalisation of (rectangular) orthogonal matrices, in which the columns are locally almost orthogonal rather than globally perfectly orthogonal. Because of this, it turns out that one can pack significantly more columns into a UUP matrix than an orthogonal matrix, while still capturing many of the desirable features of orthogonal matrices, such as stable and computable invertibility (as long as one restricts attention to sparse or compressible vectors). Thus UUP matrices can “squash” sparse vectors from high-dimensional space into a low-dimensional space while still being able to reconstruct those vectors; this property underlies many of the recent results on compressed sensing.

There are several constructions of UUP matrices known today (e.g. random normalised Gaussian matrices, random normalised Bernoulli matrices, or random normalised minors of a discrete Fourier transform matrix) but (if one wants the sparsity parameter to be large) they are all probabilistic in nature; in particular, these constructions are not 100% guaranteed to actually produce a UUP matrix, although in many cases the failure rate can be proven to be exponentially small in the size of the matrix. Furthermore, there is no fast (e.g. sub-exponential time) algorithm known to test whether any given matrix is UUP or not. The failure rate is small enough that this is not a problem for most applications (especially since many compressed sensing applications are for environments which are already expected to be noisy in many other ways), but is slightly dissatisfying from a theoretical point of view. One is thus interested in finding a deterministic construction which can locate UUP matrices in a reasonably rapid manner. (One could of course simply search through all matrices in a given class and test each one for the UUP property, but this is an exponential-time algorithm, and thus totally impractical for applications.) In analogy with error-correcting codes, it may be that algebraic or number-theoretic constructions hold the most promise for such deterministic UUP matrices (possibly assuming some unproven conjectures on exponential sums); this has already been accomplished by DeVore for UUP matrices with small sparsity parameter.
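As a rough picture of the probabilistic construction, here is a small numpy sketch (my own, with hypothetical parameter choices): it draws a normalised Gaussian matrix and empirically checks near-isometry on a few random sparse vectors. Note that this only samples supports; certifying the UUP property over all sparse supports is exactly the part for which no fast algorithm is known.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n, s = 128, 512, 10                         # measurements, ambient dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)   # normalised Gaussian matrix

# Empirically check near-isometry on a few random s-sparse vectors; this is
# only a sanity check, not a certificate of the UUP property.
ratios = []
for _ in range(1000):
    support = rng.choice(n, size=s, replace=False)
    x = np.zeros(n)
    x[support] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print(f"||Ax||/||x|| over random {s}-sparse x: min={min(ratios):.3f}, max={max(ratios):.3f}")
```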

