This is the third thread for the Polymath8b project to obtain new bounds for the quantity
$$H_m := \liminf_{n \to \infty} (p_{n+m} - p_n),$$
where $p_n$ denotes the $n$th prime, either for small values of $m$ (in particular $m = 1, 2$) or asymptotically as $m \to \infty$. The previous thread may be found here. The currently best known bounds on $H_m$ are:
- (Maynard) Assuming the Elliott-Halberstam conjecture, $H_1 \le 12$.
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b, tentative) . Assuming Elliott-Halberstam, .
- (Polymath8b) for sufficiently large . Assuming Elliott-Halberstam, for sufficiently large .
Much of the current focus of the Polymath8b project is on the quantity
$$M_k := \sup_F \frac{\sum_{m=1}^k J_k^{(m)}(F)}{I_k(F)},$$
where $F$ ranges over square-integrable functions on the simplex
$$\mathcal{R}_k := \{ (t_1,\ldots,t_k) \in [0,+\infty)^k : t_1 + \cdots + t_k \le 1 \},$$
with $I_k(F)$ and $J_k^{(m)}(F)$ being the quadratic forms
$$I_k(F) := \int_{\mathcal{R}_k} F(t_1,\ldots,t_k)^2\, dt_1 \cdots dt_k$$
and
$$J_k^{(m)}(F) := \int_{\mathcal{R}_{k-1}} \left( \int_0^{1 - t_1 - \cdots - t_{m-1} - t_{m+1} - \cdots - t_k} F(t_1,\ldots,t_k)\, dt_m \right)^2 dt_1 \cdots dt_{m-1}\, dt_{m+1} \cdots dt_k.$$
It was shown by Maynard that one has $H_m \le H(k)$ whenever $M_k > 4m$, where $H(k)$ is the narrowest diameter of an admissible $k$-tuple. As discussed in the previous post, we have slight improvements to this implication, but they are currently difficult to implement, due to the need to perform high-dimensional integration. The quantity $M_k$ does however seem to be close to the theoretical limit of what the Selberg sieve method can achieve for implications of this type (at the Bombieri-Vinogradov level of distribution, at least); it seems of interest to explore more general sieves, although we have not yet made much progress in this direction.
The best asymptotic bounds for $M_k$ that we have are
$$\log k - O(1) \le M_k \le \frac{k}{k-1} \log k.$$
For small $k$, the upper bound is quite competitive, for instance it provides the upper bound in the best values we currently have for $M_k$ at small $k$. The situation is a little less clear for medium values of $k$: for instance, our current lower bounds there sit close to the crucial threshold of $4$, and so it is not yet clear whether $M_k > 4$ in this range (which would imply a corresponding new bound on $H_1$). See this wiki page for some further upper and lower bounds on $M_k$.
The best lower bounds on $M_k$ are not obtained through the asymptotic analysis, but rather through quadratic programming (extending the original method of Maynard). This has given significant numerical improvements to our best bounds (in particular lowering our unconditional bound on $H_1$), but we have not yet been able to combine this method with the other potential improvements (enlarging the simplex, using MPZ distributional estimates, and exploiting upper bounds on two-point correlations) due to the computational difficulty involved.
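To give a flavour of the quadratic programming approach, here is a minimal sketch in Python (emphatically not the Polymath8b production code; the choice of $k$, the degree cutoff, and the plain monomial basis are illustrative assumptions). Restricting $F$ to low-degree polynomials turns the computation of a lower bound for $M_k$ into a generalized eigenvalue problem, since the Dirichlet integral formula $\int_{\mathcal{R}_j} t_1^{a_1} \cdots t_j^{a_j} (1 - t_1 - \cdots - t_j)^c\, dt = \frac{a_1! \cdots a_j!\, c!}{(j + c + a_1 + \cdots + a_j)!}$ evaluates all entries of the two quadratic forms exactly:

```python
from fractions import Fraction
from itertools import product
from math import factorial

import numpy as np
from scipy.linalg import eigh

k, deg = 3, 2  # dimension and maximum total degree (illustrative choices)

def simplex_integral(alphas, c=0):
    """Integral of t1^a1 ... tj^aj (1 - t1 - ... - tj)^c over the simplex."""
    num = factorial(c)
    for a in alphas:
        num *= factorial(a)
    return Fraction(num, factorial(len(alphas) + c + sum(alphas)))

# monomial basis t^a with total degree at most deg
basis = [a for a in product(range(deg + 1), repeat=k) if sum(a) <= deg]
N = len(basis)

I = np.zeros((N, N))  # Gram matrix of the form I_k
J = np.zeros((N, N))  # Gram matrix of the form sum_m J_k^(m)
for i, a in enumerate(basis):
    for j, b in enumerate(basis):
        ab = tuple(x + y for x, y in zip(a, b))
        I[i, j] = float(simplex_integral(ab))
        # for J_k^(m), integrating out t_m first produces a
        # (1 - s)^(a_m + b_m + 2) factor over the (k-1)-dimensional simplex
        total = Fraction(0)
        for m in range(k):
            rest = tuple(ab[t] for t in range(k) if t != m)
            total += (simplex_integral(rest, c=a[m] + b[m] + 2)
                      / ((a[m] + 1) * (b[m] + 1)))
        J[i, j] = float(total)

# the largest generalized eigenvalue of J v = lambda I v is the best Rayleigh
# quotient over this polynomial subspace, hence a lower bound on M_k
lam = eigh(J, I, eigvals_only=True)[-1]
print(f"M_{k} >= {lam:.5f}")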
In my previous post, I briefly discussed the work of the four Fields medalists of 2010 (Lindenstrauss, Ngo, Smirnov, and Villani). In this post I will discuss the work of Dan Spielman (winner of the Nevanlinna prize), Yves Meyer (winner of the Gauss prize), and Louis Nirenberg (winner of the Chern medal). Again by chance, the work of all three of the recipients overlaps to some extent with my own areas of expertise, so I will be able to discuss a sample contribution for each of them. Again, my choice of contribution is somewhat idiosyncratic and is not intended to represent the “best” work of each of the awardees.
The summer continues to allow some clearing of the backlog of projects accumulated during the academic year: Helge Holden, Kenneth Karlsen, Nils Risebro, and I have uploaded to the arXiv the paper “Operator splitting for the KdV equation”, submitted to Math. Comp. This paper is concerned with accurate numerical schemes for solving initial value problems for the Korteweg-de Vries equation
$$\partial_t u + \partial_{xxx} u + u \partial_x u = 0, \qquad (1)$$
though the analysis here would be valid for a wide range of other semilinear dispersive models as well. In principle, these equations, which are completely integrable, can be solved exactly by the inverse scattering method, but fast and accurate implementations of this method are still not as satisfactory as one would like. On the other hand, the linear Korteweg-de Vries equation
$$\partial_t u + \partial_{xxx} u = 0 \qquad (2)$$
can be solved exactly (with accurate and fast numerics) via the (fast) Fourier transform, while the (inviscid) Burgers equation
$$\partial_t u + u \partial_x u = 0 \qquad (3)$$
can also be solved exactly (and quickly) by the method of characteristics. Since the KdV equation is in some sense a combination of the equations (2) and (3), it is then reasonable to hope that some combination of the solution schemes for (2) and (3) can be used to solve (1), at least in some approximate sense.
One way to do this is by the method of operator splitting. Observe from the formal approximation
$$e^{At} \approx 1 + At + O(t^2)$$
(where $t$ should be thought of as small, and $A$ is some matrix or linear operator), that one has
$$e^{(A+B)t} \approx e^{At} e^{Bt} + O(t^2) \qquad (4)$$
[we do not assume $A$ and $B$ to commute here] and thus we formally have
$$e^{(A+B)T} \approx \left( e^{A \Delta t} e^{B \Delta t} \right)^n \qquad (5)$$
if $T = n \Delta t$ for some fixed time $T$ (thus $\Delta t = T/n$). As a consequence, if one wants to solve the linear ODE
$$\partial_t u = (A+B) u, \qquad u(0) = u_0$$
for time $T$, one can achieve an approximate solution (accurate to order $O(\Delta t)$) by alternating $n$ times between evolving the ODE
$$\partial_t u = A u$$
for time $\Delta t$, and evolving the ODE
$$\partial_t u = B u$$
for time $\Delta t$, starting with the initial data $u_0$.
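As a quick sanity check of (5), here is a throwaway numerical experiment (not from the paper; the matrix size and random seed are arbitrary) in which the splitting error visibly decays like $O(1/n)$ for a pair of non-commuting matrices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
T = 1.0

exact = expm((A + B) * T)
for n in (10, 100, 1000):
    dt = T / n
    approx = np.linalg.matrix_power(expm(A * dt) @ expm(B * dt), n)
    # the error should shrink by roughly a factor of 10 at each step
    print(n, np.linalg.norm(approx - exact))
```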
It turns out that this scheme can be formalised, and furthermore generalised to nonlinear settings such as that of the KdV equation (1). More precisely, we show that if the initial data $u_0$ lies in a Sobolev space $H^s$ of sufficiently high order, then one can solve (1) to accuracy $O(\Delta t)$ (in an appropriate norm) for any fixed time $T$ by alternating between evolving (2) and (3) for times $\Delta t$ (this scheme is known as Godunov splitting).
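Here is a toy periodic implementation of Godunov splitting for (1) (a sketch only, and not the scheme analysed in the paper: in particular, the exact characteristics solve for (3) is replaced by a crude explicit substep, and all grid parameters are arbitrary):

```python
import numpy as np

N, L = 256, 2 * np.pi                        # grid size and spatial period
x = np.linspace(0, L, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # spectral wavenumbers

def airy_step(u, dt):
    # exact solution of (2): each Fourier mode rotates by exp(i k^3 dt)
    return np.real(np.fft.ifft(np.exp(1j * k**3 * dt) * np.fft.fft(u)))

def burgers_step(u, dt, substeps=10):
    # crude explicit integration of (3); stands in for the exact
    # method-of-characteristics solve (adequate only for small dt)
    for _ in range(substeps):
        ux = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
        u = u - (dt / substeps) * u * ux
    return u

def godunov(u, T, n):
    dt = T / n
    for _ in range(n):                        # alternate the two flows
        u = burgers_step(airy_step(u, dt), dt)
    return u

u = godunov(np.cos(x), T=0.1, n=200)          # smooth initial data
```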
Actually, one can obtain faster convergence by modifying the scheme, at the cost of requiring higher regularity on the data; the situation is similar to that of numerical integration (or quadrature), in which the midpoint rule or Simpson’s rule provides more accuracy than a left or right Riemann sum when the integrand is smooth. For instance, one has the variant
$$e^{(A+B)T} \approx \left( e^{A \Delta t / 2} e^{B \Delta t} e^{A \Delta t / 2} \right)^n$$
of (5), which can be seen by expansion to second order in $\Delta t$ (or by playing around with the Baker-Campbell-Hausdorff formula). For KdV, we can rigorously show that the analogous scheme (known as Strang splitting), involving the indicated combination of evolutions of (2) and (3), will also converge, now to the improved accuracy $O(\Delta t^2)$, provided that the initial data has somewhat higher regularity than is needed for Godunov splitting.
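In the toy sketch above, the Strang variant is a three-line change (again just an illustration, reusing the hypothetical airy_step and burgers_step helpers):

```python
def strang(u, T, n):
    dt = T / n
    for _ in range(n):
        u = airy_step(u, dt / 2)   # half an Airy step,
        u = burgers_step(u, dt)    # a full Burgers step,
        u = airy_step(u, dt / 2)   # then half an Airy step
    return u
```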
Emmanuel Candès and I have just uploaded to the arXiv our paper “The power of convex relaxation: near-optimal matrix completion“, submitted to IEEE Trans. Inform. Theory. In this paper we study the matrix completion problem, which one can view as a sort of “non-commutative” analogue of the sparse recovery problem studied in the field of compressed sensing, although there are also some other significant differences between the two problems. The sparse recovery problem seeks to recover a sparse vector $x \in \mathbb{R}^n$ from some linear measurements $b = Ax \in \mathbb{R}^m$, where $A$ is a known $m \times n$ matrix. For general $x$, classical linear algebra tells us that if $m < n$, then the problem here is underdetermined and has multiple solutions; but under the additional assumption that $x$ is sparse (most of the entries are zero), it turns out (under various hypotheses on the measurement matrix $A$, and in particular if $A$ contains a sufficient amount of “randomness” or “incoherence”) that exact recovery becomes possible in the underdetermined case. Furthermore, recovery is not only theoretically possible, but is also computationally practical in many cases; in particular, under some assumptions on $A$, one can recover $x$ by minimising the convex norm $\|x\|_{\ell^1}$ over all solutions $x$ to $Ax = b$.
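As a concrete illustration of the $\ell^1$ minimisation just mentioned, here is a minimal sketch using the off-the-shelf convex solver in cvxpy; the dimensions, sparsity level, and Gaussian measurement matrix are illustrative assumptions rather than anything taken from the paper:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, s = 200, 80, 5                          # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
b = A @ x0                                    # underdetermined measurements (m < n)

x = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
print("recovery error:", np.linalg.norm(x.value - x0))  # typically tiny
```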
Now we turn to the matrix completion problem. Instead of an unknown vector $x \in \mathbb{R}^n$, we now have an unknown matrix $M \in \mathbb{R}^{n_1 \times n_2}$ (we use the shorthand $n := \max(n_1, n_2)$ here). We will take a specific type of underdetermined linear measurement of $M$: namely, we pick a random subset $\Omega$ of the matrix array $[n_1] \times [n_2]$ of some cardinality $m$, and form the random sample $P_\Omega(M) := (M_{ij})_{(i,j) \in \Omega}$ of $M$.
Of course, with no further information on $M$, it is impossible to complete the matrix $M$ from the partial information $P_\Omega(M)$ – we only have $m$ pieces of information and need $n_1 n_2$. But suppose we also know that $M$ is low-rank, e.g. has rank less than $r$; this is an analogue of sparsity, but for matrices rather than vectors. Then, in principle, we have reduced the number of degrees of freedom for $M$ from $n_1 n_2$ to something more like $O(nr)$ (a rank $r$ matrix is determined by its $r$ singular values and singular vectors, which amount to roughly $r(n_1+n_2)$ parameters), and so (in analogy with compressed sensing) one may now hope to perform matrix completion with a much smaller fraction of samples, and in particular with $m$ close to $nr$.
This type of problem comes up in several real-world applications, most famously in the Netflix prize. The Netflix prize problem is to be able to predict a very large ratings matrix M, whose rows are the customers, whose columns are the movies, and whose entries are the rating that each customer would hypothetically assign to each movie. Of course, not every customer has rented every movie from Netflix, and so only a small fraction of this matrix is actually known. However, if one makes the assumption that most customers’ rating preferences are determined by only a small number of characteristics of the movie (e.g. genre, lead actor/actress, director, year, etc.), then the matrix should be (approximately) low rank, and so the above type of analysis should be useful (though of course it is not going to be the only tool of use in this messy, real-world problem).
Actually, one expects to need to oversample the matrix by a logarithm or two in order to have a good chance of exact recovery, if one is sampling randomly. This can be seen even in the rank one case $r = 1$, in which $M = uv^*$ is the product of a column vector and a row vector; let’s consider square matrices $n_1 = n_2 = n$ for simplicity. Observe that if the sampled coordinates $\Omega$ completely miss one of the rows of the matrix, then the corresponding element of $u$ has gone completely unmeasured, and one cannot hope to complete this row of the matrix. Thus one needs to sample every row (and also every column) of the matrix. The solution to the coupon collector’s problem then tells us that one needs about $n \log n$ samples to achieve this goal. In fact, the theory of Erdős–Rényi random graphs tells us that the bipartite graph induced by $\Omega$ becomes almost surely connected beyond this threshold, which turns out to be exactly what is needed to perform matrix completion for rank 1 matrices.
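The coupon-collector heuristic is easy to check empirically; this throwaway simulation (with an arbitrary choice of $n$, not taken from the paper) counts how many uniform random samples are needed before every row and column has been hit at least once:

```python
import numpy as np

rng = np.random.default_rng(3)
n, trials, counts = 100, 20, []
for _ in range(trials):
    rows, cols, m = set(), set(), 0
    while len(rows) < n or len(cols) < n:   # some row or column still unsampled
        rows.add(int(rng.integers(n)))
        cols.add(int(rng.integers(n)))
        m += 1
    counts.append(m)
print(np.mean(counts), n * np.log(n))        # the two should be comparable
```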
On the other hand, one cannot hope to complete the matrix if some of the singular vectors of the matrix are extremely sparse. For instance, in the Netflix problem, a singularly idiosyncratic customer (or dually, a singularly unclassifiable movie) may give rise to a row or column of M that has no relation to the rest of the matrix, occupying its own separate component of the singular value decomposition of M; such a row or column is then impossible to complete exactly without sampling the entirety of that row or column. Thus, to get exact matrix completion from a small fraction of entries, one needs some sort of incoherence assumption on the singular vectors, which spreads them out across all coordinates in a roughly even manner, as opposed to being concentrated on just a few values.
In the prior work of Candès and Recht on this problem, the algorithm proposed was to minimise the nuclear norm
$$\|X\|_* := \sum_i \sigma_i(X)$$
(the sum of the singular values of $X$) amongst all matrices $X$ consistent with the observed data $P_\Omega(X) = P_\Omega(M)$. This nuclear norm is the non-commutative counterpart to the $\ell^1$ norm for vectors, and so this algorithm is analogous to the $\ell^1$ minimisation (or basis pursuit) algorithm which is effective for compressed sensing (though not the only such algorithm for this task). They showed, roughly speaking, that exact matrix completion (for, say, square matrices for simplicity) is ensured with high probability so long as the singular vectors obey a certain incoherence property (basically, their $\ell^\infty$ norm should be close to the minimal possible value, namely $O(1/\sqrt{n})$), so long as one had the condition
$$m \gg n^{1.2} r \log n.$$
This differs from the presumably optimal threshold of $n r \log n$ by a factor of about $n^{0.2}$.
The main result of our paper is to mostly eliminate this gap, at the cost of a stronger hypothesis on the matrix being measured:
Main theorem. (Informal statement) Suppose the matrix M has rank r and obeys a certain “strong incoherence property”. Then with high probability, nuclear norm minimisation will recover M from a random sample $P_\Omega(M)$ provided that $m \gg n r \log^{O(1)} n$, where $n := \max(n_1, n_2)$.
A result of a broadly similar nature, but with a rather different recovery algorithm and with a somewhat different range of applicability, was recently established by Keshavan, Oh, and Montanari. The strong incoherence property is somewhat technical, but is related to the Candès-Recht incoherence property and is satisfied by a number of reasonable random matrix models. The exponent O(1) here is reasonably civilised (ranging between 2 and 9, depending on the specific model and parameters being used).
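For readers who want to see the algorithm in action, here is a minimal nuclear norm minimisation sketch using the generic convex solver in cvxpy. This is an illustration only (not the code one would use at scale, and not from the paper); the dimensions, rank, and 50% sampling rate are arbitrary assumptions:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r target
mask = (rng.random((n, n)) < 0.5).astype(float)                 # observed entries

X = cp.Variable((n, n))
constraints = [cp.multiply(mask, X) == mask * M]  # agree on the sampled entries
cp.Problem(cp.Minimize(cp.norm(X, "nuc")), constraints).solve()
print("relative error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```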
This week I am in Australia, attending the ANZIAM annual meeting in Katoomba, New South Wales (in the picturesque Blue Mountains). I gave an overview talk on some recent developments in compressed sensing, particularly with regards to the basis pursuit approach to recovering sparse (or compressible) signals from incomplete measurements. The slides for my talk can be found here, with some accompanying pictures here. (There are of course by now many other presentations of compressed sensing on-line; see for instance this page at Rice.)
There was an interesting discussion after the talk. Some members of the audience asked the very good question of whether any a priori information about a signal (e.g. some restriction on the support) could be incorporated to improve the performance of compressed sensing; a related question was whether one could perform an adaptive sequence of measurements to similarly improve performance. I don’t have good answers to these questions. Another audience member pointed out that the restricted isometry property is like a local “well-conditioning” property for the matrix, which applies only when one views a few columns at a time.
The Distinguished Lecture Series at UCLA for this winter quarter is given by Avi Wigderson, who is lecturing on “some topics in computational complexity“. In his first lecture on Wednesday, Avi gave a wonderful talk (in his inimitably entertaining style) on “The power and weakness of randomness in computation“. The talk was based on these slides. He also gave a sort of “encore” on zero-knowledge proofs in more informal discussions after the main talk.
As always, any errors here are due to my transcription and interpretation.
This problem in compressed sensing (constructing UUP matrices deterministically) is an example of a derandomisation problem: take an object which, currently, can only be constructed efficiently by a probabilistic method, and figure out a deterministic construction of comparable strength and practicality. (For a general comparison of probabilistic and deterministic algorithms, I can point you to these slides by Avi Wigderson.)
I will define exactly what UUP matrices (the UUP stands for “uniform uncertainty principle”) are later in this post. For now, let us just say that they are a generalisation of (rectangular) orthogonal matrices, in which the columns are locally almost orthogonal rather than globally perfectly orthogonal. Because of this, it turns out that one can pack significantly more columns into a UUP matrix than an orthogonal matrix, while still capturing many of the desirable features of orthogonal matrices, such as stable and computable invertibility (as long as one restricts attention to sparse or compressible vectors). Thus UUP matrices can “squash” sparse vectors from a high-dimensional space into a low-dimensional one while still being able to reconstruct those vectors; this property underlies many of the recent results on compressed sensing.
There are several constructions of UUP matrices known today (e.g. random normalised Gaussian matrices, random normalised Bernoulli matrices, or random normalised minors of a discrete Fourier transform matrix) but (if one wants the sparsity parameter to be large) they are all probabilistic in nature; in particular, these constructions are not 100% guaranteed to actually produce a UUP matrix, although in many cases the failure rate can be proven to be exponentially small in the size of the matrix. Furthermore, there is no fast (e.g. sub-exponential time) algorithm known to test whether any given matrix is UUP or not. The failure rate is small enough that this is not a problem for most applications (especially since many compressed sensing applications are for environments which are already expected to be noisy in many other ways), but is slightly dissatisfying from a theoretical point of view. One is thus interested in finding a deterministic construction which can locate UUP matrices in a reasonably rapid manner. (One could of course simply search through all matrices in a given class and test each one for the UUP property, but this is an exponential-time algorithm, and thus totally impractical for applications.) In analogy with error-correcting codes, it may be that algebraic or number-theoretic constructions may hold the most promise for such deterministic UUP matrices (possibly assuming some unproven conjectures on exponential sums); this has already been accomplished by DeVore for UUP matrices with small sparsity parameter.
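Since exact verification of the UUP is, as just noted, apparently exponential-time, in practice one settles for probabilistic evidence. Here is a throwaway Monte Carlo spot-check of near-isometry on random column subsets of a normalised Gaussian matrix (all parameters are arbitrary illustrative choices, and passing this test certifies nothing):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, S = 400, 128, 10                        # columns, rows, sparsity level
A = rng.standard_normal((m, n)) / np.sqrt(m)  # normalised Gaussian matrix

worst = 0.0
for _ in range(1000):
    cols = rng.choice(n, S, replace=False)    # a random set of S columns
    sv = np.linalg.svd(A[:, cols], compute_uv=False)
    worst = max(worst, abs(sv[0] ** 2 - 1), abs(sv[-1] ** 2 - 1))
print("largest observed deviation from isometry:", worst)
```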
For much of last week I was in Leiden, Holland, giving one of the Ostrowski prize lectures at the annual meeting of the Netherlands mathematical congress. My talk was not on the subject of the prize (arithmetic progressions in primes), as this was covered by a talk of Ben Green there, but rather on a certain “uniform uncertainty principle” in Fourier analysis, and its relation to compressed sensing; this is work which is joint with Emmanuel Candès and also partly with Justin Romberg.
I’ve had a number of people ask me (especially in light of some recent publicity) exactly what “compressed sensing” means, and how a “single pixel camera” could possibly work (and how it might be advantageous over traditional cameras in certain circumstances). There is a large literature on the subject, but as the field is relatively recent, there does not yet appear to be a good non-technical introduction to the subject. So here’s my stab at the topic, which should hopefully be accessible to a non-mathematical audience.
For the sake of concreteness I’ll primarily discuss the camera application, although compressed sensing is a more general measurement paradigm which is applicable to contexts other than imaging (e.g. astronomy, MRI, statistical selection, etc.), as I’ll briefly remark upon at the end of this post.