Hoi Nguyen, Van Vu, and I have just uploaded to the arXiv our paper “Random matrices: tail bounds for gaps between eigenvalues“. This is a followup paper to my recent paper with Van in which we showed that random matrices ${M_n}$ of Wigner type (such as the adjacency matrix of an Erdős-Rényi graph) asymptotically almost surely had simple spectrum. In the current paper, we push the method further to show that the eigenvalues are not only distinct, but are (with high probability) separated from each other by some negative power ${n^{-A}}$ of ${n}$. This follows the now standard technique of replacing any appearance of discrete Littlewood-Offord theory (a key ingredient in our previous paper) with its continuous analogue (inverse theorems for small ball probability). For general Wigner-type matrices ${M_n}$ (in which the matrix entries are not normalised to have mean zero), we can use the inverse Littlewood-Offord theorem of Nguyen and Vu to obtain (under mild conditions on ${M_n}$) a result of the form

$\displaystyle {\bf P} (\lambda_{i+1}(M_n) - \lambda_i(M_n) \leq n^{-A} ) \leq n^{-B}$

for any ${B}$ and ${i}$, if ${A}$ is sufficiently large depending on ${B}$ (in a linear fashion), and ${n}$ is sufficiently large depending on ${B}$. The point here is that ${B}$ can be made arbitrarily large, and also that no continuity or smoothness hypothesis is made on the distribution of the entries. (In the continuous case, one can use the machinery of Wegner estimates to obtain results of this type, as was done in a paper of Erdős, Schlein, and Yau.)
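As a purely numerical illustration of this kind of statement (and not part of the paper's argument), here is a short Python sketch that samples adjacency matrices of Erdős-Rényi graphs, a Wigner-type model whose entries are not mean zero, and checks that the smallest consecutive eigenvalue gap stays above a very generous polynomial threshold ${n^{-A}}$. The model, the value of ${A}$, and the number of trials are all illustrative choices.

```python
import numpy as np

def min_gap_adjacency(n, p, rng):
    """Smallest consecutive eigenvalue gap of an Erdos-Renyi G(n, p)
    adjacency matrix (a Wigner-type model; entries are not mean zero)."""
    upper = rng.random((n, n)) < p
    A = np.triu(upper, 1)                # independent Bernoulli(p) above diagonal
    M = (A + A.T).astype(float)          # symmetric 0/1 adjacency matrix
    eigs = np.linalg.eigvalsh(M)         # eigenvalues in ascending order
    return np.min(np.diff(eigs))

rng = np.random.default_rng(0)
n = 100
gaps = [min_gap_adjacency(n, 0.5, rng) for _ in range(20)]
# The theorem predicts gaps above n^{-A} except with probability n^{-B};
# here we just inspect the smallest observed gap in a few samples.
print(min(gaps))
```

In practice the smallest observed gap in such samples is far larger than, say, ${n^{-4}}$, consistent with (but of course not a proof of) the tail bound.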

In the mean zero case, it becomes more efficient to use an inverse Littlewood-Offord theorem of Rudelson and Vershynin. With the normalisation that the entries of ${M_n}$ have unit variance (so that the eigenvalues of ${M_n}$ are ${O(\sqrt{n})}$ with high probability), this gives the bound

$\displaystyle {\bf P} (\lambda_{i+1}(M_n) - \lambda_i(M_n) \leq \delta / \sqrt{n} ) \ll \delta \ \ \ \ \ (1)$

for ${\delta \geq n^{-O(1)}}$ (one also has good results of this type for smaller values of ${\delta}$). This is only optimal in the regime ${\delta \sim 1}$; we expect to establish some eigenvalue repulsion, improving the RHS to ${\delta^2}$ for real matrices and ${\delta^3}$ for complex matrices, but this appears to be a more difficult task (possibly requiring some quadratic inverse Littlewood-Offord theory, rather than just linear inverse Littlewood-Offord theory). However, one can get some repulsion by working with larger gaps, obtaining a result roughly of the form
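A quick numerical sanity check of a bound of the shape (1), again purely illustrative and with ad hoc choices of model and parameters: sample mean zero, unit variance symmetric sign matrices and measure how often a consecutive gap falls below ${\delta/\sqrt{n}}$.

```python
import numpy as np

def consecutive_gaps(n, rng):
    """Consecutive eigenvalue gaps of a symmetric random sign matrix
    (mean zero, unit variance entries; eigenvalues of size O(sqrt(n)))."""
    A = rng.choice([-1.0, 1.0], size=(n, n))
    M = np.triu(A) + np.triu(A, 1).T     # symmetrise the sample
    return np.diff(np.linalg.eigvalsh(M))

rng = np.random.default_rng(1)
n, trials, delta = 80, 50, 0.1
all_gaps = np.concatenate([consecutive_gaps(n, rng) for _ in range(trials)])
# Empirical frequency of gaps below delta/sqrt(n); the bound (1) says this
# should be O(delta), and eigenvalue repulsion suggests it is far smaller.
freq = np.mean(all_gaps <= delta / np.sqrt(n))
print(freq)
```

The observed frequency comes out well below ${\delta}$ itself, which is in the direction of the conjectured repulsion improvement ${\delta^2}$, though a simulation at this scale cannot distinguish the exponents.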

$\displaystyle {\bf P} (\lambda_{i+k}(M_n) - \lambda_i(M_n) \leq \delta / \sqrt{n} ) \ll \delta^{ck^2}$

for any fixed ${k \geq 1}$ and some absolute constant ${c>0}$ (which we can take asymptotically to be ${1/3}$ for large ${k}$, though it ought to be as large as ${1}$), by using a higher-dimensional version of the Rudelson-Vershynin inverse Littlewood-Offord theorem.

In the case of Erdős-Rényi graphs, the entries do not have mean zero and the Rudelson-Vershynin inverse Littlewood-Offord theorem is not quite applicable, but by working carefully through the approach based on the Nguyen-Vu theorem we can almost recover (1), except for a loss of ${n^{o(1)}}$ on the RHS.

As a sample application of the eigenvalue separation results, we can now obtain some information about eigenvectors; for instance, we can show that, with high probability, the components of the eigenvectors all have magnitude at least ${n^{-A}}$ for some ${A}$. (Eigenvectors become much more stable, and can be studied in isolation, once their associated eigenvalue is well separated from the other eigenvalues; see this previous blog post for more discussion.)
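One can also peek numerically at the eigenvector statement. The following rough sketch (with an arbitrary model and a deliberately generous threshold, not the paper's quantitative claim) looks at the smallest component magnitude among all unit eigenvectors of a single sample:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
A = rng.choice([-1.0, 1.0], size=(n, n))
M = np.triu(A) + np.triu(A, 1).T         # symmetric random sign matrix
_, vecs = np.linalg.eigh(M)              # columns are unit eigenvectors
smallest = np.min(np.abs(vecs))          # smallest component over all eigenvectors
# The delocalisation result asserts all components exceed n^{-A} for some
# fixed A with high probability; typical components have size about 1/sqrt(n).
print(smallest)
```

Even the smallest of the ${n^2}$ components in such a sample sits many orders of magnitude above, say, ${n^{-8}}$, as one would expect from the heuristic that each component behaves roughly like a Gaussian of size ${n^{-1/2}}$.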