*[Corrected, thanks – T.]*

Is it possible to achieve a stronger singularity probability bound by replacing the random signs with uniform integers in [-t, t] for a large enough t?

I believe the paper by Bourgain, Vu and Wood studied the case t = O(1). But in general, can we hope for the singularity probability to be t^{-\Omega(n)} for arbitrarily large t?
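As a quick numerical sanity check (my own illustration, not a statement about the asymptotics in the question), one can exactly enumerate small matrices with i.i.d. entries uniform on the integers in [-t, t] and watch the singularity probability drop as t grows:

```python
# Exact singularity counts for 2x2 matrices with i.i.d. entries
# uniform on the integers {-t, ..., t}.  Illustrative only: it shows
# the singularity probability decreasing in t, nothing asymptotic.
from fractions import Fraction
from itertools import product

def singularity_probability(t: int) -> Fraction:
    entries = range(-t, t + 1)
    total = 0
    singular = 0
    for a, b, c, d in product(entries, repeat=4):
        total += 1
        if a * d - b * c == 0:  # 2x2 determinant vanishes
            singular += 1
    return Fraction(singular, total)

for t in (1, 2, 3):
    p = singularity_probability(t)
    print(t, p, float(p))
```

For n = 2 the probabilities are 33/81, 129/625 and 289/2401 for t = 1, 2, 3, so the decay in t is already visible at this tiny size; of course this says nothing about the exponent in n.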

Two more problems:

One is with the correction I just gave! Getting a bound of size is actually not sufficient to beat the entropy cost of size from the net. If we define to be -compressible when , then we get a bound .

The second problem is with the bound written above exercise 1. With this exponent of we still won’t be able to beat the entropy cost (when at least). We can replace it with a bound valid for any positive . Taking large enough depending on , we can beat the entropy factor.

Then for the compressible vectors, these all lie within distance of a -sparse vector, so we can use a -net on the compressible vectors. Using this net, one is able to solve Exercise 2, in the course of which one will fix sufficiently small, after which one can go back and take sufficiently small depending on and .

I think this works – sorry to introduce 2 new parameters! I didn’t see a cleaner way (maybe LiPaRuTo have one).

*[Corrections implemented, thanks – T.]*

Thanks for the corrections!

In exercise 1, I think the denominator of the first term should be , not .

Also in that exercise, I see how a combination of having a lot of energy at low levels and conditioning out large components should do the trick, but I don’t see how the specific assumption can work. It seems that only conditioning on for which will still allow the third moment term in the Berry–Esseen bound to be large. If we freeze all variables with , this term is under control, but the first term $r/\|x\|$ is of order .

It can be done by saying that is incompressible if , and freezing all variables with . This gives a final bound rather than , but this is still enough.
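To make the compressible/incompressible dichotomy concrete, here is a small numerical sketch (my own illustration, not from the notes): for a unit vector x and i.i.d. random signs, the small-ball probability P(|⟨ε, x⟩| ≤ r) is large when the mass of x sits on a few coordinates and much smaller when x is spread out, which is exactly why the compressible vectors have to be handled separately by a net argument.

```python
# Exact small-ball probabilities P(|<eps, x>| <= r) for random signs,
# computed by enumerating all 2^n sign patterns (n small).
# Illustration: a unit vector concentrated on two coordinates has much
# larger small-ball probability than a fully spread-out unit vector.
import math
from itertools import product

def small_ball(x, r):
    n = len(x)
    hits = sum(1 for eps in product((-1, 1), repeat=n)
               if abs(sum(e * xi for e, xi in zip(eps, x))) <= r)
    return hits / 2 ** n

n = 16
compressible = [1 / math.sqrt(2), 1 / math.sqrt(2)] + [0.0] * (n - 2)
incompressible = [1 / math.sqrt(n)] * n

p_comp = small_ball(compressible, 0.1)   # = P(eps_1 = -eps_2) = 1/2
p_inc = small_ball(incompressible, 0.1)  # = P(sum of the signs is 0)
print(p_comp, p_inc)
```

The spread-out vector gives C(16, 8)/2^16 ≈ 0.196, against 1/2 for the two-coordinate vector, matching the heuristic that anti-concentration improves as the vector becomes less compressible.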

Typo under exercise 1: flip the inequality in the definition of incompressible vs compressible.

Suppose that are independent matrices chosen with the uniform distribution on the sphere . I want to find a lower bound on

How do I juggle and ?

In the paragraph right above Section 5, when you write for the dual basis vectors, do you mean ?

There might be a LaTeX typing error in the formulation of Theorem 6.

Wonderful notes! Especially the fact that studying the smallest singular value can be reduced to studying the projection distances. That is beautiful.

*[Corrected, thanks – T.]*

*[Corrected, thanks – T.]*

The Schatten norm only has a nice relationship with the norms of the rows and columns when p=2; the 2-Schatten norm (also known as the Hilbert-Schmidt norm, or Frobenius norm) is also the l^2 norm of the matrix entries.

The infinity-Schatten norm is the operator norm, which by Schur’s test is bounded by the geometric mean of the maximal l^1 norm of the rows, and the maximal l^1 norm of the columns. Presumably one can then interpolate with the 2-Schatten norm to get some inequalities for the p-Schatten norm for 2 <= p <= infty (a sort of non-commutative Hausdorff-Young inequality), but that's more or less the only general inequality I am aware of. (Note that for circulant matrices, the p-Schatten norm is basically the l^p norm of the Fourier transform of a row, and the only l^p estimates obeyed by the Fourier transform are given by the Hausdorff-Young inequality, so this suggests that there are no further general inequalities of this type beyond these ones.)
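For concreteness, the following numerical sketch (my own, using numpy) checks the two facts just mentioned: the 2-Schatten norm agrees with the l^2 norm of the entries, and the operator norm is dominated by Schur's geometric-mean bound; the p-Schatten norm is computed directly from the singular values.

```python
# Numerical check of two facts about Schatten norms (illustrative sketch):
#   1. the 2-Schatten (Hilbert-Schmidt / Frobenius) norm equals the
#      l^2 norm of the matrix entries;
#   2. the infinity-Schatten (operator) norm is at most the geometric
#      mean of the maximal row l^1 norm and the maximal column l^1 norm
#      (Schur's test).
import numpy as np

def schatten(A, p):
    s = np.linalg.svd(A, compute_uv=False)  # singular values of A
    if np.isinf(p):
        return s.max()
    return (s ** p).sum() ** (1 / p)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))

hs = schatten(A, 2)
entrywise_l2 = np.sqrt((A ** 2).sum())

op = schatten(A, np.inf)
max_row_l1 = np.abs(A).sum(axis=1).max()
max_col_l1 = np.abs(A).sum(axis=0).max()
schur = np.sqrt(max_row_l1 * max_col_l1)

print(hs, entrywise_l2, op, schur)
```

Since the Schatten norms are l^p norms of the singular value sequence, they are also non-increasing in p, which is consistent with interpolating between the p = 2 and p = infinity endpoints.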
