At the bottom of page 9 of the article,

it is written: {(x_1,…,x_d) \in Z^d | M_i <= m_i <= M'_i for all 1 <= i <= d}

The m_i here should be replaced with x_i.

This is a very nice problem. The recent progress on understanding issues such as the effect of random noise on the invertibility of a matrix does support, at a heuristic level, the empirically verified observation that Gaussian elimination works very well in the presence of random noise, and may indeed help in giving a rigorous explanation of the latter in the future, but there are still significant technical issues to overcome before this is the case. The basic problem is that even if the original matrix A is described by an additive noise model, e.g. A = M + N where M is deterministic and N is a Gaussian random matrix, after a few rounds of Gaussian elimination the matrix that one gets from A is now given by a much more nonlinear random model, and it is not clear how to use the existing technology to continue to guarantee that this new matrix reacts well to future pivoting operations with high probability. One possibility would be to construct some sort of “invariant measure” for the Gaussian elimination algorithm, but it is not obvious to me how one would build such a measure and be able to ensure that it is not singular, unless some algebraic miracle intervenes.
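The phenomenon being discussed can be probed empirically. Below is a minimal NumPy sketch of the additive noise model A = M + N: it runs Gaussian elimination with partial pivoting and records the growth factor (the largest entry magnitude seen during elimination, relative to the original matrix). The particular choices of M (an all-ones, hence singular, base matrix) and the noise scale are hypothetical illustrations, not taken from the discussion above.

```python
import numpy as np

def growth_factor(A):
    """Run Gaussian elimination with partial pivoting on a copy of A and
    return max |intermediate entry| / max |original entry|."""
    A = A.astype(float).copy()
    n = A.shape[0]
    max_orig = np.abs(A).max()
    max_seen = max_orig
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry in column k (rows k..n-1)
        # up to the pivot position.
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        # Eliminate the entries below the pivot.
        A[k + 1:, k:] -= np.outer(A[k + 1:, k] / A[k, k], A[k, k:])
        max_seen = max(max_seen, np.abs(A).max())
    return max_seen / max_orig

rng = np.random.default_rng(0)
n = 50
M = np.ones((n, n))                        # deterministic, singular base matrix
N = 1e-6 * rng.standard_normal((n, n))     # small Gaussian noise
print(growth_factor(M + N))
```

Even though M alone has no valid elimination (every 2x2 minor vanishes), the perturbed matrix M + N typically eliminates with a modest growth factor; after the first elimination step, however, the remaining submatrix consists of differences of noise terms, i.e. a nonlinear function of N, which is exactly the obstruction to iterating the existing probabilistic analysis.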

In the sample chapter about “Numerical Analysis” (http://press.princeton.edu/chapters/gowers/gowers_IV_21.pdf), section 4, the author Lloyd N. Trefethen mentions the still-missing theoretical analysis of Gaussian elimination with pivoting. Any chance you can tackle this problem using the techniques from your article?