I have just uploaded to the arXiv my paper “Commutators close to the identity”, submitted to the Journal of Operator Theory. This paper resulted from some progress I made on the problem discussed in this previous post. Recall from that post the following result of Popa: if {D,X \in B(H)} are bounded operators on a Hilbert space {H} whose commutator {[D,X] := DX-XD} is close to the identity in the sense that

\displaystyle  \| [D,X] - I \|_{op} \leq \varepsilon \ \ \ \ \ (1)

for some {\varepsilon > 0}, then one has the lower bound

\displaystyle  \| X \|_{op} \|D \|_{op} \geq \frac{1}{2} \log \frac{1}{\varepsilon}. \ \ \ \ \ (2)

In the other direction, for any {0 < \varepsilon < 1}, there are examples of operators {D,X \in B(H)} obeying (1) such that

\displaystyle  \| X \|_{op} \|D \|_{op} \ll \varepsilon^{-2}. \ \ \ \ \ (3)

In this paper we improve the upper bound (3) to come closer to the lower bound (2):

Theorem 1 For any {0 < \varepsilon < 1/2}, and any infinite-dimensional {H}, there exist operators {D,X \in B(H)} obeying (1) such that

\displaystyle  \| X \|_{op} \|D \|_{op} \ll \log^{16} \frac{1}{\varepsilon}. \ \ \ \ \ (4)

One can probably improve the exponent {16} somewhat by a modification of the methods, though it does not seem likely that one can lower it all the way to {1} without a substantially new idea. Nevertheless I believe it plausible that the lower bound (2) is close to optimal.

We now sketch the methods of proof. The construction giving (3) proceeded by first identifying {B(H)} with the algebra {M_2(B(H))} of {2 \times 2} matrices that have entries in {B(H)}. It is then possible to find two matrices {D, X \in M_2(B(H))} whose commutator takes the form

\displaystyle  [D,X] = \begin{pmatrix} I & u \\ 0 & I \end{pmatrix}

for some bounded operator {u \in B(H)} (for instance one can take {u} to be an isometry). If one then conjugates {D, X} by the diagonal operator {\mathrm{diag}(\varepsilon,1)}, one can ensure that (1) and (3) both hold.
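
To spell out this last step (a sketch of the standard rescaling argument, assuming as one may that {u} is a contraction and that {\|D\|_{op}, \|X\|_{op} = O(1)}): writing {T := \mathrm{diag}(\varepsilon,1)}, {D' := TDT^{-1}} and {X' := TXT^{-1}}, one has

\displaystyle  [D',X'] = T [D,X] T^{-1} = \begin{pmatrix} I & \varepsilon u \\ 0 & I \end{pmatrix},

so that {\| [D',X'] - I \|_{op} \leq \varepsilon}, while

\displaystyle  \| D' \|_{op} \| X' \|_{op} \leq \| T \|_{op}^2 \| T^{-1} \|_{op}^2 \| D \|_{op} \| X \|_{op} \ll \varepsilon^{-2},

giving (1) and (3).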

It is natural to adapt this strategy to {n \times n} matrices {D,X \in M_n(B(H))} rather than {2 \times 2} matrices, where {n} is a parameter at one’s disposal. If one can find matrices {D,X \in M_n(B(H))} that are almost upper triangular (in that only the entries on or above the lower diagonal are non-zero), whose commutator {[D,X]} only differs from the identity in the top right corner, thus

\displaystyle  [D, X] = \begin{pmatrix} I & 0 & 0 & \dots & 0 & S \\ 0 & I & 0 & \dots & 0 & 0 \\ 0 & 0 & I & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & I & 0 \\ 0 & 0 & 0 & \dots & 0 & I \end{pmatrix}

for some {S}, then by conjugating by a diagonal matrix such as {\mathrm{diag}( \mu^{n-1}, \mu^{n-2}, \dots, 1)} for some {\mu} and optimising in {\mu}, one can improve the bound {\varepsilon^{-2}} in (3) to {O_n( \varepsilon^{-\frac{2}{n-1}} )}; if the implied constant in the {O_n(\cdot)} notation grows only polynomially in {n}, one can then optimise in {n} to obtain a bound of the form (4) (perhaps with the exponent {16} replaced by a different constant).
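
To sketch the numerology behind this step, assume (as the construction below is designed to ensure) that the entries of {D,X} and the corner operator {S} have norm {O(n^{O(1)})}. Conjugation by {T := \mathrm{diag}( \mu^{n-1}, \dots, \mu, 1)} with {0 < \mu < 1} multiplies the {(i,j)} entry of a matrix by {\mu^{j-i}}, so the corner entry of the commutator is attenuated to {\mu^{n-1} S}, while the lower diagonal entries of {D,X} (the only entries below the main diagonal) grow only by a factor of {\mu^{-1}}. Choosing {\mu} so that {\mu^{n-1} \|S\|_{op} \asymp \varepsilon} (so that (1) holds, up to constants, after conjugation), one obtains

\displaystyle  \| T D T^{-1} \|_{op} \| T X T^{-1} \|_{op} \ll n^{O(1)} \mu^{-2} \ll n^{O(1)} \varepsilon^{-\frac{2}{n-1}},

and then taking {n \asymp \log \frac{1}{\varepsilon}} makes the factor {\varepsilon^{-\frac{2}{n-1}}} bounded, leaving a bound that is polynomial in {\log \frac{1}{\varepsilon}}, as in (4).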

The task is then to find almost upper triangular matrices {D, X} whose commutator takes the required form. The lower diagonals of {D,X} must then commute; it took me a while to realise that one could (usually) conjugate one of the matrices, say {X}, by a suitable diagonal matrix so that its lower diagonal consisted entirely of the identity operator, which would then make the lower diagonal of the other matrix {D} consist of copies of a single operator, say {u}. After a lot of further experimentation, I eventually realised that one could conjugate {X} further by unipotent upper triangular matrices so that all of its remaining entries, other than those in the far right column, vanished. Thus, without too much loss of generality, one can assume that {X} takes the normal form

\displaystyle  X := \begin{pmatrix} 0 & 0 & 0 & \dots & 0 & b_1 \\ I & 0 & 0 & \dots & 0 & b_2 \\ 0 & I & 0 & \dots & 0 & b_3 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 0 & b_{n-1} \\ 0 & 0 & 0 & \dots & I & b_n \end{pmatrix}.

Here {b_1,\dots,b_n} are operators in {B(H)}. One can then check that the commutator {[D,X]} agrees with the identity except in the top right corner, as required, if one takes {D} to be of the shape

\displaystyle  D := \begin{pmatrix} v & I & 0 & \dots & 0 & b_1 u \\ u & v & 2 I & \dots & 0 & b_2 u \\ 0 & u & v & \dots & 0 & b_3 u \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & v & (n-1) I + b_{n-1} u \\ 0 & 0 & 0 & \dots & u & v + b_n u \end{pmatrix}

for some {u,v \in B(H)}, solving the system of equations

\displaystyle  [v, b_i] + [u, b_{i-1}] + i b_{i+1} + b_i [u, b_n] = 0 \ \ \ \ \ (5)

for {i=2,\dots,n-1}, and also

\displaystyle  [v, b_n] + [u, b_{n-1}] + b_n [u, b_n] = n \cdot 1_{B(H)}. \ \ \ \ \ (6)

It turns out to be possible to solve this system of equations by a contraction mapping argument if one takes {u,v} to be a “Hilbert’s hotel” pair of isometries as in the previous post, though the contraction is very slight, leading to polynomial losses in {n} in the implied constant.
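
As a concrete example of such a pair (though not necessarily the precise one used in the paper): when {H} is separable with orthonormal basis {(e_j)_{j \geq 1}}, the isometries defined by {u e_j := e_{2j}} and {v e_j := e_{2j-1}} satisfy

\displaystyle  u^* u = v^* v = I, \quad uu^* + vv^* = I, \quad u^* v = v^* u = 0,

so that {u} and {v} embed two orthogonal copies of {H} into {H}, in the spirit of the Hilbert's hotel paradox.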

There is a further question raised in Popa’s paper which I was unable to resolve. As a special case of one of the main theorems (Theorem 2.1) of that paper, the following result was shown: if {A \in B(H)} obeys the bounds

\displaystyle  \|A \| = O(1)

and

\displaystyle  \| A \| = O( \mathrm{dist}( A, {\bf C} + K(H) )^{2/3} ) \ \ \ \ \ (7)

(where {{\bf C} + K(H)} denotes the space of all operators of the form {\lambda I + T} with {\lambda \in {\bf C}} and {T} compact), then there exist operators {D,X \in B(H)} with {\|D\|, \|X\| = O(1)} such that {A = [D,X]}. (In fact, Popa’s result covers a more general situation in which one is working in a properly infinite {W^*}-algebra with non-trivial centre.) We sketch a proof of this result as follows. Suppose that {\mathrm{dist}(A, {\bf C} + K(H)) = \varepsilon} and {\|A\| = O( \varepsilon^{2/3})} for some {0 < \varepsilon \ll 1}. A standard greedy algorithm argument (see this paper of Brown and Pearcy) allows one to find orthonormal vectors {e_n, f_n, g_n} for {n=1,2,\dots} such that for each {n}, one has {A e_n = \varepsilon_n f_n + v_n} for some {\varepsilon_n} comparable to {\varepsilon}, and some {v_n} orthogonal to all of the {e_n,f_n,g_n}. After some conjugation (and a suitable identification of {B(H)} with {M_2(B(H))}), one can thus place {A} in a normal form

\displaystyle  A = \begin{pmatrix} \varepsilon^{2/3} x & \varepsilon v^* \\ \varepsilon^{2/3} y & \varepsilon^{2/3} z \end{pmatrix}

where {v \in B(H)} is an isometry with infinite deficiency, and {x,y,z \in B(H)} have norm {O(1)}. Setting {\varepsilon' := \varepsilon^{1/3}}, it then suffices to solve the commutator equation

\displaystyle  [D,X] = \begin{pmatrix} x & \varepsilon' v^* \\ y & z \end{pmatrix}

with {\|D\|_{op} \|X\|_{op} \ll (\varepsilon')^{-2}}; note the similarity with (3).
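
To spell out why this suffices: from the normal form above one has {A = \varepsilon^{2/3} \begin{pmatrix} x & \varepsilon' v^* \\ y & z \end{pmatrix}}, so any solution of this commutator equation gives

\displaystyle  A = [\varepsilon^{2/3} D, X], \qquad \| \varepsilon^{2/3} D \|_{op} \| X \|_{op} \ll \varepsilon^{2/3} (\varepsilon')^{-2} = 1,

which is the desired representation of {A} as a commutator with {O(1)} bounds.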

By the usual Hilbert’s hotel construction, one can complement {v} with another isometry {u} obeying the “Hilbert’s hotel” identity

\displaystyle  uu^* + vv^* = I

and also {u^* u = v^* v = I}, {u^* v = v^* u = 0}. Proceeding as in the previous post, we can try the ansatz

\displaystyle  D = \begin{pmatrix} \frac{1}{2} u^* & 0 \\ a & \frac{1}{2} u^* - v^* \end{pmatrix}, X = \begin{pmatrix} b & \varepsilon' I \\ c & d \end{pmatrix}

for some operators {a,b,c,d \in B(H)}, leading to the system of equations

\displaystyle  [\frac{1}{2} u^*, b] + [\frac{1}{2} u^* - v^*, d] = x+z

\displaystyle  \varepsilon' a = [\frac{1}{2} u^*, b] - x

\displaystyle  (\frac{1}{2} u^* - v^*) c - \frac{1}{2} c u^* + ab - da = y.

Using the first equation to solve for {b,d}, the second to then solve for {a}, and the third to then solve for {c}, one can obtain matrices {D,X} with the required properties.
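
For instance, the second of these equations determines {a} explicitly as {a = (\varepsilon')^{-1} ( [\frac{1}{2} u^*, b] - x )}; if {b} and {x} have norm {O(1)}, this gives {\|a\|_{op} \ll (\varepsilon')^{-1}}, which is consistent with the target bound {\|D\|_{op} \|X\|_{op} \ll (\varepsilon')^{-2}} mentioned above.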

Thus far, my attempts to extend this construction to larger matrices with good bounds on {D,X} have been unsuccessful. A model problem would be to express

\displaystyle  \begin{pmatrix} I & 0 & \varepsilon v^* \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}

as a commutator {[D,X]} with {\|D\| \|X\|} significantly smaller than {O(\varepsilon^{-2})}. The construction in my paper achieves something like this, but with {v^*} replaced by a more complicated operator. One would also need variants of this result in which one is allowed to perturb the above operator by an arbitrary finite rank operator of bounded operator norm.