This is a well-known problem in multilinear harmonic analysis; it is fascinating to me because it lies barely beyond the reach of the best technology we have for these problems (namely, multiscale time-frequency analysis), and because the most recent developments in quadratic Fourier analysis seem likely to shed some light on this problem.

Recall that the Hilbert transform is defined on test functions $f$ (up to irrelevant constants) as

$\displaystyle Hf(x) := \mathrm{p.v.} \int_{\mathbb{R}} f(x+t)\ \frac{dt}{t},$

where the integral is evaluated in the principal value sense (removing the region $|t| < \epsilon$ to ensure integrability, and then taking the limit as $\epsilon \to 0$).

One of the basic results in (linear) harmonic analysis is that the Hilbert transform is bounded on $L^p(\mathbb{R})$ for every $1 < p < \infty$, thus for each such p there exists a finite constant $C_p$ such that

$\displaystyle \| Hf \|_{L^p(\mathbb{R})} \leq C_p \| f \|_{L^p(\mathbb{R})}.$
One can view this boundedness result (which is of importance in complex analysis and one-dimensional Fourier analysis, while also providing a model case of the more general Calderón-Zygmund theory of singular integral operators) as an assertion that the Hilbert transform is “not much larger than” the identity operator. And indeed the two operators are very similar; both are invariant under translations and dilations, and on the Fourier side, the Hilbert transform barely changes the magnitude of the Fourier transform at all:

$\displaystyle \widehat{Hf}(\xi) = i \pi\, \mathrm{sgn}(\xi)\, \hat f(\xi), \hbox{ so that } |\widehat{Hf}(\xi)| = \pi\, |\hat f(\xi)|.$
In fact, one can show that the only reasonable (e.g. $L^2$-bounded) operators which are invariant under translations and dilations are just the linear combinations of the Hilbert transform and the identity operator. (A useful heuristic in this area is to view the singular kernel $\mathrm{p.v.}\, \frac{1}{t}$ as being of similar “strength” to the Dirac delta function $\delta(t)$ – for instance, they have the same scale-invariance properties.)

Note that the Hilbert transform is formally a convolution of f with the kernel $\mathrm{p.v.}\, \frac{1}{t}$. This kernel is almost, but not quite, absolutely integrable – the integral of $\frac{1}{|t|}$ diverges logarithmically both at zero and at infinity. If the kernel were absolutely integrable, then the above boundedness result would be a simple consequence of Young’s inequality (or Minkowski’s inequality); the difficulty is thus “just” one of avoiding a logarithmic divergence. To put it another way, if one dyadically decomposes the Hilbert transform into pieces localised at different scales (e.g. restricting to an “annulus” $2^n \leq |t| < 2^{n+1}$), then it is a triviality to establish $L^p$ boundedness of each component; the difficulty is ensuring that there is enough cancellation or orthogonality that one can sum over the (logarithmically infinite number of) scales and still recover $L^p$ boundedness.
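To make the scale decomposition just described concrete, here is a minimal sketch (the notation $H_n$ for the dyadic pieces is mine, not from the original text):

```latex
% Dyadic decomposition of the Hilbert transform into single-scale pieces:
Hf(x) = \sum_{n \in \mathbb{Z}} H_n f(x), \qquad
H_n f(x) := \int_{2^n \le |t| < 2^{n+1}} f(x+t)\,\frac{dt}{t}.
% Each piece is trivially bounded, uniformly in n, by the triangle inequality:
\|H_n f\|_{L^p} \le \int_{2^n \le |t| < 2^{n+1}} \|f\|_{L^p}\,\frac{dt}{|t|}
 = (2\log 2)\,\|f\|_{L^p}.
% Summing N scales naively only gives a constant growing in N,
\Big\| \sum_{|n| \le N} H_n f \Big\|_{L^p} \lesssim N\,\|f\|_{L^p},
% so the whole difficulty is to exploit cancellation between scales
% to replace the factor N by an absolute constant.
```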

There are a number of ways to establish boundedness of the Hilbert transform. One way is to decompose all functions involved into wavelets – functions which are localised in space and scale, and whose frequencies stay at a fixed distance from the origin (relative to the scale). By using standard estimates concerning how a function can be decomposed into wavelets, how the Hilbert transform acts on wavelets, and how wavelets can be used to reconstitute functions, one can establish the desired boundedness. The use of wavelets to mediate the action of the Hilbert transform fits well with the two symmetries of the Hilbert transform (translation and scaling), because the collection of wavelets also obeys (discrete versions of) these symmetries. One can view the theory of such wavelets as a dyadic framework for Calderón-Zygmund theory.

Just as the Hilbert transform behaves like the identity, it was conjectured by Calderón (motivated by the study of the Cauchy integral on Lipschitz curves) that the *bilinear Hilbert transform*

$\displaystyle B(f,g)(x) := \mathrm{p.v.} \int_{\mathbb{R}} f(x+t)\, g(x+2t)\ \frac{dt}{t}$
would behave like the pointwise product operator $fg$ (exhibiting again the analogy between $\mathrm{p.v.}\, \frac{1}{t}$ and $\delta(t)$), in particular one should have the Hölder-type inequality

$\displaystyle \| B(f,g) \|_{L^r(\mathbb{R})} \leq C_{p,q,r} \|f\|_{L^p(\mathbb{R})} \|g\|_{L^q(\mathbb{R})} \quad (*)$

whenever $1 < p, q \leq \infty$ and $\frac{1}{p} + \frac{1}{q} = \frac{1}{r}$. (There is nothing special about the “2” in the definition of the bilinear Hilbert transform; one can replace this constant by any other constant except for 0, 1, or infinity, though it is a delicate issue to maintain good control on the constant $C_{p,q,r}$ in that case. Note that by setting g=1 and looking at the limiting case $q = \infty$ we recover the linear Hilbert transform theory from the bilinear one; thus we expect the bilinear theory to be harder.) Again, this claim is trivial when localising to a single scale $2^n \leq |t| < 2^{n+1}$, as it can then be quickly deduced from Hölder’s inequality. The difficulty is then to combine all the scales together.
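For instance, a single-scale piece can be disposed of as follows (a sketch in my own notation, restricted to the easier range $r \geq 1$ where Minkowski's integral inequality is available):

```latex
% Single-scale piece of the bilinear Hilbert transform:
B_n(f,g)(x) := \int_{2^n \le |t| < 2^{n+1}} f(x+t)\,g(x+2t)\,\frac{dt}{t}.
% Minkowski's integral inequality, then Holder's inequality with
% 1/p + 1/q = 1/r (using translation invariance of the L^p norms):
\|B_n(f,g)\|_{L^r}
 \le \int_{2^n \le |t| < 2^{n+1}} \big\| f(\cdot+t)\,g(\cdot+2t) \big\|_{L^r}\,\frac{dt}{|t|}
 \le \int_{2^n \le |t| < 2^{n+1}} \|f\|_{L^p}\,\|g\|_{L^q}\,\frac{dt}{|t|}
 = (2\log 2)\,\|f\|_{L^p}\,\|g\|_{L^q}.
```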

It took some time to realise that Calderón-Zygmund theory, despite being incredibly effective in the linear setting, was not the right tool for the bilinear problem. One way to see the problem is to observe that the bilinear Hilbert transform B (or more precisely, the estimate (*)) enjoys one additional symmetry beyond the scaling and translation symmetries that the Hilbert transform H obeyed. Namely, one has the *modulation invariance*

$\displaystyle B( e_{2\xi} f, e_{-\xi} g ) = e_{\xi}\, B(f,g)$

for any frequency $\xi$, where $e_\xi(x) := e^{2\pi i \xi x}$ is the linear plane wave of frequency $\xi$, which leads to a modulation symmetry for the estimate (*). This symmetry – which has no non-trivial analogue in the linear Hilbert transform – is a consequence of the algebraic identity

$\displaystyle x - 2(x+t) + (x+2t) = 0,$

which can in turn be viewed as an assertion that linear functions have a vanishing second derivative.

It is a general principle that if one wants to establish a delicate estimate which is invariant under some non-compact group of symmetries, then the *proof* of that estimate should also be largely invariant under that symmetry (or, if it does eventually decide to break the symmetry (e.g. by performing a normalisation), it should do so in a way that will yield some tangible profit). Calderón-Zygmund theory gives the frequency origin a preferred role (for instance, all wavelets have mean zero, i.e. their Fourier transforms vanish at the frequency origin), and so is not the appropriate tool for any modulation-invariant problem.

The conjecture of Calderón was finally verified in a breakthrough pair of papers by Lacey and Thiele, first in the “easy” region $2 < p, q, r' < \infty$ (in which all functions are locally in $L^2$ and so local Fourier analytic methods are particularly tractable) and then in the significantly larger region where $1 < p, q \leq \infty$ and $2/3 < r < \infty$. (Extending the latter result to $r = 2/3$ or beyond remains open, and can be viewed as a toy version of the trilinear Hilbert transform question discussed below.) The key idea (dating back to Fefferman) was to replace the wavelet decomposition by a more general *wave packet* decomposition – wave packets being functions which are well localised in position, scale, and frequency, but are more general than wavelets in that their frequencies do not need to hover near the origin; in particular, the wave packet framework enjoys the same symmetries as the estimate that one is seeking to prove. (As such, wave packets form a highly overdetermined basis, in contrast to the exact bases that wavelets offer, but this turns out to not be a problem, provided that one focuses more on decomposing the *operator* B rather than the individual functions f,g.) Once the wave packets are used to mediate the action of the bilinear Hilbert transform B, Lacey and Thiele then used a carefully chosen combinatorial algorithm to organise these packets into “trees” concentrated in mostly disjoint regions of phase space, applying (modulated) Calderón-Zygmund theory to each tree, and then using orthogonality methods to sum the contributions of the trees together. (The same method also leads to the simplest known proof of Carleson’s celebrated theorem on convergence of Fourier series.)

Since the Lacey-Thiele breakthrough, there has been a flurry of other papers (including some that I was involved in) extending the time-frequency method to many other types of operators; all of these had the characteristic that these operators were invariant (or “morally” invariant) under translation, dilation, and some sort of modulation; this includes a number of operators of interest to ergodic theory and to nonlinear scattering theory. However, in this post I want to instead discuss an operator which does not lie in this class, namely the *trilinear Hilbert transform*

$\displaystyle T(f,g,h)(x) := \mathrm{p.v.} \int_{\mathbb{R}} f(x+t)\, g(x+2t)\, h(x+3t)\ \frac{dt}{t}.$
Again, since we expect $\mathrm{p.v.}\, \frac{1}{t}$ to behave like $\delta(t)$, we expect the trilinear Hilbert transform to obey a Hölder-type inequality

$\displaystyle \| T(f,g,h) \|_{L^s(\mathbb{R})} \leq C_{p,q,r,s} \|f\|_{L^p(\mathbb{R})} \|g\|_{L^q(\mathbb{R})} \|h\|_{L^r(\mathbb{R})} \quad (**)$

whenever $1 < p, q, r < \infty$ and $\frac{1}{p} + \frac{1}{q} + \frac{1}{r} = \frac{1}{s}$. This conjecture is currently unknown for any exponents p,q,r – even the case p=q=r=4, which is the “easiest” case by symmetry, duality and interpolation arguments. The main new difficulty is that in addition to the three existing invariances of translation, scaling, and modulation (actually, modulation is now a two-parameter invariance), one now also has a *quadratic modulation invariance*

$\displaystyle T( q_{3\alpha} f, q_{-3\alpha} g, q_{\alpha} h ) = q_{\alpha}\, T(f,g,h)$

for any “quadratic frequency” $\alpha$, where $q_\alpha(x) := e^{2\pi i \alpha x^2}$ is the quadratic plane wave of frequency $\alpha$, which leads to a quadratic modulation symmetry for the estimate (**). This symmetry is a consequence of the algebraic identity

$\displaystyle x^2 - 3(x+t)^2 + 3(x+2t)^2 - (x+3t)^2 = 0,$

which can in turn be viewed as an assertion that quadratic functions have a vanishing third derivative.
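Both vanishing-difference identities can be checked mechanically; the following small script (an illustration of mine, not part of the original discussion) verifies them at many integer points:

```python
import random

def second_difference(f, x, t):
    """Second finite difference f(x) - 2 f(x+t) + f(x+2t)."""
    return f(x) - 2 * f(x + t) + f(x + 2 * t)

def third_difference(f, x, t):
    """Third finite difference f(x) - 3 f(x+t) + 3 f(x+2t) - f(x+3t)."""
    return f(x) - 3 * f(x + t) + 3 * f(x + 2 * t) - f(x + 3 * t)

random.seed(0)
for _ in range(1000):
    x = random.randint(-10**6, 10**6)
    t = random.randint(-10**6, 10**6)
    # Linear functions have vanishing second difference (behind the
    # modulation invariance of the bilinear Hilbert transform)...
    assert second_difference(lambda y: 5 * y + 7, x, t) == 0
    # ...and quadratic functions have vanishing third difference (behind
    # the quadratic modulation invariance of the trilinear transform).
    assert third_difference(lambda y: 2 * y * y + 5 * y + 7, x, t) == 0
```

Exact integer arithmetic is used so the checks are free of floating-point issues.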

It is because of this symmetry that time-frequency methods based on Fefferman-Lacey-Thiele style wave packets seem to be ineffective (though the failure is very slight; one can control entire “forests” of trees of wave packets, but when summing up all the relevant forests in the problem one unfortunately encounters a logarithmic divergence; also, it is known that if one ignores the sign of the wave packet coefficients and only concentrates on the magnitude – which one can get away with for the bilinear Hilbert transform – then the associated trilinear expression is in fact divergent). Indeed, wave packets are certainly not invariant under quadratic modulations. One can then hope to work with the next obvious generalisation of wave packets, namely the “chirps” – quadratically modulated wave packets – but the combinatorics of organising these chirps into anything resembling trees or forests seems to be very difficult. Also, recent work in the additive combinatorial approach to Szemerédi’s theorem (as well as in the ergodic theory approaches) suggests that these quadratic modulations might not be the only obstruction, that other “2-step nilpotent” modulations may also need to be somehow catered for. Indeed I suspect that some of the modern theory of Szemerédi’s theorem for progressions of length 4 will have to be invoked in order to solve the trilinear problem. (Again based on analogy with the literature on Szemerédi’s theorem, the problem of quartilinear and higher Hilbert transforms is likely to be significantly more difficult still, and thus not worth studying at this stage.)

This problem may be too difficult to attack directly, and one might look at some easier model problems first. One that was already briefly mentioned above was to return to the bilinear Hilbert transform and try to establish an endpoint result at r=2/3. At this point there is again a logarithmic failure of the time-frequency method, and so one is forced to hunt for a different approach. Another is to look at the bilinear maximal operator

$\displaystyle M(f,g)(x) := \sup_{r > 0} \frac{1}{2r} \int_{-r}^{r} |f(x+t)\, g(x+2t)|\ dt,$
which is a bilinear variant of the Hardy-Littlewood maximal operator, in much the same way that the bilinear Hilbert transform is a variant of the linear Hilbert transform. It was shown by Lacey that this operator obeys most of the bounds that the bilinear Hilbert transform does, but the argument is rather complicated, combining the time-frequency analysis with some Fourier-analytic maximal inequalities of Bourgain. In particular, despite the “positive” (non-oscillatory) nature of the maximal operator, the only known proof of the boundedness of this operator is oscillatory. It is thus natural to seek a “positive” proof that does not require as much use of oscillatory tools such as the Fourier transform, in particular it is tempting to try an additive combinatorial approach. Such an approach has had some success with a slightly easier operator in a similar spirit, in an unpublished paper of Demeter, Thiele, and myself. There is also a paper of Christ in which a different type of additive combinatorics (coming, in fact, from work on the Kakeya problem) was used to establish a non-trivial estimate for a single-scale model of various multilinear Hilbert transforms or maximal operators. If these operators are understood better, then perhaps additive combinatorics can be used to attack the trilinear maximal operator, and thence the trilinear Hilbert transform. (This trilinear maximal operator, incidentally, has some applications to pointwise convergence of multiple averages in ergodic theory.)

Another, rather different, approach would be to work in the “finite field model” in which the underlying field $\mathbb{R}$ is replaced by a Cantor ring $F((t))$ of formal Laurent series over a finite field F; in such “dyadic models” the analysis is known to be somewhat simpler (in large part because in this non-Archimedean setting it now becomes possible to create wave packets which are localised in both space and frequency). Nazarov has an unpublished proof of the boundedness of the bilinear Hilbert transform in characteristic 3 settings based on a Bellman function approach; it may be that one could achieve something similar over the field of 4 elements for (a suitably defined version of) the trilinear Hilbert transform. This would at least give supporting evidence for the analogous conjecture in $\mathbb{R}$, although it looks unlikely that a positive result in the dyadic setting would have a *direct* impact on the continuous one.

## 12 comments


20 May, 2007 at 5:56 am

Gil Kalai: While quite far from my own research I find this problem and posting very interesting for the following reasons. First, shortly after the Calderon conjecture was proved I heard a talk about it by Chris Thiele at the HU colloquium which (to my surprise) I could follow and enjoy. (And even find there was some combinatorics.) Second, at a later time Rafi Coifman gave me the Lacey-Thiele theorem as an example of how concepts and ideas from applied mathematics interlace with and influence the study of classical pure math problems. (This aspect of the (even wider) story is something I will be happy to learn more about.) Third, I always wondered if Carleson’s theorem could ever be “industrialized”. (This is a fancy way to ask whether we can hope for a proof that can (really) be presented in class.) I do not know if the Lacey-Thiele proof quite gets there but it is certainly in this direction. And the analogy with Szemeredi’s theorem is intriguing.

Is there a quick reference/link for an easy proof of the boundedness of the Hilbert transform, especially the proof using wavelets?

Regarding the analogy with Szemeredi’s theorem. Are wavelets or something similar useful there? Is there a way to think about the Roth case (or its improved versions) as related to a large spanning class of functions like wavelets?

I remember that various people (e.g. Eli Shamir) raised the idea that an appropriate notion of wavelets could be useful to improve known results and find further results in places where Fourier analysis applies to combinatorics (mainly Fourier analysis over the discrete cube, or products of fixed graphs); but I am not aware of nice definitions/applications. In particular, I do not know what wavelets are in these contexts.

20 May, 2007 at 7:28 am

Terence Tao: Dear Gil,

Good questions! I may need several responses to answer all of them.

Coifman actually has a nice principle connecting pure and applied analysis: given any operator T, there is a positive correlation between T being tractable to estimate in pure analysis, and T having an efficiently computable representation for applied analysis purposes. The point is that the representations which allow T to be computed quickly, tend to also be the representations which allow one to analyse T efficiently, and vice versa.

For instance, suppose one wanted to compute the discrete Hilbert transform Hf of a function f on an N point set (e.g. the cyclic group Z/NZ). A naive computation of Hf would take O(N^2) computations, but by using either the Fourier representation or the wavelet representation (and using the FFT or FWT) one cuts this down to O(N log N).
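As a toy illustration of the multiplier representation behind this speedup, here is a sketch of mine (not from the original comment), using a naive $O(N^2)$ DFT for readability where an FFT would give the $O(N \log N)$ count:

```python
import cmath

def dft(f):
    """Naive DFT on Z/NZ: O(N^2) here; an FFT computes the same in O(N log N)."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))
            for k in range(N)]

def idft(F):
    """Inverse DFT on Z/NZ."""
    N = len(F)
    return [sum(F[k] * cmath.exp(2j * cmath.pi * k * x / N) for k in range(N)) / N
            for x in range(N)]

def hilbert(f):
    """Discrete Hilbert transform as the Fourier multiplier -i*sgn(k),
    with frequencies k identified with signed representatives in (-N/2, N/2]."""
    N = len(f)
    F = dft(f)
    def sgn(k):
        kk = k if k <= N // 2 else k - N  # signed representative of k mod N
        return (kk > 0) - (kk < 0)
    return idft([-1j * sgn(k) * F[k] for k in range(N)])

# A mean-zero test function on Z/7Z (odd N avoids the Nyquist frequency):
f = [1.0, 2.0, -1.0, 0.5, -3.0, 1.5, -1.0]
Hf = hilbert(f)
# The multiplier is unimodular away from frequency zero, so H barely changes
# Fourier magnitudes; in particular H(Hf) = -f on mean-zero functions.
HHf = hilbert(Hf)
assert all(abs(HHf[x] + f[x]) < 1e-9 for x in range(7))
```

The point of the representation is that `hilbert` costs only two transforms plus a diagonal multiplication, so replacing `dft`/`idft` by an FFT yields the O(N log N) algorithm.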

For the bilinear Hilbert transform B, discretised to Z/NZ, I don’t know of a way to compute B(f,g) for given f,g which is any faster than O(N^2) steps. However, given f, g, h, one can compute the inner product $\langle B(f,g), h \rangle$ in something like O( N log N ) steps by using wave packets; roughly speaking, there is a representation of the form

$\displaystyle \langle B(f,g), h \rangle = \sum_P \frac{1}{|I_P|^{1/2}} \langle f, \phi_P^1 \rangle \langle g, \phi_P^2 \rangle \langle h, \phi_P^3 \rangle,$

where P ranges over all Heisenberg tiles (dyadic rectangles in phase space with area $1$), $|I_P|$ is the spatial width of P, and $\phi_P^1, \phi_P^2, \phi_P^3$ are ($L^2$-normalised) wave packets which are essentially localised in phase space to P. There are about O(N log N) of these tiles, and the above inner products can all be computed by an FFT-like algorithm in O(N log N) time. (The FFT performs O(N log N) computations but only retains N of the numbers computed as the Fourier transform. If instead one saves all of the O(N log N) numbers that one computes, one essentially obtains the wave packet transform.) In accordance with Coifman’s principle, the above representation is precisely what Lacey and Thiele need to control the bilinear Hilbert transform properly.

No such efficient representation (better than O(N^2)) is known for the trilinear Hilbert transform; finding a fast representation here may be the key to this problem.

There is a “finite field”, “single scale” model which might be more tractable for this latter problem. Let G be a finite group (whose order is coprime to 2 and 3). If one is given three functions $f, g, h: G \to \mathbb{C}$ and one wants to compute (exactly) the expression

$\displaystyle \sum_{x, t \in G} f(x)\, g(x+t)\, h(x+2t),$

then by using the FFT one can achieve this in O( |G| log |G| ) computations. But what if one is given four functions $f, g, h, k: G \to \mathbb{C}$ and wants to compute

$\displaystyle \sum_{x, t \in G} f(x)\, g(x+t)\, h(x+2t)\, k(x+3t)$ ?

I know of no way to compute this in fewer than $O(|G|^2)$ computations, even for nice groups G such as $\mathbb{Z}/N\mathbb{Z}$. A faster algorithm to compute this expression exactly may be very useful to the trilinear Hilbert transform problem, by suggesting the “correct” way to represent that transform.
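To see concretely why the three-function expression is FFT-friendly, here is an illustration of mine (with a naive DFT standing in for the FFT): expanding in Fourier, the trilinear form collapses to a single frequency sum, which is exactly what fails to happen for four functions.

```python
import cmath

def dft(f):
    """Unnormalised DFT on Z/NZ; naive O(N^2) here, O(N log N) with an FFT."""
    N = len(f)
    return [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N))
            for k in range(N)]

def trilinear_form_naive(f, g, h):
    """Direct O(N^2) evaluation of sum_{x,t} f(x) g(x+t) h(x+2t)."""
    N = len(f)
    return sum(f[x] * g[(x + t) % N] * h[(x + 2 * t) % N]
               for x in range(N) for t in range(N))

def trilinear_form_fourier(f, g, h):
    """Fourier-side evaluation: (1/N) * sum_k F(k) G(-2k) H(k).
    Only one free frequency survives, so this is a single O(N) sum
    after three transforms."""
    N = len(f)
    F, G, H = dft(f), dft(g), dft(h)
    return sum(F[k] * G[(-2 * k) % N] * H[k] for k in range(N)) / N

# Test on Z/5Z (order coprime to 2 and 3, as in the comment above):
f = [1.0, -2.0, 0.5, 3.0, -1.0]
g = [2.0, 0.0, 1.0, -1.0, 4.0]
h = [0.5, 1.5, -2.0, 1.0, 0.0]
assert abs(trilinear_form_naive(f, g, h) - trilinear_form_fourier(f, g, h)) < 1e-9
```

For four functions the analogous expansion leaves two free frequencies coupled together, and no comparable collapse (hence no known fast algorithm).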

There is an analogous problem for the Gowers norms; the FFT allows one to compute the $U^2$ norm of f in O( |G| log |G| ) steps, but the fastest I can compute the $U^3$ norm in is $O( |G|^2 \log |G| )$ steps (by using the recursive formula connecting each Gowers norm to its predecessor). Is there a faster way?
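The $U^2$ identity underlying the FFT computation can be checked directly; the following is an illustrative sketch of mine (using normalised Fourier coefficients and a naive DFT on a small cyclic group):

```python
import cmath
from itertools import product

def u2_norm_4th_power(f):
    """||f||_{U^2}^4 = E_{x,h1,h2} f(x) conj(f(x+h1)) conj(f(x+h2)) f(x+h1+h2),
    evaluated directly from the definition (O(N^3) work)."""
    N = len(f)
    total = sum(f[x]
                * f[(x + h1) % N].conjugate()
                * f[(x + h2) % N].conjugate()
                * f[(x + h1 + h2) % N]
                for x, h1, h2 in product(range(N), repeat=3))
    return total / N**3

def u2_via_fourier(f):
    """||f||_{U^2}^4 = sum_k |fhat(k)|^4, with fhat(k) = E_x f(x) e(-kx/N);
    with an FFT this side costs only O(N log N)."""
    N = len(f)
    fhat = [sum(f[x] * cmath.exp(-2j * cmath.pi * k * x / N) for x in range(N)) / N
            for k in range(N)]
    return sum(abs(c) ** 4 for c in fhat)

f = [complex(1, 0), complex(-1, 1), complex(0, 2), complex(2, -1), complex(1, 1)]
# The U^2 norm is an L^4 norm on the Fourier side:
assert abs(u2_norm_4th_power(f) - u2_via_fourier(f)) < 1e-9
```

No analogous Fourier collapse is known for the $U^3$ norm, which is what the question above is asking about.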

20 May, 2007 at 7:48 am

Terence Tao: The Lacey-Thiele proof of Carleson’s theorem can be presented in a graduate harmonic analysis class in a handful of lectures. It is “simple” modulo knowing all the modern machinery of harmonic analysis (Calderon-Zygmund theory, Littlewood-Paley theory, interpolation theory, and the uncertainty principle). Things are a little simpler in a “finite field” model, in which the real line is replaced by the Walsh-Cantor group. See for instance my lecture notes on these topics.

Wavelets appear in harmonic analysis due to the role of scale, which is largely absent in additive combinatorics, and in particular in Szemeredi’s theorem. In Szemeredi’s theorem there is essentially no distinction made between an arithmetic progression of small step and an arithmetic progression of large step, thus there is no need to treat these two scales separately (and indeed, since there are so many more large steps than small steps, the net contribution of the small steps is negligible compared to the large). But in things like the Hilbert transforms, the presence of the 1/t factor in the integral causes the small scales to be elevated to be of “equal strength” to the large scales, and so they have to be treated separately. Wavelets are a good way to separate the fine-scale and coarse-scale behaviours from each other. Wave packets are generalisations of wavelets which also capture frequency information; indeed the (overdetermined) wave packet basis contains both the wavelet basis and the Fourier basis as subsets. So, in summary, I would only expect wavelet-type tools to be useful in “multiscale” situations in which one needs to treat fine-scale and coarse-scale oscillations separately.

As to the “easy” proof of the boundedness of the Hilbert transform using wavelets, this is a little trickier to answer; again, this result is easy only modulo a certain amount of standard wavelet theory. Roughly speaking, the main tool is the Littlewood-Paley inequality for wavelets

$\displaystyle \|f\|_{L^p} \sim_p \Big\| \Big( \sum_I \frac{|\langle f, \psi_I \rangle|^2}{|I|} 1_I \Big)^{1/2} \Big\|_{L^p},$

which asserts in particular that the $L^p$ norm of a function f is controlled by the magnitude of the wavelet coefficients $\langle f, \psi_I \rangle$, but is insensitive to the phase. In the wavelet basis, the Hilbert transform is almost a diagonal operator; it alters the phase of the wavelet coefficients, and also mixes some nearby wavelet coefficients together, but does not significantly affect the magnitude of these coefficients. Making this precise, one can easily deduce the $L^p$ boundedness of the Hilbert transform (and many other operators) from the above inequality (plus some other useful tools, such as the Fefferman-Stein maximal inequality). The proof of the Littlewood-Paley inequality is not too difficult, but is basically a fancy version of the standard arguments used to establish boundedness of the Hilbert transform. Thus if boundedness of the Hilbert transform were your only goal, this would not be the most efficient way to establish it; but if one were interested in treating a large class of operators systematically by a single theory, this is a good approach.

20 May, 2007 at 10:08 pm

Gil Kalai: Thanks, Terry

You wrote: “In Szemeredi’s theorem there is essentially no distinction made between an arithmetic progression of small step and an arithmetic progression of large step, thus there is no need to treat these two scales separately (and indeed, since there are so many more large steps than small steps, the net contribution of the small steps is negligible compared to the large).”

Is what you wrote a characteristic of Szemeredi’s theorem or rather of its proofs? (We can talk just about Roth’s theorem or the analogous problem of bounds for cap sets.) We would be happy to find a 3-term AP of any step; it is for the analysis in the present proofs that APs with small steps are negligible. Couldn’t it give an advantage to consider kernels which introduce delicate multi-scale situations?

21 May, 2007 at 8:40 am

Terence Tao: Dear Gil,

This is possible, but I think it is unlikely (at least if one uses “naive” notions of scale) because it doesn’t seem to be compatible with the symmetries of the problem. For instance, in Z/pZ, the statement of Szemeredi’s theorem is invariant under any dilation by an invertible element of the field Z/pZ. Thus, there is no reason to give small steps (i.e. steps r in an interval such as {1,…,R}) any more privileged role than steps in a dilated interval such as {a, 2a, …, Ra}. This observation was used by Varnavides to show that once Szemeredi’s theorem is proven for a small interval, it automatically holds for a large interval as well, and furthermore provides a positive density family of arithmetic progressions in that larger interval.

That said, though, there is certainly a lot of scope for utilising *adaptive* notions of scale, tailored to the set or function that one is seeking arithmetic progressions inside. Indeed, the known proofs of Roth and Szemeredi all do this in one way or another. For instance, to find arithmetic progressions of length 3 inside a set A in Z/pZ, one can first look for a large Fourier coefficient $\widehat{1_A}(\xi)$. If there is no such large Fourier coefficient, then we “win”; if instead we find a frequency $\xi$ where things are large, we can use that frequency to determine a “scale” (we can give each element x in Z/pZ a “norm”, defined as the distance of $x \xi / p$ to the nearest integer). We can then pass to a smaller scale (i.e. to a Bohr set generated by $\xi$), and this is the heart of the original “density-incremented” proof of Roth’s theorem. Or we can then go and hunt for more types of scales to capture as much of the global structure of A as one can, until we have so much control that it becomes “obvious” that there are lots of progressions; this brings us closer to the more “ergodic” or “energy-incremented” proofs of Roth’s theorem, or to the hybrid energy+density increment proofs of Heath-Brown and Szemeredi.

25 July, 2007 at 5:38 am

cowgonemadDear Terry,

I am wondering. Is there some analog of the bilinear Hilbert transform on the unit circle? I know that the theory of the linear Hilbert transform works just fine there … If one would take a function on the unit circle as a periodic one on , one would have that these are no longer in any , which is also kind of bad.

Best,

Helge

25 July, 2007 at 8:42 am

Terence Tao: Dear Helge,

There is a general transference principle of Marcinkiewicz and Zygmund which allows one to equate estimates on the real line which enjoy certain scale invariance and translation invariance properties, with their counterparts on the unit circle. For instance, there is a natural analogue of the estimates for the bilinear Hilbert transform on the unit circle (basically, replace $\mathbb{R}$ by $\mathbb{R}/\mathbb{Z}$ throughout, identifying the torus with $[0,1)$ for the purposes of defining what $x+t$ and $x+2t$ mean). A simple scaling argument shows that if such an estimate is true on the unit circle, then it is also true on the circle of length L (with appropriate normalisations). Letting L go to infinity, we can recover the estimate on the real line from the estimate on the circle.

In the converse direction, to obtain the circle estimate from the real line estimate, we take a function on the unit circle and extend it periodically. As you observe, the extended function is no longer in any finite $L^p$ class; however, we can truncate this function to a large finite interval $[-R, R]$ (either using a sharp truncation or a smooth one) and see what the estimate for the real line tells us about the corresponding situation on the circle. There will be an error term arising from the boundary of the interval $[-R, R]$; but one can arrange matters so that this error becomes acceptable in the limit as R goes to infinity.

There is a general principle in harmonic analysis that the unit circle is very much like the real line, except that all the coarse scales (with wavelength greater than 1) have been removed and one only has the fine scales. (Conversely, with the integers the fine scales have been removed and one only has the coarse scales.) It may seem like removing half the scales is an improvement, but the scale invariance shows that in fact you don’t actually gain very much at all; half of infinity is still infinity. In fact it turns out to be more convenient to work in the context of the real line, as then one has an exact scale-invariance symmetry which is often worth exploiting.

4 November, 2008 at 10:19 am

Anonymous: I know it might be a bit late to be asking questions, but this discussion leaves me with the following natural question:

Is there an example known of a multilinear operator (given, say, by convolution with a distribution) which we’ll call M(f_1,…f_n), where say M(C,f_2,..,f_n) is known to be a bounded (n-1)-linear operator (on some appropriate spaces), but the “anticipated” multilinear bounds fail for M?

23 February, 2010 at 9:50 am

Håkan Hedenmalm: I guess the results of Lacey and Thiele have analogs in more general settings. For instance, what if we consider $\mathrm{p.v.} \int_{\mathbb{R}} f(R(x,t))\, g(S(x,t))\, dt/t$ where R(x,t) and S(x,t) are two Lipschitzian functions satisfying a certain nontriviality assumption (so as to avoid degenerate repetitions such as R(x,t) and S(x,t) both equal to x+t)?

To push in the same direction, think of the Lacey-Thiele theorem as the statement that the integral kernel g(x+t)/(x-t) maps a Lebesgue space L^r to another Lebesgue space L^s, provided g belongs to a given Lebesgue space L^q (the values of r,s,q are related as described by the theorem). What is special about the form g(x+t) here? What would we need to know about a function g(x,t) to say the same thing about the integral kernel g(x,t)/(x-t)?

23 February, 2010 at 10:11 am

Terence Tao: There is some recent work in this direction, for instance see this paper of Xiaochun Li:

http://arxiv.org/abs/0805.0107

but I think it is fair to say that more work needs to be done before one has a good picture here. Typically what happens in this business is that nonlinear averages behave like their linear counterparts at small scales, but can in fact be smoother than their linear counterparts at large scales due to curvature effects. As a consequence, sometimes one can deduce the nonlinear case relatively cheaply from the linear one. But in other situations the genuinely nonlinear portion presents completely new difficulties that need to be tackled by different arguments.

17 November, 2012 at 9:34 am

A Few Mathematical Snapshots from India (ICM2010) | Combinatorics and more: [...] Carleson’s famous theorem asserts that the Fourier expansion of a function in $L^2$ converges to the function almost everywhere. High dimensional analogs of the question itself or of some basic ingredients in the argument are very important and very hard. See here. [...]

27 March, 2014 at 3:36 am

francescodiplinio: Beforehand, I apologize for bumping up such an old topic, but I thought it might be worthwhile to mention some recent progress on the bilinear Hilbert transform at the endpoint r=2/3, which, as pointed out by Terence in the main post, could be looked at as a model for the harder trilinear operator. Again, as mentioned above, the technical connection stems from the “logarithmic failure” of the known time-frequency synthesis (i.e. summing up the contributions of the forests) techniques for both problems.

Christoph Thiele and I have recently posted on arXiv a joint paper http://arxiv.org/abs/1403.5978, addressing several endpoint questions for the bilinear Hilbert transform. Perhaps the most relevant result is that the restricted weak type estimate at the endpoint is true up to a doubly logarithmic correction, improving on the single logarithmic one which follows by the previously known methods (and which was observed by D. Bilyk and L. Grafakos, http://www.ams.org/mathscinet-getitem?mr=2271944). The same holds for any Hölder tuple on the corresponding open boundary segment of exponents.

In order to keep using the strength of the methods as much as possible on this segment, we lift the subindicator restriction for the first argument, say, and we seek a Calderón-Zygmund decomposition of that function into a good portion and a bad portion enjoying additional localization. Due to the modulation invariance properties of BHT, the Calderón-Zygmund decomposition has to respect a number of frequencies, as does the multi-frequency Calderón-Zygmund decomposition (MFCZ) developed by Nazarov, Oberlin and Thiele in http://arxiv.org/abs/0912.3010.

The bad portion of the MFCZ is the sum of functions localized to intervals $I$ and having mean zero with respect to a number of bad frequencies relevant on the interval $I$. The main issue lies with estimating the interaction of this bad function with wave packets which are frequency localized in a compact interval near a bad frequency, and which are spatially localized away from, but not too far away from, $I$. To make this interaction sufficiently small for our needs we work on the one hand with wave packets which have better than mere Schwartz function decay. We use an optimal almost exponential decay following a construction by Ingham: this seems to have no precedent in the context of time-frequency analysis. To utilize this decay we have to prepare the bad function of the MFCZ to have mean zero not only against the dominant bad frequency, but also against a large number of approximately equidistant frequencies in the vicinity of the dominating one. The price of all this is the appearance of the doubly logarithmic correction in our main restricted weak type estimate (and of some other corrective terms in the other estimates). These corrections become less and less costly as the exponents move away from the endpoint.

To discern whether these corrections are structural or simply a byproduct of our techniques, it might be telling to mention that no corrections are necessary for the Walsh model of the bilinear Hilbert transform, the so-called quartile operator, which in a previous article of mine joint with Ciprian Demeter http://arxiv.org/abs/1206.3798 is shown to obey the restricted weak type estimate above without any correction term. In fact, one of the functions can even be taken to be unrestricted.

The catch therein is that the MFCZ decomposition is obtained via a straightforward projection argument, exploiting perfect localization of the Walsh wave packets to obtain a "bad part" whose contribution to the operator is identically zero.

A brief comment on the relevance of our techniques in relation to THT. A direct application to THT seems unlikely to succeed: again, it is telling that our methods are robust enough (or not refined enough) to work regardless of the insertion of bounded coefficients, which, as mentioned in the main post, would make THT unbounded. On the other hand, one could check, for instance, whether a similar CZ idea would lead to an improvement in the range of boundedness for Christ’s single scale THT in the rational case; see the article mentioned in the main post.

I apologize for the quite lengthy comment, and thank Terence very much for hosting,

Francesco Di Plinio

https://www.researchgate.net/profile/Francesco_Di_Plinio/?ev=hdr_xprf