The idea rests on two insights:

i) All the operators I’ve looked at so far (except those corresponding to the eigensets containing 0, 1 and 2, which contain the 0 –> 0 and 1 –> 2 –> 1 cycles) have the following property: in the eigen-equation T(2^a k + c) = 3^(a-b) k + T(c)

– either 3^(a-b) < 2^a and T(c) < c … an “underwater” operator

– or 3^(a-b) > 2^a and T(c) > c

i.e. the result is either always less than the argument, or always greater, for all k.

I am only interested in “branches”, defined as orbits that stop once you hit a lower number than the one you started from. (The rationale here is that if you’re lower than you started, you’ve “joined the tree” — the rest is a problem you’ve dealt with earlier in your search.)

ii) One can use the approach from Q2 in my previous post to grow the operator eigen-equations indefinitely. At each iteration one discards any operators that are “underwater”, and the rest are bifurcated via k –> 2k and k –> 2k + 1 and then grown by one operation.
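In Python, this growth-and-discard loop could be sketched as follows (the names are mine; the “underwater” test keeps an operator only when its RHS k-coefficient exceeds the LHS one, in line with insight (i)):

```python
def children(A, C, P, Q):
    """Bifurcate the eigen-equation T(A*k + C) = P*k + Q via k -> 2k and
    k -> 2k + 1, then grow each branch by one operation (h or g)."""
    for dC, dQ in ((0, 0), (A, P)):        # k -> 2k   and   k -> 2k + 1
        a, c, p, q = 2 * A, C + dC, 2 * P, Q + dQ
        if q % 2 == 0:                      # RHS even for all k: apply h
            yield a, c, p // 2, q // 2
        else:                               # RHS odd for all k: apply g
            yield a, c, 3 * p // 2, (3 * q + 1) // 2

level = [(1, 0, 1, 0)]                      # order 0: k = k
for _ in range(3):
    level = [eq for parent in level for eq in children(*parent)
             if eq[2] > eq[0]]              # discard "underwater" operators
print(level)  # -> [(8, 3, 9, 4), (8, 7, 27, 26)]: hgg and ggg survive to order 3
```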

5. Quickly regarding cycles. A cycle is T(n) = n.

If I remember correctly, it’s elementary number theory that prime decompositions are unique, so you can’t have 2^a = 3^(a-b) (except for a = b = 0), and the only options are 2^a > 3^(a-b) and 2^a < 3^(a-b).

So, writing the cycle condition T(2^a k + c) = 2^a k + c as (2^a – 3^(a-b)) k = T(c) – c, the only options for cycles are:

i) 2^a > 3^(a-b) and T(c) > c

ii) 2^a < 3^(a-b) and T(c) < c

It can be shown fairly easily that (ii) can’t happen — I will post the proof tomorrow. I’ve got a bit to say about (i) as well, but that’s clearly the place where the really really hard problem is buried. I recognize things from Terry Tao’s initial post.

@MatjazG – sorry I haven’t read your posts in detail, will do so tomorrow and will respond.

---

---

Writing n + 1 = 2^a m, where m is odd, the result of applying g^a to n can be simply written as:

g^a(n) = (3^a) m – 1

P.S. Solving the system n + 1 = 2^a m is really easy, as detailed in a previous post. Explicitly:

1.) a = (number of trailing binary zeros in n + 1)

2.) m = (n + 1 without the trailing binary zeros)

3.) g^a(n) = (3^a) * (n + 1 without the trailing binary zeros) – 1 = (3^a) m – 1

P.P.S. I must look over your posts in more detail as well, Martin. Will most likely post more tomorrow.

---

and the trivial cycle is simply a fixed point: 1 –> 1. I believe this to be a most economical way of writing the (iterated) Collatz function.

Explicitly, one step of this map is equivalent to h^b g^a, i.e. it sends an odd n to the odd number m’ = (3^a m – 1) / 2^b, where a, b, m, m’ are uniquely defined for a given odd n by (here m and m’ are odd integers):

n + 1 = 2^a m and 3^a m – 1 = 2^b m’

Or in other words, you can look at the occurrences of sequences h…hg…g in the iterated application of g and h to n and replace each such sequence with a single application of the formula above.

---

This function is just g iterated a times on odd numbers of the form 2^a m – 1, and h iterated a times on even numbers of the form 2^a m (m odd in both cases). The function maps odd numbers to even ones and vice versa:

2^a m – 1 –> 3^a m – 1 and 2^a m –> m

and the trivial cycle of this function is again the same as for g and h, namely: 1 –> 2 –> 1.

It can also be quite efficiently implemented in a binary encoding. Explicitly:

1.) If n is even: delete the trailing zeros in its binary encoding.

2.) If n is odd: add 1 to it, delete the a trailing zeros from the resulting binary encoding, multiply the resulting number m by 3^a (3m = 2m + m: append a trailing zero to m and add this to m; repeat a times), and finally subtract 1 from it.

Whether this is useful I do not know, but I do like the fact that this compressed map always exchanges odd and even numbers … :)
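In Python’s bit operations, one step of this compressed map could be sketched as (the function name is mine):

```python
def compressed(n):
    """One step of the compressed map, for n >= 1, in binary terms."""
    if n % 2 == 0:
        a = (n & -n).bit_length() - 1   # number of trailing binary zeros
        return n >> a                   # delete them (repeated halving)
    m = n + 1                           # odd case: add 1 ...
    a = (m & -m).bit_length() - 1
    return (m >> a) * 3**a - 1          # ... strip a zeros, times 3^a, minus 1

print([compressed(n) for n in (1, 2, 7, 12)])  # -> [2, 1, 26, 3]
```

Note that the trivial cycle shows up directly: compressed(1) = 2 and compressed(2) = 1.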

---

and the last one is: g^a(2^a k + 2^a – 1) = 3^a k + 3^a – 1.

The 1 and 2 eigensets are as follows:

– for even orders a: (hg)^(a/2) (2^a k + 1) = 3^(a/2) k + 1 and (gh)^(a/2) (2^a k + 2) = 3^(a/2) k + 2

– whereas for odd orders a: g(hg)^((a-1)/2) (2^a k + 1) = 3^((a+1)/2) k + 2 and h(gh)^((a-1)/2) (2^a k + 2) = 3^((a-1)/2) k + 1

I have a bit more to say on operators, orbit length and the possibility of cycles, but need to go and earn a living. Hopefully more over the weekend!

---

**Q1: Given an eigenset, how do you find its corresponding operator?**

This is fairly easy: in order to find the operator associated with eigenset 2^a k + c, you simply follow the orbit of *c* for *a* steps and note the sequence of g and h operations. For example, for 8k+6 above, we first halve 6 to 3, then g from 3 to 5, and then g again from 5 to 8. Hence, the operator associated with the eigenset 8k+6 is ggh, and the RHS constant is ggh(6)=8.
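This recipe is mechanical, so here is a small Python sketch of it (the helper name is mine):

```python
def operator_of(c, a):
    """Follow the orbit of c for a steps; the parity at each step
    decides between h (n -> n/2) and g (n -> (3n + 1)/2)."""
    word = []
    for _ in range(a):
        if c % 2 == 0:
            word.append('h')
            c //= 2
        else:
            word.append('g')
            c = (3 * c + 1) // 2
    # the rightmost letter acts first, so reverse to match the ggh notation
    return ''.join(reversed(word)), c

print(operator_of(6, 3))  # -> ('ggh', 8): the operator is ggh, and ggh(6) = 8
```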

**Q2: Given an operator, how do you find its eigenset?**

Going the other way is a bit harder. The method I use generalizes the approach from my earlier post. It starts from the equation

k = k

and uses one of these two substitutions at each step: either

k –> 2k

or

k –> 2k + 1.

For example for the operator ggh, we start with

k = k

and since we want to apply h, we need it to be even (at the moment it could be either). So we apply k –> 2k, giving

2k = 2k

Now the RHS expression is clearly even, so we can apply h:

h(2k) = h(2k)

and simplifying the h on the RHS gives us the first-order eigen-equation

h(2k) = k .

Next we need an odd number for g, so looking at the RHS, we see that in this case we need to apply k –> 2k + 1:

h(2(2k+1)) = 2k + 1

The RHS is now odd, so we can apply g:

gh(4k + 2) = g(2k + 1)

Working out the g() on the RHS then gives us the second-order eigen-equation

gh(4k + 2) = 3k + 2 .

Next we need the RHS to be odd for the final g, and we once again achieve this by applying k –> 2k + 1:

gh( 4(2k+1) + 2 ) = 3(2k+1) + 2

i.e.

gh(8k + 6) = 6k + 5

The RHS is now odd for any k, so we can apply g once again:

ggh(8k + 6) = g(6k + 5)

and working out the RHS gives us the eigen-equation for our chosen third-order operator ggh:

ggh(8k + 6) = 9k + 8 .

The constant next to k on the RHS is always a power of 3, i.e. odd. So which substitution is applied at each step depends on the final number on the RHS and whether we need the result to be odd (for g) or even (for h). In the above example the relevant numbers were all even (0, 0 and 2, respectively). Had one of them been odd, we would have used the other transformation instead, e.g. if we wanted to turn

3k + 3

into an even expression, we would use k –> 2k + 1, which gives us

3(2k+1) + 3 = 6k + 6.
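The whole substitution procedure can be automated; here is a Python sketch (the names are mine) that follows exactly the steps above:

```python
def eigen_equation(word):
    """For an operator word like 'ggh' (rightmost letter applied first),
    return (A, C, P, Q) such that word(A*k + C) = P*k + Q for all k >= 0."""
    A, C = 1, 0  # left-hand side argument: A*k + C (starts as k)
    P, Q = 1, 0  # right-hand side value:   P*k + Q (starts as k)
    for op in reversed(word):           # rightmost letter acts first
        want_odd = (op == 'g')          # g needs an odd argument, h an even one
        if (Q % 2 == 1) != want_odd:    # wrong parity: substitute k -> 2k + 1
            C, Q = C + A, Q + P
        A, P = 2 * A, 2 * P             # both substitutions double k's coefficient
        if op == 'h':                   # h: n -> n/2
            P, Q = P // 2, Q // 2
        else:                           # g: n -> (3n + 1)/2
            P, Q = 3 * P // 2, (3 * Q + 1) // 2
    return A, C, P, Q

print(eigen_equation('ggh'))  # -> (8, 6, 9, 8), i.e. ggh(8k + 6) = 9k + 8
```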

---

I believe I have understood the reason for the periodicity shown in my earlier comments, but I’ve been too busy to post.

If we call the two operations

g: n –> (3n+1)/2

h: n –> n/2 … “halve”

(as before) and then compose the operations into operations like ggh,

then these “operators” have something that, for lack of a better word, I call eigensets.

I will start by writing out a few interesting equations, and then explain the meaning (to the extent that I’ve understood it) later.

hhh(8k + 0) = 1k + 0

ghg(8k + 1) = 9k + 2

hgh(8k + 2) = 3k + 1

hgg(8k + 3) = 9k + 4

ghh(8k + 4) = 3k + 2

hhg(8k + 5) = 3k + 2

ggh(8k + 6) = 9k + 8

ggg(8k + 7) = 27k + 26

The sets in the brackets on the left-hand side are what I call “eigensets” of the operator: they contain exactly the integers for which the operator returns an integer (as opposed to a fraction).

We see that there are exactly 2^3 = 8 operators of order 3, and their eigensets span the integers.

This explains the periodicity observed in my earlier comments.
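This spanning property is easy to spot-check in Python (a small sketch; the helper name is mine):

```python
def word(n, a=3):
    """Record which of g/h applies at each of the first a steps of n's orbit."""
    w = []
    for _ in range(a):
        if n % 2:
            w.append('g')
            n = (3 * n + 1) // 2
        else:
            w.append('h')
            n //= 2
    return ''.join(reversed(w))        # rightmost letter acts first

# the order-3 operator word depends only on n mod 8 ...
assert all(word(c) == word(c + 8) == word(c + 80) for c in range(8))
# ... and all 2^3 = 8 operator words occur, so the eigensets span the integers
assert len({word(c) for c in range(8)}) == 8
```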

The general form of the eigen-equation is as follows:

T(2^a k + c) = 3^(a-b) k + T(c)

where the operator T consists of a sequence of a total of *a* operators, of which *(a-b)* are g-operators and *b* are h-operators.

Here I’ve used the fact that

g(2m + n) = 3m + g(n) for odd n

or more generally

g(2^j m + n) = 3 * 2^(j-1) m + g(n)

and since a similar relation holds for the h-operator as well, namely h(2^j m + n) = 2^(j-1) m + h(n), I get the general form above.

Now there are two questions:

Q1: Given an eigenset, how do you find its corresponding operator?

Q2: Given an operator, how do you find its eigenset?

Rest in the next post.

---

g^j(2^L m – 1) = 3^j * 2^(L-j) m – 1, which is odd for j < L and even for j = L. All numbers of the form 2^L m – 1 for odd m thus have orbits of length at least L.

(The above formula can be easily checked by writing 2^L m – 1 as 2(2^(L-1) m – 1) + 1 and applying g(2s + 1) = 3s + 2 repeatedly.)

The length of the orbit is actually a bit longer, if we count the subsequent halvings of even numbers with h as steps (we usually do). After L iterations we arrive at:

3^L m – 1

after which we would need at least about L * log2(3) halvings to reach 1 as fast as possible. The actual orbit is thus at least roughly L (1 + log2 3) ≈ 2.58 L steps long, assuming we don’t end up in a non-trivial cycle (in which case the Collatz conjecture would be false).
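A quick Python sketch to spot-check the formula (using g: n –> (3n+1)/2 as before):

```python
def g(n):
    return (3 * n + 1) // 2

# g^j(2^L * m - 1) = 3^j * 2^(L-j) * m - 1: odd for j < L, even at j = L
for L in range(1, 9):
    for m in (1, 3, 5, 7):
        n = 2**L * m - 1
        for j in range(L):
            assert n % 2 == 1                      # still odd: g applies again
            n = g(n)
            assert n == 3**(j + 1) * 2**(L - j - 1) * m - 1
        assert n % 2 == 0                          # after L steps: 3^L * m - 1, even
```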

---

g^n{N(L)} = 3^n * 2^(L-n) – 1, which is odd for n < L and even for n = L, where of course 0 <= n <= L.

In particular this means that all numbers of the form 2^L – 1 have orbits of length at least L, for any L.

P.S.

Of course it holds that g^(n+1){N(L)} > g^n{N(L)} for n < L, which means that the first L elements of the orbit of N(L) (from 2^L – 1 to 2 * 3^(L-1) – 1) actually form a monotonically increasing sequence of odd numbers.
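A quick Python check of this monotonicity claim (a sketch, with g: n –> (3n+1)/2 as defined below):

```python
def g(n):
    return (3 * n + 1) // 2

# the first L orbit elements of N(L) = 2^L - 1 are strictly increasing odd numbers
for L in range(1, 12):
    orbit = [2**L - 1]
    for _ in range(L - 1):
        orbit.append(g(orbit[-1]))
    assert all(x % 2 == 1 for x in orbit)
    assert all(x < y for x, y in zip(orbit, orbit[1:]))
```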

If we call the two transformations g and h,

g: n –> (3n+1)/2

h: n –> n/2 … the “h” is short for “halve”

and then define g^k to have the obvious meaning:

g^k (n) := g{ g^(k-1) (n) }

i.e. k successive applications of g, then I believe that

N(L) := (2^L) – 1

is obviously odd, and more importantly will remain odd under each of (L-1) successive applications of g.

For the record, the result of L applications of g will be:

g^L {N(L)} = (3^L) – 1

Unless I messed up somewhere, this relation can be derived by iteratively repeating the following steps:

1) Apply the substitution:

k –> 2k + 1

to the equation derived during the previous iteration, or to the following equation in the first iteration:

k = k.

In the first iteration this simply results in:

2k + 1 = 2k + 1.

In general, assuming that we are in iteration n+1 and the equation resulting from iteration n was:

g^n { (2^n)k + (2^n) – 1 } = (3^n)k + (3^n) – 1

this step will give:

g^n { (2^(n+1))k + (2^(n+1)) – 1 } = 2*(3^n)k + 2*(3^n) – 1 .

2) Note that following the substitution, the RHS is now clearly odd, so we can generate the next number in the orbit by applying g to both sides. In the first iteration this results in:

g(2k+1) = 3k + 2

which just happens to be of the form noted above. In general, applying g to the RHS from step 1 gives:

g{ 2*(3^n)k + 2*(3^n) – 1 } = (3^(n+1))k + (3^(n+1)) – 1

and the LHS is simply:

g^(n+1) { (2^(n+1))k + (2^(n+1)) – 1 }

This is precisely the equation for n+1.

As the final step in the proof, we may as well set k=0 to simplify the result.
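The derived relation is easy to confirm numerically; a small Python sketch:

```python
def g(n):
    return (3 * n + 1) // 2

def g_iter(n, times):
    for _ in range(times):
        assert n % 2 == 1, "g should only be applied to odd numbers"
        n = g(n)
    return n

# g^n{ (2^n)k + (2^n) - 1 } = (3^n)k + (3^n) - 1
for n in range(1, 10):
    for k in range(6):
        assert g_iter(2**n * k + 2**n - 1, n) == 3**n * k + 3**n - 1

# and the k = 0 special case: g^L{N(L)} = 3^L - 1
assert g_iter(2**7 - 1, 7) == 3**7 - 1
```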
