[This is a (lightly edited) repost of an old blog post of mine, which had attracted over 400 comments, and as such was becoming difficult to load; I request that people wishing to comment on that puzzle use this fresh post instead. -T]
This is one of my favorite logic puzzles, because of the presence of two highly plausible, but contradictory, solutions to the puzzle. Resolving this apparent contradiction requires very clear thinking about the nature of knowledge; but I won’t spoil the resolution here, and will simply describe the logic puzzle and its two putative solutions. (Readers, though, are welcome to discuss solutions in the comments.)
— The logic puzzle —
There is an island upon which a tribe resides. The tribe consists of 1000 people, with various eye colors. Yet, their religion forbids them to know their own eye color, or even to discuss the topic; thus, each resident can (and does) see the eye colors of all other residents, but has no way of discovering his or her own (there are no reflective surfaces). If a tribesperson does discover his or her own eye color, then their religion compels them to commit ritual suicide at noon the following day in the village square for all to witness. All the tribespeople are highly logical and devout, and they all know that each other is also highly logical and devout (and they all know that they all know that each other is highly logical and devout, and so forth).
Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of them have brown eyes, although the islanders are not initially aware of these statistics (each of them can of course only see 999 of the 1000 tribespeople).
One day, a blue-eyed foreigner visits the island and wins the complete trust of the tribe.
One evening, he addresses the entire tribe to thank them for their hospitality.
However, not knowing the customs, the foreigner makes the mistake of mentioning eye color in his address, remarking “how unusual it is to see another blue-eyed person like myself in this region of the world”.
What effect, if anything, does this faux pas have on the tribe?
Note 1: For the purposes of this logic puzzle, “highly logical” means that any conclusion that can be logically deduced from the information and observations available to an islander will automatically be known to that islander.
Note 2: Bear in mind that this is a logic puzzle, rather than a description of a real-world scenario. The puzzle is not to determine whether the scenario is plausible (indeed, it is extremely implausible) or whether one can find a legalistic loophole in the wording of the scenario that allows for some sort of degenerate solution; instead, the puzzle is to determine (holding to the spirit of the puzzle, and not just to the letter) which of the solutions given below (if any) are correct, and if one solution is valid, to correctly explain why the other solution is invalid. (One could also resolve the logic puzzle by showing that the assumptions of the puzzle are logically inconsistent or not well-defined. However, merely demonstrating that the assumptions of the puzzle are highly unlikely, as opposed to logically impossible to satisfy, is not sufficient to resolve the puzzle.)
Note 3: An essentially equivalent version of the logic puzzle is also given at the xkcd web site. Many other versions of this puzzle can be found in many places; I myself heard of the puzzle as a child, though I don’t recall the precise source.
Below the fold are the two putative solutions to the logic puzzle. If you have not seen the puzzle before, I recommend you try to solve it first before reading either solution.
— Solution 1 —
The foreigner has no effect, because his comments do not tell the tribe anything that they do not already know (everyone in the tribe can already see that there are several blue-eyed people in their tribe).
— Solution 2 —
100 days after the address, all the blue-eyed people commit suicide. This is proven as a special case of
Proposition. Suppose that the tribe had n blue-eyed people for some positive integer n. Then n days after the traveller’s address, all n blue-eyed people commit suicide.
Proof: We induct on n. When n=1, the single blue-eyed person realizes that the traveler is referring to him or her, and thus commits suicide on the next day. Now suppose inductively that n is larger than 1. Each blue-eyed person will reason as follows: “If I am not blue-eyed, then there will only be n-1 blue-eyed people on this island, and so they will all commit suicide n-1 days after the traveler’s address.” But when n-1 days pass, none of the blue-eyed people do so (because at that stage they have no evidence that they themselves are blue-eyed). After nobody commits suicide on the (n-1)st day, each of the blue-eyed people then realizes that they themselves must have blue eyes, and will then commit suicide on the nth day.
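For concreteness, the day-counting in the Proposition can be mimicked by a small simulation (a sketch of mine, not part of the original post, under the assumption that the address makes “at least one islander has blue eyes” common knowledge, and that each suicide-free noon raises that common lower bound by one):

```python
def suicide_day(n_blue):
    """Day on which all n_blue blue-eyed islanders commit suicide,
    counting the foreigner's address as day 0.

    Model (an assumption, mirroring Solution 2): each blue-eyed
    islander sees n_blue - 1 blue-eyed people; the address makes
    "at least 1 blue-eyed islander" common knowledge; every noon
    that passes without suicides raises that common lower bound by
    one; an islander who sees fewer blue-eyed people than the lower
    bound deduces their own eyes are blue, and dies at the next noon.
    """
    seen = n_blue - 1   # blue-eyed people each blue-eyed islander sees
    lower_bound = 1     # common knowledge after the address
    day = 1
    while lower_bound <= seen:
        # Noon of `day` passes quietly, so at least one more blue-eyed
        # islander must exist than the current common lower bound.
        lower_bound += 1
        day += 1
    return day

assert suicide_day(1) == 1      # the lone blue-eyed person dies at the next noon
assert suicide_day(100) == 100  # the puzzle's scenario
```

Whether this model is the right formalization of the islanders' knowledge is, of course, exactly what the two solutions dispute.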
As the above two solutions give contradictory conclusions, at most one of them is correct. Which one (if any) is the correct solution, and what is the precise reason that the other solution is invalid?

371 comments
7 April, 2011 at 10:08 am
Xamuel
Another variation of this is the Muddy Children Puzzle, which Fagin, Halpern, Moses and Vardi chose for the introduction or first chapter (can’t remember which) of their book “Reasoning About Knowledge”.
You forgot to mention one thing: you seem to be implicitly assuming the islanders aren’t deaf… oh, and they all know that they’re all not deaf, and so on… or at the very least, that the deaf ones can read lips…
7 April, 2011 at 10:23 am
Terence Tao
Yes, this is the type of “legalistic loophole” I mention in Note 2. With these sorts of logic puzzles, there is always the implied assumption of “idealised conditions” (no deafness, colour-blindness, or other physical disabilities, no pirates to come and randomly abduct half the population on Day 17, no unexpected breakdown of causality or other laws of physics, and so forth). It is a trivial matter to “break” the logic puzzle using non-idealised conditions, but this is not the most interesting aspect to the puzzle.
5 October, 2014 at 2:44 am
Christian Doscher
Solution 1 seems to suffice. To pursue #2 seems pathological! The individuals are not able to understand anything about themselves from the comment made. It seems clear no one could know anything new. Is this how math is used on Wall Street? Or how does this affect real-world situations? Meaning, what do we do with this info, in order to know if I should pursue this understanding of the #2 process? Math is not my field, but are there any schools of thought that don’t agree with #2’s validity, i.e. an argument against it that you’ve heard of by anyone that is considered valid too?
7 April, 2011 at 10:50 am
Nicole Krieger
Wouldn’t the brown-eyed people then kill themselves, too, once they saw that all the blue-eyed people were dead? Wouldn’t they also know that all the blue-eyed people would kill themselves, and so once they were gone wouldn’t they figure out they had brown eyes?
7 April, 2011 at 11:02 am
Amic
No, because each one could consider themselves to be the only one to have green eyes, for example.
7 April, 2011 at 11:05 am
Nicole Krieger
It’s not a boolean?
4 May, 2011 at 10:25 pm
Anonymous
If it were boolean, then there would be no need for a foreigner; they would all commit suicide on their own.
4 October, 2014 at 2:58 pm
Anonymous
This sounds interesting. Could you explain how they would all commit suicide on their own?
7 April, 2011 at 11:06 am
Greg
There is nothing that says there are necessarily only two eye colors. Therefore, none of the brown-eyed people would necessarily know that they didn’t have, say, green eyes.
7 February, 2013 at 6:34 am
Geoff Mossburg
True, and also there is no reason given why the brown-eyed people (and green-eyed people, etc.) wouldn’t come to the same conclusion as the blue-eyed people in the first place. I.e., the implication of the second argument is that only the blue-eyed people drew the conclusion that they were blue-eyed, but there is no logical reason given for this to be exclusive to the blue-eyed people. The inductive reasoning for n>1 forgets to take into account eye colors other than blue.
7 April, 2011 at 11:12 am
bretbenesh
Nicole: the brown-eyed people would definitely commit suicide if they knew that there were only two possible eye-colors: blue and brown (effectively, “blue” and “not blue”). However, each of the brown-eyed people would think: “All of the blue-eyed people just killed themselves; all of the remaining people have brown eyes, but it is possible that I have brown eyes OR green eyes (but definitely not blue eyes). ” EVERY brown-eyed person will think this, so none of them will commit suicide since they still have at least two possibilities for their eye color.
7 April, 2011 at 11:14 am
bretbenesh
Sorry for the redundancy—Greg’s and Amic’s answers did not show up until I posted.
7 April, 2011 at 11:19 am
Nicole Krieger
Or hazel or violet or whatever.
7 April, 2011 at 11:42 am
Nick
You’re correct: the brown-eyed individuals would, in fact, also kill themselves at a later date. After all the blue-eyed individuals had killed themselves on day 100 (with the traveler making his comment on day 0), there would be exactly 900 brown-eyed people left. The same exact logic that applied to the blue-eyed people now applies to the brown-eyed people. Any brown-eyed individual, looking around, would see 899 brown-eyed people. When these people didn’t commit suicide on day 899, he would have to conclude that he also had brown eyes. Therefore, the remaining population would commit suicide on day 900. The brown-eyed individuals do not need to assume that there are only two eye colors for this to occur. Indeed, each population of individuals with a unique eye color will kill themselves off, working from smaller populations (rarest eye color) up through larger populations (more common eye colors), for any number of colors. And again, as others have pointed out, this gradual genocide would happen without any interference from the traveler (i.e. if he had never shown up). We simply take this as a convenient date to start thinking about this thought experiment. This is the correct manner through which the two different scenarios should be reconciled.
8 April, 2011 at 8:53 am
mixedmath
There is an error in this argument when you imply that there is no need for there to be only two eye colors. Suppose for ease that there are 3 people on the island: one has blue eyes, one has brown, and one green. Then at first, none knows their own eye color. But as soon as the unfortunately honest traveler says there is a blue-eyed man, the blue-eyed man will kill himself.
Yet now the brown and green eyed men will still have no idea what their eye colors are. The brown-eyed man sees the other green, but does not know whether there are brown-eyed people on the island (or any other color, for that matter). Similarly, the green-eyed man does not know if there are any green-eyed men on the island. If the traveler were to come back and say, I see green eyes, then that color would also eventually kill itself. Finally, I note that this generalizes to larger numbers without error.
25 December, 2014 at 9:57 am
mike
Agreed. The brown-eyed guys all kill themselves on day 900. On day 899 each is uncertain as to their own eye colour, but when the 899 DON’T kill themselves on day 899, they all realize their own eye colour is the reason and kill themselves, exactly the same logic as the blue eyes. QED
5 April, 2015 at 2:04 pm
Brandon
The difference between the blue and brown case is that when the foreigner says there is someone with blue eyes, everyone knows there is a blue-eyed person, and everyone knows that everyone knows there is a blue-eyed person, and everyone knows that everyone knows that everyone knows… etc.
But say there were only 3 brown eyed people on the island, P1, P2, and P3. Everyone knows there is a brown eyed person (they can each see two of them), and everyone knows that everyone knows there is a brown eyed person (P1 knows that P2 can also see P3). However, here’s the tricky part: P1 does NOT know the previous sentence (“everyone knows that everyone knows there is a brown eyed person”). P1 cannot conclude that P2 knows that P3 knows that there is a brown-eyed person: For all P1 knows, he has green eyes, in which case P2 sees 1 green, 1 brown, and so in P1’s mind, P2 cannot conclude that P3 sees a brown-eyed person.
7 April, 2011 at 11:00 am
Nicole Krieger
… and why would the brown-eyed people not also be thinking the same thing the blue-eyed ones were, and kill themselves by accident thinking they were the other blue-eyed person? Wouldn’t they be subject to the same train of thought?
7 April, 2011 at 11:08 am
Anonymous
No, because each non-blue-eyed person counts the full set of blue-eyed people, while each blue-eyed counts one less.
7 April, 2011 at 11:13 am
Nicole Krieger
So the blue eyed people are killing themselves a day earlier than the brown-eyed person would have, that makes sense.
7 February, 2013 at 6:38 am
Geoff Mossburg
How can you assume that? Because each non-blue-eyed person has no way of knowing if they are blue-eyed (just as each blue-eyed person would), they are all dealing with the same n-1 set.
7 February, 2013 at 1:00 pm
Lenoxus
Yes, the degree of ignorance of one’s own eye color is equal for all islanders, brown or blue. But this doesn’t somehow mean they each observe the same number of blue-eyed people! Suppose there are two blues and five browns. In this case, each of the browns observes exactly two blues and each of the blues observes exactly one blue. Meanwhile, each of the browns observes exactly four browns, while each of the blues observes five.
In general, even before any visitor spoke, everyone already knew that, as a rule, all X-eyed people must each see one fewer X-eyed person than all non-X-eyed people see. (If your eyes are not X, then you must be seeing all the X-eyed people, but if they are X, then you are seeing all the X-eyed people minus one person, yourself.)
Of course, this fact by itself is not enough to figure out your own eye color, because you don’t know if you are seeing the lower or higher of the two estimates! The whole problem is built around that uncertainty and its ultimate elimination by the combination of two things: the visitor’s remark, and the separation of time into collectively-agreed-upon periodic intervals (days).
25 December, 2014 at 9:52 am
Anonymous
The brown eyed people all kill themselves on day 900
1 November, 2013 at 3:46 am
Anonymous
n=1 case only works with the information that there is a blue eyed person on the island. That info is not at hand in other cases.
7 April, 2011 at 11:06 am
X9
For a better example use this explanation: in a prison a crazy ruler has set up a riddle. He has trapped all prisoners (all of them smart logicians) in boxes. The boxes are located in a circle. Each box is soundproof. From a window in the box each prisoner can see the color of all boxes but not of his own box. There is no way of communication between boxes, as the windows are constructed in such a way that you can see out but not in. There is a one-way speaker system in all boxes. In every cell an explanation is given: deduce the correct color of your own box for freedom, or guess the wrong color for extermination. Also the rules of the game are explained on a wall poster in each cell. One day a new box is added and the information is given over the speaker system that the arriver can see another blue box (like his own), etc. In such a way the puzzle can be constructed so that it is both faithful to normal life and understandable in real-life logic. There also must be a way for the prisoners to know if other prisoners have made a logical deduction.
The answer will depend a bit on how the puzzle is constructed.
7 April, 2011 at 11:26 am
X9
To better understand, reduce the number of prisoners to 10 (this will not affect the outcome). As easily seen, if the puzzle is constructed in a certain way, the individual prisoner will know his color by the last day (given the knowledge that they all think the same way and follow the rules of logical deduction).
You can also reduce the number of prisoners to 5 or 3. Remember to include the option of any color. The color of the speaker/guru (if used) will not be relevant if the wording is not ‘same color as my own’. In such a way the outcome of the puzzle will depend on how it is constructed (it can be constructed to be solvable, and also to be unsolvable, with only a slight variation in word use). Good luck.
7 April, 2011 at 11:14 am
Greg
It might be worth pointing out that the announcement, in the n=2 case, does add new information (I am ignoring the n=1 case since it seems to have some special structure).
When n=2, each blue-eyed person knows that there is at least 1 other, but they do not know whether the other knows that there is at least 1 other. The announcement, being public knowledge, then ensures that it is commonly known that there is at least one person with blue eyes. That is, the announcement does add new information when n=2, contrary to the first proposed solution.
How this scales to larger n is still unclear to me, however.
7 April, 2011 at 11:19 am
DT
For n = 3, the third guy will think that the other two will figure it out in 2 days and then kill themselves, and this is exactly what the other two will think. But then after two days when they find out that the other two have not killed themselves, they’ll figure out that there has to be a third person with blue eyes, and voila! It scales exactly this way for all higher n.
7 April, 2011 at 11:27 am
Greg
While I trust the induction, it is still slightly questionable to me. In the n > 2 cases, the announcement adds no new information it would seem. Everyone sees that there is someone with blue eyes, and everyone sees that everyone sees that there is someone with blue eyes, etc. So the information that was new in n=2 is no longer new in any n > 2. Would this mean that any situation where n > 2 is inherently unstable (and should have already collapsed)?
7 April, 2011 at 11:31 am
DT
True!
7 April, 2011 at 11:53 am
Greg
Given my correction to myself, I now don’t think that the situation is inherently unstable, but that there is new information provided by the announcement that was not previously present. Thus, the inductive proof is the correct solution.
7 April, 2011 at 11:37 am
Greg
Ah, I found a problem with what I wrote. If n=3, it is indeed the case that everyone sees that there is a blue-eyed person, and everyone sees that everyone sees that there is a blue-eyed person, but this does not entail that everyone knows that everyone sees that everyone sees a blue-eyed person.
For, each of the blue-eyed people is left with two hypotheticals: “If I have blue eyes, then everyone knows that everyone sees that everyone sees a blue-eyed person”, but also “If I have non-blue eyes, then I don’t know that everyone knows that everyone sees a blue-eyed person” (this second one is due to the fact that one of the two visible blue-eyed people would see only one other blue-eyed person and things would be back in the n=2 case).
7 April, 2011 at 12:20 pm
DT
True, again! So the traveller does act as a trigger.
7 April, 2011 at 4:17 pm
Lukasz
When considering what everyone thinks as a whole it’s a little confusing for me, I prefer to see it from recursive point of view of just one of them, even though it amounts to the same thing.
If n=2, person 2 is looking at person 1 and thinking (a) “If I’m not blue, then person 1 will kill himself today, otherwise I’m blue and we’ll both kill ourselves tomorrow”.
If n=3, person 3 is looking at person 2 and thinking (b) “If I’m not blue, then person 2 is thinking (a) and they will die tomorrow. If I am blue then person 2 is thinking (b) and we’ll all have to kill ourselves in two days”.
If n=4, person 4 is looking at person 3 and thinking (c) “If I’m not blue, then person 3 is thinking (b) and they will die in two days. If I am blue then Person 3 is thinking (c) and we’ll all have to kill ourselves in three days”.
Of course for n=4, not just person 4 is thinking (c) but all of people 1 to 4, though we only need to consider one of them. The same idea applies for n=3.
… and death by induction it shall be.
7 April, 2011 at 11:16 am
DT
No. After n days, since all the blue-eyed people would have killed themselves, the brown-eyed people will know that *all* the blue-eyed people are dead (since they all know that the blue-eyed people were highly logical, and wouldn’t kill themselves unless they were absolutely sure of their eye colours).
7 April, 2011 at 11:21 am
DT
This one was for Nicole.
7 April, 2011 at 11:20 am
Anonymous
The first solution is correct. The foreigner never says how many people he saw with blue eyes, and none of the islanders are aware of the statistic of how many people have blue eyes. So they would not know n people should have blue eyes, and would not necessarily deduce their eye color when they count n-1 blue eyes.
7 April, 2011 at 11:26 am
DT
In my opinion, the second one is correct. They won’t do anything instantaneously after the announcement. But, from the point of view of a blue-eyed person (say), it’ll be clear that 99 days are enough for the other 99 blue-eyed people (that he can see) to figure out their own eye-colors. After 99 days, when he sees all of 99 blue-eyed people still alive, he’ll figure out that he has blue eyes too.
This is assuming, of course, that they still remember the announcement after 99 days. (Note 2.)
7 February, 2013 at 6:44 am
Geoff Mossburg
Why would they think this after 99 days if they have no statistic letting them know how many blue-eyed people there are? Maybe after 999 days.
8 April, 2011 at 7:37 am
DLS
If n=2, the statement “Everybody knows that someone has blue eyes” is initially true. If n=3, the statement “Everybody knows that everybody knows that someone has blue eyes” is true (but this is not true if n=2). More generally, the statement “(everybody knows that)^k someone has blue eyes” is true for k=n-1 and not for k=n. However, after the public announcement, the statement is true for all n. This is essentially equivalent to the induction but captures exactly the knowledge that is conveyed by the speech.
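DLS’s hierarchy of iterated-knowledge statements can be checked mechanically in a toy possible-worlds model (my own sketch, not from the thread; the modelling assumption is that a world is just a count of blue-eyed islanders, with a blue-eyed islander considering counts n-1 and n possible, and anyone else considering n and n+1):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def everyone_knows(depth, n):
    """Does "(everybody knows that)^depth someone has blue eyes" hold,
    evaluated at the world with n blue-eyed islanders, *before* the
    public announcement?"""
    if depth == 0:
        return n >= 1
    # Worlds some islander considers possible: a blue-eyed islander
    # entertains n-1 or n blue-eyed people; anyone else, n or n+1.
    worlds = {n, n + 1} | ({n - 1} if n >= 1 else set())
    return all(everyone_knows(depth - 1, m) for m in worlds)

# Matches the boundary described above: true for k = n-1, false for k = n.
assert everyone_knows(99, 100)
assert not everyone_knows(100, 100)
```

After the announcement, “someone has blue eyes” holds in every world an islander can reach, so the iterated statement becomes true at all depths, which is exactly the new information the speech conveys in this model.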
10 April, 2011 at 7:41 am
Nicole Krieger
Exactly, so the moment they began existing would be the start of the process, well before the visitor shows up.
7 February, 2012 at 10:12 am
Lenoxus
It can’t, because prior to the input of outside information, there is no reason whatsoever for a lone blue-eyed person to kill himself. Once the visitor speaks, there is such a reason, and the process starts.
30 June, 2011 at 5:45 pm
hugo
This is the subject that we should be talking about, because after all it is the interesting part
7 April, 2011 at 11:21 am
Nicole Krieger
I’ve got to say, I’m not sure I would describe people who killed themselves when they know their eye color as “highly logical”!
This is a very tragic puzzle…
7 April, 2011 at 11:29 am
DT
It’s a sacred ritual for the tribespeople. For many atheists, belief in God is absolutely absurd, irrational and illogical, yet Albert Einstein was a firm theist. Humans are fascinating creatures.
This makes me sound like ET, not DT. :P
7 April, 2011 at 11:48 pm
Anonymous
No, he wasn’t. He believed in Spinoza’s god at most, a god who wound up the universe at the beginning of time and then let it run, not any kind of personal god, the kind most atheists would call silly. Even Dawkins has grudgingly admitted a case can be made for Einstein’s and Spinoza’s god.
9 April, 2011 at 2:07 am
DT
Hehe. Yeah, I know now. What I posted was because of some stupid thing a youtube video had led me to believe. I am reading “The World As I See It” by Einstein, and what you say is indeed correct.
7 April, 2011 at 5:26 pm
Stas
It’s better to replace people with robots/cyborgs/androids who are programmed to behave as described.
7 April, 2011 at 11:31 am
Ravi Kunjwal
I don’t think any logical conclusion in the nature of solution 2 follows: solution 2 assumes a knowledge of n, the number of blue-eyed folks in the tribe. any individual in the tribe knows (or can, in principle, know):
1) there are 1000 people in the tribe, including him,
2) if this individual is blue-eyed he knows that of the other 999 folks
– 99 are blue-eyed and 900 are brown-eyed,
and if he is brown-eyed, he knows that
– 100 are blue-eyed and 899 are brown-eyed
3) Of course, this individual doesn’t know his own eye-colour.
4) So when the foreigner makes a reference to “another blue-eyed person”, it could mean any of the other 99 or 100 folks.
5) But then each individual in the tribe reasons in exactly this fashion. Not one of them has any a priori information about his own eye-colour.
Life will go on as usual – the foreigner hasn’t really told them anything new. Each one of them already knows that there is at least one “another blue-eyed person”.
The induction that solution 2 proposes assumes a priori knowledge of the number of blue-eyed guys. Of course, if there were only one blue-eyed guy, he would have to commit suicide, since he knows there is no other blue-eyed guy in the tribe. If there are at least 2 blue-eyed guys, neither of them would have to commit suicide after two days. Each of them only knows that there is at least one blue-eyed guy and at most two, but there is no way to decide what the total number of blue-eyed guys is. Similarly if there are n blue-eyed guys, each of them knows that there are at least n-1 blue-eyed guys and at most n, but there is no way to decide which, unlike when n=1, where it is clearly established that the blue-eyed guy is blue-eyed because no one else is. The argument by induction is fallacious.
7 April, 2011 at 11:34 am
DT
Read Greg’s reply above. He raises an excellent point. The situation seems like it’s always unstable. So if the others had not been trying to figure out their eye colours before the announcement, life will go on as usual. Otherwise, they would have killed themselves a long time ago.
7 April, 2011 at 11:32 am
Nicole Krieger
Wait a second, back up. The whole n=1 situation never occurs to begin with, because everyone already knows for a fact that n is either 99 or 100. So no one is sitting around thinking “What if it is just me and some other person”. So the induction falls apart because we know that situation is impossible from the start.
7 April, 2011 at 11:37 am
DT
I think the induction is alright. n = 1 is just a case to begin with, but we see that even for higher n, it would stand.
n = 1 – Easy
n = 2 – B1 thinks B2 will kill himself tonight, and B2 will think the same thing about B1. When the next day they see each other again, they’ll figure out that n has to be greater than 1. But everyone else they see is brown-eyed! So they kill themselves.
Similarly for higher n.
7 April, 2011 at 10:18 pm
Nicole Krieger
But we know that the n=1 case doesn’t exist to begin with. Everyone knows there are at least 98 other people with blue eyes. So n=1 through 97 are impossible, and you can’t base anything on it to begin with.
This is causing quite a debate in our household now, with my husband being in favor of the tragedy.
What would really be evil is if some guy came to the village and said, “How nice to see another grey-eyed person”.
8 April, 2011 at 9:02 am
mixedmath
I was thinking of this possibility myself! Then everyone dies. The power of a (grey) lie.
4 December, 2012 at 6:28 pm
Lenoxus
A late reply, because this point is commonly raised. Yes, the case N=1 does not “exist” here at all. But the point is that we can logically prove the following two statements:
1. If exactly 1 blue-eyed islander existed, he would die exactly 1 day after the announcement.
2. If it is provable that an island with exactly N blue-eyed people would see those N people dying on Day N, then it follows that N+1 blue-eyed people would die on Day N+1.
The second statement may cause disagreement, but it can’t be disputed that if both statements are true, then the situation works out for any number of blue-eyed islanders (assuming they can count to any finite number). After all, 100 can be expressed as N + 1 + 1 + 1… (to 99 1s), where N is also 1.
The proof of statement 2 is quite simple.
A. Suppose that the islanders know that for some particular N (such as N=2), it is true that N blues would kill themselves on Day N.
B. Suppose further that the actual number of blues is really N+1.
C. Each of those N + 1 blues would observe exactly N blues.
D. They would know that the total number must be either N or N+1.
E. They would know that if it were N, those N would die on Day N. (Restatement of premise A).
F. Therefore, if that doesn’t happen, there are not exactly N blues.
G. If there are not exactly N blues, then there are N + 1.
H. Once N+1 blues conclude that the exact number of blues is N+1, they will each realize they must be “the one blue I can’t see”.
I. The N+1 blues, having observed that N blues did not die on Day N, will kill themselves the following day, Day N+1.
This proves that if it works for a given N, it works for N +1. We can then treat N+1 as a “new” N, and repeat the deductive process as many times as needed.
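Steps A–I above can be played out as a small elimination loop (a sketch of mine, not from the comment; the modelling assumption is that each blue-eyed islander holds exactly two candidate totals, and that a quiet noon on day d rules out the total being exactly d):

```python
def death_day(n_blue):
    """Day on which the n_blue blue-eyed islanders die, following the
    candidate-elimination reading of steps A-I (address on day 0)."""
    seen = n_blue - 1
    # Candidate totals each blue-eyed islander considers possible;
    # the announcement rules out a total of zero.
    candidates = {c for c in (seen, seen + 1) if c >= 1}
    day = 0
    while len(candidates) > 1:
        day += 1
        # Noon of `day` passes with no suicides; by the inductive
        # hypothesis, exactly `day` blues would have died on day `day`,
        # so that candidate total is eliminated.
        candidates.discard(day)
    return day + 1  # colour now known; ritual suicide at the next noon

assert death_day(1) == 1
assert death_day(100) == 100
```

The loop only ever eliminates one candidate, on day N, which is why nothing visible happens on the island until then: the deduction is silent until the last step.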
7 April, 2011 at 11:34 am
Nicole Krieger
Oh, I see Ravi just came to the same conclusion.
7 April, 2011 at 11:46 am
X9
Ok, here’s the correct (or best) version;
In a room there are 10 Fields Medal winners. All are fully functioning and none has caught Alzheimer’s yet. They all were hats of different colors but can not tell the color of their own hat. Communication leads to disqualification in the contest and there is a prize of 1million$ for first correct answer (no, Grigory Perelman is not competing). Every two minutes a bell rings that indicates possibility for an answer. At first no one makes an attempt of answering but then Randall Munroe shows up (as surprise guest) wearing a green hat stating- ‘one of your hats is blue’ etc.
Here come the questions: for what distribution of hat colors will the problem be solved (and in what time)? And for what distributions will it not be solved? Again, good luck.
7 April, 2011 at 12:05 pm
Filipe
Dear Prof. Tao,
If your conclusion is correct, it will mean the suicide of the entire tribe. If all blue-eyed people kill themselves on the 100th day after the speech, then the rest of the islanders will know they are brown-eyed, so they are forced to commit suicide the day after.
Greetings.
7 April, 2011 at 1:00 pm
Ankur
Let BEP = blue-eyed person. My conclusion is that solution 2 is flawed.
The mere number of BEPs in the system does not determine whether they commit suicide or not on a given day. This also depends on what their knowledge is on the particular day. Your proposition in solution 2 therefore needs to be more explicit, e.g., as follows.
If there are n BEPs who know, on day n that there are n BEPs, they commit suicide on day n.
Now let us go through your proof. The case n=1 is trivial. Assume the claim is true for n-1 and consider n. A BEP x knows that there are at least n-1 BEPs. x also knows that another BEP (y) knows that there are at least n-2 BEPs – x can be sure only of those BEPs that both he and y can see. So x conjectures: “if there are at least n-1 BEPs and on day n-1 all know that there are n-1 BEPs, they would commit suicide.” After n-1 days, no one commits suicide. At this stage all that x can conclude is that *either* the other BEP y does not know that there are n-1 BEPs *or* there are indeed n BEPs in the system. Your proof skips the former case. I don’t see how x can conclude that y does know that there are n-1 BEPs in the system.
You are implicitly assuming that because a BEP x knows that there are at least n-1 BEPs in the system, that all other BEPs also know that there are n-1 BEPs in the system. This “symmetry” argument is valid for an external observer, not for the BEP x.
7 April, 2011 at 1:20 pm
Greg
But doesn’t BEP y know, being the logical BEP that s/he is, that since no one killed themselves after n-2 days, that there must be at least n-1 BEPs?
7 April, 2011 at 1:43 pm
Ankur
Let us see what information BEP x has about BEP y’s knowledge. BEP x knows that BEP y knows there are at least n-2 BEPs in the system. Now if these n-2 did not commit suicide, then BEP x knows that BEP y knows that either these n-2 BEPs do not know that there are n-2 BEPs or (BEP x knows that BEP y knows) that there are indeed n-1 BEPs.
Again we run into the same problem.
7 April, 2011 at 1:49 pm
Ankur
Greg, see Prof Tao’s latest buzz.
8 April, 2011 at 8:52 am
Ankur
I take this back. I have not used the foreigner’s intervention. The above argument can be extended further to eventually come to a stage where no BEP can be sure that everyone knows that there is at least one BEP.
Austin’s comment here: https://terrytao.wordpress.com/2011/04/07/the-blue-eyed-islanders-puzzle-repost/#comment-51447
shows that the foreigner’s comment supplies every BEP with this information. Thus the induction goes through.
7 April, 2011 at 1:20 pm
DT
After the announcement, for:
n = 1
Trivial
n = 2
– B1 and B2 both know that N(BEP)>= 1
7 April, 2011 at 1:24 pm
DT
– If equality were to hold, that is, if there were only one BEP, then B1 or B2 would know. But after one day, when they are both alive, they figure that there has to be one more.
– Since they can see that everyone else has brown eyes, it’s easy to figure out that it is them.
n = 3
– From B3’s point of view, if n = 2, then B1 and B2 would be dead by day 3
– Since they are not, there has to be one more BEP, and since he can see that everyone else is brown, he figures out that it is him.
n = 4
– From B4’s point of view, if n = 3,……
This can be extended for any n, I guess. So the induction seems to be correct.
7 April, 2011 at 10:21 pm
Nicole Krieger
Except they aren’t all neatly numbered BEP1 through BEP100. There is no one in the tribe who will think, “Hey, there’s only one person I can see with blue eyes, so if he doesn’t kill himself, I’d better,” because they can ALL see 99 blue-eyed people, and they know that those people all see at least 98 blue-eyed people.
So if we arbitrarily assign one guy number 2, he can see that BEP1 has blue eyes, but he also sees that BEP3, BEP4, BEP5 through BEP99 all have blue eyes, too. And he knows number one can see those guys.
7 April, 2011 at 1:46 pm
Ankur
But Bi does not know that Bj knows that N(BEP) >=1. Bi only knows that Bj knows that N(BEP) >=0.
7 April, 2011 at 3:32 pm
Hao
(I’m slowly coming around to the second solution)
But to answer your (implied) question: “But Bi does not know that Bj knows that N(BEP) >=1. Bi only knows that Bj knows that N(BEP) >=0.”
That is precisely the point. The information given by the foreigner now informs Bi that Bj knows that N(BEP) >= 1.
8 April, 2011 at 8:47 am
Ankur
Thanks. That’s right.
7 April, 2011 at 1:40 pm
Literalman
Consider two slightly different propositions:
P1. Suppose that the tribe had n blue-eyed people for some positive integer n. Then n days after the traveller’s address, all n blue-eyed people commit suicide. (This is the original proposition).
P2. Suppose that the tribe had n blue-eyed people for some positive integer n. Then no longer than n days after the traveller’s address, all n blue-eyed people will have committed suicide.
Isn’t the supplied proof a proof of P2 rather than P1?
7 April, 2011 at 1:48 pm
Terence Tao
Yes, this is a good point! It is in fact a moderately challenging exercise to show that, assuming a “blank slate” initial state in which the islanders initially know nothing about eye colour beyond what they can see from direct observation (and making the idealised assumption that there are no other causes of death or incapacitation among the islanders), that one can strengthen the proof of P2 to a proof of P1. (The key here is to introduce the concept of a possible world from modal logic; basically, if one can imagine a self-consistent world in which X is false but which is compatible with all current knowledge about the world, then one cannot know for certain that X is true.)
[This illustrates the more general principle, by the way, that lower bounds are often substantially harder to establish than upper bounds, when it comes to estimating the time at which a certain piece of knowledge becomes available.]
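The “possible world” idea Terry mentions can be made concrete for very small tribes. The sketch below is my own construction (the function name `run_island` and the colour strings are invented, not anything from the thread): worlds are tuples of eye colours; islander i considers a world possible if it agrees with the true world everywhere except possibly at position i; the foreigner’s announcement deletes the all-brown world; and each noon without a suicide publicly deletes every world in which somebody would already have known their own colour.

```python
from itertools import product

def run_island(actual):
    """Possible-worlds simulation for a tiny tribe.

    `actual` is the true tuple of eye colours, e.g. ("blue", "blue").
    Returns (day, indices) of the first suicides after the foreigner's
    announcement.  Assumes at least one islander is actually blue-eyed."""
    n = len(actual)
    # All conceivable worlds, minus the all-brown world ruled out by
    # the foreigner's public announcement.
    worlds = {w for w in product(("blue", "brown"), repeat=n) if "blue" in w}

    def knows_own_colour(w, i):
        # In world w, islander i knows their colour iff every surviving
        # world agreeing with w off coordinate i gives i the same colour.
        colours = {v[i] for v in worlds
                   if all(v[j] == w[j] for j in range(n) if j != i)}
        return len(colours) == 1

    day = 0
    while True:
        day += 1
        acted = [i for i in range(n) if knows_own_colour(actual, i)]
        if acted:
            return day, acted
        # "Nobody died at noon" is public information: delete every
        # world in which somebody would have known their colour today.
        worlds = {w for w in worlds
                  if not any(knows_own_colour(w, i) for i in range(n))}
```

Since the model holds 2^n worlds, this is only a toy for checking small cases, but it does reproduce the induction: with three blue-eyed islanders, nobody can act until the third day.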
10 April, 2011 at 6:33 am
Literalman
Sometimes, though, it’s the upper bounds that are difficult. For instance, it would be mass murder if you had been the visitor to the island and said: “if RH is true, then I see at least one BEP”. But what would be the moral standing for you to say: “if P=NP, then I see at least one BEP”? Of course, that would be far more reckless than “if P!=NP, then I see at least one BEP” :-)
10 April, 2011 at 8:39 am
Literalman
Oops…meant to write that it would be more reckless to say “if P!=NP, then I see at least one BEP”!
15 April, 2011 at 3:44 pm
Quoting a comment from 2010 clarifying the P1 vs P2 question
[from 24 June 2010 of the old thread. Reposting because the link-to-comment URL provided by WordPress doesn’t actually point to that comment and it would be difficult to locate the content otherwise.]
“The induction argument gives an upper bound for the number of days by which those with blue eyes can determine their eye color. However, it does not explicitly rule out the possibility of more efficient reasoning that reaches the same conclusion on an earlier day.
Formalizing the argument in (e.g.) modal logic would bound the reasoning power of the islanders and thus permit solid conclusions about what deductions CANNOT happen within N days, for each N. But this would require a completeness proof, i.e., a clear and convincing argument that whatever logical formalism is used correctly captures all possible “information” and “conclusions” available to the logicians on the island.
Possibly Aumann’s approach is clearly general enough to encompass this, but again, it may contain hidden assumptions as to what the islanders can and cannot reason about.
To see the difficulty, think about what if the visitor had made a higher-order statement such as “in the first draft of my speech, there were comments that, had I made them, would have certainly caused suicides within 100 days; but luckily, I noticed this in time to strike the comments from the speech I am now making!”. One needs a way of capturing not only the logic that the islanders can perform in a given logical system but a way of seeing that the system analyzed is as strong as possible. ”
[the proof of completeness would be closely related to the “possible world” semantics that Terry mentioned. I don’t know how it would work for higher order modal logics or if these are in fact complete. Aumann refers to Robert Aumann’s paper on “agreeing to disagree” where he establishes a semantics for reasoning about common knowledge.]
7 April, 2011 at 2:01 pm
Hao
I will reword the proposition slightly: “Suppose that the tribe has EXACTLY n blue-eyed people for some positive integer n. Then n days after the traveller’s address, all n blue-eyed people commit suicide.”
The proposition is true for n = 1 (base case).
For induction, we need to show that proposition being true for (n=i) implies the proposition being true for (n=i+1). However, if the proposition is true for (n=i), it cannot be true for (n=i+1), because the tribe has a constant number of blue-eyed people.
The alternative form of the proposition without “exactly” (Suppose that the tribe has at least n blue-eyed people for some positive integer n. Then n days after the traveller’s address, all n blue-eyed people commit suicide.) fails because it isn’t true for n=1.
7 April, 2011 at 2:10 pm
Austin
Here is the precise reason Solution 1 is wrong.
The foreigner’s faux pas DOES tell each tribesman something he did not know: namely, that everybody knows that everybody knows that everybody knows that (…) that everybody knows that someone has blue eyes.
That statement should include 99 repetitions of the phrase “that everybody knows”.
7 April, 2011 at 2:12 pm
Austin
(In fact, the foreigner doesn’t just make this statement known to everyone — he makes it true!)
7 April, 2011 at 2:40 pm
Austin
No, I take my second comment back (I stand by the first).
7 April, 2011 at 3:20 pm
Manuel
That’s exactly right.
7 April, 2011 at 4:31 pm
Brian
99 instances (98 repetitions), yes?
8 April, 2011 at 8:55 am
Ankur
Shouldn’t it be infinitely many repetitions? The repetitions count the “levels of knowing”, in other words, knowing an increasing hierarchy of things. I don’t think the number of repetitions should depend on the number of BEP.
13 May, 2011 at 12:58 pm
Anonymous
The number of levels of knowing does depend on the number of BEPs. Consider n=1: brown-eyed person B knows that there exists a blue-eyed person (person A), but he does not know if everyone knows it: in fact, the blue-eyed person does not know; but if B were blue-eyed, then A would know that there existed a blue-eyed person. Now n=2: BrEP1 knows that there is a BEP, and knows that everyone knows, but cannot deduce whether everyone knows that everyone knows, since he does not know his own eye color.
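This level-counting can be checked mechanically. The sketch below is my own formulation (the function name is invented): before the announcement, the statement “everyone knows that … everyone knows that someone has blue eyes”, nested k levels deep, holds exactly when there are more than k blue-eyed islanders, because each extra level of nesting lets a blue-eyed islander entertain a hypothetical world with one fewer blue-eyed person.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def everyone_knows(num_blue: int, depth: int) -> bool:
    """Before the foreigner speaks: does "everyone knows that", nested
    `depth` times and applied to "someone has blue eyes", hold in a
    world with num_blue blue-eyed islanders?

    A blue-eyed islander cannot rule out the world with num_blue - 1
    blue-eyed people, so every deeper level must also hold in that
    smaller world.  (Brown-eyed islanders can only over-count the blue
    eyes, so they are never the binding constraint here.)"""
    if depth == 0:
        return num_blue >= 1  # "someone has blue eyes", stated plainly
    return (everyone_knows(num_blue, depth - 1)
            and everyone_knows(num_blue - 1, depth - 1))
```

With 100 blue-eyed islanders, the statement holds up to 99 nesting levels but fails at 100, which is one way to see why the foreigner’s public remark, making every depth true at once, genuinely adds information.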
7 April, 2011 at 2:45 pm
Dr. Sextistics
This sounds a lot like Curry’s Paradox.
7 April, 2011 at 4:31 pm
rebelJinnee
Question: have the islanders been co-existing more than 100 days before the stranger’s arrival on the island?
=========================
Additional information for solution 2:
— from a brown-eyed person’s perspective —
How does a brown-eyed person reason when there are n blue-eyed persons?
S/he sees that there are n blue-eyed persons and reasons as follows
“If I am brown-eyed, then there are n blue-eyed people on this island, and so they will all commit suicide n days after the traveler’s address; if no such suicides take place on the nth day, then I must be blue-eyed (n + 1 blue-eyed persons) and must therefore commit suicide on the n+1th day”
Or, more succinctly, a brown-eyed person reasons:
“I know I am blue-eyed if no suicides take place on the nth day”
— from a blue-eyed person’s perspective —
A blue-eyed person will arrive at the conclusion that h/er eye color is blue when no suicides take place on the n-1th day.
example: let n = 2 and b1 & b2 be the 2 blue-eyed persons.
b1 sees b2’s blue eyes and reasons: “b2 didn’t commit suicide after 1 day because b2 saw a blue-eyed person, and that blue-eyed person must be me (b1), since I can see that everyone’s eye color is brown except for b2’s.” (If b2 hadn’t seen another blue-eyed person, b2 would have committed suicide on the (n-1)th day after the foreigner’s revelation.)
7 April, 2011 at 4:44 pm
rebelJinnee
Another way of looking at solution 2:
———————— from an islander’s perspective ————————
Let I be one of the 1000 islanders (not the foreigner).
If I sees m blue-eyed persons, I will wait until the mth day to see if suicides take place: if they do, I lives; otherwise, I commits suicide on the (m+1)th day.
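The rule above is simple enough to execute directly. Here is a minimal sketch (my own code; `simulate` and the colour strings are invented names) applying it to the whole tribe: an islander who sees m blue-eyed people dies on day m + 1 after the address if nobody has died by day m.

```python
def simulate(colors):
    """Run the day-counting rule on a list of eye colours.

    Returns (day, indices) of the first suicides after the address.
    Assumes at least one islander is blue-eyed (the foreigner's remark
    guarantees this); otherwise the loop would never terminate."""
    day = 0
    while True:
        day += 1
        # Islander i acts today iff the number of blue-eyed people
        # i can see equals day - 1 (i.e. nobody died on earlier days).
        deaths = [i for i, c in enumerate(colors)
                  if sum(1 for j, d in enumerate(colors)
                         if j != i and d == "blue") == day - 1]
        if deaths:
            return day, deaths
```

For the puzzle’s tribe (100 blue-eyed, 900 brown-eyed) this returns day 100 with exactly the 100 blue-eyed islanders: each of them sees 99 blue-eyed people, while the brown-eyed islanders each see 100 and are still waiting for day 101 when the suicides occur.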
7 April, 2011 at 5:25 pm
DSchwab
Terry,
Isn’t it possible for the islanders to avoid the outcome of solution 2 by simply agreeing not to commit suicide for the subsequent 1000 days? (Or any number > 100, but every islander knows the total population, so that’s a natural choice.) This way they erase the trigger, there is no information gained when there are no suicides on day 99, and nobody learns their eye color. (Presumably they had to do this following the invention of their religion or they’d have had the same issue and would be dead already.) I suppose you could argue that this is “talking about” the eye color issue, but perhaps a logical actor with a desire to live would develop this strategy independently and assume that all others have as well. Also, note that this only works if there is more than one blue eyed person.
17 June, 2011 at 5:05 am
Marcelo Cantos
That’s against their religion.
7 April, 2011 at 6:01 pm
keith
I feel that the validity of the proposition, during the process of induction, has not yet been proved. Therefore the people there could not know the proposition itself, which means they could not know that “they will all commit suicide n-1 days after the traveler’s address”.
7 April, 2011 at 6:12 pm
Josh Brown Kramer
There is a very general version of this puzzle in “Mathematical Mind Benders” by Peter Winkler. There are only two eye colors, and the visitor can say ANYTHING nontrivial about the number of blue eyes, TRUE or FALSE, and everyone still ends up dead.
7 April, 2011 at 9:16 pm
Deniz
On the 100th day, if blue-eyed people fail to commit suicide, the next day, all brown-eyed people will commit suicide, thinking they’re blue eyed.
7 April, 2011 at 9:21 pm
Peli Grietzer
Unless I’m confused, I think the following demonstrates the significance of the foreigner’s announcement:
In a simplified case with three blue-eyed islanders (let’s call them B1, B2, B3), prior to the foreigner’s announcement B1 believes that if B1 does not have blue eyes then there are only two blue-eyed islanders, one of whom is B2, who (in that scenario) believes that if B2 does not have blue eyes then there is only one blue-eyed islander, B3, who (in that scenario) believes that if B3 does not have blue eyes then there are *no* blue-eyed islanders. Thus, B1 believes that if B1 does not have blue eyes, then B2 believes that if B2 does not have blue eyes then B3 will commit suicide *upon learning from the foreigner that there is at least one blue-eyed islander*. Now, on the morning of the second day after the announcement B1 can reason: if B1 does not have blue eyes and B3 did not commit suicide on the first day, then B2 now learns that B3 knows of a blue-eyed islander that’s not B3, and (in that scenario) B2 will conclude that B2 has blue eyes and commit suicide at noon. When noon of the second day comes and B2 does not commit suicide, B1 learns that B1 has blue eyes. B1 then commits suicide. (The same applies to B2 and B3.)
8 April, 2011 at 1:49 am
Stijn
The fact that the visitor discusses the topic introduces a starting point, where everyone will reflect on the problem at the same time. This starts the induction process. Broaching the topic was taboo; the visitor changes this and by doing so synchronises the population.
8 April, 2011 at 3:13 am
Bryce
Putting in the perspective of what an islander knows helps show the induction for me:
If I am an islander and I see n blue eyed people there are two cases:
I am also blue eyed, therefore the n blue eyed people I see also see n blue eyed people.
I am not blue eyed, therefore the n blue eyed people I see only see n-1 blue eyed people.
Expanding on the “I am not blue eyed” case:
The n blue-eyed people see n-1 blue-eyed people, so they know that either there are n blue-eyed people, or there are n-1 blue-eyed people who see n-2 blue-eyed people.
So, this means this is a proof by contradiction with induction, which is why I’ve had so much trouble making intuitive sense of it, even though I completely believe the induction.
8 April, 2011 at 3:45 am
Ravi Kunjwal
Ok, I retract my earlier comment. Solution 2 works. If, say, N=2, then each blue-eyed guy waits for one day, secure in the knowledge that there is at least one other blue-eyed guy who must kill himself if he is the only one blue-eyed, because he knows the fact that there is at least one blue-eyed guy in the tribe. But the other guy doesn’t. The only reason the other blue-eyed guy won’t kill himself, the first guy reasons, is because he knows at least one other guy who has blue eyes. Now all the other tribesmen have brown eyes, the other guy has blue eyes, so he himself must be blue-eyed to explain why the other blue-eyed guy did not kill himself. The other blue-eyed guy reasons exactly like this. Both of them kill themselves on the 2nd day.
The fact that everyone knows that everyone else knows there is at least one blue-eyed guy (“common knowledge”) is the key to see why the argument by induction works. At first glance, it wasn’t clear to me why it worked. After reading the wiki article on “common knowledge” it became clear that I was ignoring the fact that the blue-eyed guys – being logical – would reason why the other blue-eyed guys did not commit suicide, taking into account the fact that they share “common knowledge” – that everyone knows that there is at least one other blue-eyed guy.
8 April, 2011 at 9:01 am
Ankur
Yes indeed. See my comment to my comment here: https://terrytao.wordpress.com/2011/04/07/the-blue-eyed-islanders-puzzle-repost/#comment-51484
8 April, 2011 at 6:53 am
Akashnil Dutta
Solution 2 is correct.
Proof:
Whatever the thought process of each person may be, his actions are defined by an algorithm f of the following form:
f:O->A where O is the set of all observations till some day and A is the set of actions, {commit suicide today, don’t}. A sample element of O would be “The address by the foreigner was made 3 days ago, and I see no suicides till now, I have always seen the following eye colors, 100 blue, 899 brown” etc.
Now each person, being completely logical, will make the following analysis:
Since everyone is completely logical, everyone follows a logical algorithm. Let the set of all logical algorithms be F*
any F in F* should have the following property P:
For all x in O, F(x)=suicide iff the following statement is true: “If everyone follows some algorithm in F* (not necessarily same), there is a unique eye color possible for the person committing suicide (or not) for which x is a consistent observation”.
Now, consider the first day. It is clear that F(x)=suicide iff x does not include a blue.
Inductively, if x is a second-day observation, F(x) can also be completely determined for all possible second-day observations and so forth. Hence indeed F is unique, and F* has only one element.
Now, everyone being completely logical will conclude that there exists this unique algorithm satisfying property P and act according to it.
At first glance, it may appear that the algorithm of solution 1 (never commit suicide, in any case) also satisfies P. But on closer inspection, it is not a complete algorithm: it does not say what the action should be corresponding to “no blue eyes seen on the day of the address by the foreigner”. Nor can it be extended to cover the trivial case n=1 so as to satisfy P completely, because there exists a unique F satisfying P, as proved earlier.
8 April, 2011 at 7:16 am
n00b
Solution 2 looks ok, but I am not able to understand how the foreigner’s comment can act as trigger?
8 April, 2011 at 8:55 am
Nicole Krieger
Wouldn’t they have all committed suicide before the guy even shows up?
The guy plants the idea “Everyone else knows there is a blue eyed person, and they know that everybody else knows that, too”, but isn’t the idea already in everyone’s head before he even shows up?
8 April, 2011 at 9:12 am
Ankur
No. No BEP knows that every BEP knows that there is at least one BEP. The foreigner’s comment provides this information. See Austin’s comment and my comments above.
9 April, 2011 at 7:43 am
Nicole Krieger
Yes they do, because they see that Alice sees at least 98 BEPs, Bob sees at least 98 BEPs, Carol sees at least 98 BEPs, Dan sees at least 98 BEPs… they know that every blue-eyed person knows that there are at least 98 BEPs… not just 1!
9 April, 2011 at 9:22 am
Ankur
But they don’t know that Alice knows that Bob sees 98 BEPs.
17 April, 2011 at 4:42 am
Joseph
Yes they do. Or rather, they know that Alice knows that Bob sees at least 97 BEPs; that is to say, they (let’s call them Sharon) know that Alice does not know her own eye color, but that she can see some number of BEPs greater than 1, and that the foreigner could have been talking about any one of those persons. ALL the islanders already know that there are at least 99 and no more than 100 BEPs on the island; the foreigner does not add any new information by stating that there is at least one. No one seems to address this issue directly in these comments; they just jump on the induction train. But that is the whole problem: the islanders don’t merely know that there is *some* integer n of BEPs; they know that n is either 99 or 100, and you can’t get there from here.
8 April, 2011 at 9:02 am
Nicole Krieger
Though I’m still optimistic… I mean, say I’m the 100th person. I assume I’m brown-eyed, look around and think, “Alice thinks there are only 98 people, and that Bob thinks there are only 97 people, and that Bob thinks Carol thinks there are only 96 people. But I know that Bob doesn’t think there are only 97 people, because I can see that Alice has blue eyes, so I know he thinks that there are 98 people with blue eyes, and that Carol also thinks that; therefore NO ONE thinks there are only 2 people, thus I have no idea what color my eyes are.”
Hopefully the visitor is a missionary who has come to convert them to some religion not involving ritual suicide. Though of course the idea that knowledge=death is hardly exclusive to the suicidal eye color tribe.
Also, I think we can eliminate other eye colors as a possibility, because in a tribe of 1000 they must be so inbred that any other recessive eye colors would have shown up a bit more often.
8 April, 2011 at 10:35 am
Anonymous
“It is probably the n=3 case (three blue-eyed people) which causes the first major conceptual difficulty, in which common knowledge separates from both first-order and second-order knowledge. As pointed out above, in this case the three blue-eyed people all know that there is at least one blue-eyed person, and they know that everyone else knows this also… but they don’t know that everyone else knows that everyone else knows this, until the foreigner speaks. The remarkable thing about the puzzle, to me, is how such a subtle and seemingly academic change in the knowledge environment eventually propagates to have a concrete and dramatic effect.” By Tao, Feb 8, 2008.
Why “they don’t know that everyone else knows that everyone else knows this”?
8 April, 2011 at 10:50 am
Literalman
Re: Why “they don’t know that everyone else knows that everyone else knows this”?
Answer: they don’t know that everyone else knows … that everyone else knows until he speaks because it’s not true until he speaks.
9 April, 2011 at 7:40 am
Nicole Krieger
It is true, they can see 99 blue eyed folks and they know that everyone else sees at least 98 blue-eyed folks and that the other people all know that the other people can see more than 1 blue-eyed person. The thought is already in their mind.
8 April, 2011 at 2:23 pm
mixedmath
Consider the logic necessary for 3 blue-eyed people: one blue might suppose that he does not have blue eyes (I will refer to this as the exterior supposition). Then he might look at a second blue guy, and think: what if he supposes that he does not have blue eyes (the interior supposition)? Then the second blue guy might, in this hypothetical (recall that this is all the first guy’s thought process), ask about the 3rd blue guy – the 3rd guy would be the only one with blue eyes. So on the second day, when the 2nd guy sees the 3rd alive, his thought process would lead to a contradiction, so now he knows that he has blue eyes. On the third day, the first guy’s thought process leads to a contradiction, and so he knows that he has blue eyes.
This is symmetric, and so all three blues arrive at the same conclusion and are dead by the 4th day. So it is necessary that “they know that everyone else knows that everyone else knows” that someone has blue eyes. Otherwise, there is nothing stopping that last person (in the hypothetical) from assuming he actually has brown eyes too.
8 April, 2011 at 11:21 am
none
You mean I can show an islander some source code and ask whether the program halts, and if the answer is “yes” they can tell me that right away, but if it’s “no” then they might have to say “I don’t know”? Of course that’s illogical all by itself, for well-known reasons ;-).
8 April, 2011 at 12:36 pm
Anonymous
The problem here occurs in the form “If X, then Y”.
If Y is an unprovable statement, there is no statement X in that system such that X is equivalent to the implication (X → Y).
8 April, 2011 at 2:55 pm
Blue Eyes and Brown Eyes « mixedmath
[…] bring it up now because it has raised a lot of ruckus at Terry Tao’s blog. It doesn’t seem so peculiar to me, but the literally hundreds of comments at Terry’s […]
9 April, 2011 at 2:29 am
Geoff Robinson
I didn’t read all of the above, just a few. Apologies to anyone who already said what I said. Isn’t the answer that nothing would happen in that scenario? The fault seems to be in the induction hypothesis. If there were 3 islanders to start with, there are two potential scenarios. Two or three of them have blue eyes, in which case the outsider tells none of them anything they didn’t know before: they all knew there was at least one blue-eyed islander.
Or only one of them has blue eyes. In that case, the outsider has told the person with blue eyes something he didn’t know before, but has not told the other two anything new. In this (catastrophic) scenario, the blue-eyed person will commit suicide. As to what happens the following day, I’m not sure the problem is so well defined. If they all know that there are only two possible colours, then the other two should commit suicide (at least if their brains are wired the same way as mine): they should be able to figure out that if only one person has disappeared, the rest of them have non-blue eyes. So if they know there is only one other colour available, they can figure out their own eye colour, and then must kill themselves.
But as the problem is stated, it is not clear to me that they have any reason to know there is only one colour other than blue available. A given islander might have green eyes for all he knows.
So, back to the original question. Nothing happens. The result generalizes: if there are n islanders, and only one is blue-eyed, he will kill himself. If there is more than one blue-eyed islander, no one will kill themselves. The start of the “induction” is atypical and misleading, and this is why the second argument is fallacious.
9 April, 2011 at 3:45 am
boobs
The starting point is impossible. The scenario of solution 2 would already have played out (when the law was first declared), killing everyone. Since the starting point is false, you can assign any truth value to solutions 1 and 2.
9 April, 2011 at 8:11 am
Geoff Robinson
I don’t follow what you mean. Just because the law is declared, that didn’t tell anybody for sure what colour their own eyes were. Only when extra information comes from an outsider is there the possibility that anyone knows for sure what colour their own eyes are. And then a person can only know for sure that (s)he has blue eyes if there is no-one else visible to them who has blue eyes, and the outsider has told them that there is someone somewhere with blue eyes.
9 April, 2011 at 11:02 am
Geoff Robinson
In my original post, I should have said “nothing happens which was not going to happen anyway”, and left everything else out. The blue-eyed islanders will all eventually commit suicide, but not necessarily on day 100 after the foreigner’s announcement. But maybe the problem is slightly ill-posed. The islanders need to have a common starting point from which they start counting. Once they have that, they will all commit suicide, given that composition of the tribe.
It may be the intention of the poser of the question that the foreigner has now given them a common point from which to start counting, which they did not have before, in which case the foreigner becomes an agent of the collective suicide. But if, like “boobs”, one interprets the passing of the law as a common point from which to start counting, then one knows that 99 or fewer days have passed since that time, or else there would be no blue-eyed islanders left at this point in any case. But then the pronouncement by the foreigner has made no difference to what would happen. Maybe this is why the problem is slightly thorny. If the foreigner has given the islanders anything they didn’t have before, it is a common frame of reference from which to start counting. Once they have that, then they are doomed. But is it really clear that they couldn’t have had a common point from which to start counting before the foreigner arrived?
9 April, 2011 at 1:11 pm
mixedmath
Even if there were only two people on the island, both with blue eyes, the additional information from the traveller is necessary for them to kill themselves.
Otherwise, at every point they look around and wonder: hmm, what eye color do I have? They have no certainty about what they think others think they know.
9 April, 2011 at 2:14 pm
Geoff Robinson
I see what you are saying, and you may (finally) have straightened me out about this. Given the information from the outsider, Person 2 thinks: “If my eyes are not blue, then Person 1 must kill himself or herself. But Person 1 did not do so on day 1, therefore my eyes must also be blue.”
So, if there is only one blue-eyed person on the island at the beginning, that person gets new information from the foreigner, which they must act on. But the fact that no one acts on day 1 tells everybody that there are at least two blue-eyed people. But then if there are exactly two blue-eyed people, they must both act on day 2, as they both initially know there are either one or two blue-eyed people, and after day one, they know it must be two, etc. So the induction argument is correct after all. I hope this is the end of it for me.
9 April, 2011 at 2:42 pm
mixedmath
@Geoff Robinson 9 April 2011 at 2:14
Exactly right!
-David Lowry
9 April, 2011 at 5:35 am
Rob Blake
Assume that each islander realizes the trap the foreigner has placed them in. Assume further that at least one of them is altruistic enough to value the life of his tribe more than his own. He goes to the village center at high noon to commit suicide despite not knowing his eye color. The other tribesmen cannot know for certain why the martyr has committed suicide. Perhaps he knew his eye color, perhaps he did not. Does this break the induction?
If more than two people decide to adopt this martyr strategy, only the first to arrive at the town’s center need go through with the ritual. The second can simply watch, secretly relieved that they will be able to live another day.
4 December, 2012 at 8:18 pm
Lenoxus
It’s too bad this is so thumbed down, because it’s basically correct. If N blue-eyed people die on or before Day N, then no one else has to die.
9 April, 2011 at 1:02 pm
Sper
@ Rob Blake: Your comment seems to disregard the second note.
9 April, 2011 at 11:09 pm
Fun with common knowledge « The Leisure of the Theory Class
[…] another game theory puzzle, one that is always fun to come across, especially on Terence Tao’s blog. In the version I know the islanders are less devoted to their religion and more jealous of their […]
10 April, 2011 at 11:57 am
MCF
Online LaTeX Equation Editor:
http://www.codecogs.com/latex/eqneditor.php
10 April, 2011 at 1:01 pm
Jason T. Miller
The second solution seems mostly correct, at least “in the spirit of things.”
The first solution, on the other hand, is a complete non-starter, because the proposition “the traveller’s comments do not tell the tribe anything that they do not already know” does not follow _at all_ from the fact that “everyone in the tribe can already see that there are several blue-eyed people in their tribe.”
First, if the “the tribe” were a hive mind, they’d all have offed themselves at the first opportunity upon accepting their hokey religion. So we instead consider
Proposition A: “For all tribespeople P, there does not exist a fact F such that (a) P does not know the truth value of F before the traveller begins his comments, and (b) P knows the truth value of F immediately after the same.”
This certainly lacks the “intuitive appeal” of the original formulation, but it seems like a “safer” expression of the idea that P “learns something” from the traveller’s speech. Note also that, by “knows,” I mean “could potentially infer from known data.” The circularity in this statement is quite intentional — the puzzle should clearly not be approached as an epistemology exam, the point is that tribespeople, religious as they may be, do not, by hypothesis, “believe” things. Instead, they are in possession of a certain collection of “true facts” at any given time, and any claims as to their actions must follow deductively from these “facts” and “the rules.”
Suppose, then, that proposition A is true, let P be a tribesperson, and assume no facts exist satisfying both (a) and (b) for P. First, this means P is indubitably prescient, at least with respect to the traveller’s comments, otherwise he’ll necessarily learn _something_, at the very least that (a) and (b) hold for no facts other than the fact that (a) and (b) hold for no other facts. If this sounds confusing, it should, but it seems to follow quite directly from any reasonable interpretation of “high logicality.”
In other words, anything P knows immediately afterwards, he knew beforehand, so he cannot _possibly_ learn anything between “immediately before” and “immediately after.” Modulo formalization, this seems to be the only way to exclude various sorts of “higher-order knowledge” without forbidding the very sorts of conclusions we _must_ allow tribespeople to draw from traveller’s comments for a sensible, nontrivial interpretation of the puzzle.
Since this must be true of all tribespeople, the traveller’s actual comments seem to be largely superfluous. But this is not the case, whence _indubitable_ prescience: everyone must, in fact, know that what they know is/was/will be the case, i.e., that they will learn nothing, including that they have learned nothing, that they know they cannot possibly learn anything.
And so on. Yet somehow, solution 1 obtains this from the fact that “everyone in the tribe can already see that there are several blue-eyed people in their tribe.” And I can only think of a one-word “proof”: obvious.
11 April, 2011 at 1:36 am
Geoff Robinson
While I understand the logical structure, and apparent inevitability,
of solution two, I have some issues with the language of the problem.
The second solution effectively credits all the islanders with the
ability to extrapolate back to a scenario when one islander would
have had no choice, and then reconstruct the logically inevitable consequences of that initial lack of choice. I suppose my question is: “is it entirely clear from the phrasing of the problem that this is the only logical choice available to them? Do they have to extrapolate back all the way to a time when there is no choice?”
I would find the following tightening of the first solution
a little more difficult to refute: “Nothing happens. Each islander is logical
enough to realise that their own eye colour might be unique on the island,
and, as things stand at that time, the traveller has given none of them any reason to change that belief”. One can dress up the language further by saying that each islander thinks: “It is indeed fortunate that the traveller did not have the same colour eyes as me, or I would have had no choice but to kill myself. As it stands, I don’t need to do anything, and my 999 logically
minded friends will see that for themselves too”. One could say that some of the islanders are opting to do nothing on the basis of a false premise, but
they don’t know for sure that it’s false, and no one has given them any reason to revise that premise.
Try this with 6 islanders, three brown eyed, and three blue eyed,
at the time of the traveller’s arrival. Each one thinks
“My eye colour could be unique here for all I know, and as far as
anyone else knows, their eye colour could be unique on the island too.
I happen to know that they are all mistaken in that belief, but they
do not know that, and nothing the traveller says has corrected their
belief.” Where then is the need to start extrapolating backwards?
Each islander knows only that if their eye-colour is unique among
islanders, it is neither blue nor brown, because each can see at
least two blue eyed people and at least two brown-eyed people.
11 April, 2011 at 2:17 am
Matthew
It is a brilliant puzzle and solution;
I’m not trying to disprove the puzzle, because in good faith it holds; but I would like to point out something obvious anyway: it’s an island, so they are next to the ocean, and water is reflective
also, one cannot live without water
/that is all\
11 April, 2011 at 3:47 am
Matthew
http://en.wikipedia.org/wiki/Common_knowledge_(logic)#Example
11 April, 2011 at 4:28 am
Matthew
http://boards.4chan.org/sci/res/2868576#2869058
“Anyway, the reason they kill themselves despite having already known at least one other blue-eyed person exists there is because the mentioning of their eye colour itself is the issue. Regarding the group as a whole, he has declared the existence of a blue eyed person, something that has never actually happened before in their tribe. It is forbidden to mention and therefore all of the blue-eyed tribespeople are condemned at once.”
a nice moralistic summation, logic aside.
also, remind me not to join that religion (I’ll be the visitor if I must)
too much thinking all around; sleeping time
11 April, 2011 at 4:32 pm
Pete
There are two relevant things (knowledge) that result from the outsider’s statement:
1) the color blue is special in the islanders’ thoughts
2) the islanders all start considering it at the same time.
Thing 2 is the indirect and hard to spot thing. (Stijn mentioned it above.) It effectively synchronizes the islanders’ thoughts. The structure of the problem suggests that one label the days with the inductive index n, and the outsider’s faux pas forces every islander to count n=1 on the same day.
Thing 1 is necessary as well, otherwise the islanders wouldn’t know to which eye color they should relate the counter n. But once the outsider says “blue”, all islanders know there are either 99, 100, or 101 blue-eyed islanders.
Having both things, the islanders can relate the passage of n suicide-free days to the number of blue-eyed islanders, and the proof by induction applies.
Had the synchronization not occurred, no knowledge could be gained from the passage of days. This would be the case for an outsider simply saying the same thing but only to individuals and on different days. This would also describe the state of the tribe before the faux pas.
Had the eye color not been mentioned, the islanders wouldn’t know which color to count, and there would be nothing to which the passage of days would relate. It’s contrived, but imagine some sort of mass amnesia affecting all islanders. If they all regain their cultural memories on the same day, and they know it, then their counters would synchronize. But there is no danger, because there is no color for them to count.
13 April, 2011 at 12:06 am
Pseudonym
The knowledge which is imparted by the foreigner’s statement is subtle, but important. What the islanders don’t know prior to the statement is what the other islanders do and don’t know about each others’ knowledge.
Everyone on the island knows that there is at least one person with blue eyes. Everyone on the island knows that everyone knows that there is at least one person with blue eyes. Everyone on the island knows that everyone knows that everyone knows that there is at least one person with blue eyes…
…and so on, but only up to a point. Everyone knows up to 99 levels of knowledge about knowledge, but no further. By telling everyone that there is at least one blue eyed person on the island, this pushes the bound of knowledge about knowledge as far as you need to go, because everyone heard that statement, and everyone knows that everyone heard that statement, and so on.
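This “99 levels and no further” claim can be checked mechanically in a toy possible-worlds (“chain”) model. The encoding below is my own illustrative sketch, not something from the thread: worlds are the possible counts of blue-eyed islanders, and in world b a blue-eyed islander considers {b-1, b} possible while a brown-eyed one considers {b, b+1}, so “everyone knows a fact” requires the fact to hold throughout {b-1, b, b+1}.

```python
N = 1000  # islanders; worlds are b = 0..N, meaning "exactly b have blue eyes"

def everyone_knows(fact_worlds: set[int]) -> set[int]:
    """Worlds in which every islander knows the fact. In world b, the union
    of what the islanders consider possible is {b-1, b, b+1} (clipped to the
    valid range), so 'everyone knows' requires the fact there too."""
    return {b for b in range(N + 1)
            if {max(b - 1, 0), b, min(b + 1, N)} <= fact_worlds}

# "At least one islander has blue eyes"
fact = set(range(1, N + 1))

depth = 0
while 100 in fact:          # the true world has 100 blue-eyed islanders
    fact = everyone_knows(fact)
    depth += 1

print(depth)  # 100: 99 nested "everyone knows" hold; the 100th fails
```

Each application of `everyone_knows` peels one world off the bottom of the chain, which is exactly why the foreigner’s public statement (which removes world 0 at *every* nesting depth at once) changes things.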
13 April, 2011 at 12:54 am
Geoff Robinson
I find this puzzle really uncomfortable. I think I now realise why, at a personal level, though this may not apply to others: for me, it is that the
problem highlights the difference between “reasonable” behaviour,
and “strictly logical” behaviour, and the sometimes blurred boundaries
between them in “normal” experience (ignoring the question of whether
committing suicide for those reasons is reasonable).
The following verbalization enabled me to reconcile some of the issues for myself. Assuming that the island was in equilibrium beforehand, the traveller’s announcement has made a difference, but it can be seen as a negative one. Before the announcement, they are logically entitled to be certain that no other islander knows their own eye colour. After the announcement, they are not. Once the
equilibrium is broken, there is only one way for it to be restored:
each blue eyed-islander must commit suicide, and then the
population that remains will be back in equilibrium ( but with one
less eye-colour).
Continuing this perspective, each islander might as well regard
the blue-eyed population (s)he can see as a single organism
(which is, I suppose, what the second solution effectively does).
I also found it helpful to think in terms of minutes, rather than days
(though I know it makes no logical difference). Suppose that
the rule is that each islander must commit public suicide within
one minute of discovering their own eye colour, and suppose further
that each islander has an alarm clock (better be digital!).
Each islander can then think: ” I can treat the blue-eyed population
I see as a single entity- I just need to recalibrate what I mean by
a minute. I see n blue eyed people, so I can treat n “old” minutes
as one “new” minute. Then I will effectively be in the situation of
deciding whether there are one or two blue-eyed islanders.
I’ll set my alarm one “new” minute ahead. If the blue-eyed population
I see has not disappeared when the alarm goes off, then I’m
in the “two blue eyed islander situation”, and must commit suicide myself.”
(If the islander can see no blue-eyed population to start with,
they must commit suicide anyway, given the announcement).
14 April, 2011 at 5:46 am
Geoff Robinson
I really hope that this is my last comment on this problem. I am sure
that I am not alone in that hope. One flippant
remark is that to find the purely logical solution to the question, it
helps to “become part of the problem, not part of the solution”
(where have I heard that before?). Imagine that through some quirk of
fate you have forgotten your own eye colour, and been sent to the island
to recuperate. Then credit the other islanders with being able to reason
the same way you can.
Another (perhaps obvious to many) comment is that, once one moves away from the “idealized” problem, there is a quantitative element to the distinction between what is reasonable and what is logical. Everyone would ( I think) agree that if there was only one blue-eyed islander, then it would be both unreasonable and illogical for that islander not to act on the information
imparted by the traveller. However, suppose that, instead of an island,
you translate the problem to a city of 300000 inhabitants, of whom
100000 are blue-eyed. It might be illogical for all the islanders to
ignore the traveller’s announcement, but it would be entirely unreasonable
for any of them to bother to start counting days (assuming they had
anything like a normal human lifespan) , since in any case any one of them would die before they could ascertain their own eye colour
(I know this contravenes note two, and I know that if you start to go down this route, you have to start modelling for the fact that some of the population will die of natural causes in the meantime, etc, which would
probably become intractably difficult. However, it does illustrate a point).
14 April, 2011 at 9:45 am
Ilkhom
Agree. This is becoming a question of policy – do they really want to know something, if they are going to die after learning it?
14 April, 2011 at 1:02 pm
mixedmath
Going down this line of thinking ruins the logical appeal of the puzzle. It seems to give people enough trouble without layering additional levels of difficulty on top of the problem.
David
15 April, 2011 at 3:25 am
Geoff Robinson
If you were responding to my comment you are certainly right. I do
understand that one is looking at a totally different problem, but I was
trying to reconcile the (for me) counterintuitive nature of the logical answer
by analysing the difference between what is reasonable and what is absolutely logical. One thing the comments on this post show is that different people’s intuition is telling them very different things. For the
mathematician (or mathematics student) it should perhaps not be
too difficult to adopt the maxim ” forget intuition, and trust rigorous
logical argument”. On the other hand, when trying to use mathematics
and isolating the essential properties of the situation you are trying to model, it is important to understand the limitations imposed on the
model you are using by the “simplifying” assumptions made, and perhaps
be prepared to amend the mathematical model to better reflect the
problem you are using it for.
13 April, 2011 at 1:09 am
mircea
Did anybody tell the tribe that there can be only 2 eye colours?
13 April, 2011 at 2:58 am
Ilkhom
A hypothesis:
The tourist taught them that their island is closer to India than to Sweden.
And I personally have never seen a blue-eyed Indian.
And that changed one subtle assumption.
Is it right way of thinking?
13 April, 2011 at 3:05 am
Ilkhom
the outcome is like in the answer, but for different reason
13 April, 2011 at 3:05 am
Ilkhom
like in the answer 2, I meant
13 April, 2011 at 3:51 am
lkozma
Solution 2 is correct, Solution 1 is wrong because there is always transfer of information in the stranger’s speech.
Suppose there was only 1 blue-eyed person:
[before]: Not everyone knows there exist blue eyed people
(the blue eyed person doesn’t know it)
[after]: Everyone knows there exist blue eyed people.
Suppose there are 2 blue-eyed persons:
[before]: Everyone knows there are blue eyed persons.
Not everyone knows that everyone knows that there are blue eyed persons
(the 2 blue eyed don’t know).
If Alice and Bob are blue eyed, Alice knows Bob is blue eyed,
but thinks that Bob thinks nobody has blue eyes.
[after]: Everyone knows and knows that everyone knows etc.
Suppose there are 3 blue-eyed persons:
[before]: Everyone knows there are blue eyed people.
Everyone knows everyone knows there are blue eyed people.
Not everyone knows everyone knows everyone knows there are blue eyed people.
This last statement changes after the speech.
Similarly for any n:
[before]: (everyone knows)*(n-1) there are blue eyed people.
[after]: everyone knows everything.
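The same induction can be phrased as a running common-knowledge lower bound, and simulated; this is my own framing of the argument above, not part of the comment: after the speech, “at least 1 blue-eyed islander” is common knowledge, and each suicide-free noon raises that common lower bound by one.

```python
def suicide_day(num_blue: int) -> int:
    """Day on which all num_blue blue-eyed islanders commit suicide,
    counting day 1 as the first noon after the stranger's speech."""
    lower_bound = 1              # common knowledge: at least 1 blue-eyed
    day = 0
    while True:
        day += 1                 # noon of day `day`
        # Each blue-eyed islander sees num_blue - 1 blue-eyed people;
        # if that is below the common lower bound, they must themselves
        # be blue-eyed, and they act today.
        if num_blue - 1 < lower_bound:
            return day
        # No suicides today: it becomes common knowledge that there are
        # strictly more than `lower_bound` blue-eyed islanders.
        lower_bound += 1

for n in (1, 2, 3, 100):
    print(n, suicide_day(n))     # n blue-eyed islanders die on day n
```

The loop makes the role of the speech visible: without a commonly known starting bound (and a commonly known day 1), `lower_bound` never gets off the ground.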
13 April, 2011 at 4:19 am
Ilkhom
No, not always.
If I say that your nickname is lkozma, the amount of information transferred is zero.
13 April, 2011 at 3:57 am
lkozma
Incidentally, this is why dictatorships don’t allow gatherings of people (or free broadcast media). Everyone knows the regime is bad but not everyone knows that everyone knows etc. When the whole hierarchy collapses, revolutions break out.
13 April, 2011 at 4:37 am
Peter Whittaker
The puzzle can be understood by thinking about it in terms of quantum mechanics and collapsing wave functions, with the collapse leading to an event. In this case, the wave function describes eye colour, and the event is suicide. Sure, it’s silly, but that’s not important; the event could be self-exile, putting on a t-shirt that announces one’s eye colour, etc.
Every islander knows the colour of every other islander’s eyes, but not their own. This means that each islander’s “eye colour information wave function” is not yet collapsed.
In other words, from their own perspective, each islander doesn’t even have an eye colour! They’re not “blue” or “brown” or “green”, or anything else: Each individual’s colour is “?”.
Every islander has exactly the same knowledge as all the other islanders – they all have the same information. No one acts, because no one has sufficient information to act – they each know their eye colour to be “?”. New information is required to collapse that function, to pin “?” to a specific colour.
The stranger’s pronouncement is the new information that causes functions to start collapsing. But they don’t collapse right away unless there is only one blue-eyed person on the island.
If there is 1 person, they know their eye colour to be blue, and so they act on that information.
If there are 2 or more people, and only one has blue eyes, that person immediately knows their own eye colour, and they act.
If there are 2 or more people, and two have blue eyes, each one continues to know their own colour as “?” until the next day, when the other does not act. This tells each of the two blue-eyed people that the other blue-eyed person knew of a blue-eyed person and didn’t act – in other words, the other blue-eyed person didn’t have enough information to act, because they could not know their own eye colour until the time for the action passed.
Once the time passes for the “other guy” to act – to kill himself – the observer who has blue eyes and doesn’t know it now knows there is at least one other person with blue eyes, and by looking at everyone else now knows that it must be himself.
So the next day, they both act.
With three blue-eyed, it takes two days of non-action observed by someone who sees two blue-eyed people for that person to realize there must be one more person with blue eyes, and that it must be them.
Just keep adding a day for each additional person with blue eyes.
The stranger’s words are the trigger that starts wave functions collapsing, but it is only sufficient information in the special case where only one person has blue eyes.
When more than one person has blue eyes, additional information is required for each blue-eyed person to realize their own eye colour, and that additional information is the continued non-action on the part of all of the other blue-eyed people.
Once all the blue-eyed people take their collective action, the island returns to the state wherein everyone’s knowledge of their eye colour is “?”. They may all see brown, but they don’t have enough information to make any conclusions about their personal eye colour.
13 April, 2011 at 5:13 am
Ilkhom
No. Incorrect.
A priori, probability of having blue or brown eyes is 1/2.
Now imagine we have three islanders, each has blue eyes,
then each would think [even after the tourist mentioned he saw blue eyes in the crowd] “Ah, OK, yeah, I know we have people with blue eyes, but everybody still believes that he or she may have any of the colors, so nobody is really sure about colors, therefore no suicides”.
But after the tourist’s speech, the probability of having blue eyes is not 1/2 anymore.
13 April, 2011 at 5:31 am
Peter Whittaker
Your a priori probability is wrong. It is a) not a priori at all, but based on a particular known probability distribution, and b) irrelevant and c) irrelevant – it is irrelevant for two reasons.
It is irrelevant because b) no one takes action on probability on this island, but on certainty – they only act when they know something, and because c) blue and brown aren’t the only possibilities: I could see one person with blue and 999 with brown, and see that person with blue kill themselves tomorrow at noon, and I still would not know that my eyes were green.
13 April, 2011 at 6:25 am
Peter Whittaker
A more general statement of the puzzle….
The puzzle can be stated generally as follows:
There exists a system S in which there are N agents.
Each agent has a property, P, which has a value, Pv.
Discussion of P is forbidden. (Call this rule R1 for later reference.)
If an agent should discover their Pv, they must take an action, A, after time T.
Each agent can see that there are N values of P, that is, Pv1, Pv2, Pv3… PvN
Each agent is ignorant of their Pv, and has no reason to believe it is the same as any observed Pv. This means that there may – or may not – be N+1 values of P, that is, the Pv1, Pv2, Pv3… PvN observed values and their own PvN+1, which may or may not be the same as one of the Pv1, Pv2, Pv3… PvN observed values.
Given these conditions, there is insufficient information for any agent to know their Pv, so no agent takes action A. As far as they are each concerned, the value of their property is Pv?
An outsider arrives, one not bound by rule R1 or one capable of breaking rule R1.
The outsider announces that they see the value PvX. The outsider does not indicate how many agents have PvX, merely that some do.
Any agent that cannot see any PvX immediately knows they are the only one with PvX. They are therefore compelled to take action A after time T.
Any agent that sees one and only one other agent with PvX expects that other agent to take A after T.
When T passes and the other agent does not act, the agent expecting A must conclude that they themself also have PvX, because the only thing that would stop the other agent from acting is uncertainty about their own Pv, and they would only be uncertain if they also saw one other with PvX and were expecting that one other to act. Therefore, each must conclude that there are exactly two agents with value PvX, the one they can observe, and themself.
Therefore, after a second time T, they both take A.
Add another agent with PvX, and repeat with three agents with PvX: After the first time T, each agent with PvX expects nothing, because they can observe two agents with PvX and they know neither has enough information to act. After a second time T, each agent with PvX expects the other two to act, but they do not, which means there must be one more agent with PvX, one that each agent cannot observe – themself.
Therefore, after a third time T, all three take A.
Add another agent with PvX, and repeat….
The reason for dramatic action A (suicide, self-exile), etc., is simply to restore the system S to its initial state: the agents who know their Pv must be removed from the system to return all agents to the original level of ignorance, of uncertainty.
Once all agents with PvX are removed, the original description of S, above, applies with N reduced by 1. The exact number of agents in S, the exact number of agents removed by A after the outsider’s disturbance, and the exact value of N are all irrelevant.
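The generalized system above can be run as a small agent-based simulation. The encoding is my own sketch of the rules as stated (not the commenter’s code): each agent knows only how many agents it can observe with the announced value PvX, and follows one rule, “if the k agents I can see with PvX have not acted by the end of period k, I have PvX too, and I act in period k+1”; once any action A is observed, the chain stops and the remaining agents know they lack PvX.

```python
def run_system(sees: list[int]) -> dict[int, list[int]]:
    """sees[i] = number of agents that agent i can observe with the
    announced value PvX. Returns {period: [agents taking action A]}."""
    log: dict[int, list[int]] = {}
    for period in range(1, max(sees) + 2):
        if log:                  # an action was observed: chain stops,
            break                # remaining agents deduce they lack PvX
        acting = [i for i, k in enumerate(sees) if k == period - 1]
        if acting:
            log[period] = acting
    return log

# 100 agents with PvX (each sees 99 others with it) plus 900 without
# (each sees all 100): only the first 100 act, all in period 100.
print(run_system([99] * 100 + [100] * 900))
```

Note the simulation never consults an agent’s own hidden value, only the counts each agent can observe; the correct behaviour emerges from the rule alone, which is the point of the original puzzle.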
20 April, 2011 at 3:00 pm
UK
Thanks! I couldn’t figure out the solution for quite some time because I couldn’t formulate the problem rigorously in my mind. But your reformulation helped me see the solution clearly.
13 April, 2011 at 1:59 pm
JPP
(Sorry if I’m making a dumb comment or saying something someone already said).
This problem reminds me something I read as a child:
“James Bond is captured. On Sunday, his gaolers tell him that he will be put to death one morning at dawn. The precise day will not be revealed in advance, so that the visit of the executioner will be a complete surprise for him. But in any case, he’ll be dead by the next Sunday at dusk.” The paradox is now the following: Commander Bond can think “If I’m still alive on Saturday (after dawn), I’m doomed to be killed the next day. But this wouldn’t come as a surprise, so this hypothesis is absurd and they can’t kill me on Sunday. So if I’m alive on Friday, I’ll be killed on Saturday, but this wouldn’t come as a surprise, and so on.” By this brilliant induction, our glorious spy safely concludes that the gaolers can’t logically kill him and he will escape this pickle, as usual. The visit of the executioner on Wednesday then comes as a complete (and rather unpleasant) surprise. How to solve this paradox?
I have an odd feeling that this problem and the blue-eyed islander one are kind of related. Am I wrong? What can you say about Bond’s one?
13 April, 2011 at 5:04 pm
Peter Whittaker
Is the problem with the Bond puzzle the layering of hypotheticals? IF I’m alive on Friday, I won’t be killed Saturday, and IF I’m alive….
But IF he is killed Monday, then the whole thing falls apart. I don’t see a contradiction, I just see a conclusion reached on shaky conjecture.
The blue eyed problem is framed in such a way (very strict rules known and followed by all) that there is no conjecture, no probability, just fact, solid information, the others did not commit the act, therefore… and conclusions drawn from that information.
Think of it another way: Bond is reasoning based on potential future events and what those might mean. The islanders reason based on past events and what those must mean, given that the rules are always followed. The past events are the stranger’s announcement and the fact that each day an unexpected number of people have not committed the act.
14 April, 2011 at 8:32 am
Terence Tao
Ah, yes, the unexpected hanging paradox. There are some similarities between the two puzzles, in that one has to work with chains of higher-order knowledge statements (Bond-on-Monday knows that if he is not killed on Monday, then Bond-on-Tuesday knows that …), though now the deductive chain is working backwards in time rather than forwards.
To me, though, the resolution of the unexpected hanging paradox is the fact that once there are two or more days involved, there is an implicit assumption that Bond knows the consistency of his own knowledge (or more precisely, of the consistency of his future knowledge). For instance, suppose that the execution is either at Monday-dawn or Tuesday-dawn, and consider the state of Bond’s knowledge at Sunday-dusk (just before Monday-dawn) and Monday-dusk (between Monday-dawn and Tuesday-dawn). Let me abbreviate these states of knowledge as Bond-Sunday and Bond-Monday respectively.
Bond-Sunday knows that if there is no execution at Monday-dawn, then Bond-Monday will know the following facts:
1. There was no execution at Monday-dawn.
2. Bond-Monday knows that there was no execution at Monday-dawn.
3. The execution is either at Monday-dawn or Tuesday-dawn.
4. Bond-Monday knows that the execution is either at Monday-dawn or Tuesday-dawn.
5. On the date of the execution, Bond will be completely surprised by the event.
These five facts are inconsistent with each other, and hence Bond-Monday’s knowledge would be inconsistent. However, in order for Bond-Sunday to then deduce (via reductio ad absurdum) that there is an execution at Monday-dawn, he must first know that Bond-Monday’s knowledge is consistent. But this itself will lead to an inconsistency in Bond-Sunday’s knowledge, by similar arguments to above (and so forth back in time, if we consider more than two days).
So we see that the unexpected hanging paradox is in fact closely related to Gödel’s second incompleteness theorem, which roughly speaking prevents any reasonable and consistent formal system from knowing its own consistency. This connection is discussed in detail in this recent Notices article of Kritchman and Raz.
Note that the blue-eyed islander puzzle is unaffected by this issue, because at no point does any islander need to know that any other islander’s knowledge is consistent (or any iteration thereof). Instead, they need to know that the other islander’s knowledge is logical (i.e. closed under the rules of deductive logic), which is a different property.
14 April, 2011 at 9:29 am
Ilkhom
But does your post mean that solution is already given?
I am still dissatisfied with it then; I see it relies on the intention of the tribespeople to know their own eye colour. It is unclear whether, when given an opportunity to figure out their own eye colour, they would want to pursue it.
19 April, 2011 at 11:51 am
Pace Nielsen
“Note that the blue-eyed islander puzzle is unaffected by this issue, because at no point does any islander need to know that any other islander’s knowledge is consistent (or any iteration thereof).”
So, just to see if I got this: we, the puzzle solvers, may assume that the people in the puzzle are consistent to come to our solution, but the people in the puzzle itself cannot make such an assumption.
19 April, 2011 at 11:59 am
Pace Nielsen
Or to put it another way:
Suppose that in the puzzle there are exactly two eye-colors: blue and brown, and all the islanders know this fact (and know the others know this fact, etc…)
When all the blue-eyed islanders kill themselves, does it necessarily follow that the brown-eyed islanders must kill themselves the next day? For if they do not have to assume that their fellow islanders have consistent knowledge, then could they not conclude that it is possible that those blue-eyed islanders killed themselves early because of their inconsistent knowledge?
(Note: I’m also not sure that knowledge acts like an axiomatic system in all the relevant ways to speak of formal consistency…)
4 May, 2011 at 10:39 pm
Alexey-Voropaev
Would you please answer also:
Is this issue somehow related to Poincaré’s theorem about the three-body problem?
Sorry for my English : (
13 April, 2011 at 10:58 pm
Ilkhom
Answer 1 is right.
[it is wrong only in degenerate cases of 1, 2, or 3 people]
The religion prevents them from discussing the topic.
Cases with more than 4(?) or so people require implicit communication through observing the events of suicide. But as mentioned above, any communication is prohibited, and the lack of suicide has no disambiguating power.
16 April, 2011 at 12:53 pm
Big Red Bits » Reasoning about Knowledge
[…] to Petra in Jordan together with Vasilis Syrgkanis and in the road we kept discussing about the Blue-Eyed Islanders Puzzle, posted by Terry Tao on his blog. It is a nice puzzle because it allows us to reflect on how to […]
16 April, 2011 at 4:17 pm
moonshadow
One free thinker in the bunch would realize that there is money to be made in starting a new religion. He would renounce the existing religion based on a revelation from “God”. He, or more likely She, would have “visions” to guide the people to a realization that they should not only know their eye color but embrace it. For a nominal fee they would meet each week and hear the Prophet speak. They would go out to their friends and share the good news that they did not have to die. They could know the color of their own eyes and live!!! PRAISE THE PROPHET!!!!!!!!!
17 April, 2011 at 5:16 am
Joseph
I keep seeing comments, here, on xkcd, on the wikipedia page, that say “imagine if there were x islanders”. But there are not: there are 1000, and it is common knowledge that there are 1000. Likewise comments like “imagine there are 2 (or 3 or 4) blue eyed people”. But all the islanders KNOW there are NOT 2 or 3 or 4; they all KNOW that there are EITHER 99 OR 100 OR 101 (that is, every one of them sees either 900 or 899 brown eyed people and 99 or 100 blue eyed people, and each knows that they may have blue eyes). I cannot understand how they begin their induction before day 99, as they all know that no one is going to commit suicide on day 1 or 2 or 3 etc., because there is no way to infer their own eye color on any of those days. On day 99 there is no preceding case; it has NOT been demonstrated by the lack of suicides that there are NOT 98 blue eyed people on the island. That fact was already established, and KNOWN TO ALL.
Smart people seem to believe that induction is possible and that solution 2 is correct, but I have not seen any explanation that isn’t manifestly “hand-waving”. Is anyone aware of a PEDAGOGICALLY SOUND explanation of this puzzle?
Also, the common knowledge page on Wikipedia pulls the same trick with the knowledge that the foreigner adds; they say “imagine the number of tribesmen is n”. BUT IT’S NOT! It’s 1000! Does anyone know of a PEDAGOGICALLY SOUND explanation, without hand-waving or appeal to “multi-modal logic”, of why induction should be possible in a case where it is already KNOWN that there are more than 50 people on the island who have blue eyes?
I am deeply suspicious of people who assert wildly unintuitive things about seemingly simple scenarios and then to prove them have to resort to talking about theoretical frameworks only a tiny percentage of the worlds population could possibly be familiar with.
18 April, 2011 at 9:48 pm
Tollens
“On day 99, there is no preceding case; it has NOT been demonstrated that there are NOT 98 blue eyed people on the island by the lack of suicides; that fact was already established, and KNOWN TO ALL.”
Here is the last step in the induction: if no one commits suicide on day 99, then there could not have been 99 blue-eyed people (this was not known beforehand). Hence there must be 100 blue-eyed people, and on day 100 all the blue-eyed people commit suicide.
I suggest reading Lkozma’s comment which makes the solution much more intuitive.
18 April, 2011 at 10:14 pm
Anonymous
I am impressed by the longevity of the enthusiasm on this puzzle. I guess I should check back once a while for new interesting comments.
19 April, 2011 at 11:47 am
megatron
Hi,
From an engineering point of view (EE or CS), let's imagine the island with its islanders as the system under test. The foreign visitor is the external excitation signal.
Say the purpose is to input a signal (foreigner’s action) to kill every blue eyed islander.
solution 1: the foreigner can point to each blue eyed islander and tell him/her that he/she is blue-eyed
solution 2: the foreigner can count and say that there are 100 blue-eyed islanders
In addition, the input signal can be deceiving: for example, if the foreigner says that there are 99 blue-eyed islanders, each blue-eyed islander counts 99 others, while each brown-eyed islander counts 100 and knows that it's a false claim, so nobody dies.
But what if the foreigner says there are 101 blue-eyed islanders? Then the blue-eyed won't die, but all the brown-eyed will die the next day, thinking that they are blue-eyed.
Killing the 100 blue-eyed islanders in either of these ways takes precise counting, which is computationally expensive.
What is amazing to me is that, for such a system, there exist simple “algorithms” to achieve the goal:
1. the foreigner just says: “how unusual it is to see another blue-eyed person …”
or
2. (deceiving information) the foreigner just says: it's amazing to see a red-eyed person (which kills every islander)
Can some theorist generalize this phenomenon?
19 April, 2011 at 11:55 pm
Varjabedian
Dear prof. Tao,
The foreigner gives a starting day to apply solution 2, for me that’s the information missing.
23 April, 2011 at 12:32 am
AnusInFabula
I think there are two possible cases:
1)
If every islander knows the statistics, he can count the number of islanders who have blue and brown eyes, then deduce his own eye colour and commit suicide.
The tribe could not exist.
2)
If the islanders don't know the statistics, the foreigner's assertion does not influence the life of the islanders in any way, because we have this logical scheme:
A (islander's reality): there are many blue-eyed people in this place
B (foreigner's assertion): there are some blue-eyed people in this place
Deducing from assumptions A, B that my eye colour is X has no logical basis, because A already entails B.
If someone commits suicide over this, it is the same as if, knowing that (there exists x ∈ ℝ such that x = 1), somebody comes and tells me (there exists x ∈ ℝ such that x = 1), and I then deduce y = 1.
There's no way.
25 April, 2011 at 11:48 pm
Ajay Ravi
Here is a proof along the lines of Solution 2 which shows clearly that the traveller's announcement of the presence of a blue-eyed person provides new information that forces all n blue-eyed people to commit suicide.
The proof is structured into numbered propositions, with proposition N providing the proof for n=N blue-eyed people:
Proposition 1: if n=1 AND the traveller announced there is a blue-eyed person, then the blue-eyed person commits suicide after 1 day.
Proof: the blue-eyed person realises he is the only one with blue eyes and commits suicide the next day.
Proposition 2: if n=2 AND the traveller announced there is a blue-eyed person, then the blue-eyed people commit suicide after 2 days.
Proof:
All blue-eyed persons reason in this way:
if I am not blue-eyed then there is 1 blue-eyed person on the island, and since the traveller announced there is a blue-eyed person, the lone blue-eyed person will apply Proposition 1 and commit suicide after 1 day.
Therefore if there is no suicide after 1 day, I must also be blue and I will have to commit suicide after 2 days.
Proposition 3: if n=3 AND the traveller announced there is a blue-eyed person, then the blue-eyed people commit suicide after 3 days.
Proof:
All blue-eyed persons reason in this way:
if I am not blue-eyed then there are 2 blue-eyed people on the island, and since the traveller announced there is a blue-eyed person, the 2 blue-eyed people will apply Proposition 2 and commit suicide after 2 days.
Therefore if there are no suicides after 2 days, I must also be blue and I will have to commit suicide after 3 days.
…
Proposition N: if n=N AND the traveller announced there is a blue-eyed person, then the blue-eyed people commit suicide after N days.
Proof:
All blue-eyed persons reason in this way:
if I am not blue-eyed then there are N-1 blue-eyed people on the island, and since the traveller announced there is a blue-eyed person, the N-1 blue-eyed people will apply Proposition N-1 and commit suicide after N-1 days.
Therefore if there are no suicides after N-1 days, I must also be blue and I will have to commit suicide after N days.
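These propositions can be checked mechanically. Below is a small simulation sketch (an editorial illustration, not part of the original comment; the function name `simulate` is invented): every islander counts the blue eyes they see and, following the strategy of proposition N, concludes that their own eyes are blue if day s+1 arrives with no suicides, where s is the count they see.

```python
def simulate(n_blue, n_brown):
    """Simulate the induction: return the day of the suicides and who dies.

    Each islander follows the strategy described above: an islander who
    sees s blue-eyed people waits s days; if nobody has died by then,
    he concludes his own eyes are blue and dies on day s + 1.
    """
    colours = ["blue"] * n_blue + ["brown"] * n_brown
    # Number of blue-eyed islanders each person can see (everyone but self).
    seen = [n_blue - (c == "blue") for c in colours]
    day = 0
    while True:
        day += 1
        dying = [c for c, s in zip(colours, seen) if day == s + 1]
        if dying:
            return day, dying

# One blue-eyed islander dies the day after the announcement...
assert simulate(1, 999) == (1, ["blue"])
# ...and with 100 blue-eyed islanders, all 100 die together on day 100,
# while the 900 brown-eyed islanders (who each see 100 blues) survive.
assert simulate(100, 900) == (100, ["blue"] * 100)
```

Note that the simulation merely encodes the strategy; it does not by itself justify why the strategy is epistemically sound, which is exactly the point under dispute in this thread.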
29 April, 2011 at 9:53 am
ETzortzakis
Solution 1 is wrong!
Proof: suppose there are n blue-eyed persons (BEPs) A_1, A_2, …, A_n.
- For n=1 it's obvious that the traveler's statement adds new info. Then starts what we call the 1-sub-game, which is A_1 realizing his own eye color and committing suicide the next day.
- For n=2 the traveler's statement again adds new info:
indeed, A_1 knows that there is at least one BEP, but he doesn't know that A_2 has this knowledge. But now (after the foreigner's remark) he knows that A_2 also knows this. So A_1 reasons that if his own eye color is brown, then A_2 plays the 1-sub-game, which will conclude with A_2 committing suicide on the 1st day. So if on the 2nd day A_2 is alive, then no 1-sub-game ever happened, so A_1 realizes that his own eye color is blue and thus commits suicide on the 2nd day. Symmetrically, A_2 does the same.
- For n=3: A_1 knows that there is at least one BEP.
A_1 knows that A_2 and A_3 know that there is at least one BEP.
BUT A_1 doesn't know whether A_2 knows that A_3 is aware of the fact that there is at least one BEP!
We can now see clearly where Solution 1 fails: the foreigner's remark gives the tribe meta-knowledge (and meta-meta-knowledge), i.e. knowledge about others' knowledge. So A_1 reasons: if I have brown eyes, then the other two, with their new knowledge, play a 2-sub-game and will commit suicide on the 2nd day; else I am a BEP and we all commit suicide on the 3rd day.
The rest of the induction is clear from n=3:
this faux pas gives A_1 the knowledge that A_2 has the knowledge that … that A_{n-1} has the knowledge that A_n has the knowledge that there is at least one BEP in the tribe.
30 April, 2011 at 11:32 am
Dale
Most blue-eyed people will not know how many blue-eyed people are on the island. (Do you know how many blue-eyed people live in your neighborhood?) Being highly logical, when they recognize the iteration, they will have good reason not to know the number. Some will commit suicide on the 100th day (they had counted), but everyone else (brown- or blue-eyed) will continue to not count the number of blue-eyed people.
5 December, 2012 at 11:39 am
Lenoxus
This answer makes sense, except that if all the islanders are aware of the possibility of other islanders not counting, then no one will ever commit suicide.
30 April, 2011 at 11:47 am
Dale
Actually, we could call this the “Sergeant Schultz” solution (for fans of Hogan's Heroes)
30 April, 2011 at 12:21 pm
G. Siamantas
The first solution seems to be the correct one.
The second solution requires that at least one of the n-1 blue-eyed islanders know his/her eye color, which is not true.
This is easily demonstrated when n=2.
Both blue-eyed residents are ignorant of their own eye color; their decisions are mutually exclusive.
After the first day they will both wait indefinitely, because they both know that they don't know their eye colors.
Only if one of them knew that the other knows his eye color would he commit suicide the next day.
The only guy that will be probably forced to commit suicide will be the traveller because he talks too much…
30 April, 2011 at 7:01 pm
al
I was struggling with why the foreigner triggers everything, until I read the Wikipedia article on common knowledge and the light dawned.
As far as any single islander knows, all the other islanders are blind (or otherwise ignorant of the eye colours of other people).
On the day the foreigner makes his speech, each islander discovers that every islander knows that at least one islander has blue eyes. After the first non-suicide day, each islander discovers that every other islander knows that at least two islanders have blue eyes and so on and so on.
Without this knowledge of what the other islanders know, then each islander cannot assume that all the blue-eyed islanders would kill themselves on the right day.
PS Many of Nicole Krieger’s comments are amusing.
5 May, 2011 at 6:04 am
G. Siamantas
In this variant of the muddy children puzzle, the information provided by the traveller does not add anything to the knowledge that every blue-eyed tribesman already has. They all already know that there are 99 other blue-eyed islanders. The solution to THIS puzzle is Solution 1.
The only way the traveller's words can trigger a sequence of suicides in the islander community is if the islanders have in mind the solution of other versions of this puzzle, as most people in these comments do.
In order for Solution 2 to be true, the description of the puzzle would have to include the element of synchronicity in the islanders' thoughts, i.e. all the islanders reasoning in complete synchronization. This is NOT stated in the given puzzle description. It is usually given in the descriptions of other variants of the puzzle.
A general comment on all variants of the puzzle: in order for the solution to be the suicide of all islanders, or for all children to say that they have mud on their foreheads, etc., all islanders, children, etc. must reason inductively on n (in mathematical terms), and this is normally not the case in normal people's thinking.
5 May, 2011 at 10:43 am
al
I would say this is covered by “all the islanders are highly logical” and Note 1 and Note 2.
There are millions of possible caveats you could exploit to avoid conclusion 2.
30 June, 2011 at 6:22 pm
hugo
The foreigner does not act as a trigger; we just assume he will cause the same thinking in all the tribespeople, which is to count. And they all assume that in their world everyone will do it.
So it is the fact that they “know” everybody will start to think that makes each one of them start worrying about themselves.
I'm sorry for my bad English; I'm Portuguese.
6 May, 2011 at 12:21 am
jqanderson
Has anyone mentioned that this is game theory 101? The answer hinges on the concept of common knowledge, and there are a number of ways to formalize it. Ask your econ professor or consult any graduate-level game theory handbook for precise statements.
7 May, 2011 at 6:24 am
Phanindra
I think this puzzle can be solved if one understands the “depth” of knowledge. Here is my go at a solution:
Let 1) “k” denote the string “knows that”
2) “e” denote the string “there is a blue eyed person in the tribe”
to avoid writing the phrases repeatedly.
I looked at a much simpler case to start off with. Assume that there are only three people in the tribe, all of them blue-eyed. Name them a, b, c.
At the beginning of time (in this problem) one can say that the following are true:
1) d k e
2) d k (f k e)
where d belongs to {a,b,c} and f belongs to {a,b,c}. However, at the next level of “depth”, the following statements are false:
1) d k (f k (g k e))
where d belongs to {a,b,c}, f belongs to {a,b,c} and g belongs to {a,b,c}.
Note that if any of the above false statements somehow becomes true (here, with the passage of time and by observing whether people are committing suicide or not), then it can be shown that d knows his/her eye colour (where d belongs to {a,b,c}).
The words of the traveller make these statements true at the beginning of day three. Even though on the first day his words do not help the tribe at all, they become handy after some time. Unfortunately, at the height of their knowledge, they choose to kill themselves!
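These depth claims for the three-person tribe can be verified by brute force over possible worlds. The sketch below is an editorial illustration, not part of the comment (the function names `knows` and `indistinguishable` are invented): a world is a tuple of eye colours, islander i cannot distinguish two worlds that differ only in his own colour, and “d k (f k e)” means that e holds in every world reachable along the chain d, f.

```python
from itertools import permutations, product

def indistinguishable(w1, w2, i):
    """Islander i cannot tell worlds w1 and w2 apart: they agree on
    every eye colour except possibly islander i's own."""
    return all(a == b for j, (a, b) in enumerate(zip(w1, w2)) if j != i)

def knows(world, chain, n):
    """Evaluate 'chain[0] k (chain[1] k (... k e))' in `world`, where
    e = 'there is a blue-eyed person in the tribe' and n is the number
    of islanders."""
    if not chain:
        return "blue" in world
    head, rest = chain[0], chain[1:]
    return all(knows(w, rest, n)
               for w in product(("blue", "brown"), repeat=n)
               if indistinguishable(world, w, head))

# Three islanders a, b, c (indices 0, 1, 2), all blue-eyed.
real = ("blue", "blue", "blue")
# 1) d k e and 2) d k (f k e) hold for all choices of d and f...
assert all(knows(real, [d], 3) for d in range(3))
assert all(knows(real, [d, f], 3) for d in range(3) for f in range(3))
# ...but d k (f k (g k e)) fails for distinct d, f, g, as claimed:
# the chain can reach the hypothetical all-brown world.
assert not any(knows(real, list(p), 3) for p in permutations(range(3)))
```

The failing third statement is exactly the point of the comment above: before the traveller speaks, no islander knows that a second islander knows that the third knows there is a blue-eyed person.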
7 May, 2011 at 6:37 am
Phanindra
A minor mistake in the above: for the false statements, d != f != g. Some of the rest are trivial and the others are not that important.
10 May, 2011 at 12:14 pm
FV
1. If everybody knew that there are only two colors, they would know immediately about Solution 2, i.e. that eventually they would learn their eye colors, so they could commit suicide immediately if they felt like it.
2. Nicole Krieger mentioned the possibility of an evil foreigner with grey eyes. Another possibility is an islander too scared to commit suicide, in which case the following day everybody would have to commit suicide.
10 May, 2011 at 12:42 pm
FV
3. Of course, as someone mentioned above, if they knew that there are “exactly two colors” then by the same Solution 2 they would all commit suicide even without the foreigner.
However, what if they all knew that there are “at most two colors”? Then the first induction step would not work, since the lone blue-eyed person would not kill himself: he could assume that he too has brown eyes. However, in the case of “at most two colors” and a 100/900 breakdown, they know immediately that there are “exactly two colors”. Isn't that paradoxical?
10 May, 2011 at 1:42 pm
FV
Oh, I see! Announcing whether there are “at most two colors” or “exactly two colors” reveals different “common” information, even though to each individual it is exactly the same! Nice!
10 May, 2011 at 3:47 pm
SJO
Actually, all the islanders will know the statistics of the population almost completely. In a large population of 1000 people, each islander can compute a distribution based on observing everyone but themselves, and those distributions would be nearly identical.
12 May, 2011 at 5:13 pm
Tad Boniecki
Please put me out of my agony! I have read through the blue eyes puzzle a few times, as well as the answers on the Net. It doesn’t make sense to me. The solutions start with the case where there is one blue-eyed islander, who then leaves on hearing the blue eyes statement. They then move on to the case of two blue-eyed islanders hearing the same statement.
The problem is that the announcement that there is an islander with blue eyes is made *only once*, so it cannot be used in 100 separate cases. Sure, if there were only one blue-eyed islander then he would leave on hearing the statement, but we start with 100 blue-eyed islanders, not one. I think the inductive logic fails because it assumes that the statement about someone having blue eyes is made every day.
Come to think of it, even if the statement about blue eyes were to be made every day, that still would not help. The solution is only valid if we can get from the case of 100 to 99 blue-eyed islanders.
Let's strip the puzzle back to essentials. Suppose the scenario is that there are only two islanders, both with blue eyes, and that the outside person says, “I see someone with blue eyes.” Can we get from two islanders to one in this simplest version of the puzzle? I think not, because the statement about there being a blue-eyed person is never made to just one islander. Because it is never said to just one islander, we cannot work back to this case from the case where there are two islanders.
Another problem is that the announcement of there being a blue-eyed islander does not tell anyone anything. Every one of the islanders can already see a blue-eyed islander, so apart from saying it out loud, the statement can have no effect, as no information is being given. If each islander thought this thought, rather than hearing it said out loud, then it would have the same effect. If the inductive proof were valid then all the islanders would have left 100 days after arriving at the island, whether or not anyone made the statement out loud.
In conclusion, the statement “There is a blue-eyed person” can *only* have an effect if there is just one blue-eyed person to hear it. This never happens, as the statement is only made once, to the initial group of 100. So theorem one is false, because the case of one person hearing the statement never happens.
Thanks
Tad
13 May, 2011 at 8:08 am
Terence Tao
The key point here is that the “real” world in the puzzle – i.e. the world of what actually is true in the puzzle (i.e. the world in which there are 100 blue-eyed islanders) – is not the only world that one has to analyse; various “purely hypothetical” worlds, in which there are other numbers of blue-eyed islanders, also influence the behaviour of the islanders. The reason for this is that while you know that such worlds are purely hypothetical, the islanders do not, and so must also take those worlds into consideration. (More subtly, there are worlds which each islander knows are purely hypothetical, but they do not know that the other islanders know to be hypothetical – such as the world in which there are only 98 blue-eyed islanders instead of 100. Somewhat surprisingly, these worlds need to be taken into consideration also. And so forth.)
A familiar everyday example of this phenomenon (at least at the first-order level) comes from the game of poker. A player may, in reality, hold a lousy hand, but may be able to successfully bluff other players into thinking that he or she in fact holds a very high-ranking hand. The hypothetical world in which this player holds a good hand is not the real world, but the other players do not know that fact – and as a consequence, this bluff can significantly alter their behaviour. Higher order versions of this phenomenon can occur when a bluffing player has to try to gauge if another player has “bought” his or her bluff and has a genuinely strong hand, or does not believe the bluff and is counterbluffing. Only one of these alternatives can be the truth, but the mere possibility of the other alternative can still cause a real change in the player’s behaviour.
In a similar way, in an island in which there are two blue-eyed islanders, the hypothetical world in which there is only one blue-eyed islander is a possible world to each of the two blue-eyed islanders, and can only be ruled out one day after the foreigner speaks.
In an island in which there are three blue-eyed islanders, the hypothetical world in which there is only two blue-eyed islanders is a possible world to each of the three blue-eyed islanders, and can only be ruled out two days after the foreigner speaks. And so forth.
The only hypothetical worlds which can be completely eliminated from one’s analysis are those worlds that are commonly known to not be possible. In particular, once the foreigner speaks in full view of all the islanders, it becomes common knowledge that there is at least one blue-eyed islander, and so the world in which there are no blue-eyed islanders can be ruled out completely from that point onwards.
To understand the puzzle properly, one has to “get inside the head” of an islander, and see things from the islander’s point of view, taking into account that the islander may not know everything that you, as an external observer, know. (In particular, hypothetical worlds which you know to be impossible, can become possible once one adopts an islander’s perspective.) Furthermore, one has to iterate this process, getting inside the head of an islander as he or she in turn gets inside the head of another islander, and so forth. (In fact, this process has to be iterated 100 times in order to fully resolve this particular puzzle!)
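The remark that the process has to be iterated 100 times can be made concrete with a toy calculation (an editorial sketch, not part of the comment; the function name is invented): each level of “islander X considers what islander Y considers…”, taken from the viewpoint of a blue-eyed islander, can lower the conceivable number of blue-eyed islanders by at most one, so the hypothetical worlds reachable after k levels of nesting are those with at least n - k blue-eyed islanders.

```python
def min_conceivable_blues(n_blue, depth):
    """Smallest blue-eye count not ruled out after `depth` levels of
    nested 'islander considers what another islander considers...'.

    Each nesting level is taken from the viewpoint of a blue-eyed
    islander, who sees one fewer blue-eyed person than the level above.
    """
    return max(n_blue - depth, 0)

# With 100 blue-eyed islanders, 99 levels of nesting still reach a
# hypothetical world with a single blue-eyed islander, and 100 levels
# reach the world with none at all; it is exactly that last world that
# the foreigner's announcement removes from consideration, by making
# "there is at least one blue-eyed islander" common knowledge.
assert min_conceivable_blues(100, 99) == 1
assert min_conceivable_blues(100, 100) == 0
```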
15 May, 2012 at 3:40 pm
Jeremy
The hypothetical worlds part is where it seems the logical argument parts ways with the real world.
It has to be known to these logical people that whatever the real number of blue-eyed people is (let’s call it N), blue-eyed people see N-1 blue-eyed people and the rest see N blue-eyed people. So it is Common Knowledge that everyone has first-hand evidence of at least N-1 blue-eyed people.
That this is Common Knowledge follows as a consequence of the GIVEN that all the islanders are completely logical AND THAT THAT IS COMMON KNOWLEDGE, that they can all see everyone else’s eye color, and are ignorant only of their own eye color.
So every tribe member can be absolutely sure that whatever number n of blue-eyed people they see, no other tribe member could possibly conclude there are fewer than n-1 blue-eyed people, and this, too, would be Common Knowledge. Furthermore, since n >= N-1, no tribe member could possibly conclude that any other tribe member concludes there are fewer than N-2 blue-eyed people.
Stated differently, every tribe member is certain that their first-hand evidence of n blue-eyed people requires the world to be such that N >= n, and that everyone else has first-hand evidence of at least n-1 (hence at least N-2) blue-eyed people. Thus even though I may conclude that another islander thinks there is one fewer blue-eyed person than I do, I cannot transitively think that that islander thinks that another islander would think there could be even one fewer. This is because I know the world is such that if I am seeing N-1 blue-eyed people, I have blue eyes and no one else is seeing fewer blue-eyed people; alternatively, if I am seeing all N blue-eyed people, I do not have blue eyes, so no other person would suspect I see fewer blue-eyed people than they do.
So it must be Common Knowledge that there are at least N-2 blue-eyed people. That is, it is Common Knowledge that every islander will, from first-hand observation and reasoning, conclude that the lower bound of every other islander's estimate is >= N-2.
The key here is that it is Common Knowledge that any islander would have first-hand knowledge of at least N-1 people with blue-eyes. (Or perhaps the key is that the blue-eyed islanders are interchangeable.) In a hypothetical world with 3 blue-eyed islanders, there is no hypothetical world with no blue-eyed islanders. All the blue eyed islanders see 2 other blue-eyed islanders and cannot imagine a world where a blue-eyed islander does not see a blue-eyed islander (and of course all the rest see 3 blue-eyed islanders), so it is not a possibility for any islander to conclude there are no blue-eyed islanders.
15 May, 2012 at 4:10 pm
Jeremy
Of course, knowing that there are at least n-1 or N-2 blue-eyed people doesn’t tell me whether or not I have blue eyes. I will only know I have blue eyes when I know N > n. I will only know that when I know the n blue-eyed people I see also think that N > n. So what the foreigner does is start the clock ticking. Today we know n > 0. Tomorrow, when nobody does anything, we know n > 1, and the next day we know n > 2, and so on. In our world with 100 blue-eyed islanders, everyone knows that everyone knows there are at least 98 and everyone can see there are at least 99, but only on day 100 do those people who see 99 know that they are one of the 100 blue-eyed people and not a non-blue-eyed person amongst 99 blue-eyed people.
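The ticking clock described above can be sketched as follows (an editorial toy model, not part of the comment; the function name is invented): the announcement makes n > 0 common knowledge, each quiet day raises the commonly known lower bound by one, and an islander who sees s blue-eyed people deduces his own colour once the bound exceeds s.

```python
def day_of_deduction(seen):
    """First day on which an islander who sees `seen` blue-eyed people
    can deduce (if it is true) that his own eyes are blue."""
    day, known_at_least = 0, 1  # announcement day: commonly known n > 0
    while known_at_least <= seen:
        day += 1                # another day passes with no suicides...
        known_at_least += 1     # ...so it becomes known that n > day
    return day + 1

# A blue-eyed islander among 100 blues sees 99 others; the bound first
# exceeds 99 after 99 quiet days, so the deduction lands on day 100.
assert day_of_deduction(99) == 100
# The lone blue-eyed islander of the n=1 case deduces it immediately.
assert day_of_deduction(0) == 1
```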
15 May, 2011 at 5:06 pm
Tad Boniecki
Sorry to be a nuisance, but I am as confused as before. You wrote about hypothetical worlds and that the islanders have to take these into account. Fine. But I still don’t see how the saying out loud of “I see blue eyes” can have any effect, given that each of the islanders can think this without the benefit of outside help. Each islander also knows that every other islander knows this, and so on.
Secondly, I can’t follow the proof of the case where n=1 because the statement about blue eyes is only made once, to 1000 islanders, never to just one or two. It cannot be hypothesised because it simply does not happen within the framework of the puzzle.
What am I missing??
Thanks
Tad
16 May, 2011 at 7:49 am
Terence Tao
You are getting close to the key point in the puzzle, which is that the “and so on” in your first paragraph is actually false. (This is easiest to see in the case when there are 2 blue-eyed islanders.)
In the inductive argument, n is the number of blue-eyed islanders, not the number of islanders in total, which is indeed commonly known to be 1000. In contrast, n, while actually equal to 100, is not commonly known to equal 100, and can hypothetically be anything from 0 to 1000 once one works in a sufficiently highly nested knowledge framework.
28 November, 2012 at 10:49 pm
Jeremy
No, see above. The islanders can employ other logic to prove that no islander can hypothesize n < 98.
16 May, 2011 at 8:06 am
FV
Tad,
The point is that by saying “I see blue eyes” he does reveal information they previously did not have; it's just not obvious information. Suppose that there are two blue-eyed people, A and B, and a lot of brown-eyed people. Each of them knows that there is a blue-eyed person. But A does not know whether B knows that there is a blue-eyed person. When A looks at B, he thinks: “Well, I and everyone else except B know that there is a blue-eyed person. But does B know that there is a blue-eyed person? I don't know… It really depends on whether I have blue eyes or not.” After the foreigner's announcement A thinks: “Now B knows for sure that someone has blue eyes. If I have blue eyes then B will think it's me and not kill himself on the first day. But if I do not have blue eyes, B will know that it must be him and kill himself on the first day.” So the foreigner reveals information, not about what each of them already knows about the others' eye colors, but about what each of them knows about what the others know.
Try the same with three blue-eyed people: A, B and C. A knows that B knows that someone else has blue eyes, namely C. But when A looks at B and wonders “Does B know that C knows that someone else has blue eyes?”, the answer is that A is not sure, since it depends on A's own eye color; the foreigner's announcement changes this.
16 May, 2011 at 8:29 am
lkozma
You can also check my first comment above, where I explained what is the information transmitted by the foreigner’s speech:
https://terrytao.wordpress.com/2011/04/07/the-blue-eyed-islanders-puzzle-repost/#comment-51675
16 May, 2011 at 8:16 am
FV
Dear Terry,
Wouldn't it be fun to have an undergraduate class (or even a graduate class for law majors!) consisting entirely of logic puzzles like this one? This one alone would take at least one class. Is there a good book with 20-30 different logic puzzles such that each would fill one class? Could you make a list of as many non-trivial puzzles as you know, and ask others to contribute? Also, it would be fun to think of ways of using these types of puzzles to force others to reveal information they are hiding, i.e. logic-based interrogation techniques! Better than water-boarding! If you find this idea interesting, I cannot think of a better way to turn it into a collaborative project than your blog! Thanks, Terry!
16 May, 2011 at 12:29 pm
farzin
I think the problem is the difference in the nature of the question when n=1 and when n>1.
These problems are different because when n=1 everybody except the blue-eyed guy knows that both blue and brown eye colours exist, whereas when n>1 everybody knows that the two colours exist; so the information in the problem is not the same when n=1 and when n>1. I think the inductive logic does not work here, since we have two different types of problem.
16 May, 2011 at 2:33 pm
farzin
In other words, the information provided by the foreigner changes the problem only if n=1 or n=2, since if n>2 everybody already knows that blue and brown eyes exist. Therefore the right induction for n>2 would have to start from the observation that already for n=1 everybody knows blue and brown exist, and so on.
In the end, from Solution 2 we see that, based on the assumptions of the problem, it would not be possible to have a community of blue-eyed people with n>2 even without the foreigner!
17 May, 2011 at 10:12 am
gcy
Approaching the problem by recursive postulation (for lack of a better word), that is, by modeling a possible scenario within a possible scenario, and so on, is more sensible than inducting on 1, 2, 3, …, n-1, n, though the two are essentially the same.
But the former approach may be more convenient for resolving the above paradox, because it allows the information “there is at least one blue-eyed person” to perpetuate across all scenarios hypothesized by the islanders, in the manner of “A thinks B thinks C thinks…”; let us call these 1st-, 2nd-, 3rd-, …, k-th-order postulations. In reality, every islander can casually conclude that everyone else must see at least one blue-eyed person (which is logically equivalent to the foreigner's statement), but this observation is bound to the “reality framework” on which it is based, and does not survive when an islander “gets inside the head of another” to postulate an alternate scenario, and so on. Without the foreigner's information, this exercise will not yield a definite outcome, because at the 99th-order postulation the possibility that n=0 (there being no blue-eyed islanders) cannot be eliminated. (By the way, every postulation is verifiable by dichotomy logic: if not k-1 blue-eyed islanders, it must be k; this is what provides new information back up the postulation chain.)
Hence, the islanders will not be able to figure out anything unless they are given the foreigner's statement as the “baseline truth”, so that the thought experiments can lead to a definite conclusion on the 99th/100th day.
17 May, 2011 at 11:51 pm
James
Terry’s explanations are fantastic on a mathematical level and also on a language level. The hypothetical worlds situation is explained beautifully. Indeed I do agree with outcome 2 and when I “saw the light” I felt smarter! The “sufficiently highly nested knowledge framework” is also a nice phrase.
Is there, though, a similarly nice and intuitive way of describing why the blue eyed guys would not kill themselves earlier?
I know that some poster earlier talked about 100 days being an upper bound. Is it easy to explain why 100 would also be a lower bound or do we not have enough information?
Also, can we apply the following straightforward argument, or is it too “hand-wavy”? Argument: because of symmetry, the blue-eyed people must all commit suicide at the same time.
From there, could we rule out the possibility of this happening on any day before the 100th day, thus making 100 a lower bound as well?
Finally, can you keep posting interesting problems like this regularly? I imagine that after solving/understanding 10-20 problems like this, we might feel better able to tackle problems in general.
7 February, 2012 at 9:49 am
Lenoxus
Here’s an answer to why they cannot die any sooner than Day 100.
Suppose that each of the blue-eyes thinks “Well, I see 99 blues, so there are either 99 or 100. Since this is a version of the puzzle where we are “trying” to deduce our own color as soon as possible, then I will commit to killing myself on Day 2 if and only if no one has done so on Day 1, that is, tomorrow.”
The problem is that if there really were only 99 blues, then they would each have to think something like the same thing: “I see 98, so I will kill myself on Day 2 if no one does on Day 1.” The 100 blues would have already known this about the hypothetical 99 blues. Therefore, they would have to modify their “strategy” accordingly: “If there are really 99 blues, then they will wait to see if the 98 that they each see kill themselves on Day 1. When that doesn’t happen, those 99 blues will kill themselves on Day 2. And IF that doesn’t happen, then I’ll know there are 100 blues total, and we’ll all kill ourselves on Day 3.”
But it can’t possibly stop there! Why, exactly, would a hypothetical 98 blues kill themselves immediately on Day 1? There can’t be any particular reason. The hypothetical 99 know that a hypothetical 98 would also wait a day, so these 99 wouldn’t kill themselves until Day 3 at the earliest. Except, of course, that this in turn relies on a hypothetical 97 blues immediately deducing their own color from the foreigner’s statement and killing themselves on Day 1…
As I hope you can tell, this just keeps going and going. It stops once we reach a hypothetical single blue-eyed islander. Unlike all the various combinations of hypothetical islanders before him, this guy would kill himself on Day 1. So the rest of the islanders’ thought process must build up from there.
On Day 1, it becomes common knowledge that there is more than 1 blue, on Day 2 that there are more than 2, and so forth. On Day 99, each of the 100 blues sees that the other 99 did not kill themselves, and so all 100 prepare to do so the next day.
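The day-by-day build-up described above can be checked with a minimal simulation. The `simulate` helper below is an illustrative sketch (not from the thread): it encodes only the rule that each death-free day raises the commonly known lower bound on the number of blues by one.

```python
def simulate(n_blue):
    """Return the day on which the blue-eyed islanders commit suicide,
    counting days from the foreigner's announcement."""
    lower_bound = 1  # the announcement: at least one blue-eyed person
    day = 0
    while True:
        day += 1
        # Each blue-eyed islander sees n_blue - 1 blues, so he knows the
        # true count is n_blue - 1 or n_blue.  Once the common lower
        # bound exceeds what he sees, his own eyes must be blue.
        if n_blue - 1 < lower_bound:
            return day
        # Nobody died today, so overnight it becomes common knowledge
        # that there are at least lower_bound + 1 blue-eyed islanders.
        lower_bound += 1

print(simulate(100))  # the 100 blues die on day 100
print(simulate(1))    # a lone blue dies on day 1
```

The number of brown-eyed islanders plays no role in the count, which is why it does not appear as a parameter.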
17 May, 2011 at 11:57 pm
Tad Boniecki
I have finally understood the answer. Enlightenment came when I read the clear explanation on Wikipedia:
http://en.wikipedia.org/wiki/Common_knowledge_%28logic%29#Example
This may help other people.
Tad
19 May, 2011 at 9:11 pm
Epistemic logic, temporal epistemic logic, and the blue-eyed islander puzzle lower bound « What’s new
[…] recently reposted my favourite logic puzzle, namely the blue-eyed islander puzzle. I am fond of this puzzle because […]
20 May, 2011 at 12:34 am
Oskari
I’m absolutely bewildered by this puzzle, not because it would be hard to solve, but because of the use of deductive reasoning in absolutely the wrong realm of things. This is a question of inferring the properties of your model of the world from the observations you make. The link between the model world (a world described by random variables) and the real world is the observations we make of the real world. This cloudy, hazy curtain of observations always separates the two. Deductive logic is completely confined to the model world, and can be used ONLY to narrow down the support of the random variables of the model world (based on previously assumed knowledge of the world).
Solution 1 is correct, with the single exception of n=1 (in which case they all kill themselves by day two).
The reasoning is as follows:
A villager's state of knowledge of his eye colour is a random variable with support Colour=[blue,brown]. Before the new observation provided by the foreigner, the state of a blue-eyed villager's random variable is "colour"=[(n-1)/N,(N-n+1)/N], where n is the number of blue-eyed villagers and N is the total number of villagers. Should there be any observation that admits only one solution (giving 100% of the probability mass to either of the two states), the person will kill himself. This elimination of possible states (narrowing of the support) can be done with deductive reasoning.
When the foreigner provides the new observation, the only two states of the random variable “colour” of interest in this puzzle are: colour=[1,0] or colour=[0,1].
If n=1, then the blue-eyed person concludes by deductive reasoning that the only random-variable state fitting the observation is colour=[1,0], and therefore he kills himself. After that, the rest conclude that they must have brown eyes, so they kill themselves as well.
If n>1, then deductive reasoning is not able to eliminate (assign zero probability to) either state, and therefore will not cause any consequences. End of story. If you want to calculate the probabilities, use Bayes' theorem, which also gives you the way to combine the information from two or more observations:
p(colour|observations) = p(observations|colour)*p(colour)/p(observations).
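For concreteness, the update rule above can be written out with exact fractions. The numbers below are made up purely for illustration (a uniform prior over the two colours, and a hypothetical observation judged twice as likely under blue eyes):

```python
from fractions import Fraction

def bayes_update(prior, likelihood):
    """Posterior p(c | obs) = p(obs | c) * p(c) / p(obs),
    where p(obs) = sum over c of p(obs | c) * p(c)."""
    evidence = sum(likelihood[c] * prior[c] for c in prior)
    return {c: likelihood[c] * prior[c] / evidence for c in prior}

prior      = {'blue': Fraction(1, 2), 'brown': Fraction(1, 2)}
likelihood = {'blue': Fraction(2, 3), 'brown': Fraction(1, 3)}  # hypothetical
posterior  = bayes_update(prior, likelihood)
print(posterior['blue'])  # 2/3
```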
20 May, 2011 at 2:03 am
Oskari
I see Terence has talked about Bayes before:
https://profiles.google.com/114134834346472219368/posts/G5DnA8EL7D3/In-classical-logic-one-can-represent-ones#114134834346472219368/posts/G5DnA8EL7D3
26 May, 2011 at 1:08 pm
Anon
Consider the modified scenario, where the foreigner instead remarks: “There are *exactly* two distinct eye-colours on this island.” What happens then? If I’m not mistaken, on day 100 all the blue-eyed islanders kill themselves, and then the following day everyone else kills themselves.
On the other hand, if the foreigner says: “There are *at most* two distinct eye-colours on this island”, then everyone learns they have either blue or brown eyes, but without the tragic ending.. how odd!
27 May, 2011 at 7:38 am
Terence Tao
If the foreigner says “there are exactly two distinct eye colours”, but does not name what they are, then nobody commits suicide (assuming there are no other sources of information, of course), because the fact that the two eye colours are blue and brown is not common knowledge. (For instance, if there are two blue-eyed islanders A, B and two brown-eyed islanders C, D, then A knows that B knows that there are two eye colours, and that one of them is brown, but it is not the case that A knows that B knows what the other eye colour is.)
27 May, 2011 at 12:18 pm
Anon
Ah, quite right.. well that’s even more odd, because in this case the islanders are actually learning something new about their own eye colours directly from the foreigner (that their eyes are either blue or brown).. in the original scenario where he remarks that someone has blue eyes, the islanders learn nothing about their own eyes directly from him. But you’re right, even the n=1 part of the induction no longer works.
17 June, 2011 at 3:53 am
Marcelo Cantos
But does this uncertainty extend to a larger population? Surely each islander among the 1000 can readily infer that everyone sees at least brown and blue eyes, and that everyone else has made the same inference.
16 January, 2012 at 11:37 pm
Anonymous
Right: in the modified puzzle where the foreigner says “there are exactly two distinct eye colours” but does not name what they are, nobody commits suicide – even in the formerly “foolproof” cases of n=0 and n=1 (blue-eyed, with 900 brown-eyed). It is also interesting to note that if, in the original puzzle, “because of some accident,” e.g. pre-suicide overdrinking, the 100 blue-eyed do not commit suicide on the 100th day, then on the 101st day all 900 brown-eyed must commit suicide – because the nested inductive calculations are done by everybody (and the brown-eyed people all think that there are 100 blue-eyed people for sure – but maybe 101).
17 June, 2011 at 3:59 am
Marcelo Cantos
I don’t follow the second scenario. Everyone already knows that there are at least two eye-colours (and that everyone else knows this too, and that everyone knows that everyone knows, …). With this knowledge, the assertion that there are at most two colours is identical to the assertion that there are exactly two colours.
27 May, 2011 at 3:52 am
dimitris stathopoulos
day 1: nobody dies because nobody knows his/her eye colour
day 2: each person counts the other 99 or 100 (depending on his eye colour) blue-eyed persons
day 3: you can't tell another person the colour of his eyes, so you ask how many blue-eyed people he counted
day 4: those who found 99 realise that they have blue eyes and they commit suicide!
THE END!
27 May, 2011 at 4:00 am
dimitris stathopoulos
also:
those who found 100 also die, because they know that they have brown eyes!
funny
31 May, 2011 at 10:02 pm
Anonymous
I will use the xkcd example where people get to leave on a nightly ferry.
Guru says “there’s at least one person with blue eyes”
With this, I don’t see any flaws in the following logic:
For 1 blue-eyed person (A):
A leaves the instant the 1st Ferry arrives, because he only sees one other person (the Guru) who clearly does not have blue eyes.
—
For 2 blue-eyed people:
There’s 2 possibilities for A in A’s mind. A has blue eyes or non-blue eyes.
If A has non-blue eyes, B will leave on the first ferry.
B DOESN’T leave on the first ferry.
A knows the other possibility must be true, that A has blue eyes. They both leave on the 2nd ferry.
—
For 3 blue-eyed people:
There’s 2 possibilities for A again and as always… A has blue eyes or non-blue eyes.
If A has non-blue eyes, A might as well be a brown-eyed person. B and C would ignore A and behave as though B and C are the only blue eyed people. We agreed that 2 blue eyed people leave on the 2nd ferry. The 2nd ferry comes by and no one leaves. A must have blue eyes.
They all leave on the 3rd ferry.
—
For 4 blue-eyed people:
A has non-blue eyes or blue eyes.
If A has non-blue eyes, A might as well be a brown-eyed person. B, C, and D would ignore A and behave as though B, C, and D are the only blue eyed people. we agreed that 3 blue eyed people leave on the 3rd ferry. The 3rd ferry comes by and no one leaves. A must have blue eyes.
They all leave on the 4th ferry.
—
I don’t see how this could possibly be wrong.
Now we try the other way…
Suppose there is no Guru:
For 1 blue-eyed person:
No way of discerning own eye color.
—
For 2 blue-eyed people:
Yes, both people know that there is at least one blue-eyed person, but A doesn’t know B knows, and B doesn’t know A knows.
For all A knows, A has non-blue eyes, and it is as though B is alone on the island.
—
For 3 blue-eyed people:
For all A knows, A has non-blue eyes.
If this is the case, this is what B’s vision would look like:
A = non-blue (worst case)
B = (I don’t know because I’m A pretending I’m B)
C = blue
Now have A’s theoretical B pretend he’s C:
A = non-blue (worst case)
B = non-blue (worst case)
C = (I don’t know because I’m A pretending I’m B pretending I’m C)
As far as A knows, B has no way of knowing whether or not C knows anyone has blue eyes.
A can’t know if B knows C knows.
—
For 4 blue-eyed people:
For all A knows, A has non-blue eyes.
If this is the case, here’s B’s vision:
A = non-blue
B = (idk)
C = blue
D = blue
Now have A’s B pretend he’s C.
A = non-blue
B = non-blue
C = (idk)
D = blue
Now have A’s B’s C pretend he’s D.
A = non-blue
B = non-blue
C = non-blue
D = (idk)
A’s B’s C can’t count on D.
I like this idea… but why do you need 3rd order knowledge? Isn’t the below actually true?
A=non-blue
B=(idk)
C=blue
D=blue
A knows B knows C knows (C at least sees D)
A knows B knows A knows (A sees both C and D)
A knows B knows D knows (D at least sees C)
A can comfortably conclude that everyone knows that everyone knows that everyone knows that there is at least one blue-eyed person. Isn't that enough?
Are my inductions without the Guru wrong? Could someone explain why you need 3rd order knowledge or show me my error in the 4 blue-eyed people no Guru step?
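One way to see why third-order knowledge is not enough for four blues: each extra level of "A imagines B imagines C…" may strip one blue from the innermost viewpoint. A tiny sketch of that bookkeeping (my own illustration, not a definitive proof):

```python
def innermost_blues(actual_blues, depth):
    """How many blues the innermost reasoner in a chain of `depth`
    nested hypotheticals is still guaranteed to see: each reasoner
    assumes (worst case) that his own eyes are not blue, so every
    level of nesting removes one blue from view."""
    seen = actual_blues
    for _ in range(depth):
        seen -= 1   # this level's reasoner discounts his own eyes
    return max(seen, 0)

# With 4 blues, "A knows B knows C knows" (depth 3) still leaves one
# guaranteed blue in view...
print(innermost_blues(4, 3))  # 1
# ...but the depth-4 statement fails: the innermost hypothetical world
# may contain no blues at all, and depth 4 is what the induction needs.
print(innermost_blues(4, 4))  # 0
```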
7 June, 2011 at 12:14 am
Chen Wei
The visitor's words seem meaningless, because everyone already knew that at least one native is blue-eyed. So what is the function of the visitor's announcement?
7 June, 2011 at 6:57 pm
cocacola1123
I have some opinions about this puzzle; I am not sure they're right, just for discussion:
1. The two solutions are both right, because it is unreasonable that the foreigner triggers this incident. Actually, I believe a reasonable trigger (beginning condition) is as follows: the islanders immediately reach the common knowledge that "all islanders know that at least 98 blue-eyed people exist". (Brown-eyed people would make the common knowledge 99, but that is contained in the condition "at least 98"; this difference makes the brown-eyed wait for all the blue-eyed to commit suicide, and then they die one day later.)
2. 1st day, no one died, so the common knowledge rises to 99;
2nd day, no one died, so the common knowledge rises to 100;
3rd day, all blue eye commit suicide;
4th day, all brown eye commit suicide;
3. Conclusion: there are m brown-eyed and n blue-eyed people. On the 3rd day, min(m, n) will commit suicide. On the 4th day, max(m, n) will commit suicide. If m = n, they all commit suicide on the 3rd day.
12 June, 2011 at 10:03 pm
Anonymous
As a nudge for people who are skeptical that the foreigner has introduced information to the system, consider that the situation would be different (and no villagers would die) if the foreigner made his comment individually to each villager.
12 June, 2011 at 10:04 pm
Anonymous
That is, individually and privately.
20 June, 2011 at 1:38 pm
Winston W. L. Ho
Shang nao jin ah! ("What a brain-twister!") The person who came up with this puzzle is a genius. My attempt to hopefully 'unify' the two solutions goes like this:
We can use a simplification (offered by Jim http://higherlogics.blogspot.com/2008/02/blue-eyed-islander-puzzle-analysis.html ) of considering a small group of 4 people wearing hats. N people wear blue hats and 4-N people wear brown hats. Every minute, if anybody deduces the color of the hat he is wearing, he has to declare it. At a certain point in time, an outsider pops in and says ‘I see at least 1 blue hat.’
If N=2, the outsider's statement was already known to all four people long before the outsider came. So how did the people deduce the colour of their hats? Why didn't they do so earlier?
The answer I think is because the outsider provided the origin time. This allows the people to deduce the color of their own hat based on counting each passing minute. The declaration has to occur at fixed time instants.
I don’t understand the mathematics of the wikipedia entry on common knowledge, but I offer my simple common sense explanation. How does this relate to the common knowledge?
20 July, 2011 at 2:22 pm
jlflorez
My opinion is that the foreigner's statement breaks the uncertainty: when one person sees that nobody else has blue eyes, he knows he is the blue-eyed person, and by induction you reach the ritual suicide on day n.
11 August, 2011 at 2:47 am
JoshuaR
No new information is given. All islanders already know there is at least one blue-eyed person on the island.
But they need the foreigner and if you look at this problem BACKWARDS from what others have explained, you realize a key difference.
I am blue. I do not know this. I look around and see 99. Suppose I am not blue. Then a blue person sees 98. Now that person doesn’t know he is blue. Since I don’t know that I am blue, and to his mind he doesn’t know that he is blue, I might assume that HE would think that there might only be 97 blues, if I am not blue. And so on and so on until we get to the “he might think that he might think that he might think… that he is the only blue or that there are no blues.” And so he cannot kill himself until someone states for certain there is at least one blue.
In real terms, each individual will guess there are 99-100 or 100-101 blues, but none will know where they stand on that totem pole. If one guessed he were on the high end, say a brown saw 100 blues, then he might guess he is number 101, or that there are 100 only. But if he puts himself in #100s shoes, he will say, “if I am brown, #100 will think to himself that he is either #100 or not blue.” And so on. It all matters what they can guess that another knows.
As soon as the foreigner comes in, when you get to “he might think that he might think… that he is blue,” the countdown starts.
…
Edited because I forgot to say, but of course within minutes of reading the induction solution starting from 1, I was sure that was simple and that was it… until I really started to think about it the other way, that the foreigner didn’t really add new information, that it didn’t matter what would happen IF there were only 1 blue because that situation is impossible because there are certainly at minimum 99 blues to every person on the island… But then if you look at it in such a way as to say, “I know this but he may not know that if I am in fact this…” you realize that the induction method does indeed work.
23 August, 2011 at 5:27 am
Henry
All blue-eyed people see 99 other blue-eyed people, and see that everyone else (900 people) is brown-eyed. So they all know that only two alternatives exist: 99 or 100. They also know that every blue-eyed person sees at least 98 and is logical and honest, so day one is set equal to 98.
Day 2 then is 99 and no one dies. On day 3 (set to 100 blue eyes) the whole village commits suicide under the mistaken (for 900 of them) belief they have blue eyes.
The reason is that the brown-eyed people had different information than the blue-eyed people: they would set the alternatives at 100 or 101. So day 1 was set as 99, day 2 was 100, and day 3 included them.
Even when everyone else showed up at noon, they would assume the brown-eyed people had figured out they were brown-eyed, based on all the blue-eyed people (themselves included) lining up in the square.
So the Foreigner doomed 900 people to commit suicide by flawed logic.
23 August, 2011 at 8:43 am
Henry
I have an additional clarification before someone else brings it up: the logic of the 100 days of waiting. Each other person could only see one fewer blue-eyed person. This math works but does not make sense logically. Each blue-eyed person knows that there are at least 99 blue-eyed people and at most 100.
Even if the other people only see 98, no one can see fewer than 98. This is common to the whole group, so there is no need to start at 0, as no one can see fewer than 98 blues. It does not matter what the subjective count is, or what someone else might think they think someone else thinks.
The absolute (objective) answer can only be 99 or 100 for everyone. All 999 other people see at least 98 blue eyes (98 for blue-eyed people and 99 for brown-eyed people) if I am not blue-eyed.
They see 99 and 100 if I am blue eyed.
So we start at the largest impossible answer of 97. We all know no one will kill themselves now as no one can see just 97 blue eyes. So at the next noon (Day 1) we advance to 98, Day 2 is 99, Day 3 is 100. No one dies on day one or two so I go kill myself on day 3.
What I don’t know (if I have blue eyes) is that the brown eyed people started at 98 because they know that everyone must see at least 99. They know the absolute (objective) answer is 100 or 101. For them Day 1 is 99, day 2 is 100 and now on day 3 they think the answer is 101 blue eyed people (including themselves) so everyone dies on day 3.
3 January, 2012 at 3:37 am
Sean Eberhard
I don’t think this argument works: I think that the blue-eyed people really do kill themselves on day 100. The problem depends delicately on what each person knows, what each person knows each other person knows, what each person knows each other person knows each other person knows, and so forth.
Imagine for instance that Adam is a blue-eyed person. Adam sees 99 (other) blue-eyed people, and therefore knows that there are exactly 99 or 100 blue-eyed people in total. Adam may then reason as follows: “Suppose there are 99 blue-eyed people. Consider one of the blue-eyed people I see, Bob say. Bob sees 98 other blue-eyed people, and therefore knows that there are either 98 or 99 blue-eyed people in total. Bob may then reason as follows: ‘Suppose there are 98 blue-eyed people. Consider one of the blue-eyed people I see, Carla say. Carla sees 97 other blue-eyed people, and therefore knows that there are either 97 or 98 blue-eyed people in total.'”
Reasoning like this can get you all the way down from 100 to 1. The point is: each person may know himself that there are 99 or 100 blue-eyed people, but each other person only knows that each other person knows there are at least 98 blue-eyed people, and worse each person knows only that each person knows only that there are at least 97 blue-eyed people, and so on.
I hope that made sense.
15 November, 2011 at 9:25 am
Yale Dikes
I figured out a solution that works; it uses only one inductive point at a time, a stable one, shared by all members of the party. It gets the blues gone in two days if they number equal to or less than the size of the other group. The others may take longer, the others being the larger party: either the next day, or after the others leave, switch to the higher commonly viewed number.
Take 100 bs and 101 gs.
The method goes as follows. The b's will see 99 b's; the G's will see 100 b's, but the G's will know that the b's see 99 blues (as the b's they see can't see themselves) and don't know their own colour. The g's will likewise know that all the b's they see have a shared view of only 99 of the commonly viewed b's. The b's will see 101 g's and know they only have a shared view of 100. The lower number, call it the lower commonly viewed number, is used by both as the starting point to avoid the inductive ladder; thus the starting point for both is 99 (trigger point for both: 98 blues or 98 greens). It is not met. All stay. Day two: the number goes to 100; the b's all see 99, and leave. Day three: either the greens know there are two values and leave, or the number is raised to 101 and they leave.
15 November, 2011 at 9:29 am
Yale Dikes
There are some, like the suicide guy above, who think distributions like 99/100 create a problem for the inductive method used to solve the problem. Essentially, if there is a distribution of 99 blues and 100 browns, the poor browns all see 99, and what do they do?
They wait.
Remember the starting point (98 leave iff 97 do): no blue or brown leaves. Then 99 (leave iff 98 do): the blues leave. The hapless browns are spared. Then they leave via elimination if they know there are only two values, or by raising the induction point to 100 the next day. They never have to face the choice the apparent paradox purports.
15 November, 2011 at 12:27 pm
Yale Dikes
Damn, I screwed up with an illegitimate move. The induction points are different; I inadvertently skewed it.
15 November, 2011 at 12:35 pm
Yale Dikes
The same applies to Henry: the number that a brown-eyed person will see is 100, meaning the possibilities would be 100 or 101, and the same reasoning would generate the number 99, so he would illegitimately want to leave with the blues on day 3.
Sheesh. That's the trick: finding a stable induction point (for blues and greens) other than 1. It seems that the traditional way might be the only way to accomplish this.
15 November, 2011 at 12:39 pm
Yale Dikes
The same applies to Henry's solution: for greens the possibilities are 100 or 101, so his method will generate 99 for them, not the 98 it does for blues.
15 November, 2011 at 12:41 pm
Yale Dikes
The only stable induction point seems to be 1. Even though they all know there is more than they all know, they can't use it to pick a unique number. That's the rub.
20 November, 2011 at 8:27 pm
Blue Eyes: The Hardest Logic Puzzle in the World « Guzman's Weblog
[…] Tao’s The blue-eyed islanders puzzle – repost […]
24 December, 2011 at 12:59 am
David Ho
Hello prof Tao,
I really enjoyed going through your Diversions page. I have tried several puzzles and solved them. I thought the blue-eyed islanders puzzle wasn’t very difficult. The easiest way for me to solve these is to try simpler cases and induct upwards. So, I just thought of a room of 4 people, 3 with brown eyes and 1 with blue. Then I tried with 2 brown eyed people and 2 blue eyed people. It was easy to induct upwards from there.
The one puzzle that has me a bit unsure is the white/black sand mixture problem. I have sent you an email about it because the solution wasn’t listed on your webpage. I have also scoured Google and haven’t found a reasonable answer. I’ve even posted it on a puzzle forum and it has caused some controversy. I approached it several different ways and examined each conclusion. First, I tried using an exaggerated approach to make the solution more obvious. So, instead of one teaspoon, I transferred the whole bucket of white sand into black and mixed. This obviously leads to an equal mixture of impurities. I then tried a simplified model using small cubes as sand particles. This assumed even distribution in the mixture. In this case, I had to make some pretty big leaps in logic by making some assumptions that might not necessarily happen in the real world. Finally, I tried to solve it using algebraic techniques. This was a bit tricky, for me anyways, on the second transfer of mixed sand. It seems that the mixtures are still equal.
So, will you tell me if I’m right? Is the sand mixture equal in impurities? I can finally sleep at night once I know the right answer. :-) Thanks.
2 January, 2012 at 2:44 pm
David Ho
Nevermind. I figured it out. The solution should look something like this:
Let X, Y denote the white-sand and black-sand buckets, and let t be one teaspoon, with 0 <= t <= 1. Removing one teaspoon from Y into X gives:
Y - tY in one bucket; X + tY in the other.
Removing a teaspoon of the mixture from X + tY back into Y - tY, we eventually get:
1/(t+1) * X + t/(t+1) * Y in one bucket and 1/(t+1) * Y + t/(t+1) * X in the other. What this shows is that the amount of impurity X in one bucket is equal to the amount of impurity Y in the other, and vice versa. This solution also shows that by taking no teaspoon (t=0) the results collapse to what we expect: X in one bucket with no Y, and vice versa. It also shows that if we take t=1 (mixing the entire bucket into the other and then splitting it back), we have a completely equal mixture in both buckets, as expected; i.e., 0.5X + 0.5Y in each.
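The algebra can be double-checked numerically with exact fractions. A minimal sketch (bucket volumes normalised to 1, `t` one teaspoon, `pour` my own helper):

```python
from fractions import Fraction

def pour(src, dst, amount):
    """Move `amount` of the well-mixed contents of src into dst."""
    total = sum(src.values())
    for kind in src:
        moved = src[kind] * amount / total
        src[kind] -= moved
        dst[kind] += moved

t = Fraction(1, 5)  # one teaspoon, as a fraction of a bucket
white = {'white': Fraction(1), 'black': Fraction(0)}
black = {'white': Fraction(0), 'black': Fraction(1)}

pour(black, white, t)  # teaspoon of black sand into the white bucket
pour(white, black, t)  # teaspoon of the mixture back again

print(white['black'], black['white'])  # both equal t/(t+1) = 1/6
```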
3 January, 2012 at 3:11 am
Sean Eberhard
A zero-algebra way of seeing this solution is to observe that each bucket has the same starting and ending volume. Therefore the amount of black sand in the white-sand bucket must be the same as the amount of white sand in the black-sand bucket.
14 January, 2012 at 3:30 pm
Tessa
Take into account that I am young;
If the foreigner said his statement directly to an islander, the islander would think he had blue eyes and therefore kill himself. But, if no one knows what color eyes they have after the one dies, they can’t very well kill themselves. So, if one islander told another that they have blue eyes, each would kill himself for knowing and sharing that information. And it seems that in the end they either all would die, or they all would live.
Now if the islander said his statement not directly towards one islander, no one would kill themselves unless another told them what color eyes the other had, resulting in more than one islanders dying.
And then there’s the chance the islanders didn’t speak the foreigner’s language (whatever that might be, if not English), then the islanders would move on about their lives. But how would an islander know what color another’s eyes were if they didn’t understand the foreigner?
Ultimately, there could be any number of outcomes to this puzzle, and I could carry on with those, but I'd probably end up confusing myself.
18 January, 2012 at 7:07 pm
Tyrocomp
OK, I’ve been racking my brain about this for a few days, and I actually think the first solution is correct. The inductive solution seems to assume that even if n > 2 the islanders will induct based on someone following the reasoning of n = 1. I don’t think this happens except in the (rather obvious) cases of n = 1 and n = 2.
Take the case of 100 BEP, each of them numbered BEP1, BEP2 and so on.
BEP1 thinks there could be 99 BEP’s (because that is what he sees) and he assumes that he is not a BEP. He then thinks from the perspective of BEP2. BEP2 would then think that there are 98 BEP’s because BEP1 is not a BEP, and in BEP2’s thoughts neither is he.
Then we move on to BEP3. The problem is, from anybody's perspective (while we are in this little world BEP1 has created in his head), BEP3 sees 98 BEPs. He doesn't automatically assume that there are 97 BEPs. In BEP1's world, it is a fact that there are 99 BEPs visible, and any of those 99 are going to see 98 BEPs.
The inductive solution seems to say
1. BEP1 assumes he is not blue eyed, and sees 99 others
2. Given the above BEP2 assumes he is not blue eyed, and sees 98 others
3. Given the above BEP3 assumes he is not blue eyed and sees 97 others
This is not true: 3 contradicts what BEP1 will assume is a patently known fact among all BEPs, and therefore you cannot continue the induction process.
20 January, 2012 at 4:35 am
Aadil Aman
In this post I saw that people are confused about a third colour, say "green". Suppose a third colour does exist, and the foreigner later realizes that he made a mistake.
Suppose he then says, "Ah! I am sorry, that was my mistake; I meant to say that it's good to see a green-eyed person on this island!"
Saying this, he will be responsible for everyone's death, as everyone will think that he or she is the green-eyed person! So the next day all the people will commit suicide and the story will end!
One more thing: this problem has a kind of "circular" logic, as the persons may think the same way or may think in different ways, so there is an ambiguity here!
21 January, 2012 at 11:06 am
X. Ying
The key point of this story is Common Knowledge. The backward induction is based on Common Knowledge. The foreigner tells everybody that there is at least one blue-eyed person in this tribe. This becomes Common Knowledge on the island, so the backward induction works, and Solution 2 is correct. Mere observation does not give each person Common Knowledge, but only 99-level mutual knowledge in this case. This knowledge is not sufficient to support the 100-level backward induction, so Solution 1 is wrong.
For more details, please see my blog (in Chinese). :-)
http://blog.wenxuecity.com/myblog/49240/201201/19041.html
http://blog.wenxuecity.com/myblog/49240/201201/19044.html
http://blog.wenxuecity.com/myblog/49240/201201/19052.html
Otherwise you can read the following article, which requires some mathematics.
http://plato.stanford.edu/entries/common-knowledge/
25 January, 2012 at 1:26 am
Tyrocomp
Well, then why wasn't it common knowledge before? What's the difference between the following scenarios?
1. I hear a foreigner say there is at least 1 blue-eyed person. Everybody heard this. Therefore this is now common knowledge.
2. I see 99 blue-eyed people. Each of them sees at least 98 blue-eyed people, and I know that. Everybody knows that N = 98 or 99. Since 98 and 99 are greater than 1, it is now common knowledge that there is at least 1 blue-eyed person.
In both cases, I know that all other members of the island possess the same knowledge – that there is at least 1 blue-eyed person (in the second case the shared knowledge actually goes a step further, in that there are at least 98 blue-eyed people).
25 January, 2012 at 9:11 am
Barry Cipra
Tyrocomp, this difference is most easily explained by starting with 3 people, say Alice, Bob, and me. If I see that Alice and Bob both have blue eyes, then I know there’s at least one blue-eyed person, and, as you pointed out, I know that Alice and Bob also know that there’s at least one blue-eyed person (because they see each other’s blue eyes). I also know that Alice and Bob both know that I know there’s at least one blue-eyed person (because Alice knows I can see Bob’s blue eyes, and Bob knows I can see Alice’s). What I *don’t* know is that Alice knows that Bob knows there’s at least one blue-eyed person, and vice versa. That’s because for all I know my eyes are brown, in which case Alice, not knowing her own eye color, wouldn’t know whether Bob sees anyone with blue eyes (and likewise for Bob). But if an outsider makes a public announcement, then I *do* know that Alice and Bob both know that the other one knows there’s at least one pair of blue eyes, and the logical process can go from there.
In sum, for 3 people, the outsider’s announcement ensures that everyone knows that everyone knows that everyone knows there’s at least one blue-eyed person. For 4 people, the announcement adds one more level of the “everyone knows that” phrase. In general, if you have N blue-eyed people, then you need N levels of “everyone knows that” nesting in order for the logic to play out, but without the outsider’s announcement, you’ve only got N-1 levels.
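Barry's counting of "everyone knows that" levels can be sketched in code. This is just a hypothetical model (the function name and its recursion are mine, not part of the puzzle): each step down the chain of perspectives, the worst case is that the next reasoner is blue-eyed and must discount their own unknown eyes, losing one visible blue.

```python
def nested_knowledge_holds(n_blue, levels):
    """Does '(everyone knows that)^levels there is at least one
    blue-eyed person' hold, absent any public announcement, when
    there are n_blue blue-eyed islanders in fact?"""
    if levels == 0:
        return n_blue >= 1          # the bare fact itself
    if n_blue == 0:
        return False
    # Worst case, the next reasoner in the chain is blue-eyed and,
    # not knowing their own colour, can rely only on the
    # n_blue - 1 blues they actually see.
    return nested_knowledge_holds(n_blue - 1, levels - 1)
```

On this model, `nested_knowledge_holds(3, 2)` is true but `nested_knowledge_holds(3, 3)` is false, matching Barry's count of N-1 available levels versus the N levels needed; the public announcement makes every level true at once.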
12 February, 2012 at 9:44 am
Florian.Biermann@gmx.de
Was the argument expressed by Nicole Krieger above finally refuted? I agree with her that the inductive argument is problematic here. It assumes that IF there were just one blue-eyed person on the island (n = 1), the visitor's statement would make that person commit suicide on day one. That is definitely true. However, with n = 100, everybody knows that n is at least 99 (because everybody can see at least 99 blue-eyed people). As n >= 99 is mutual knowledge, no islander can deduce anything from the fact that nobody committed suicide on day one, because only under COUNTERFACTUAL circumstances, i.e. if n = 1, would somebody have committed suicide on day one. This means that with n >= 99 being mutual knowledge, under no possible state of the world would somebody have killed himself on day 1.
Now we apply induction: nobody will kill himself on day 1 under any possible state of the world; therefore the fact that day one passed without anyone committing suicide does not give any additional information to a villager who is unsure whether the state is n = 1 or n = 2, even if such a villager existed. In particular, it does not tell him that n = 2 is the true state of the world. This goes on for 98 days. So with n >= 99 being mutual knowledge, it is impossible that under any possible state of the world somebody would have committed suicide by day 98. So the fact that indeed nobody has committed suicide by day 98 does not give any villager any information about the true state of the world.
Now day 99 begins. As a villager, I just know that n = 99 or n = 100. The fact that 98 days have passed without anyone committing suicide does not add to my knowledge. Hence as a villager, I do not commit suicide on day 99, regardless of the true state of the world. Now day 100 begins. Again, under no possible state of the world would somebody have killed himself up to day 99.
So again, I can deduce nothing from the fact that 99 days have passed without anyone committing suicide, and I am again left with the two possible states of the world n = 99 and n = 100, prescribing me to stay alive.
One cannot start an inductive argument with a basis which is known to be NOT TRUE. “Assume n = 1…” WAIT! n CANNOT be 1 if we assume n >= 99 to be mutual knowledge.
12 February, 2012 at 6:02 pm
bert
There’s one thing that bugs me here, and any help understanding this is appreciated.
When thinking of this as a kind of game, the strategy of committing suicide on the n-th day if you see n-1 people with blue eyes is clearly a correct and working strategy for the players to collectively solve the question of who has blue eyes. It's also the fastest and simplest strategy.
What the foreigner brings to the table is that he gets all the players to agree on two things: when to start counting days, and the eye color they will try to figure out. He’s not so much giving any information about who has which eye color, he’s just starting the game.
If there are 1 or 2 islanders with blue eyes, it’s clear to me that the conclusion is inevitable and that they will know for sure. But for 3 or more players, it gets fuzzy for me. I can see how they can solve the puzzle if they all use this strategy, but I don’t see how they know that this is the strategy that everyone must be using, or that everyone agrees to even play the game at all?
12 February, 2012 at 6:06 pm
bert
Something feels wrong to me about the proof by induction, but I can’t quite put my finger on it (probably because there is nothing wrong with it). It seems like it’s inducing over two things simultaneously, the number of blue eyed islanders, and the number of days. It’s as if we skip from day n-1 with n-1 blue eyed islanders, to day n with n blue eyed islanders, without considering day n-1,n-2,.. with n blue eyed islanders?
17 March, 2012 at 1:13 am
Ahno
Wow, a way to kill all logical bastards on the island indifferently to their eye color. :)
18 March, 2012 at 8:16 am
Ahno
Also, actually I only see one mess in this solution, and that is adding days as a variable, because someone thinking of something is not a time-related thing. That is why there would probably be an avalanche effect at some random point in time.
By the way, if we try to resolve this "information poisoning" case when there are up to 4 blue-eyed (B) and up to 3 brown-eyed (R) islanders, it's funny and shows how the logic works in an intuitive way:
1x B: I see no one else B but me -> suicide;
2x B: If he didn’t kill himself then I am B too -> double suicide;
3x B: If there were only 2 of them, they should have killed themselves by the previous logic, but they have not, so it's 3 of us -> mass suicide;
4x B:
1 step:
1,2,3,4 B: oh there are 3 or 4 blue eyed, wait and see;
2 step:
Random B: no one takes action, so there are not 3 but 4 blue-eyed, because it would be evident to those 3 (by the previous logic) if it were actually 3 B -> own suicide -> those 3 see the evident thing -> mass suicide
etc >_<
11 April, 2012 at 1:55 pm
JO
Ok, I just came upon this puzzle yesterday and read most of the comments above.
From what I can see there is a problem with the puzzle to begin with.
First of all, if there are 4 or more people with blue eyes to begin with, they won't need someone to tell them that at least one person has blue eyes. They will have figured it out on their own and committed suicide long before someone told them. This puzzle only seems to work with 3 or fewer blue-eyed people.
Br = Brown eyes. His point of view doesn’t matter.
(Bl) = Blue eyer point of view. He/she doesn't know if he is blue or brown.
‘Bl’ = (Bl) knows that at least one other person's point of view is what matters, because all the others you know have blue eyes and each of them is thinking of the other blue eyers, assuming (Bl) thinks that he/she has brown eyes.
Bl = all the other blue eyers.
Beyond this it doesn't matter, because you know for a fact that all the other blue eyers are thinking only one of two things: 1. (Bl) has blue eyes, or 2. (Bl) has brown eyes.
1 blue eyer needs someone to tell them that there is one to figure it out. System is stable before the statement is made.
Br / (Bl)
2 blue eyers need someone to tell them that there is at least one of them to figure it out, because each can only see one of them and figures that he/she doesn't know it.
System is stable before the statement is made.
Br / (Bl) / ‘Bl’
3 blue eyers need someone to tell them that there is at least one of them, because even seeing two others, each can figure on each one not knowing about themselves and knowing one other doesn't know, but can at least see one. So each may think that the others think there is only one, assuming that he/she is not blue-eyed.
System is stable before the statement is made.
Br / (Bl) / ‘Bl’ / Bl
4 Blue eyers don't need anybody to tell them; they already know. Each one sees that there are 3 others and knows that each one of them doesn't know his/her own, but can see at least 2 others. It is known by all that all the others know that there are blue-eyed people.
System is unstable before the statement is made. They all commit suicide from the get-go.
Br / (Bl) / ‘Bl’ / Bl / Bl
5 on up is all the same. Just more blue eyers that can see each other.
I tried to explain this best I could. Let me know what you think.
30 May, 2012 at 4:30 pm
Lenoxus
Obviously, prior to the visitor’s statement, the deduction process won’t work with just 1 blue-eyed person and 199 browns. That one blue will never know if her eyes are brown, blue, green, or other.
Because of this, it doesn’t work with 2 blues and 198 browns, either. This is because from the perspective of each of those 2 blues, it is exactly the same as if the other blue was the only blue. This is the key thing to grok.
Now, if it doesn’t work with 2, then it won’t work with 3 — because from the perspective of each of those 3, it is exactly the same as if the other 2 blues were the only blues. Those 3 have no way of knowing themselves to be blue and not brown. From their point of view, it’s the same as in the previous paragraph.
If it doesn’t work with 3, then it won’t work with 4 — because from the perspective of each of those 4, it is exactly the same as if the other 3 blues were the only 3 blues. How would the 4 blues know that they were 4 in total? Why wouldn’t they each think “I might be one of 197 browns”?
If it doesn’t work with 4, it won’t work with 5 — for the same reasons as above.
At no point can anyone deduce their own eye color, because they have no reason to expect anyone else to deduce their own color based on people not leaving, which is what the whole puzzle is all about.
Once we add the visitor’s statement, however, it all works.
A lone blue would have to leave on Day 1. (She knows that there are either zero or one blues, but the visitor's statement excluded the zero, so therefore there is 1 blue — herself.)
Because a lone blue would have left on Day 1, then 2 blues (if there are exactly 2) would deduce that there must be 2 blues, and leave on Day 2.
If there are 3 blues, then they, each knowing that the other 2 “ought to” have left on Day 2, will leave on Day 3.
If there are 4 blues, then they will each see the situation the same as if there were only 3, and will thus “expect” those 3 to leave on Day 3. When that doesn’t happen, they conclude the only other possibility — that there are 4 blues and not 3 — and leave on Day 4.
Etc.
6 December, 2012 at 7:27 pm
Lenoxus
Looking over this again I feel like adding a simpler rebuttal. Let’s say you and 3 other people pop into existence on this island, none of you knowing your eye color. You see that the 3 others are all blue-eyed. What color are your eyes? Obviously you don’t know.
Day 1 goes by with no one dying. Day 2 is likewise uneventful. Nothing happens on Day 3. So it will soon be Day 4. Now what color are your eyes?
Did you say “blue”? Because you’re wrong. Your eyes were in fact brown. And surely you can see that this revelation doesn’t lead to a contradiction. If your eyes were brown, then those 3 would remain alive, because the situation with just 3 blues is stable.
But that means that you shouldn't have concluded your eye color from no one dying on Day 3. Which in turn means that if your eyes had actually been blue, you shouldn't have concluded it either. (Otherwise you would be psychic!) This means none of the other 3 would decide to leave (all of the islanders have to think symmetrically).
This proves that the situation is stable with 4 blues; they each have no way of knowing if there are just 3 or not.
Well, by the same method, we prove that any number (before a visitor speaks) is stable. 5 is stable because 4 is stable: the 5 don’t somehow draw a conclusion from 4 people not killing themselves. Because 5 is stable, 6 is stable. And so forth.
This may seem counter-intuitive. When you think about it, it's not: it makes complete sense. Asking “How many blue-eyed people does it take for blue eyes to become common knowledge?” is the same thing as asking “How many people have to stare at my eyes before I can see what color my eyes are?” Adding more people doesn't somehow overcome the fact that each person is still blind to his own eye color.
29 May, 2012 at 7:49 am
Does P contain languages decided solely by incomprehensible TMs? | MoVn - Linux Ubuntu Center
[…] obstructs my own understanding of a broad class of problems that includes Terry Tao’s Blue-Eyed Islanders Puzzle, Dick Lipton and Ken Regan’s Urn-Choice Game, and their hybridization in the context of […]
7 June, 2012 at 1:08 pm
Zarko
I'm sorry for my English, but I feel like I should say something here. I am not sure if anyone has realized that in this sequence of hypotheses within hypotheses we eventually must come to the problem of just two blue-eyed people, which cannot be solved if there is no information about at least one blue person.
You see, p1 sees p2 and thinks: if I am non-blue, then p2 cannot conclude anything, because he does not know that there is at least one blue person. But if the guru says something, then it is made possible for him to conclude, if I am non-blue, that he IS blue! But even now, he concludes nothing. So this must be because I am blue. Oh my god, but still, this puzzle puzzles me. Is it really possible to do all this? You can now think of the possibility of 3, p1, p2 and p3, and still see that it is possible to come to the conclusion of your own blueness, because you can speculate that if you are non-blue then they can solve the two-blue problem. It is obviously solvable. When everyone sees that no one has resolved the 2-blue problem, they now know that they are dealing with, in fact, a 3-blue problem. But it all rests on the guru, because he puts a blue person in a position to speculate about the knowledge of his friend.
But what makes me so frustrated is that for a large number such as 100 I cannot see through this line of reasoning.
7 August, 2012 at 1:06 am
Kamal
Also see at:
http://www.ritambhara.in/blue-eyed-islanders-puzzle/
20 August, 2012 at 4:51 pm
dratman
In order to relieve the gloomy spirit of this problem as heretofore posed, I would like to suggest that these unfortunate people, living on their peculiar island, are all members of an obscure tribal group known as “mathematicians.” Each inhabitant must always wear a shirt bearing, on its front, a large, ornate red letter M, while on the back of each shirt is printed a positive integer. (Details of exactly how this is all to be managed are left as an exercise for the reader.)
Since a serious earthquake struck their nation some years ago, all N tribespeople have worn medically necessary neck braces, and so can look at all the others’ numbers but not at their own.
If any islander can prove that his or her integer is prime, that person will be given a unique and much coveted trophy, and will march in a grand torchlight procession that evening, to be seen by all.
Prizes may be claimed any day at noon. In case of ties, the winners are to have their likenesses carved onto the island’s central feature, Volcan Gushmore, in lieu of holding the trophy.
It goes without saying that each islander is highly and equally competent, and will never miss a proof if one can be obtained.
Naturally no one will ever talk about anyone else’s number for fear that the information will somehow get back to that person. And of course no one will reveal the existence of his or her proof until the designated hour, for fear of somehow losing priority.
As everyone on the island already knows, several shirts (0 < n < N) bear prime numbers, though n is not yet known with certainty by anyone.
Months and years go by, but no one produces a proof. One day a famous member of the tribe, who hails from a different part of the archipelago, arrives for a visit to the island and lets slip that at least one person's integer is a prime.
After the Great One's embarrassing gaffe ("I misspoke"), when, if ever, will anyone step forward with a valid proof?
20 September, 2012 at 2:04 pm
Quora
What are the best puzzles you have ever seen?…
As far as sources of puzzles go, I second Project Euler (although “puzzles” may not be the best word.) As far as individual puzzles go, my favourite, is the infamous blue-eyed people problem, which I dedicated about an hour one afternoon to wrapping …
3 November, 2012 at 4:52 am
Jack Snoeyink
Ok, I thought I believed the induction proof until I tried to write the proof myself and realized that n is being used for both the number of islanders and the number of days. You don’t actually change the number of islanders, so it seems to me that induction can be properly done only on what is common knowledge on day d, where day 1 is the day of the announcement.
Proposition: given an island with n>1 blue-eyed islanders, on day d>=1, if no-one has committed suicide, then everyone knows that everyone knows there are at least d blue-eyed islanders.
Let’s prove this by induction on d.
Base: d=1. The announcement says so.
Ind step: for day d>1,
Ind hyp: we may assume that if no-one suicided before day d-1, then on day d-1 everyone knows that everyone knows there are at least d-1 blue-eyed islanders.
Now consider day d, and assume that no-one has committed suicide.
By the IH, on the previous day everyone knew there were at least d-1 blues.
Anyone who saw only d-2 would therefore deduce their eye color; since there were no suicides, everyone concludes that everyone sees at least d-1 blues. And since d>1, everyone sees some blue-eyed islander that they know sees at least d-1 blues, so now everyone now knows that there are at least d blues. (Note that here is where we use the observer information. This would not work if we did d=0 as a base case and d=1 in the IH.)
This completes the proof by induction on the day.
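Jack's induction on the day d turns directly into a small loop. This is just a sketch (the function and variable names are mine): the commonly known lower bound on the number of blues starts at 1 with the announcement and rises by one per quiet day, until the blues themselves see one fewer than the bound and deduce their own colour.

```python
def departure_day(n_blue):
    """Day on which the n_blue blue-eyed islanders act, tracking the
    commonly known lower bound from Jack's proposition."""
    bound = 1                      # day 1: the announcement
    day = 1
    # Each blue-eyed islander sees n_blue - 1 blues.  While that is
    # still consistent with the public lower bound, nothing happens.
    while n_blue - 1 >= bound:
        # A quiet day: anyone seeing only bound - 1 blues would have
        # acted, so everyone now knows there are at least bound + 1.
        bound += 1
        day += 1
    return day                     # blues see bound - 1, so deduce blue

print(departure_day(100))          # 100, matching Solution 2
```

Note the base case is the day of the announcement, exactly as in the proof above, which is why the loop starts with bound = 1 rather than 0.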
4 November, 2012 at 10:12 pm
Ralph Dratman
The problem with this puzzle is that, although every islander knows all the facts (except his/her own eye color), no islander knows the beliefs of other islanders. When n>1, any prediction by an islander about future actions of other islanders must take into account what each other islander believes, including what each believes about the beliefs of the others. But there is no way to know which basic theory other islanders might believe: the “nothing has changed” interpretation or the “clock is now ticking” interpretation.
Each islander must act independently, based on his/her belief about the beliefs of others. Since there is no way to determine the beliefs of others, there is no deterministic solution to the puzzle.
20 November, 2012 at 1:31 pm
Lenoxus
That's an odd, and broad, assertion. Consider a simple case. A, B, and C all have brown eyes. Does A think that B thinks that C has brown eyes, or does A instead think "I cannot possibly infer someone else's beliefs"? If the latter, then yes, the puzzle is unsolvable, but that A would think this is an assertion requiring justification.
22 November, 2012 at 5:03 pm
Ralph Dratman
Lenoxus, you are right. I overstated my point about not being able to know the beliefs of others. In your example, I think we can safely assume that A knows that B knows that C has brown eyes.
But in the puzzle as stated, it seems to me that each islander would be free to hold one of two different theories after the visitor's speech.
One valid theory is that as a result of the visitor’s revelation, nothing has changed, and therefore no new action can possibly be required.
The other valid theory is that the visitor’s words have started a new clock ticking at t = 0, as though from the beginning of the world.
I see no logical or mathematical way for any islander to figure out which theory any other islander will adopt. But without such knowledge, it is not possible to solve the puzzle.
With respect to the induction proposed below the problem, starting from n=1 is not valid. Why not? Because prior knowledge in the n=1 case does not match prior knowledge in the originally posed (n=100) situation: a single blue-eyed islander does NOT already know (before the visitor speaks) that there is at least one blue-eyed person on the island.
But n=2 is not a valid case, either. For if islander A (with blue eyes) is looking at one other islander named B (also with blue eyes), A cannot be certain that B sees any blue-eyed person.
Surprisingly, in order for everyone to be certain (prior to the visitor) that everyone else already knows (prior to the visitor) that there is at least one blue-eyed islander, the situation must include at least three blue-eyed islanders.
—- ADDITIONAL THOUGHTS —-
At least one reason this puzzle is so confusing is that the stated initial situation is a “garden-of-eden” configuration. That is, there is no way to arrive at the given situation (n=100, with full knowledge of all others’ state of mind) from any other preceding situation, because all the blue-eyed islanders would already have committed suicide. The given initial state can be posited, but it cannot be reached.
This is similar to a chess position that cannot occur after the first move of any valid game. It is possible to begin playing in such a situation, but it is not possible to play into one. In fact, the standard “move 0” setup for chess is just such a position.
Fortunately, in playing chess, we do not have to reason about what each piece knows about what all the other pieces know!
23 November, 2012 at 8:08 am
Lenoxus
It’s important to understand that there’s nothing inherently special about “Everyone knows that everyone knows X”, or tertiary reasoning. That’s merely where the human brain starts to have trouble. Again, I repeat: There is NOTHING SPECIAL ABOUT THREE ISLANDERS. Let’s start from simple cases.
You agree that if there is only one blue-eyed islander, then one day after hearing the statement “At least one of you has blue eyes,” he would leave the island (I prefer this as a less violent alternative to suicide). Furthermore, at no point would he leave before this. If no one says anything, then from his perspective, it is just as possible that there are zero blue-eyed people.
Now suppose that instead of one, there are two “blues”. Prior to an outsider’s statement, will either of them have any reason to leave? No. And here’s an easy way to think of it: The situation with two blues is exactly like one blue, if we take the perspective of one of those blues. Neither of them can think something like “If I were that guy, I would see that my eyes are blue, therefore my eyes are blue.” That would be nonsense. In general, there is no basis on which either blue can deduce his eye color.
Now, let’s take the perspective of one of the browns on the island with two blues. How many blues does this brown think might exist? Well, either two (not including himself) or three (including himself). Will he ever leave (assuming no new outside information)? No. He has no idea if his eyes are blue, brown, green or purple.
I’ve just shown that on an island with three blues, no one will leave. Why not? Because each of those blues can figure that his eyes might be brown (or green, etc). Furthermore, they each know that if only two blues exist, no one will leave, so they can’t deduce anything from no one leaving.
We can extend this process indefinitely. First take the perspective of a brown-eyed person seeing X blues. Then change his eye color to blue. Then shift perspective to another brown-eyed person seeing X + 1 blues. Repeat. At no point can anyone here deduce his own eye color. When you think about it, that makes complete sense — such a deduction would be made out of thin air.
So how does a visitor’s statement change anything? Well, let’s go back to the base case. Our lone blue-eyed person realizes that the visitor can only be talking about him, so he leaves the next day. What if there were two? Well, each of those two blues considers two possibilities:
A. My eyes are not blue. If so, then there is only one blue-eyed person on this island. If there is only one blue, then he will leave on Day 1.
B. My eyes are blue. In that case, statement A is false, which means that no one will leave on Day 1.
Each blue now has a logical method to determine whether or not his eyes are blue: Wait to see if the other blue leaves on Day 1. “If that doesn’t happen,” each blue thinks, “it means my eyes are blue, so I will leave on Day 2.” It has now been demonstrated that if there are exactly two blues, then they will both leave on day two.
Ah, but we can extend this, easily. Suppose there are exactly three blues. From each of their perspectives, they have no way of knowing whether the number of blues is two or three. But they know that if the number is two, then a specific event must occur: those two blues will leave on Day 2. That’s an event that either will or won’t happen. If there are only two blues, then it will happen. If there aren’t, then it won’t. “But if there aren’t exactly two blues,” each of the three thinks to himself, “then there must be three, and that means I am another blue.” So after Day 2 goes by with no one leaving, all three blues deduce their color and leave on Day 3. Therefore, no matter what, if there are exactly 3 blues, then they will all leave on Day 3.
I hope it’s now clear how to generalize this. Once you have a rule that K blues leave on Day K, then it logically follows that K+1 blues (each seeing that the K blues he observes have not left on Day K) leave on Day K+1. Thus, because 1 blue would leave on Day 1, 100 blues would leave on Day 100.
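The K → K+1 rule above can also be checked islander by islander. This is a toy model with invented names, not anything from the post: each islander entertains two candidate totals, “what I see” and “what I see plus me,” and leaves the first day the public history forces the second.

```python
def first_departures(n_blue, n_brown=900):
    """Return (day, leavers) for the first day anyone can deduce
    their own eye colour after the public announcement."""
    day = 0
    while True:
        day += 1
        # Announcement rules out 0 blues; each quiet day before this
        # one rules out one more total, so 0 .. day-1 are impossible.
        ruled_out = set(range(day))
        leavers = []
        for colour in ["blue"] * n_blue + ["brown"] * n_brown:
            sees = n_blue - 1 if colour == "blue" else n_blue
            candidates = {sees, sees + 1} - ruled_out
            if candidates == {sees + 1}:   # forced: my eyes are blue
                leavers.append(colour)
        if leavers:
            return day, leavers
```

Running `first_departures(3)` gives day 3 with exactly the three blues leaving, and `first_departures(100)` gives day 100; no brown-eyed islander ever appears among the leavers, since their lower candidate is never eliminated in time.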
Thoughts?
25 November, 2012 at 7:41 am
Ralph Dratman
I am thinking about this a lot. No clarity yet. Sometimes these things take me a long time. Thank you for your dialog to date. I will reply substantively as soon as I am able.
26 November, 2012 at 5:02 am
tom_glabst
That’s exactly the way I finally managed to understand the puzzle. The induction basis simply changes from “N=1: Nothing happens” to “N=1: The blue-eyed is gone the next day”.
28 November, 2012 at 5:39 pm
Ralph Dratman
Lenoxus, your exposition is clear and convincing. I am persuaded that you are correct. My difficulty is with the statement of the problem. What is the real message conveyed by the revelation that “there is at least one blue” in a situation where, assuming n>>1, each islander has already seen more than one blue with his own (whatever-colored) eyes?
28 November, 2012 at 10:42 pm
Jeremy
Ralph asks “What is the real message conveyed by the revelation that ‘there is at least one blue’…?”
It’s important to realize that if there were no rule that the islanders had to leave the day they found out their eye color then the message would not change anything. It is the combination of this rule and the foreigner’s statement that makes the problem interesting. Essentially the “leave that day” rule makes the islanders an induction proving machine where each day they prove that there is at least one more blue eyed islander than what they proved the day before. However, until the foreigner speaks, no one had proven any starting point to the satisfaction of every islander (do we start at n=0 or n=1?) and it is the *synchronization* of the revelation that starts the *time-dependent* induction machine.
People leave when they have proof that the number of blue-eyed people is greater than the number that they observe. If the foreigner had said “I’m surprised to find 100 blue-eyed people here” then all the blue-eyed islanders would leave the next day (day 1), since they only see 99. If the foreigner had said “I’m surprised to find 80 blue-eyed people here” then all the blue-eyed people would leave on day 21 because it takes 20 days for the induction proving machine to go from 80 to 100 and then they leave the next day.
Of course, it doesn’t take a foreigner. If one of the islanders has said to the whole island “I’m sick of looking at these blue-eyed people” that would trigger the induction machine, too.
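Jeremy's arithmetic generalizes neatly: if the public statement establishes a lower bound of m blues, the induction machine has n - m quiet days to climb before the n blues act. A minimal sketch (the function and parameter names are mine):

```python
def blue_departure_day(n_blue, announced_at_least=1):
    """Day the blues leave, given a public statement guaranteeing
    at least `announced_at_least` blue-eyed islanders."""
    day = 1
    known_min = announced_at_least
    while known_min < n_blue:      # each quiet day raises the public bound
        known_min += 1
        day += 1
    return day

# "at least 1"  -> the blues leave on day 100;
# "at least 80" -> day 21, as in Jeremy's example;
# "100 blue-eyed people here" -> the blues (seeing only 99) leave on day 1.
```

The loop makes visible why the content of the announcement matters only through the lower bound it establishes publicly: any two statements fixing the same bound start the machine at the same rung.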
29 November, 2012 at 4:18 am
Ralph Dratman
I think I am finally getting this.
1) One lone blue thinks there are (maybe) no blues, until he learns there is one, and so leaves.
2) In a group of two, each one thinks there is (maybe) only 1 blue, and also thinks “the 1 blue thinks there are maybe 0 blues, and now he has learned there is one, and so will leave tomorrow”
3) In a group of three, each thinks maybe there are only 2, and thinks that in that group of 2 each one thinks there is maybe only one blue and that that one blue thinks there are maybe none, until …
4) In a group of 4, each one thinks there might be only 3, and also thinks that in the group of 3 each thinks there might be only 2, who think that there might only be one, until…
Is that the basic picture?
29 November, 2012 at 6:36 am
Lenoxus
Yes, Ralph, I think you’ve got it. Another way to answer the question is this: It’s not the content of the visitor’s statement which provides information – it’s simply the fact that he said it publicly. If he had told each islander in private that he can see at least one blue-eyed person on the island, then no suicides would occur.
Imagine I’m in a room with three people. Alice has a laptop, Bob has a newspaper, and I’m browsing the Web on my cell phone. I’m looking at the weather reports, and I can see Alice and Bob in front of me also looking at the weather — but they can’t see each other’s reading matter, or mine.
Alice announces, “Seems like it’s going to snow tomorrow.” Did I learn anything? Yes — I learned that now, Bob knows that Alice knows it will snow. I also learned that Alice knows that I know it will snow, and that Bob knows I know Alice knows I know it will snow, for any number of such combinations. (Well, not quite — most people, including me, stop bothering thinking about this after three layers or so. But these logical islanders are different from normal humans in that key respect.)
I would like to add something to Jeremy’s point about one of the blues himself mentioning that he sees blue-eyed people. In that case, what would happen is that the other 99 blues would leave/die on Day 99, and the blue who made the statement would never leave/die. This is because that blue could not possibly have been talking about himself, and all the other blues would know this.
If it’s still not clear, simplify it to just two blues and consider what would happen. (If there were only one blue and he said it, then he would be lying. Everyone else would think he was talking about them, and they would all leave the next day!)
29 November, 2012 at 11:46 am
Jeremy
OK, Lenoxus, I agree with you about what happens if a blue-eyed islander says there are blue-eyed islanders, but if a brown-eyed islander said it then it would be the same result as when the foreigner said it.
29 November, 2012 at 9:09 am
Ralph Dratman
Lenoxus, thank you for your very helpful messages. Does this problem fall within a particular range of your interests?
Another thought: It occurs to me that if “blue eyes” were replaced by, let us say, “poor ethics,” or “lax conscience,” the puzzle might actually mirror some real-world situations in which people have difficulty with self-evaluation. For example…
Mr. Alf: “Someone here has been using extremely bad judgement.”
—
Bet: “Whew, I’m sure glad Alf isn’t talking about me! Unless… ”
Gam: “Those stupid guys are messing up again.”
Del: “I was on vacation when that happened!”
Eps: “Good thing I play golf with Alf’s boss.”
—
Corporate policy: If you realize you have violated our rules, causing significant harm to the company, you must resign within 24 hours.
29 November, 2012 at 2:23 pm
Lenoxus
If enough similarities to the islander puzzle are established, then yes, all four “bad workers” would leave four days later. However, the honor system might not be followed by them all, especially since bad judgment is itself a reason someone might not bother to resign. Furthermore, the logic of the original puzzle itself rests on each participant being a perfect logician who knows that the others know that the others are perfect logicians, ad infinitum. Most real humans have trouble thinking more than three layers deep on this (and really intelligent animals tend not to get past one, maybe two).
It only takes a bit of uncertainty for the puzzle to fall apart. For example, suppose Bet, Gam, Del, and Eps are all perfect logicians, but the last three don’t know that Bet is — they think she’s only able to consider these problems at two levels deep, “I know that she knows” but no further. In that situation, none of them would ever resign, because they wouldn’t infer anything from Bet not resigning on Day 3.
Anyway, I do have an interest in puzzles like these. There are two others that come to mind which I particularly like. One is the prisoners and hats puzzle and the other is called the pirate game.
18 November, 2012 at 11:00 pm
Anonymous
One difficulty with understanding the proof of this problem is that the induction step works upwards from 1 to 2 to 3… It may (or may not!) be clearer if the induction is explained the other way round.
Assume that there are exactly 3 blue-eyed people: B1, B2, and B3, part of a larger tribe including brown-eyed people.
B1 sees 2 blue-eyed people (B3 and B4) and as far as he knows there are only two blue-eyed people.
So B1 is certain that either B1 has blue eyes or B2 will see only one blue-eyed person: B4
So B1 is certain that either B1 has blue eyes, or (B2 is certain that either B2 has blue eyes or that there is only one blue eyed person, B3). Of course B1 knows that B2 will be wrong to think there is only one blue-eyed person, nevertheless he sees no logical reason why B2 might not think this.
So B1 is certain that either B1 has blue eyes or (B2 is certain that either B2 has blue eyes or (B3 is certain that either B3 has blue eyes or there are no blue-eyed people)).
This is where the foreigner comes in and changes things. We then know that one possibility is removed. We are then left with:
B1 is certain that either B1 has blue eyes or (B2 is certain that either B2 has blue eyes or (B3 is certain that B3 has blue eyes, and so will suicide the next day)).
When B3 does not suicide the next day, then after one day we are left with:
B1 is certain that either B1 has blue eyes or (B2 is certain that B2 has blue eyes and will suicide the next day).
When B2 does not suicide the next day, then we are left with:
B1 is certain that B1 has blue eyes and will suicide the next day.
19 November, 2012 at 1:55 pm
Anonymous
Arrgh. Edits to Paragraphs 3 & 4
Paragraph 3: B1 sees 2 blue-eyed people (B2 and B3) and as far as he knows there are only two blue-eyed people.
Paragraph 4: So B1 is certain that either B1 has blue eyes or B2 will see only one blue-eyed person: B3
The trick with all of this is that, if 1, 2 and 3 are each logical with imperfect information, then the statement “1 thinks (2 thinks (3 thinks X))” does not imply “1 thinks (3 thinks X)”
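[Editor's aside: the nested-certainty chain in the walkthrough above can be generated mechanically. Here is a minimal Python sketch; the function name and islander labels are my own illustration, not anything from the thread. The recursion depth equals the number of blues, which is exactly why the elimination takes that many days.]

```python
# Build the nested "certain that either ... or (...)" statement from
# the B1/B2/B3 walkthrough above, for any list of blue-eyed islanders.
# Each layer strips one blue from the innermost hypothetical world;
# the innermost clause is the one the foreigner's statement removes.

def nested_statement(names):
    """Recursively build the chain of nested certainties."""
    if not names:
        return "there are no blue-eyed people"
    head, rest = names[0], names[1:]
    return (f"{head} is certain that either {head} has blue eyes "
            f"or ({nested_statement(rest)})")

print(nested_statement(["B1", "B2", "B3"]))
```

Running this reproduces the three-level sentence from the comment above, with "there are no blue-eyed people" as the innermost possibility.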
22 November, 2012 at 12:04 pm
tom_glabst
That is the best logic puzzle I’ve ever heard of, and it just gave me some restless hours. I’ve finally understood the solution, and the way I managed to understand, after reading all these posts about common knowledge, was to think about the common knowledge of the induction proof.
Before the visitor held his address, the induction start was obviously different: With number of blue-eyed persons N=1, the blue-eyed can’t know that there are any blue-eyeds, and everybody knows that. So N=1 is stable.
Since people came up with the N=3 case, I will do the induction step as well:
Suppose the case N=k is stable, and the k+1^th blue-eyed enters the island or whatever (of course, every construction of this universe is very artificial). Now we are in case N=k+1. It is common knowledge that N<=k is stable, and all the blue eyed people think it is at least possible they are in case N=k. So they won't kill themselves and the case N=k+1 is stable as well.
Once the foreigner made his statement, the other chain of induction (the one mentioned in the question) becomes true. And so, after N days, everybody with blue eyes has to commit suicide.
25 November, 2012 at 6:03 pm
Sean G
After telling many people this puzzle, I find one of the clearest ways to explain why “Solution 1” is fallacious is as follows:
Let S_1 be the assertion that there is at least one blue eyed islander.
For every positive integer n, let S_{n+1} be the assertion that every islander knows S_n. This is well defined as the islanders are perfect logicians.
Now whilst every islander certainly already knows the truth of S_1 simply by observation, they do NOT know every S_k. In fact, the largest k for which every islander knows S_k is one less than the number of blue eyed islanders.
However, the public nature of the foreigner’s address instantly makes every islander aware of every S_k. Hence we have new information and no contradiction.
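[Editor's aside: Sean G's observation can be stated as a tiny function. This is a sketch under my own naming, with `None` standing in for "every k, i.e. common knowledge".]

```python
# Largest k for which every islander knows S_k, as a function of the
# number of blue-eyed islanders and whether the foreigner has spoken.
# Before the address, each nesting level of "everyone knows" strips
# one visible blue from the worst-case hypothetical world.

def max_known_depth(num_blue, announced):
    """Largest k with S_k known to everyone; None means every k."""
    if announced:
        return None        # the public address makes S_k hold for all k
    return num_blue - 1    # one less than the number of blues

print(max_known_depth(100, announced=False))  # 99
print(max_known_depth(100, announced=True))   # None
```

Note the edge case: with a single blue-eyed islander and no announcement this returns 0, matching the fact that the lone blue does not even know S_1.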
28 November, 2012 at 10:03 pm
Jeremy
No, Sean, every islander already knows every S_k before the foreigner speaks because, as I said above:
It has to be known to these logical people that whatever the real number of blue-eyed people is (let’s call it N), blue-eyed people see N-1 blue-eyed people and the rest see N blue-eyed people. So it is Common Knowledge that everyone has first-hand evidence of at least N-1 blue-eyed people.
Read my post above for details, but since these people are perfect logicians, they can thus be assured not only of every S_k but of every Si_k for 0 < i < N-1 where Si_1 is the assertion that there are at least i blue eyed islanders.
29 November, 2012 at 6:12 am
Lenoxus
I think this is all correct, except that that’s not “common knowledge”, merely nth-order mutual knowledge. For it to be common knowledge, it has to be assured for every Si_k for any i. Once the visitor speaks, the information becomes common knowledge.
29 November, 2012 at 12:33 pm
Jeremy
Lenoxus, you seem to have misunderstood my use of “i”. In my formulation, “i” is the minimum number of blue-eyed islanders asserted. Where your S_1 was the assertion that there is at least one blue eyed islander, that would be my Si_1 where i=1. So it is not true and will never be true that everyone is assured of every Si_k for any i because there are only 100 blue-eyed islanders and no one will ever be assured of Si_1 for i > 100.
Using the logic of my post from 5 May, 2012 at 3:40 pm you can see how every islander can conclude that no islander can conclude there are fewer than 98 blue-eyed islanders. Si_k is known for all k where i <= 98. That includes your S_k being known for all k. There is no possible world where an islander can conclude another islander thinks there are fewer than 98 blue-eyed islanders because every islander is assured from first-hand knowledge that every other islander sees at least 98 blue-eyed islanders. No islander can imagine a world where an islander sees only 97 blue-eyed islanders thus every islander can reason about every other islander that they know there are at least 98 islanders. Si_k, i=98 is known for all k.
The thing is, no islanders will take action until Si_k is true for all k when i=100. Even when Si_k is true for all k with i=99, blue-eyed islanders can assume that every blue-eyed islander sees 98 blue-eyed islanders and every non-blue-eyed islander sees 99 and thus everyone can conclude that they might not have blue eyes.
What is not common knowledge is the precise number of blue-eyed islanders. Everyone is uncertain about the precise number until the combination of the foreigner's statement and the islanders' behavior establishes that it must be at least 100 and then those 100 leave. It is the combination of Si_k, i=99 being established for all k plus the lack of action on the day that that is established (day 98) that establishes Si_k, i=100 for all k on day 99 which leads to all of the blue-eyed islanders leaving on day 100 (which finally makes the precise number of blue-eyed islanders common knowledge).
29 November, 2012 at 1:21 pm
Lenoxus
Ah, I see my error; I hadn’t realized you were discussing alternative situations with different numbers of blues; oops. You’re correct.
30 November, 2012 at 8:15 am
Lenoxus
Wait. Looking at this again I don’t think you’re using “common knowledge” correctly. Until the visitor speaks, there is no number of blue-eyed people whose existence is common knowledge, not even 1. For example, consider the actual number of blues to be 3. Then the following statement is false: “Everyone knows that everyone knows that everyone knows that at least one blue-eyed person exists.” And because that’s false, it’s not common knowledge. (For 100 blues, the number of “everyone knows” can be truthfully extended to 99, but not 100.)
4 December, 2012 at 10:39 pm
Jeremy
Wow, is this tricky. OK, Lenoxus, Terry, you are right and I was wrong. I had reasoned that shortly along the chain of reasoning there could be an end to the descent of the agreed-upon minimum number of blue-eyed people, but now I see there is not. I had to completely work through an example to see it, but I did. Here it is:
Imagine the case where there are 4 blue-eyed islanders: X, Y, Z, and A. X sees 3 blue-eyed islanders and considers the possibility that there really are only 3 blue-eyed islanders. X considers what Y thinks in this case. Here, X thinks Y sees 2 blue-eyed people and considers Y reasons as X does. Well, Y would have to consider that he does not have blue eyes so Z and A only see 1 blue-eyed person.
Now Y wonders what Z, who in X’s imagination of Y’s imagination maybe only sees 1 blue-eyed person, thinks about the other blue-eyed person A. Well of course Z has to conclude that A may not see any other blue-eyed people.
I had thought that since X knows that Y and Z and A all see the same number of blue-eyed people that was good enough and somehow we could limit the decline of blue-eyed islanders in imaginary worlds, but now I acknowledge that it is not, because X cannot tell Y anything relevant. X knows that Y’s consideration of the possibility that Y does not have blue eyes is false, but there is no way to communicate that to Y, so just because it is false doesn’t mean X can be assured Y doesn’t believe it. When X imagines Y he (X) has to imagine the possibility that his (X’s) eye color is not blue, and he (X) has to go on to imagine that Y imagines the possibility that Y’s eyes are not blue, and so on, and with every step of the chain the island loses a blue-eyed islander until finally there are none for A to see. X is wrong and has no way to know it, and has to consider that everyone else cannot know whether they are right or wrong and must consider the world where they are wrong as equally plausible as the world where they are right.
This is, after all, what we all instinctually know for the case where there was only one blue-eyed islander to begin with. We consider the world where that person is wrong and doesn’t know it as reasonable for him to believe.
So while X knows that A sees at least 3 blue-eyed islanders, X doesn’t know that Y knows that Z knows that A knows there are any blue-eyed islanders. It is therefore not common knowledge that there is a blue-eyed islander.
Another way to look at it is that no one knows or has evidence of their own eye color and therefore every islander has to consider a reasonable belief in a possible world where every blue-eyed islander is completely wrong about their eye color. In this world there are no blue-eyed islanders. When the foreigner rules that world out, then the induction can begin, because anyone witnessing a world where there are no blue-eyed people would now know that they must have blue eyes. A world where every blue-eyed islander is wrong is no longer consistent with the evidence.
22 February, 2013 at 2:13 am
Anonymous
I don’t know why you gave up on this; it was correct.
22 February, 2013 at 9:37 am
Lenoxus
Here’s what’s incorrect: That for certain numbers of blue-eyed islanders, it is true that A knows B knows C knows… that at least one person has blue eyes, for any extension of the chain A-B-C-D-E… etc. In fact, that’s not true.
If there is just 1 blue, he doesn’t know if anyone has blue eyes. If there are 2, then A knows B has blue eyes, but A doesn’t know whether B knows if anyone has blue eyes. With 3 people, A does know that B can see C’s blue eyes, but A doesn’t know if B knows if C knows if anyone has blue eyes. And so forth.
With 100 blues, it’s true that [each blue knows that]x99 someone has blue eyes, but false that [each blue knows that]x100 someone has blue eyes.
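[Editor's aside: this claim is mechanical enough to check with a one-line model, my own sketch: each layer of "each blue knows that" removes one blue from the worst-case imagined world, since the hypothetical reasoner at that layer may lack blue eyes themselves.]

```python
# True iff "[each blue knows that] * depth someone has blue eyes"
# holds with num_blue blue-eyed islanders and no announcement.

def nested_knowledge(num_blue, depth):
    # The innermost imagined world can have as few as
    # num_blue - depth blues; the statement survives only if
    # at least one blue remains there.
    return num_blue - depth >= 1

print(nested_knowledge(1, 1))      # False: a lone blue sees no blues
print(nested_knowledge(100, 99))   # True
print(nested_knowledge(100, 100))  # False: hence not common knowledge
```

The three printed lines match the cases in the comment above: one blue, two levels short of 100, and the full 100 levels.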
30 November, 2012 at 6:26 am
Leopold
No, Sean_G has it exactly right. If N is the number of blue-eyed islanders, every islander knows every S_k for each k < N, but not every islander knows S_N.
For example if N were 2, everyone would know S_1, i.e. that there is at least one blue-eyed islander (because everyone could see at least one), but not everyone would know that everyone knows S_1 (the two blue-eyed islanders would not know that the other one knew this). That is, not everyone would know S_2.
23 December, 2012 at 4:23 am
Henning
Here is a possible reasoning why the actual visit and utterance of the foreigner is important (I’d like to know whether that makes any sense):
Let us call X the day where the islanders first all met up in a group, thus making sure that everybody knows for certain, from that day, the eye color of all the other islanders (while they were not sure of that before).
Now it first appears that it suffices for the islanders, with their power of logical reasoning, to simply *suppose* that a visitor would come to their island and state that at least one of them has blue eyes; at X+100 all blue-eyed islanders would commit suicide.
However, it would clearly not work with n=1. But perhaps one can start the induction with n=2? Well, you cannot, because after the first day the two blue-eyed tribespeople wouldn’t know whether their companion did not commit suicide because of their own blue-eyedness, or because they did not know of the existence of a blue-eyed islander (as they now can only suppose such an existence, but cannot be certain of it).
Interestingly, in that scenario the islanders still have all the information needed to actually commit suicide after 100 days: Each of them just determines how many islanders they can see to have the minority color (blue), and resolve to commit suicide after that many days+1.
Can the islanders suppose that they all actually resolve to do this – after all, that would be suicide?
23 December, 2012 at 4:29 am
Henning
Perhaps the real question is: In the scenario above, can you start the induction with n=3? (Because then, everyone can be sure that a supposed visitor could make this statement).
23 December, 2012 at 8:06 am
Lenoxus
Okay, let’s consider that situation. You’re on the island and you don’t know your eye color. There is a collectively-agreed-upon Day Zero. You see exactly two people with blue eyes. Day 1 goes by and no one dies, Day 2 goes by and no one dies. So, do you now know your eye color?
Be careful. If you say “No”, then I’ll tell you your eyes were blue. This means that with three blues, they won’t be able to deduce their eye color.
But if you say “Yes”, I’ll inform you that your eyes were in fact green. Is this revelation — that you have green eyes — inconsistent with your observations of no one dying on Day 1 or Day 2?
If you answer that with “Yes, it’s inconsistent”, then you’re saying that three is not the minimum for the induction to start, but rather, two is. And we can repeat the whole thing again. Why would two people die on Day 2? All they each know is that the other one didn’t die on Day 1, which is in turn perfectly consistent with there being only one blue altogether. (A lone blue will not deduce his eye color out of thin air and kill himself ASAP; by that logic, each islander, seeing no one with green eyes, may as well deduce his own eyes to be green and kill himself for that reason. The same with purple eyes, or any other color you can name.)
So with this, it can be proven that no number of blues is sufficient to “by itself” cause all the induction. A shorter way of putting this is: Apart from a visitor’s statement, an islander’s only input of information is observing people die or not die. If there is no reason for X blues to die on Day X, then X+1 blues will not deduce anything from no one dying, and thus X+1 blues won’t die on Day X+1.
It’s true that each of three blues can imagine a visitor making that statement, but they don’t know that the other two know that the other ones can imagine it. Any system you might devise for three blues deducing their eye color needs to not cause a brown-eyed person, observing exactly two blues, to falsely conclude that his eyes are blue. And because the islanders are perfectly logical, they would all know this risk of any such system, so they would not apply it.
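[Editor's aside: the induction in the last two comments can be phrased as code. This is a sketch; the function and its arguments are my own illustration.]

```python
# Without an announcement there is no base case: N blues can only
# learn from the N-1 blues they see failing to leave, which tells
# them nothing if N-1 blues would never leave anyway. With the
# announcement, the base case N=1 holds and the induction goes
# through.

def blues_ever_leave(num_blue, announcement):
    if num_blue == 0:
        return False                 # nobody blue-eyed to leave
    if not announcement:
        # Only possible trigger: the num_blue - 1 visible blues
        # leaving first, which recursively never happens.
        return blues_ever_leave(num_blue - 1, announcement)
    return True                      # they leave on day num_blue

print(blues_ever_leave(100, announcement=False))  # False
print(blues_ever_leave(100, announcement=True))   # True
```

The recursive branch is the heart of the argument: it bottoms out at zero blues, so no starting number is large enough on its own.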
23 December, 2012 at 11:08 am
Henning
Yes, I agree with this; thanks for the detailed explanation. I particularly like the part about the inconsistency. The actual information provided by the visitor is indeed important.
(Does this also prove that there is no possible line of reasoning, at all, that the islanders could apply to lead up to the simple recipe “I see 99 blue-eyed people, so I feel compelled to kill myself on day 100”?)
3 January, 2013 at 7:16 am
Oleksandr Rybak
The foreigner actually ADDED some new information to the tribe’s knowledge. Consider a simpler example with 2 blue-eyed people, say Andrew and Bob. Of course, Andrew knew from the beginning that there are some blue-eyed people in the tribe (for example, Bob). But Andrew DIDN’T KNOW whether Bob knows that there are some blue-eyed people!!! And after the foreigner’s speech Andrew HAS BEGUN to know that Bob knows about the blue-eyed people in the tribe. So Andrew HAS BECOME ABLE to make inductive inferences like “if Bob didn’t determine his eye colour on the first day, then I am blue-eyed; otherwise Bob would immediately know that he is the unique blue-eyed person in the tribe”.
Analogous reasoning applies to the case of 100 people. For example, after the speech of the foreigner the following fact has become true: blue-eyed man 1 knows that blue-eyed man 2 knows, …, that blue-eyed man 100 knows that there are blue-eyed people in the tribe.
7 February, 2013 at 6:20 am
Geoff Mossburg
My problem with the second argument is this: “After nobody commits suicide on the (n-1)^{st} day, each of the blue eyed people then realizes that they themselves must have blue eyes, and will then commit suicide on the n^{th} day.” Why would that be so, and why would it only be true of the people with blue eyes? If nobody commits suicide, it only logically implies that no one who has blue eyes realized they have blue eyes, but because there are many eye colors, the thinker would not have any reason to assume that this must mean they have blue eyes. However, even if I’m wrong about this, wouldn’t any-color eyed people be thinking the same thing as the blue eyed people, because they wouldn’t know what eye color they have either? Would they also commit suicide (i.e. the entire tribe)? Possibly, but I would think that once people of any-colored eyes saw that non-blue eyed people were killing themselves, they would logically deduce that there is a flaw in the argument… Which leads to another problem of their supposedly perfect logic, but I think that’s beyond the scope of this conversation.
7 February, 2013 at 2:01 pm
Lenoxus
I find that one of the best ways to grasp the problem is to test various “strategies” as thought experiments, starting from very low numbers of islanders. Imagine that the islanders are robots executing identical code which has been designed to derive one’s own eye color, and that the only input of information is which other robots have “died” and on which day. If a given series of instructions works in any circumstances, then the islanders will use it.
Suppose that for some possible Method Z in a particular situation, we find that if all islanders followed Method Z, then some of them would make the wrong deduction. Well, the perfectly logical islanders can imagine this outcome, and therefore know not to use Method Z in the first place.
That’s why this is incorrect:
Ordinary humans could well make wrong deductions from true premises, but not these islanders. Indeed, if even a single ordinary human were substituted for an islander, the whole thing would break down; it relies on the islanders (A) being perfectly logical and (B) knowing that all the islanders know that A and B are true. (note the recursion!) So we can safely say no islander will make a wrong deduction. Further, if any islander sees a non-blue kill himself, she will just assume that he discovered his color by other means (and she will be correct).
7 February, 2013 at 2:59 pm
Lenoxus
So the question remains: Why does the problem work out the way it does?
Suppose there were exactly one blue and no one else. He would have no reason to know his eye color until after a visitor arrives and speaks, saying he sees “at least one” blue. Then the islander immediately knows that he is blue, and leaves (a less violent alternative to killing himself) at the next noon.
Now suppose there were two blues. Initially, they have no reason to know their own color. Along comes the visitor. After the visitor speaks, each blue considers two possibilities: “Either (A) I am blue-eyed, or (B) I’m not. If B is true, then that blue I see must realize he is blue, and will leave at the next noon, or Day 1.” On Day 1, that doesn’t happen. Each islander then thinks “B must be false, meaning A is true, meaning that I am blue.” On Day 2, the two blues leave.
Let’s add a third person. Each of them considers the possibility that he is not blue (brown-, or green-, or orange-eyed), in which case it would be just like the former paragraph. Each thinks “If I am not blue, then the two blues I see will leave on Day 2.” Then that doesn’t happen. This means that “I am not blue” is false, and therefore “I am blue” is true. So on Day 3, each of the three blues leaves.
This continues indefinitely. Suppose it is true for some particular N (such as 1, or 4, or 15) that a set of exactly N blues would leave on Day N. (Not for all numbers, just at least one. For example, it is true that for some number N there are N states in the USA. Since it’s true for N==50, it’s not false.)
Further suppose there are really N+1 blues.
Each of them observes only N blues and figures that there might be only N blues, in which case those N will kill themselves on Day N. When that doesn’t happen, it means there are not exactly N blues, which means there are exactly N+1, including oneself, which means you know that you are blue, and you will leave on Day N+1. In short, if N blues would leave on Day N, then N+1 would leave on Day N+1. This means that if it works for N==1, it works for N==2. If it works for N==2, it works for N==3. And so forth, up to 100 and beyond.
One reason that no non-blue-eyed person would have to leave is that they keep a different count of the number of blues, as discussed above. Until the day comes when all the blues do go (leaving the rest of the islanders behind), each islander, blue or otherwise, is continually considering the possibility that he is blue. He knows that if he is not blue, then he will observe the N blues he sees leaving on Day N. Once that does happen, he will know that he is not blue.
If you think you have a system whereby a non-blue would correctly deduce their eye color, then try “testing” it as a thought problem. (Consider that each individual brown has no way of knowing his eyes aren’t green, or orange, or purple.)
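[Editor's aside: the day-by-day count above can be run as a small possible-worlds simulation, my own sketch: a blue-eyed islander tracks the two candidate totals consistent with what they see, and each uneventful day eliminates one.]

```python
# From a blue-eyed islander's point of view, the true number of blues
# is either the number they see or that plus one. The announcement
# rules out zero; each day with no departures rules out that day's
# number as the true total.

def departure_day(num_blue):
    """Day on which all num_blue blue-eyed islanders leave."""
    seen = num_blue - 1            # blues visible to a blue-eyed islander
    candidates = {seen, seen + 1}  # possible true totals
    candidates.discard(0)          # the announcement rules out 0 blues
    day = 0
    while len(candidates) > 1:
        day += 1
        candidates.discard(day)    # nobody left on day `day`
    return min(candidates)         # the one remaining candidate

print(departure_day(1))    # 1
print(departure_day(2))    # 2
print(departure_day(100))  # 100
```

A brown-eyed islander runs the same reasoning with candidates {num_blue, num_blue + 1}; the blues' departure on day num_blue resolves it before he would ever wrongly conclude he is blue, matching the comment above.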
14 February, 2013 at 7:43 am
Irakli Machabeli
The problem with the logic is that we ignore time. What we do is pretty much apply some kind of “logical” algorithm recursively and that algorithm while applied to the past implies knowledge of present.
Suppose there are 2 blue-eyed and 2 brown-eyed people at time T0 and someone says there is at least one blue-eyed person. Then we say: okay, if at time T-1 there were only one blue-eyed person and he knew that there is at least one blue-eyed person, then he could see the other 3 brown-eyed people and deduce that he is the blue-eyed one and would kill himself. What’s the problem with this logic? The problem is that at time T-1 we assume that the blue-eyed person knows something that is known only at time T0.
In mathematical finance it is called information filtration: when you backtest your algorithm you can’t assume knowledge of the present (because you are in the past). It is the same as predicting the future. The whole “logical” yada yada proof here ignores the existence of time and the fact that you can’t predict the future.
18 February, 2013 at 3:22 pm
anonymous
Dear Prof. Tao
Are there any applications of this sort of logic, in, say, mathematics? When solving some math problem, would the reasoning required to solve this problem ever be useful (when looking at a problem which isn’t related to this blue eyed islander problem in an obvious way)?
I think I solved it in the same way that everyone else did (by “induction”, you might say). Could one do this in another way?
24 February, 2013 at 6:15 pm
Jorriss
I do not understand why the Guru is necessary.
Suppose there are three people who have never seen each other; they bump into each other and see each other’s eyes for the first time (yet they still know the rules). Suppose we know they all have blue eyes.
Take one of the people, call him A. A does not know if he has blue or brown eyes. From A’s perspective, if his eyes are brown then B and C (the other two people) will see one person with brown eyes and one with blue. Suppose you are B. From A’s perspective, B sees one person with blue eyes and one person with brown. So if C does not leave the island the next day, B knows his eyes are blue. Then, if neither leaves after 2 days, A knows his assumption was wrong and his eyes are blue after all, and they all leave on day 3.
I understand the induction proof I believe, but I do not see why the guru is necessary. Certainly he is sufficient for starting the problem but I see them as leaving regardless – regardless of the fact the guru gave new knowledge (common knowledge).
Can someone please clarify why I am misguided?
24 February, 2013 at 9:12 pm
Jorriss
I do not see how to delete comments.
I finally understand why the guru’s comment is necessary. That may have been one of the most confusing problems I have come across. I’ve spent hours thinking about it; I don’t know why it suddenly clicked despite the fact I had already gone through the same scenario so many times.
27 February, 2013 at 6:08 am
John Graeme
Don’t stop at 3 or 4 islanders. Look at what happens with 5: A knows that B (every other islander) can see that D has sight of both C and E, i.e. everyone knows that everyone knows there are at least two blue-eyed islanders. If there are at least two, then nobody is sitting waiting for the other to leave the island. This defeats the induction argument at the lower end. At the higher end, say 100 islanders, we are told that each islander holds a hypothesis containing the hypothesis of each of the others: “nested hypotheses”. But even nested hypotheses cannot be protected from the truth: if you are standing in front of the Eiffel Tower with 7 friends, you might be hypothesising all sorts of things about what Jim thinks about what Karen thinks about John etc – but in no part of any hypothesis, nested or otherwise, will anyone believe they are in fact standing in front of the Statue of Liberty. So the idea that everyone is waiting on a hypothetical individual they know definitely does not exist is not possible – unless they subscribe to epistemic modal logic – which as far as I can tell has grown out of this and the other Christmas cracker puzzles – and really just amounts to a way of using the questioning to count themselves, i.e. count as many days as you see blue-eyed islanders and then leave – there’s no other information coming in, as so many others have intuitively known.
28 February, 2013 at 10:49 am
Lenoxus
With the Eiffel tower, there’s a huge difference — the tower’s “thoughts” are no concern to anyone; no one in that situation cares if the tower knows it’s a tower. Replace that tower with a single blue-eyed person, and you agree that blue eyes would not be common knowledge to the group. To put it simply: There is no number of people staring at my eyes which can cause me to know my eye color. This fact is itself common knowledge, so no one will think anyone else thinks anyone else thinks anyone else knows my eye color.
You argue that 5 is a crucial point where things change. Let’s consider that situation. You are on the island and see exactly 4 blue-eyed people. Do you think it is common knowledge that at least 1 person is blue-eyed?
If you say yes, I’ll inform you that your eyes are brown, which means you must have been wrong, since it’s not the case that 4 blues would have that common knowledge. (Remember the definition of common knowledge: it’s that A knows B knows C knows D knows… X, for any number of levels, not just two or three.)
If you say no, I’ll inform you that your eyes are blue. This means that even though there were 5 of you, you couldn’t be sure of blue eyes being common knowledge.
A simpler way to explain it is: Each islander has a significant blind spot: His own eye color. When considering the mental state of another islander, he must consider that islander’s lack of knowing his own eye color. The unknowns accumulate, resulting in “nested ignorance”. Every islander you add will still not know his own eye color, so it’s not possible to add islanders “until” blue eyes become common knowledge.
If there are at least two, then nobody is sitting waiting for the other to leave the island.
This is correct, and it’s actually quite simple to extend this: If there are N + 1 blues, then nobody is waiting for the other N blues to leave. Think about it: If 2 blues would not have any reason to leave, then a 3rd blue, who only sees 2 blues, will not learn anything from them not leaving, and therefore 3 blues have no reason to leave either. So this establishes that 3 blues would not leave. If there’s a 4th blue, then each blue sees only 3 blues, and therefore learns nothing from those 3 not leaving. And so on.
When you contradict this, you’re saying that there’s an upper-bound number of blues which would not leave, but if you added one more blue, they would. But that additional blue (who is isomorphic to all the other blues, of course) can only see a number of blues who wouldn’t leave, so he can’t learn anything from them not leaving.
28 February, 2013 at 12:52 pm
John Graeme
My response follows; my answers are marked with *.
With the Eiffel tower, there’s a huge difference — the tower’s “thoughts” are no concern to anyone; no one in that situation cares if the tower knows it’s a tower. Replace that tower with a single blue-eyed person, and you agree that blue eyes would not be common knowledge to the group.
*No, absolutely, that doesn’t happen until five, when it is known there is a minimum of two.
To put it simply: There is no number of people staring at my eyes which can cause me to know my eye color.
*No, but looking at the others when there are more than five tells you there are enough for nobody to wonder if one person might be thinking they are the only one. The lack of self-knowledge you state here is implicit in the puzzle proposition.
This fact is itself common knowledge, so no one will think anyone else thinks anyone else thinks anyone else knows my eye color.
*Everyone who isn’t you knows that they and everyone else know your eye colour. The opposite of what you have written.
You argue that 5 is a crucial point where things change. Let’s consider that situation. You are on the island and see exactly 4 blue-eyed people. Do you think it is common knowledge that at least 1 person is blue-eyed?
*No, two.
If you say yes, I’ll inform you that your eyes are brown, which means you must have been wrong, since it’s not the case that 4 blues would have that common knowledge. (Remember the definition of common knowledge: it’s that A knows B knows C knows D knows… X, for any number of levels, not just two or three.)
*My proposition is that the “common knowledge” is that there is a minimum number of blue eyed people known to the group. I am not relying on this version of common knowledge which seems to require knowing exact numbers. Obviously quite impossible when you don’t know your own eye colour.
If you say no, I’ll inform you that your eyes are blue. This means that even though there were 5 of you, you couldn’t be sure of blue eyes being common knowledge.
*Again, all you’re really saying is that you require exact numbers to be known. I don’t.
A simpler way to explain it is: Each islander has a significant blind spot: His own eye color. When considering the mental state of another islander, he must consider that islander’s lack of knowing his own eye color. The unknowns accumulate, resulting in “nested ignorance”. Every islander you add will still not know his own eye color, so it’s not possible to add islanders “until” blue eyes become common knowledge.
*You must choose between this and my objections to nested concepts, which I find to fly in the face of what must be known to all, and a high-minded logical conceit that does not in fact work.
If there are at least two, then nobody is sitting waiting for the other to leave the island.
This is correct, and it’s actually quite simple to extend this: If there are N + 1 blues, then nobody is waiting for the other N blues to leave. Think about it: If 2 blues would not have any reason to leave, then a 3rd blue, who only sees 2 blues, will not learn anything from them not leaving, and therefore 3 blues have no reason to leave either. So this establishes that 3 blues would not leave. If there’s a 4th blue, then each blue sees only 3 blues, and therefore learns nothing from those 3 not leaving. And so on.
*That’s my point, yes.
When you contradict this, you’re saying that there’s an upper-bound number of blues which would not leave, but if you added one more blue, they would. But that additional blue (who is isomorphic to all the other blues, of course) can only see a number of blues who wouldn’t leave, so he can’t learn anything from them not leaving.
*I don’t see that I said this, about adding blue eyed people. I agree it wouldn’t really change anything. Particularly if he doesn’t know his own eye colour.
28 February, 2013 at 3:12 pm
Lenoxus
(A useful trick I forgot to use in my last post is the blockquote tag.)
I am not relying on this version of common knowledge which seems to require knowing exact numbers.
Ah, I should clarify. Common knowledge does not require knowledge of specific amounts. It is enough to know about minimums and maximums, such as “There are at least two people with blue eyes.” For P to be common knowledge for a given group of people, each of them has to know that each other person knows that each other person knows… repeated forever, that P is true. P could be “At least one of us has blue eyes.”
Everyone who isn’t you knows that they and everyone else know your eye colour. The opposite of what you have written.
Yes, but now you’ve subtracted me from the group of people being considered. It’s true that a group of blue-eyed people looking at a blue-eyed outsider will have common knowledge that that person has blue eyes. However, once we consider the group as a whole with that person included, then there is no such common knowledge. This isn’t magic or nonsense, it’s a matter of making logically correct statements, like the difference between “Everyone except Jake knows about the stain on Jake’s shirt”, which may be true, and “Everyone (including Jake) knows about the stain on Jake’s shirt”, which may be false. This problem is just more complicated because of all the nesting, the “Everyone knows that everyone knows” stuff.
================================
That may be the entire issue. If you reject the existence or relevance of nested thinking in general, then you will reject it here, of course. You are free to do so, though you can’t then turn around and claim that blue eyes are common knowledge with 100 blues. Common knowledge relies on nested thinking by definition; otherwise it is merely mutual knowledge. If you object to nested concepts, you can’t say the islanders do have an all-the-way-nested concept of someone having blue eyes.
The simple truth is that humans almost never think more than “two layers deep” here, three at the most, like “She thinks that he knows that I like chamber music”. I certainly don’t think any levels past that, there’s no need to. But these hypothetical islanders do think at any number of levels. (You are free to reject this very premise.)
================================
Anyway, you do accept that no one would leave/die until a visitor said something, yes? So there’s not much left to disagree over. The islanders don’t leave until a visitor says “At least one of you has blue eyes,” after which all N blue-eyed people leave on Day N. This can be explained in terms of nested logic, but it doesn’t have to be. All we have to do is establish this:
1. If, for some particular number X, it is true and known that X blues would leave on Day X, then a group of X+1 blues, upon seeing the other X blues not leaving on Day X, would leave the next day: Day X+1.
2. After the visitor speaks, a single blue would conclude his eye color, and leave on Day 1.
(Repeated application of Point 1 establishes that 100 blues would leave on Day 100.)
3. If, for some particular number Q, it is true that Q blues would not leave on or before Day Q, then a group of Q+1 blues would not conclude anything from seeing the other Q blues not leaving, and therefore Q+1 blues would not leave on Day Q+1.
4. Before a visitor speaks, a lone blue would not have reason to conclude his eye color, and would not leave on any day.
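Points 1 and 2 can be checked mechanically. Below is a minimal Python sketch of the induction (the function name `leaves_on` is my own; it assumes perfectly logical islanders and the visitor's announcement made on Day 0):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def leaves_on(seen_blues):
    """Day on which a blue-eyed islander who sees `seen_blues` other
    blues leaves, after the visitor's announcement on Day 0.
    A sketch of Points 1 and 2, not a definitive implementation.
    """
    if seen_blues == 0:
        # Point 2: a lone blue sees no blues, so the announcement
        # can only refer to himself; he leaves on Day 1.
        return 1
    # Point 1: if the blues I see were all the blues, they would leave
    # on Day `seen_blues`.  When that day passes with no departures,
    # my own eyes must be blue, and I leave the next day.
    return leaves_on(seen_blues - 1) + 1
```

With 100 blues, each blue sees 99 others, and `leaves_on(99)` works out to Day 100, matching the claim that all N blues leave on Day N.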
================================
Going back a bit:
This is true. But it’s only two levels deep, which is enough with two blues but not three or more. One of the best demonstrations of the situation is this picture here. The basic idea can be extended to more than three people, and the question marks can never “go away”. (For example, it’s not possible that while B doesn’t know his own eye color, D thinks C thinks B knows his own eye color.) When you get all the way to the middle, every blue is “thinking” that innermost thought bubble, so all the ignorance has accumulated.
28 February, 2013 at 10:58 am
Irakli Machabeli
The easiest case to show what’s wrong with the solution offered here, and with all other solutions like the one on Wikipedia etc., is to consider the case of 4 blue-eyed people.
Proposition 0: If there are 4 blue-eyed people, then everyone sees the other 3, and also all of them can conclude (even without knowing their own eye color) that all of them know (not a typo) that there are at least 2 blue-eyed people on the island. So if the blue-eyed people are A, B, C & D, then A can say that B knows that C knows that D knows that A knows that there are at least 2 blue-eyed people, B can make a similar assumption, etc.
This would be enough for a pure mathematical proof, since if all of them know that all of them know that there are at least 2 blue-eyed people, then the visitor’s announcement that there is one blue-eyed person among them does not introduce new knowledge.
Let’s find out what’s wrong with the proof:
General argument for any kind of proof: if a proof/solution assumes something that is impossible or contradicts the initial conditions, then every conclusion based on such an assumption is useless.
All solutions I’ve seen so far include some variations of :
Suggestion A: Suppose/if there was only one person (same as supposing n=1)
Suggestion B: Suppose/If there was only one blue eyed person
Clearly if there are 4 blue-eyed people we can’t suggest that there is only one person on the island, so Suggestion A is clearly wrong, and all proofs based on such a suggestion are wrong. This also invalidates all recursive proofs that assume n=1 / day one. One can abstract from the concrete example and say suppose n=1 / day one, but then you can’t assume the knowledge that on day one you knew that there is one blue-eyed person.
Now what’s wrong with Suggestion B? Remember Proposition 0: everybody knows that all of them know that there are at least 2 blue-eyed people on the island. But Suggestion B says suppose there is only one blue-eyed person. This suggestion is also wrong, since we know that there are at least two of them. So all proofs/conclusions based on Suggestion B are also wrong.
28 February, 2013 at 2:10 pm
John Graeme
It is not true that with 4 everyone knows there are at least 2 blue-eyed people: A does not know his own eye colour and looks at B. He knows B may be wondering if he himself has blue eyes. So it’s possible for A that B believes that both of them, A and B, are not blue-eyed. In addition to this, A thinks, “If it’s possible in B’s world that neither of us has blue eyes, he must look at C and wonder if he, too, doesn’t know if he has blue eyes”. That makes a possible 3 without blue eyes. At 5, A knows B has sight of C, D and E – so knows B is aware that C (or D) is looking at a minimum of two.
Also, my proof isn’t like the rest. I’ve checked.
28 February, 2013 at 3:20 pm
Lenoxus
Now I can’t tell what you’re trying to say. Remember to distinguish between the knowledge at the different layers.
The basic assertion “There are at least 2 blue-eyed people” is distinct from “Everyone knows that everyone knows that everyone knows there are two blue-eyed people.” In fact, if there are really 4 blues, then each of them personally observes 3 blues, and hence each of them knows there are at least 3 blues. It is not common knowledge that there are 3 blues, but it is mutual knowledge.
Yes, this is true. For all A knows that B knows that C knows, C may be seeing 2, or 3, or 4, or 5 blue-eyed people. But there’s another layer we can consider: A knows that B knows that C knows that D can see at least ___ blue-eyed people/person. How would you fill in the blank?
1 March, 2013 at 2:12 pm
Irakli Machabeli
Forget about layers. If I’m a logical person and I know that everybody knows that there are at least 2 blue-eyed people, then I cannot make the assumption that someone might think there is only one (as long as that someone is also logical).
1 March, 2013 at 8:14 pm
Lenoxus
Take a look at the diagram I linked to elsewhere here. See if you understand it and agree with its idea.
You said:
Yes, this is correct. But from your point of view, someone might think that someone might think there is only one. That’s a different statement, just like “my mother’s mother” and “my mother’s mother’s mother” are two different people. If we added another hundred generations there, it would be difficult to keep track of the difference, but this doesn’t mean the difference doesn’t exist, or even that it’s a “small” difference. My [mother’s ×98] mother is not “almost” my [mother’s ×99] mother, any more than I am almost my mother.
Suppose there are four blues, A, B, C, and D. Consider the following statement, which may or may not be true: “A knows B knows C knows that D knows someone has blue eyes.” The question must be asked: Exactly how does D know this?
The answer may seem obvious: He just looks around! But this is a hypothetical D, not the real one. The real Mark Zuckerberg knows his computer password, but this doesn’t mean I know it. And even if I did know it, that wouldn’t mean that my cousin knows that I know it, and so forth.
So if this hypothetical D knows that someone has blue eyes by “looking around”, exactly which other islander is he looking at? Remember the statement, “A knows B knows C knows D sees someone with blue eyes.”
It can’t be C, or B, or A, because we are inside each of their heads, recursively. It’s impossible for A to think that B thinks that C thinks that D can see B’s blue eyes. That would require A to think that B knows that C can see B’s blue eyes! And it likewise can’t work for C’s blue eyes, or A’s blue eyes, or D’s blue eyes.
So there is no way for knowledge to be common, by the strict logical definition of common knowledge. Hence, the statement cannot have been correct.
1 March, 2013 at 12:33 pm
John Graeme
Thanks for replying to my post. Your answers were to the point and I have a better understanding of your point of view. A few statements for clarity’s sake:
I am satisfied that we mean the same thing by common knowledge.
I agree with nested hypotheses, but not as applied to this puzzle.
I fully accept the premise that the islanders think at every possible level, and no part of my argument rejects this or any kind of recursive thinking if it is logical.
I don’t think anyone would leave at all when the foreigner makes his announcement unless there are four or fewer islanders.
I couldn’t access your picture, it seems to have gone. Let me know if it is still there and, if so, how I can see it.
I agree with your last point about mutual knowledge with four people (all see 3 blues), and distinguish it from common knowledge (with 4, all know there is at least one, wait on his reaction after the first night, and all leave after 3 days).
I will address your last question, as I believe it goes to the heart of the way in which the puzzle is perceived. You asked, I think meaning a five-person scenario:
“Yes, this is true. For all A knows that B knows that C knows, C may be seeing 2, or 3, or 4, or 5 blue-eyed people. But there’s another layer we can consider: A knows that B knows that C knows that D can see at least ___ blue-eyed people/person. How would you fill in the blank?”
The answer is one, and that person is E. But I would ask you: “A knows that B knows that C knows that E can see at least ___ blue-eyed people/person. How would you fill in the blank?” Would the answer also be one? And would that person not be D?
A asks him/herself both these questions and decides that B, who after all is everyone else on the island, knows that C (every other apart from B) knows both D and E have blue eyes.
This cuts to the question of interchangeability, and of whose thoughts exactly signify the thoughts of everyone. The complexity comes from the fact that we as non-islanders must get inside the head of some islander – say A. Although for us A is every person on the island, the puzzle is about what A, “The Observer”, makes of every other person on the island, B. There are effectively 2 viewers who are “out of the game”, testing our null hypotheses – us and A. B is the “everyman” in the puzzle. B’s knowledge can only be formulated with reference to himself and all identical others. This is because all others are the same. For B, everybody else thinks the same as each other and is interchangeable, and cannot have different levels of knowledge. So B the everyman cannot imagine a situation in which C knows that D and E have blue eyes but D cannot see both C and E. Or that all 3 cannot see each other.
What this means is that you cannot create a single chain such as the one above, in which A knows that B knows that C knows that D knows etc., without asking the same question interchanging all the letters. Ultimately A will conclude that B knows that C knows all the other letters of the alphabet have blue eyes, as after all permutations have been considered, every letter will have ended up at the end at some point (which incidentally doesn’t require a foreigner to point out).
1 March, 2013 at 1:20 pm
Lenoxus
Whoops, you’re right, I didn’t link correctly to that image. Here’s its url: http://www.mediafire.com/view/?gy5m3fqpd7ok99k#
Next, I’ll address this:
To be precise, do you mean “four or fewer blue-eyed islanders”?
If so, that does sound reasonable, but there’s a simple counterargument. Suppose you are on the island, not knowing your own eye color. You see exactly four blue-eyed people. The visitor makes his announcement. Days 1, 2, and 3 go by without anyone leaving. Ask yourself the following two questions:
1) If my eyes are not blue (meaning there are exactly four blue-eyed people) what will happen (or not happen) on Day 4?
2) If my eyes are blue, what will happen (or not happen) on Day 4?
Meanwhile…
Yes to both. But these letters are really just stand-ins for “any other person not previously mentioned in this train of thought.” It’s just that it would be complicated to have words for “any one person” and “any one person besides that person” and “any one person besides the previous two people” and so forth.
I’d like you to consider the following scenario: Yolanda has blue eyes and Zoe sees Yolanda. Does this mean that Yolanda knows that Zoe knows that Yolanda has blue eyes? Of course not.
That example seems silly, but it’s easy to fall into a similar trap when we add more people, because of how complex it gets. For example: “William knows that Xavier knows that Yolanda knows that Zoe knows that ___ has blue eyes.” Whose name can we insert there? The answer is: None of those four (it’s kind of a trick question). Do you see why?
You say:
But this sounds like you’re “inserting a name” in my William-Zoe example, which I just suggested cannot be done.
Take another look at the picture (I hope it works this time!) and tell me what you think.
1 March, 2013 at 11:59 pm
John Graeme
Thanks for the picture, I agree with what is represented here. I appreciate that if it were extended to 100 islanders, all 100 would end up possibly not blue in a thought cloud in the middle. But draw it out for 5, ABCDE. Look at E. Is it credible that E might imagine that everyone in the group does not have blue eyes – when A, whose thoughts these are, knows that E can see B, C and D, whom he sees have blue eyes?
In order to come up with a grand theory, a scientific enquiry may require sub-theories or hypotheses. The grand hypothesis depends on these sub-hypotheses or nested hypotheses. If one of these sub-hypotheses conflicted with another, or with the main hypothesis, this would be a cause for alarm. As applied to the puzzle, there does not seem to be any consideration of whether the nested hypotheses conflict with one another, as in the above example, where A knows for a fact E cannot possibly be wondering if everyone else does not have blue eyes. And he knows that B also knows this. The nested hypotheses are applied in a way that obscures knowledge from certain hypothetical islanders, knowledge that is not in fact obscured.
I appreciate what you say about the interchangeability of names, and am slightly confused, as this is my point entirely. My answer about D seeing E to your question was in essence rhetorical, as the question particularises the islanders to make them seem like different people. By switching the letters I was showing that you cannot particularise in this way. But if for the sake of argument we do name particular islanders with letters (without them knowing – if that matters), how would you dispute my conclusion that A knows B knows C is aware of 2 blues, D and E? And if all islanders are interchangeable, doesn’t that mean all 5 know there are at least 2 blues?
Yes, I did mean 4 blue-eyed islanders. Everyone leaves after 3 days, i.e. on Day 4. If I did not have blue eyes, they would have left on day 3. So in answer to your question, if I don’t have blue eyes, I am alone on day 4. Not sure where you are going with that one. That’s the basis for the simple induction argument with which I disagree.
2 March, 2013 at 12:10 am
John Graeme
Apologies, I slipped and omitted “everyone knows”. The final question in paragraph 3 should read:
And if all islanders are interchangeable, doesn’t that mean all 5 know everyone knows there are at least 2 blues?
2 March, 2013 at 10:56 am
Lenoxus
The “interchangeability” of the actual islanders does not imply an interchangeability of the hypothetical islanders, and this is crucial.
Compare the statements “A knows that B sees that C has blue eyes” and “A knows that C sees that B has blue eyes.” The two Bs here are very different, because one of them is in-C’s-mind-in-A’s-mind, and the other is directly-in-A’s-mind.
If we switch these two B’s around, then we are allowing people’s thoughts to “cross” as if they were mind-readers, or something stranger still: Two people recursively imagining each other! (Even that was too weird for Inception.)
Is it credible that E might imagine that everyone in the group does not have blue eyes – when A – whose thoughts these are – knows that E can see B,C and D, who he sees have blue eyes?
E is different from the E in D’s mind. Further, the E in D’s mind is different from the E in C’s conception of D’s mind. This, in turn, is different from the E in B’s idea of C’s conception of D’s mind.
Does A know that E can see B’s blue eyes? Yes. Does A know that E can see C’s blue eyes, and that E can see D’s blue eyes? Yes to both.
But the “innermost” E (the one who is being pictured by an imaginary D, who is in turn being imagined by a fictional C, who is in turn being imagined by a hypothetical B, who is in turn being conceptualized by the real A — whew!) does not necessarily see anyone with blue eyes.
Try filling in this sentence: A knows that B knows that C knows that D knows that E can see that ___ has blue eyes.
Do you see why the blank can’t be filled in with A? Or why it can’t be filled with E? Well, for similar reasons, it can’t be filled with B, or C, or D. This would require B (or C, or D) to be able to look at his own blue eyes, or to be imagined by A as being capable of doing so.
================
Remember, I didn’t say there were four blues total and you see three of them. I said that you see four blues. If your eyes are blue (which I didn’t tell you — maybe they are and maybe not), then there are five blues, not four.
So let’s try this again. You agree that if there are exactly four blues, they will leave on Day 4. You are on the island and see four blues.
Maybe your eyes are not blue. In that case, the four blues you see will leave on Day 4, right?
Maybe your eyes are blue. In that case, would the four blues you see leave on Day 4? No, there’s no reason for them to.
You now have a simple way to test the two competing hypotheses, exactly one of which must be true.
Hypothesis Alpha: There are four blues. If this is true, then they will leave on Day 4.
Hypothesis Beta: There are five blues. If Hypothesis Alpha is false, then Hypothesis Two is true.
Is there a way to tell if Alpha is false? Yes. If no one leaves on Day 4, then Alpha is false. This means Beta is true. This means there are five blues total. This means that you have blue eyes.
To summarize: If you see exactly four blues, and they don’t leave on Day 4, then you have blue eyes.
Furthermore, if you do have blue eyes, then each of the blues you see will be able to make the exact same deduction. Hence, on Day 5, you won’t leave alone — all five blues, including you, will leave together. (Again, that’s if you do have blue eyes. If you don’t, then the four blues you observe will leave on Day 4.)
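The Alpha/Beta test can be written out as a small Python sketch (the function name and its arguments are my own invention; it assumes the standard claim that exactly k blues leave together on Day k):

```python
def my_eyes_are_blue(seen_blues, anyone_left_on_day):
    """You see `seen_blues` blue-eyed people after the announcement.
    `anyone_left_on_day(d)` reports whether anyone left on Day d.
    Hypothesis Alpha: your eyes are not blue, so there are exactly
    `seen_blues` blues, who would then leave on Day `seen_blues`.
    Hypothesis Beta: your eyes are blue.  Exactly one must hold."""
    if anyone_left_on_day(seen_blues):
        return False   # Alpha confirmed: your eyes are not blue
    return True        # Alpha refuted, so Beta holds: your eyes are blue
```

If you see 4 blues and Day 4 passes quietly, `my_eyes_are_blue(4, lambda d: False)` returns `True`, and all five blues, each running the same test, leave together on Day 5.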
Do you disagree?
2 March, 2013 at 11:00 am
Lenoxus
Oops! this should have been:
Hypothesis Beta: There are five blues. If Hypothesis Alpha is false, then Hypothesis Beta is true.
2 March, 2013 at 11:03 am
leopold
I don’t think that naming the islanders is helpful.
If exactly 5 (or indeed more than 5) islanders are blues
* everybody knows that at least 4 are blues (by inspection)
* everybody knows that everybody knows that at least 3 are blues
* everybody knows that everybody knows that everybody knows that at least 2 are blues
* everybody knows that everybody knows that everybody knows that everybody knows that at least one is blue
but it is NOT true eg that
*everybody knows that everybody knows that at least 4 are blues.
Why is this, despite its being true that everybody knows that at least 4 are blues? Because although each blue can directly observe 4 blues, she can only be sure that each of those 4 blues observes 3 blues, not 4 — namely the other 3 blues, but not she herself, since although in fact she is a blue, SHE DOES NOT KNOW THIS. Thus although she does know that everybody knows that at least 3 are blues, she does not know that everybody knows that there are at least 4 blues.
And similarly further down the nesting, until we can see eg that it is NOT true that everybody knows that everybody knows that everybody knows that everybody knows that everybody knows that anyone at all is blue. In fact no blue at all knows this, even though it IS true that everyone knows (because they can directly see) that there are at least 4 blues.
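This ladder reduces to a one-line computation (a Python sketch; the function name is mine, and the announcement is assumed not yet to have been made):

```python
def known_lower_bound(total_blues, depth):
    """Best lower bound B in the statement
    ('everybody knows that ' * depth) + 'at least B are blues',
    before any announcement.  Each extra level of 'everybody knows'
    costs one blue, because the person imagined at that level cannot
    be assumed to see her own eyes."""
    seen = total_blues - 1          # what any single blue observes
    return max(seen - (depth - 1), 0)
```

With 5 blues this reproduces the ladder above: depths 1 through 4 give bounds 4, 3, 2, 1, and depth 5 gives 0. With 100 blues the bound hits 0 at depth 100, which is why no amount of blues yields full common knowledge.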
2 March, 2013 at 11:06 am
leopold
Or in summary: Lenoxus is right.
2 March, 2013 at 10:32 pm
John Graeme
In response to the two posts above:
Lenoxus, your post is in two parts. Firstly, you are stating that it is not possible for the islanders to look at each other. That’s right, and that’s why we name the islanders by letters, so we don’t make silly mistakes like that. I never suggested you should have islanders look at themselves, and my refutation of the solution is based on the limits of what is seen; see the table below.
Switching B with C in the example you gave is another way of looking at the same situation, that is all. A must have thoughts about what B thinks about C, as well as what C thinks about B. When I switched the letters around I was showing this, and any islander thinking hypothetically about the islanders is free to do this. They won’t just be locked into one “hypothetical chain” where A imagines what B is thinking about what C is thinking about D, etc., without also seeing that E isn’t just thought about, but also thinks about the others.
I feel sorry for the “innermost E”. He doesn’t really exist, he can’t see what is right in front of him and the islanders, when thinking about him, and stroking their chins, wonder if the poor soul sees nobody with blue eyes, and all because of the mean things other people are thinking about him.
But perhaps you are saying that of course he does not exist in this way, he is just a hypothetical being that is a logical consequence of the necessary thoughts others have about him? But my assertion is that a hypothesis which involves an impossibility is no hypothesis at all, nested or otherwise. This way of thinking in a chain of thought processes is really just taking yourself down a road where each person is thought about only by one other, and only thinks about one other, which all know cannot be true.
Or put another way: at base, what is your justification for extending the hypotheses in this way, when it makes no real sense in the real world? What kind of hypotheses are these? How are you protecting them from reality?
The second part of your post:
You put me right on the four-blue-eyes question, clearly about 5 islanders. But I have argued that the puzzle does not work with 5 blue-eyed islanders, and in this situation there are only four. If I see 4 blue-eyed islanders, I still know that there is a minimum known to all of 2 blue-eyed people, and can see that the process will not get started, because nobody can be waiting to see what the other does (see table below).
The difference here is that each of the blue-eyed people I am looking at knows they are in at most a four-person game, and they will wait to see how the others react to the minimum known of one. So it’s only me, without blue eyes, that is unable to act. Anyone who is looking at four blue-eyed people or more will be in the same position.
Leopold, this chart may help me to explain. I think your second line of “knows” should have been “everybody knows that their neighbour knows”, or “everybody knows that someone knows”, or, as below, “everybody knows that one other knows”.
A is No. of blue-eyed islanders
B is No. of blue-eyed islanders known on basis of mutual knowledge
C is No. of islanders everyone knows one other knows have blue eyes (e.g. A knows B knows C & D are blue-eyed people)
D is Common knowledge – everyone knows everyone knows this number have blue eyes.
A B C D
1 0 0 0
2 1 0 0
3 2 1 0
4 3 2 1
5 4 3 2
6 5 4 3
This information “lag” means that it is not until there are 4 islanders that it is common knowledge that there is one blue-eyed islander. And given the demands of the puzzle, even if everyone knows everyone knows there is one blue-eyed islander, this is not enough to spoil the game. Everyone waits to see if they leave on the first day. So not until there are 5 islanders does everyone know that everyone knows there are 2 blues – and then there is no point waiting to see if someone leaves on the first day. There is no possibility, remote or otherwise, that someone might realise they are the only one – everyone knows nothing will happen after the first day – nothing will happen ever.
This is why it is easy to believe that the induction argument must work, if you only look at 3 or 4 islanders.
Clearly with larger numbers the solution as offered with 3 or 4 islanders becomes counterintuitive, and people rely on “nested hypotheses” – which really just means that they believe the induction solution must work forever, no arguments.
2 March, 2013 at 11:51 pm
John Graeme
Correction
“Firstly you are stating that it is not possible for the islanders to look at each other.” (I meant “themselves”.)
3 March, 2013 at 10:46 am
Jeremy
First of all, kudos to Lenoxus for answering all these posts and doing a fine job of it!
I encourage John Graeme and everyone else to read my walk-through of an islander in the 4-blues world posted above, dated 4 December 2012: https://terrytao.wordpress.com/2011/04/07/the-blue-eyed-islanders-puzzle-repost/#comment-199646
In it you will see exactly how an islander constructs a possible world where one islander sees no blue-eyed people. As for the situation where there are 5 blue-eyed islanders: the blue-eyed islanders have to consider the possible world where there are only 4 blue-eyed islanders (since that’s all they can see), so they consider exactly the situation I described. Thus, by induction, you can see how this is extended to any number of blue-eyed islanders.
The key tripping point in thinking about this puzzle is that the islanders have to consider the possibility of a counter-factual world, because they are lacking the information to be certain of the facts. They are ignorant of their own eye color and no amount of looking at other people will tell them their own eye color. So when they look at other people and wonder what they see, they have to consider all possibilities for their own eye color, even though in reality their own eye color is fixed.
Another way of at least starting to see the problem is to realize that although everyone knows there are some blue-eyed islanders, no one is sure exactly how many there are. Brown-eyed islanders are sure everyone sees 99 blue-eyed islanders, whereas blue-eyed islanders are only sure everyone sees 98. No one can deduce their own eye-color until there is mutual knowledge of the exact number of blue-eyed islanders (not just some lower bound), because only then would people seeing fewer than the correct number know that their own eyes must be blue.
3 March, 2013 at 2:35 am
leopold
JohnGraeme. When you say “This information “lag” means that it is not until 4 islanders is it common knowledge that there is one blue eyed islander” I’m unsure what you mean by ‘common knowledge’.
If you simply mean knowledge that they all have, then with 4 blues it is already common knowledge that there are at least 3 blues.
If you mean knowledge which they all have, and which they all know that they all have, and which they all know they all know they all have and so on ad infinitum (this is the logicians’ sense of common knowledge BTW), then there is no common knowledge that there are any blues at all, however many blues there in fact are.
Do you see why this is so? Appreciating this is quite important to understanding what is going on here.
3 March, 2013 at 1:51 pm
leopold
The following might help, if taken slowly….
If there are exactly 2 blues, everyone knows (because they can see) there is at least 1 blue. But neither of the blues knows that everyone knows there is at least 1 blue, since neither of them can be sure that the other blue can see any others.
If there are exactly 3 blues, everyone knows (because they can see) there are at least 2 blues. And this time each blue can see that everyone can see at least 1 blue (and each brown can see that everyone can see at least 2) so everyone knows that everyone knows there is at least 1 blue. But none of the blues knows that everyone knows that everyone knows there is at least 1 blue, since none of them can be sure that either of the others can be sure that the other blue(s) can see any others.
If there are exactly 4 blues, everyone knows (because they can see) there are at least 3 blues. And this time each blue can see that everyone can see at least 2 blues (and each brown can see that everyone can see at least 3). And therefore each blue can see that everyone can see that everyone can see at least 1 blue (and each brown can see that everyone can see that everyone can see at least 2), so everyone knows that everyone knows that everyone knows there is at least 1 blue. But none of the blues knows that everyone knows that everyone knows that everyone knows there is at least 1 blue, since none of them can be sure that any of the others can be sure that any of the others can be sure that the other blue(s) can see any others.
Similarly for 5,6,etc etc
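For readers who think in code, the pattern can be condensed into a one-line rule (a sketch of my own, purely illustrative; the function name is made up):

```python
# leopold's pattern, condensed: with n blue-eyed islanders, the statement
# "[everyone knows]^k there are at least m blues" holds exactly when
# m <= n - k, because each nested "everyone knows" must allow for one more
# hypothetical islander who cannot rule out having non-blue eyes.

def shared_bound(n_blues: int, depth: int) -> int:
    """Largest m such that [everyone knows]^depth 'at least m blues' holds."""
    return max(n_blues - depth, 0)

# With exactly 4 blues (the last case above):
for k in range(1, 6):
    print(k, shared_bound(4, k))
# depth 1 -> 3, depth 2 -> 2, depth 3 -> 1, depth 4 -> 0.
```

At depth n the bound always reaches 0, which is why no number of blues ever makes “at least 1 blue” common knowledge.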
4 March, 2013 at 12:25 am
John Graeme
Thanks to everyone for responding to my Christmas Cracker post; I can see now that the game cannot be stopped! Thank you Lenoxus for refining my understanding of common knowledge and distinguishing it from mutual knowledge. One or two things bother me, and I will re-post again soon. Firstly, I think I was completely wrong with the idea of “minimum known”: I’m not sure it really is possible for anyone to form an idea of a minimum number that everyone knows everyone knows. I think if you could, that would stop the game. It may be that this concept is just inappropriate to the puzzle. On another point, one way to view the puzzle is as a counting game: two islanders must wait one day for a “decision”; if someone else wishes to join, the group must wait 2 days; if another wishes to join, they must wait 3. The game is constructed such that it is a counting game of this kind, and it is logical to follow the pattern. I’m still not convinced about nested hypotheses, mainly because they involve people who everyone knows everyone knows cannot exist (I think!), but as these hypotheses about others follow logically from the induction argument, I don’t see how they can be eliminated.
I’ll definitely read the walk through before I post again. Amazing puzzle that is much more than it seems.
5 March, 2013 at 10:29 am
Lenoxus
Hello everyone, I’ve been delayed in responding by a big project but that’s finally over, yay!
I was going to say more about the nesting, but I can’t think of any new ways to frame it. So let’s just stick with the separate question of whether there’s a maximum number of blue-eyed people who would leave, such that a larger number of blues would stay.
For the record, I had been under the impression that you, John Graeme, thought that four blues would in fact leave on Day 4 (where Day 0 is the day of the announcement). Maybe I was wrong and you actually think the maximum is three, but for now I’m going to stick with four. You wrote:
Are you now saying that if you see 4 blue-eyed islanders, they will never leave? But what if you have brown eyes, and those four are the only blues? This would mean there are exactly four blues, and earlier you said that it does work with four. Does it only work with four if there isn’t a brown-eyed person watching them?
Now, maybe I’ve been misreading you all along and you think the maximum is three.
In that case, let’s suppose you (on the island, not knowing your own eye color) see exactly three blues. The simple question is:
Will those three blues leave on Day 3?
Will they or won’t they? And if you can’t answer because you don’t have enough information, then what information would you need to be able to answer?
Notice that you don’t actually have to worry about nesting here. The inductive “counting argument” stands completely on its own; the nesting is just the deeper explanation for why it works.
6 March, 2013 at 10:38 am
John Graeme
Lenoxus, it was because of the above inconsistency that I admitted I was wrong, in the post above. Thanks for tidying that up in any case. By the way, do you believe that the nesting, extended-hypotheses argument suggests that the islanders are in genuine ignorance of whether there are any blue-eyes on the island, or is it the conclusion of logical processes that they are bound to follow? I also wanted to add that if you are, or are thinking of becoming, a teacher, you certainly have the patience for it.
6 March, 2013 at 12:44 pm
Lenoxus
Darn it, I had either skipped past or misunderstood when you said “The game cannot be stopped!”. Thanks for the compliment, anyway!
Well, if interpreted literally this is not the case, of course. If there are just two or more blue-eyed people, then everyone knows there are blue eyes. However, if you meant “everyone knows” as a shorthand for “[everyone knows] x infinity”, then yes. No matter how many blue-eyed people we add, it cannot become common knowledge (in the strict logical sense) that blue eyes exist.
However, the logical process never comes into conflict with any islander’s knowledge, which would indeed be paradoxical, irrational even. It’s not like an islander is saying “On the one hand, I see 99 blue-eyed people, but on the other hand, logic compels me to reject this evidence.” Rather, the direct observation and the logical process go hand-in-hand.
================
I think I can give another simple counting argument to establish this as well. Consider the following three premises:
P1. If there are exactly N blue-eyed people, then each of them will see exactly N-1 blue-eyed people.
P2. Any individual islander can always consider the possibility that he does not have blue eyes.
P3. Any individual islander is capable of considering P1, P2, and P3. (Note the infinite recursion in this one!)
All we must now do is iterate the above premises. Suppose there are 100 blues. By P1, they each see 99 blues. By P2, they can each consider themselves to not have blue eyes, making the total 99. By P3, each of those (hypothetical) 99 would repeat the process. By P1, each of those 99 would see 98. By P2, each of those (meta-hypothetical) 98 blues may consider himself non-blue-eyed. By P3, a set of 98 blues would consider all three propositions. And so on.
There is no point at which P1, P2, or P3 becomes false until we reach 0 blue-eyed people, who are of course not capable of seeing anyone or making deductions (because there are 0 of them).
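The iteration of P1–P3 can be written out as a tiny sketch (my own illustration, under the three premises above; the function name is invented):

```python
# Iterating P1-P3: each hypothetical layer contains one fewer blue, because
# by P2 one more islander may suppose his own eyes are not blue.  The
# premises only stop applying at 0 blues, where no one is left to reason.

def hypothetical_worlds(n_blues: int) -> list[int]:
    """Blue counts in each nested hypothetical world, outermost first."""
    worlds = []
    while n_blues > 0:
        worlds.append(n_blues)  # P1: each blue here sees n_blues - 1 others
        n_blues -= 1            # P2: he may suppose he is not blue himself
    return worlds               # P3: every layer can repeat this reasoning

print(hypothetical_worlds(5))   # prints [5, 4, 3, 2, 1]
```

Starting from 100 actual blues, the chain runs all the way down to the one-blue hypothetical before stopping.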
================
We can also try to demonstrate the absence of the common knowledge by another “counting argument.” In this context, “relevant common knowledge” will mean Nth-order knowledge, where N is the number of blue-eyed people. (So if there’s just one blue-eyed person and he knows he has blue eyes, then that’s relevant common knowledge. If there are 15 and [everyone knows]x15 that someone has blue eyes, then that’s relevant common knowledge.)
Consider a set of islanders who have relevant common knowledge of blue eyes. We can remove any one of the islanders. If we do so, we must also remove a single “everyone knows” from our description of the situation. We can do this safely because the islander we removed knew that the remaining part of the sentence was true.
A sidenote: consider a simple situation where “Jared knows that the ball is in the box” is true. If we remove Jared, then “the ball is in the box” must remain true, because Jared didn’t just think it, he knew it.
For an island with 100 people, if it is true that [everyone knows x 100] that someone has blue eyes, then any one person knows that [everyone knows x 99] that someone has blue eyes. We can remove a person, and it will now be the case that [everyone knows x 99] that someone has blue eyes.
Perhaps you now see where I am going with this. After you have removed one person and one “everyone knows”, the process is repeatable. The number of “everyone knows” and the number of people started out the same, and if you repeat the act of removing one of each, they will both decrease at the same rate until both reach 1.
What’s special about 1? Well, in a sense, nothing really. But at that point, we are in a situation where there is a lone blue-eyed person, and also “everyone knows” (meaning simply that he knows) that someone still on the island has blue eyes. Yet there is no apparent way for him to know this. (No visitor has shown up in this example.)
What’s my point in all this? To put it simply: This demonstrates that if it is necessarily the case that 100 blue-eyed people would have common knowledge that blue eyes exist, without any outside information having been given, then it follows that 1 blue-eyed person would also have this knowledge. However, it is false that 1 blue-eyed person must have this knowledge. Therefore, by modus tollens, it is false that 100 blue-eyed people necessarily would have that common knowledge.
================
One objection I can foresee people making to this is that the situation with nested thinking is different from my Jared example, because that example didn’t involve Jared thinking about other people thinking about himself, so subtracting Jared really would make an important difference. However, in actuality, the common knowledge situation doesn’t involve that either! Sometimes we say “everyone” when a better term would be “everyone else”.
Consider the situation with just 4 blues. It may or may not be the case that: Each of the four knows that each of the other three knows that each of the other two knows that the other one knows that blue eyes exist. That’s just the more complicated, detailed way of saying that “[Everyone knows x 4] that blue eyes exist.” And with that example, it should be clear that we can do the elimination thing. Remove any one person, and now it is true that “Each of the other three knows that each of the other two knows that the other one knows that blue eyes exist.”
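If it helps, the removal argument above comes down to simple arithmetic; here it is as a sketch in code (purely illustrative, not part of the puzzle itself):

```python
# The elimination argument as arithmetic: pair up people with layers of
# "everyone knows".  Removing one islander lets us drop one layer, because
# the removed islander knew the remaining statement was true.  Starting
# from equal counts, both reach 1 together: a lone blue-eyed person who
# somehow "knows" blue eyes exist, which is absurd with no announcement.

def strip_down(people: int, layers: int) -> tuple[int, int]:
    """Remove one person and one 'everyone knows' at a time."""
    while people > 1 and layers > 1:
        people -= 1
        layers -= 1
    return people, layers

print(strip_down(100, 100))   # prints (1, 1)
```

By modus tollens, since the (1, 1) endpoint is impossible, the starting assumption of relevant common knowledge among 100 blues must be false.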
7 March, 2013 at 12:53 am
John Graeme
“[everyone knows] x infinity” Is that possible? To say “I know what I know” is tautological; the same applies with “everyone knows that everyone knows that everyone knows” (i.e. x3)… we can’t keep knowing what we know forever, surely?
7 March, 2013 at 8:01 am
leopold
To say “I know what I know” is tautological, but to say “I know that I know” is not. In the case of exactly two blue-eyed islanders, everyone knows that there are blue eyes, but everyone does NOT know that everyone knows that there are blue eyes. In particular, neither of the blue-eyed islanders knows that everyone knows there are blue eyes, since neither knows that the other one can see any.
7 March, 2013 at 2:00 am
John Graeme
Further to the above, you must mean in a chain of nested hypotheses, i.e. “everyone knows the next person knows” ad infinitum…the next person of course being everyone else ad infinitum, as you point out.
Just to be clear though, are you saying that it is NOT possible for the group to be aware (eks eks) that there is any number of blue-eyed islanders? If you can answer that, I can then make full sense of the above.
7 March, 2013 at 2:14 pm
Lenoxus
Short answer: Yes. (if the “eks eks” are included in the question).
In general, with N blue-eyed people: Everyone knows there are at least N-1 blues. Everyone knows that everyone (else) knows there are at least N-2 blues. Everyone knows that everyone knows that everyone knows there are at least N-3 blues. And so on, until we reach N-N blues, or zero.
At that point, we have reached a trivial truth. For any set of N thoroughly logical people whose intelligence and logic is itself common knowledge, it must be true that everyone knows (times N) that at least zero blue-eyed people are among them, in the same way that at least zero of them have red hair, at least zero of them have pet elephants, and so forth.
I’m going to repeat something I said earlier, which resembled some of my other statements closely but was not identical.
Try filling in this sentence: A knows that B knows that C knows that D knows that E can see that ___ has blue eyes.
Can the letter A, B, C, D, or E be put in there? If so, why? If not, why not?
7 March, 2013 at 2:38 pm
John Graeme
Lenoxus, thanks for your help, and good luck with your projects.
8 March, 2013 at 5:29 pm
Lenoxus
You’re welcome and thank you.
21 March, 2013 at 9:58 am
John Graeme
Just thinking about this puzzle again, does anyone have an explanation for the following.
If all islanders hypothesise, in a series of nested hypotheses, that their neighbour is looking at a blue-eyed person and wondering if they know they have blue eyes, etc., this must mean all reason that at the end of the line (islander 100, if there are 100 blue-eyed islanders) there is a blue-eyed person whom the penultimate islander (No. 99) is looking at. So this must mean all acknowledge there is at least one blue-eyed islander. That islander can’t be looking back at anyone else without blue eyes, as this would involve the problem Lenoxus pointed out above: A looking at B looking at C looking back at A, apparently not possible.
The existence of one blue would make the foreigner’s announcement redundant, but would also cause problems for, e.g., three islanders who know it is not common knowledge that there is one blue, i.e. a problem crossing from the basic induction reasoning to the extended hypotheses. Any answers?
10 April, 2013 at 12:57 am
Anonymous
Here is a more difficult puzzle like this one!!
http://e-riddles.blogspot.gr/2013/04/neil-armstrong-landed-on-moon.html
26 May, 2013 at 12:03 pm
Luis
If they were logical, why wouldn’t they just remove their eyes and not have to commit suicide, since they would then have no eye color? lol. But seriously, if they aren’t allowed to say whether someone is right or wrong, then the only person creating the suicides would be the traveler, so whoever he told would then kill themselves. I mean, I could one day say my eyes are blue, then the next day say they are brown… the only person to say my color is the traveler… And when would they have come up with this rule? I think this test is dumb.
26 May, 2013 at 12:28 pm
Luis
Also I will add that, most likely, since he addresses everyone at once when he states it, they would all assume they were the ones with blue eyes and kill themselves, after of course they force the traveler into suicide…
1 June, 2013 at 3:54 pm
Emery
https://github.com/em/blue-eyes
1 June, 2013 at 4:44 pm
Lenoxus
Your variant doesn’t work, because the base case can’t hold up.
Suppose you are the sole blue-eyed person and everyone else is brown-eyed. How can you logically prove that you must be blue-eyed? No matter how many days go by, the situation from your perspective would be the same as being the sole green-eyed (or purple-eyed, red-eyed, etc) person.
Now suppose you are one of exactly two blue-eyed people. How would you prove it this time? You can’t, because for all you can tell, you are still the sole [something]-eyed person. We’ve already established that if there were just one, then s/he would have no reason to ever leave. Hence, you can’t learn anything from him/her not leaving.
By extension, there is no number of people, for any eye color, that is sufficient for a given person to logically prove their own eye color. Some additional piece of information needs to be given. It also has to be given publicly, so that it becomes common knowledge (which it wasn’t before).
So what if anything is wrong with your original chain of logic? I would put it this way: You are making something “special” out of “everyone knows that everyone knows X.” That might be called two layers of knowledge. Here’s the thing: There is nothing special about two layers. It’s different from three layers, or “everyone knows everyone knows everyone knows X.” And these are each different from four layers, etc.
The only thing special about a two-layered situation is that that is usually where the human brain wants to stop. But it’s not where these islanders stop. These islanders can think up to any number of layers. In order for all the blues to die, they require a number of layers equal to the number of blues. So if there were just one blue, then he would die once it became true that “everyone knows” at least one blue exists (because “everyone” would include him). But it wouldn’t be enough in the case of two blues, because each of those two blues wouldn’t know that the other blue knows it. And so on.
A public announcement immediately makes it the case that “everyone knows x infinity” that blue eyes exist. It wasn’t the case before.
To repeat and clarify, you said this:
But that’s not the case and here’s why. Suppose you are on the island with exactly six other people, of whom three have blue eyes and three have brown. So it is correct to say that everyone sees at least two blues, meaning everyone knows that everyone sees at least one blue, or “everyone knows that everyone sees all other blue eyes”. Three nights go by without suicide. What color are your eyes?
2 June, 2013 at 1:37 pm
Emery
What I’d really like to do is demonstrate the blue eyes algorithm in Javascript. I would like nothing more than to show that it meets the original specs of the puzzle. But I’m really not getting what you are saying.
If you don’t mind helping me. I’d like to go one step at a time.
The biggest thing I don’t understand is where these “layers” come from. I think what you’re saying is the algorithm would need recursion. But I don’t see how that’s necessary.
I think what you’re saying is that, if we have islanders A, B and C:
A needs to know:
does A know there are blue eyes
does B know there are blue eyes
does A know that B knows there are blue eyes
does B know that A knows there are blue eyes
But all of that compounds simply to “does everybody know there are blue eyes and does everybody know that this is common knowledge”.
Are you sure you aren’t making it more complicated than it needs to be, and in doing so obscuring an illogical state transition?
2 June, 2013 at 1:39 pm
Emery
I forgot to add
does A know that C knows there are blue eyes
does B know that C knows there are blue eyes
2 June, 2013 at 2:09 pm
Lenoxus
It generally would. However, one can also prove the puzzle without recourse to recursion; it’s just that recursion is the “whole story.”
Here’s an attempt to prove, without using (as much) recursion, why it works once a public statement “At least one person has blue eyes” is made.
1) If there were just 1 blue-eyed person, he would die on Day 1.
2) If there are in fact 2 blue-eyed people, they will die on Day 2. Why? Because each one observes just 1 blue and thinks: “Either there is just one blue, or there is not and I am also blue. In the former case, that blue will die on Day 1. In the latter case, I am also blue. So if no deaths happen on Day 1, that means I will kill myself on Day 2.”
3) We’ve just proven that if there are exactly 2 blues, they will die two days after the announcement. So suppose there were 3. They would know that either there are 2 blues or 3, and if there are 2 then those 2 will die on Day 2. Hence, after no deaths happen on Day 2, the 3 blues kill themselves on Day 3.
4) In general, if we know that N blues (for some particular N, such as 2) would kill themselves on Day N, then N+1 blues, upon seeing the N blues not killing themselves, will deduce that there are not N blues, meaning there must be N+1 including oneself, and so the N+1 blues kill themselves on Day N+1.
5) Hence, in general, all X blues will kill themselves X days after a public announcement like “At least one of you has blue eyes.”
This same train of logic can be flipped around for the situation where no announcement is made. In that case, there’s no reason for 1 blue to die. If 1 wouldn’t die, then 2 blues wouldn’t draw any conclusions from seeing the other one not die. So 2 are “safe.” And if 2 are safe, then 3 are safe as well, because they have no reason to draw a conclusion from 2 not dying. In general, if N blues would have no reason to kill themselves, then neither would N+1.
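Both directions of this argument can be checked with a little simulation (my own sketch, not from the original discussion; it models a single blue-eyed islander keeping track of which world she might be in):

```python
# One blue-eyed islander's reasoning as world-elimination.  Seeing n-1
# other blues, she entertains two worlds: "n-1 blues" (her eyes are not
# blue) or "n blues".  Inductively, in a world with k blues everyone
# leaves/dies on day k; each uneventful day therefore rules out the smaller
# world.  Without the public announcement, the "0 blues" world is never
# ruled out for a lone blue, so the induction has no base case.

def departure_day(n_blues: int, announced: bool, max_days: int = 365):
    """Day on which all blues act, or None if they never do."""
    if n_blues == 0 or not announced:
        return None
    worlds = {n_blues - 1, n_blues}  # blue counts she considers possible
    worlds.discard(0)                # the announcement rules out 0 blues
    for day in range(1, max_days + 1):
        if worlds == {n_blues}:      # only one world left: she is blue
            return day
        # nothing happened today, so worlds resolving by today are out
        worlds = {k for k in worlds if k > day}
    return None

for n in (1, 2, 3, 100):
    print(n, departure_day(n, announced=True))
# prints 1 1, 2 2, 3 3, 100 100 (one pair per line)
```

With `announced=False` the function returns None for every n, matching the flipped-around argument above.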
You’ve read the Wikipedia page about “common knowledge”. Do you understand the difference between that and “mutual knowledge”? If something is common knowledge, that means that “Everyone knows that everyone knows that [times any number of repetitions] X is true.” So if something is common knowledge (for a particular group), then it must also be the case that everyone knows it is common knowledge.
Suppose there are three blue-eyed people, and no visitor. Is it common knowledge that blue eyes exist? It would seem so, but no, it is not. A does not know if B knows if C can see anyone with blue eyes. Whose eyes would this hypothetical C be seeing? Not A’s, because A doesn’t know if his own eyes are blue; not C’s, because A knows that C can’t see his own eyes; and not B’s, because A knows that B can’t see his own eyes.
There is no minimum number of blues for which the existence of blue eyes can become true common knowledge. That’s because what is common knowledge is that no one knows her own eye color. Each person accounts for their own ignorance, and accounts for the next person accounting for his own ignorance, and accounts for the next person accounting for the next person accounting for her own ignorance, and so on.
Since you began with a link, I probably shouldn’t have written all the above and should instead have linked to this page where I discuss the problem and common related arguments. Warning: that page is pretty long. I guess I really like writing about it…
2 June, 2013 at 1:40 pm
Emery
And you are right, my variant does not take into account n=1.
10 July, 2013 at 2:39 am
Ann
I am sorry that I have translated this puzzle into Chinese without your permission. I beg your forgiveness…
My English is not very good, and if there is anything offensive, please forgive me and tell me, and I will delete the post as fast as I can.
Here is the website:
http://www.douban.com/group/topic/41131932/?start=0
16 September, 2013 at 3:59 am
John Sidles
An alternative telling — that for some folks may be more fun (and easier to remember) — is to eliminate the foreigner, and instead assume that the islanders love to tell scary stories.
We specify that the following scary story is told upon every island “A trusted foreigner comes and says ‘I see a blue eyed person.'”
The puzzle question is, “Why does everyone, on every island, experience this story as genuinely frightening?” In other words “Why is this story’s scariness universal?”
On islands where everyone is brown-eyed, the story is universally scary for the obvious-to-islanders reason, that anyone hearing that story would have to kill themselves that very day. Scary for everyone!
On islands with exactly one blue-eyed person, that person experiences the story as scary for the same reason as the persons on the all-brown-eye islands. The remaining brown-eyed people experience the story as scary too, but they reason differently: “This story is scary (to me who sees exactly one blue-eyed person) because, if I heard that story, and no-one on my island killed themselves that same day, then I would have to kill myself on the second day!” And so, again, the story is scary for everyone, both the brown-eyed majority and the single blue-eyed listener!
By induction, the story is scary to everyone on every island, regardless of the number of brown-eyed and blue-eyed people. Because no-one can imagine hearing this story without saying to themselves “If ever I heard this story, in real life from a trusted foreigner, then either many of my friends would have to kill themselves, or after a while I would have to kill myself!”
Summary This puzzle could alternatively be called “The universal scary-story puzzle” because it amounts to asking “Why is this story scary to every islander?”
Perhaps unconsciously, this “scariness” helps make the puzzle attractive even to us in the real world … because it vividly shows us the scary power of mathematical narratives.
16 September, 2013 at 4:35 am
John Sidles
Remark It contributes greatly to the story’s scariness, that every islander is allowed (by custom) to say “Wow! That is a SCARY story!” … and yet paradoxically, every islander is strictly forbidden (by custom) to share with anyone their personal understanding of why the story is scary.
Remark The social isolation that is implicit in the scary foreigner story is itself terrifying … and this is another reason why the scary foreigner puzzle is fascinating on many levels.
Possibly the Most Scary Islander Realization “Each of my friends, and even I myself, possesses the unilateral power to wreak terrifying destruction upon our peaceful island … simply by telling a true mathematical story.”
Because there are elements of reality in this conclusion, the Blue-eyed Islanders Puzzle surely ranks among the most scary mathematical puzzles ever posed!
29 September, 2013 at 9:11 am
Kai Kaufmann
I invite readers to try out http://www.facebook.com/ThePanopticonPuzzle, which is of complexity O(N^3) and not just O(N) like the “blue eyed islanders puzzle”, thus being a serious contender for xkcd’s title of ‘hardest logic puzzle in the world’. The first person after me to solve one of its two parts will receive a prize of $1500 or $150, respectively.
1 November, 2013 at 8:26 am
Drew
It seems the key ingredients to this puzzle are that
a) all islanders hear the outsider speak simultaneously
b) there is a set, timed procedure for the consequential action.
Interesting example of how speaking out in public what everybody already knows can lead to trouble.
28 November, 2013 at 9:47 am
Lucca
What prevents the brown-eyed people from thinking they have blue eyes like the actual blue-eyed do, and thus killing themselves?
13 December, 2013 at 5:43 am
ConfusedMe
If there are 4 blue-eyed people in a circle, none of whom knows their own eye color, then everyone knows this:
* That there are 4 people total.
* That everyone can see what any one person sees, minus themselves.
* If one person sees 3 blue-eyed people, then one of those blue-eyed persons sees at least 2 blue-eyed people, and one of those 2 blue-eyed people sees at least 2 other blue-eyed people.
* Everyone knows that everyone knows this, times infinity.
Now that everyone knows there are at least 2 blue-eyed people, they begin to wonder why the two people have not left during the night. It has to be three, but the three don’t leave either, so it has to be four; all leave on the fourth night.
Guru/stranger is useless?
31 May, 2014 at 10:14 am
Paul
One more try.
There are n blue-eyed people in the tribe.
After any number of days nothing happens because nobody knows their own eye colour.
Then the traveller announces to everybody that there is at least one blue-eyed person in the tribe.
Proposition. Suppose that the tribe had n blue-eyed people for some positive integer n. Then n days after the traveller’s address, all n blue-eyed people commit suicide.
For the inductive argument we need to prove:
1) that it is true for n=1,
which is the case
2) that if it is true for n-1 then that implies it is true for n
Looking at the second part:
if there are exactly n blue-eyed people then they each from the start
see n-1 people with blue eyes
and
know that there are either n-1 or n blue-eyed people in the tribe.
After n-1 days:
the n-1 have not committed suicide because they do not know their eye colour at that time.
Now if it is true that if there were only n-1 persons in the tribe with blue eyes, then those n-1 would know that their own eyes were blue,
then that implies that if the n-1 don’t know that their eyes are blue then there are n persons in the tribe with blue eyes, and those n can deduce that their own eye colour is blue.
But there is another reason why the n-1 might not know their eye colour at that time;
if what the traveller said doesn’t make it a logical necessity that they have blue eyes.
In that case still nobody knows their own eye colour and life continues as before.
Considering exactly how many blue-eyed people are on the island when the traveller speaks:
If there is only 1 blue-eyed person when the traveller speaks:
he or she will see no-one else with blue eyes and so know that they are the one referred to and commit suicide the next day.
If there are 2 with blue eyes when the traveller speaks:
initially each will see 1 with blue eyes and will not know his or her own eye colour
so will know that the traveller has spoken with n=1 or n=2
if when n=1
then the 1 person will know immediately that his or her eyes are blue and commit suicide the next day.
if he or she doesn’t commit suicide then both of them know at that point that n=2 and their eyes are blue and commit suicide the next day.
If there are 3 with blue eyes when the traveller speaks:
initially each will see 2 with blue eyes and will not know his or her own eye colour
so will know that the traveller has spoken with n=2 or n=3
if when n=2 (*)
initially each will see 1 with blue eyes and will not know his or her own eye colour
so will know that the traveller has spoken with n=1 or n=2
if when n=1
this contradicts the previous knowledge of all three that the traveller spoke with n=2 or n=3,
so n is not 1 and nothing further can be deduced from the hypothesis (*) that n=2,
in particular not that if there were 2 then they would know after one day that their eyes were blue and commit suicide the next day.
So the fact that the proposition is true for n=2 does not imply that it is true for n=3, which causes the induction argument to fail.
Or then again I could be wrong :)
9 March, 2015 at 2:20 am
ZeReoN
If I might explain how n=3 works:
To try and simplify, take the following scaled down experiment.
4 people (A, B, C and D) sitting in a square, each can see the other 3, but naturally not their own eye colour.
Same rules apply: If anyone deduces that their own eye colour is blue, they have to ring the bell. They CAN ONLY ring it at the start of every hour, and then they leave the room.
Here are the assumptions that need to be given for this to proceed:
1. All players (A, B, C and D) are equally logical and able to think independently.
2. No communication is done and nobody looks at the blue eye guy strangely.
3. Everyone can see each other for EVERY SINGLE SECOND of, well, the time they’re sitting down there. Nobody goes to the toilet or anything in the middle.
At T=0 hours, blindfolds from all of them are removed, and a 3rd party’s voice echoes over the speaker, saying “One or more people in this room have blue eyes.”
The clock starts.
(——————————-Start of n=1 scenario, Scenario 1————————)
This works for when n=1, since assuming A is the blue eye guy (abbreviated henceforth as BEG), this is what he sees:
A: “B, C and D all have brown eyes. But the 3rd party mentioned one or more, so assuming it is 1, AND THAT there are 4 guys here, I know 3 of them have brown eyes, so the one with blue eyes must be me. I’ll go and ring the bell at T=1 hours.”
Thus when n=1, the removal of n BEGs at T=n hours holds true.
(————————————–End of n=1 scenario, Start of n=2 scenario, Scenario 2——————-)
T=0, the first few seconds after the announcement. A and B both have blue eyes.
A: “I see that B has blue eyes. I assume that I have brown eyes. Therefore, the announcement didn’t tell me anything I didn’t already know, since it said ‘there are one or more.’ The best case I’m assuming is ‘one’, which is that BEG I see right in front of me, B. So he will ring the bell at T=1 and nothing will happen.”
T=1, nobody rings the bell, because neither A nor B have figured out their own eye colours. The announcement said n=1 or n>1, so both (A & B) assume that it has to be n=1 between T=0 and T=1, so they both expect the other BEG they see to ring the bell at T=1.
Between T=1 and T=2,
A: “Well, shucks. Assuming B was perfectly logical AND followed this train of thought, he would have pressed the bell at T=1. If he didn’t and I didn’t, that means we both were wrong that n=1, and thus n must be >1. I see that C and D are brown. n>1 could mean that n=2, 3 or 4. But since I see 1 pair of blue eyes and 2 pairs of brown eyes, and there are 4 people total in this room, it can’t be 3 or 4 blue eye pairs, since 4 pairs of eyes – 2 pairs of brown = 2 pairs of blue. It can’t be 1 pair of blue as I’ve already realized, so I must be the other blue eye, and I’ll ring at T=2 since I can’t ring between T=1 and T=2.”
T=2, A and B both go ahead and press the bell and leave the room.
Similarly, it still holds true that after T hours, n blue eye people will leave the room where T=n. In this case, it took 2 hours for 2 blue eye people to leave the room.
(————————————–End of n=2, Scenario 2 has been proven.—————————)
(—————–Start of n=3, Scenario 3——————————————-)
Now for n=3 (A, B and C are blue, D is brown eyed), which I don’t get.
_________________________T=0, blindfolds off and announcement has been made a few seconds ago.
A: “I see 3 other people in this room. Including me, that’s 4 guys. 4 pairs of eyes. I see two other BEGs and 1 brown eye dude, Mr. D. The announcement said there are ONE or MORE blue eye dudes in this room. Well obviously it can’t be one, I clearly see 2 dudes (B and C) with blue eyes in this room. It’s not wrong, but perhaps the guy was going for the ‘more’ part rather than the ‘one’ part.
My first task is to figure out if I’m blue or not. The only way I can know is to subtract the number of brown guys from the total number of guys in this room. That will leave me with the number of blue eye guys, since there aren’t any hidden eyes anywhere.
Hypothesis (1): Assuming I am brown. I see D is brown too. B and C are clearly blue. So 4-2 brown guys (me and D) = 2 blue guys (B and C). This is consistent with the n=1 or n>1 statement that the 3rd party said when T=0. How can I prove or disprove this hypothesis?
Hypothesis (2): Assuming I am blue. I see D is brown. So 4-1 brown means that there must be 3 blue guys (me, B and C). This is also consistent with the n=1 or n>1 statement that the 3rd party said when T=0. How can I prove or disprove this hypothesis?
Working from the first hypothesis (1), that I, A, am brown and there are only two blue guys in the room (neither of which is me), and taking into account the assumption that B and C are highly logical like me, they (B and C) would each see only one other blue eye guy, because (I am assuming) I have brown eyes.
So they will be waiting for the other guy to ring the bell at T=1, based on their own assumption that they are brown, and the other person they are seeing is the only blue guy, in which case n=1.
When that doesn’t happen, then they’ll have to revise their assessment that n=1, and figure out that n=2, and as they see only one other blue eye guy in front of them, they must be the other blue eye guy. Both B and C will arrive at this conclusion independently, and by T=2, both would have rung the bell.
To make it simpler, putting myself (A) in B’s shoes, he will think as such FROM T=0:
(————-Start of A during T=0 to T=1 pretending he (A) is brown, imagining what B will see—————-)
B: “I have only two assumptions, n=1 or n=2. n cannot be 3 or more, because I clearly count 4 fleshy bodies (including myself) in this room. Mr A and Mr D both have brown eyes. Therefore if I am brown, n must be 1 (4-3=1). If I am blue, n must be 2 (4-2=2).
Let me run a scenario of 1 blue guy. *Snipping from Scenario 1 above where n=1*: Logically speaking, the NET result when n=1 is that one dude there, Mr C, ringing the bell at T=1.
Let me run a scenario of 2 blue guys. *Snipping from Scenario 2 above where n=2*: Logically speaking, the NET result when n=2 is me and Mr C ringing the bell at T=2.
I (B) reached this conclusion in Scenario 3 before T=1, because I’m awesome at thinking :)
So when T=1, Mr C should ring the bell. If he doesn’t, then the n=1 hypothesis is false.
If that is false, that means n=2 is correct (aka Scenario 2) (//which, from A’s point of view, is correct as he’s imagining (A) himself to be brown. So B and C will be Blue//), and as I (B) see A and D being brown, that means I must be blue. So I will ring the bell later when T=2. But I cannot ring the bell at T=1 because I cannot confirm whether n=1 hypothesis is correct unless Mr C FAILS TO ring the bell.
(—-End of A’s imagination between T=0 and T=1 of what B is thinking, but A continues thinking—)
From B’s POV/train of thought, he cannot tell if n=1 (in which I, A, am brown) or n=2 (in which I am also brown, seeing that I CAN SEE C is blue) unless C acts at T=1. The same Scenario 1 will happen for C, and as a result, neither of them will act at T=1, which will disprove their theories that n=1; the only logical theory left is n=2, so it will lead to them ringing the bell at T=2.
But working from Hypothesis (2), in which n=3 and I am blue, there would be no movements at T=2 because all 3 of us are trying to find out whether the n=2 theory (Scenario 2/Hypothesis 1) is true. Obviously, since we all see 2 blues, Scenario 1 where n=1 is definitely wrong. Since the n=2/Scenario 2/Hypothesis 1 theory can only be disproved at T=2 and not a second before, there will not be any movements before T=2.
The INACTION of the other players in the room in pressing the bell to signal they’re blue serves to disprove the hypotheses in the various minds that are running at the same time. It’s sort of like ‘I don’t know what’s correct, whether I’m blue or brown, unless X happens. Oh, there we go, X happens. I’m not brown, so I’m probably blue’ (assuming that there are only 2 colours of eyes, which is assumed in the original question).
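ZeReoN’s hour-by-hour bell experiment can also be sketched as a tiny simulation, assuming perfectly synchronized logicians. This is purely illustrative code of my own (the function name is invented, not from the comment):

```python
def ringing_hour(num_blue):
    """Hour at which all blue-eyed players ring the bell.

    The announcement makes 'at least 1 blue' common knowledge at T=0.
    Each hour that passes in silence raises the common lower bound on
    the number of blues by one. A blue-eyed player who sees k-1 blues
    deduces their own colour once the bound exceeds k-1, i.e. at T=k.
    """
    bound = 1               # after the announcement: at least one blue
    hour = 0
    while True:
        hour += 1           # the clock strikes T = hour
        if bound > num_blue - 1:   # each blue sees num_blue - 1 blues,
            return hour            # so now they know they are blue
        bound += 1          # silence: the bound rises by one
```

For n = 1, 2 and 3 this returns hours 1, 2 and 3, matching the three scenarios above.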
19 June, 2014 at 3:13 pm
sean
lots of variables to consider:
deaf people
babies with blue eyes
blind people
people who don’t try to find out their eye color
people who don’t follow the religion
blue-eyed people who die without committing suicide
more statements from the outsider
more…
21 June, 2014 at 5:43 am
My Math Diary
[…] to fundamental group, hanging pictures and homology group. Blue eyed islanders problem: SE, SE2, Terry Tao. [An interesting moral dilemma: the traveler can save 99 lives after his faux pas, by naming a […]
13 August, 2014 at 9:48 am
J Ryan
You only have to commit suicide if you know your own eye color. No islander will ever have a way of confirming their own eye color, regardless of the visitor’s comment, which does not provide any new information. Each islander, blue or non-blue-eyed, is an individual. The Nons can identify 100 Blues. The Blues can identify 99 Blues. Nobody can identify their own color. And remember, nobody can discuss eye color. Everybody assumes that the lack of suicide on the 100th day automatically reveals the presence of the 100th (and final) Blue. In fact, the only thing it reveals is that the Blues don’t know they are Blues, and nothing more. The lack of suicide doesn’t confirm to any of them that they are a Blue because each Blue can still logically assume that they are a Non.
To illustrate, assume you are one of the Blues. You hear the visitor’s announcement. You didn’t learn anything because you can count 99 Blues and logically determine that he is talking about those individuals and not you. Each Blue will individually draw this conclusion without having any way of identifying themself as a Blue. As each day passes, each Blue will think to themself, “I hope none of those Blues ever find out they are a Blue.” This was true even before the visitor came and made the announcement. You knew there were 99 Blues before the announcement was made. Even if you want to use that as the starting point… when that 100th day passes and you see that none of the 99 Blues committed suicide, you are instantly relieved that they didn’t learn their own eye color. Nothing about the lack of suicide allows you to logically conclude that you are a Blue as well.
25 September, 2014 at 6:54 pm
Dr.Bombay
Wrong.
I thought this was solved already, as the solution is pretty straightforward, but since false answers keep appearing, let me try to explain:
Let us say, for simplification, that there are 4 blue-eyed people (BEPs): A, B, C, D.
A thinks there are 3 BEPs.
A thinks that B thinks that there are 2 BEPs.
A thinks that B thinks that C thinks that there is only 1 BEP.
A thinks that B thinks that C thinks that D thinks there are zero BEPs.
The traveler inserts this new information (for A!!!):
“How will B act when he sees how C acts when he sees how D acts when he learns that there are more than zero BEPs?”
D did not commit suicide.
A thinks that B thinks that “this invalidates C’s hypothesis of D; D sees more than 0 BEPs. C should kill himself.”
But C doesn’t.
A thinks that this invalidates B’s hypothesis of C: C sees more than one BEP: B should kill himself.
But B doesn’t.
This invalidates A’s hypothesis of B. B sees more than two BEPS. That extra BEP is A. Therefore, A must kill himself.
And as everyone reasons as A does, all BEPs will kill themselves on the 4th day.
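Dr.Bombay’s countdown of nested hypotheses can be condensed into a short sketch (my own illustration; the function name is invented). Each silent noon invalidates the deepest surviving hypothesis in the chain:

```python
def suicide_day(num_bep):
    """Day on which all blue-eyed people (BEPs) die, per the countdown.

    The traveler establishes a common minimum of 1 BEP. Day 1 passing
    without a suicide kills the deepest hypothesis ('D thinks there
    are zero BEPs'); each further silent noon climbs one level up the
    chain, until a BEP who sees num_bep - 1 others learns his colour.
    """
    common_minimum = 1           # established by the traveler
    day = 1
    while common_minimum < num_bep:  # each BEP sees num_bep - 1 others
        day += 1                     # another noon passes, no suicide
        common_minimum += 1          # so there must be more BEPs than that
    return day
```

With the four BEPs A, B, C, D this gives day 4, as stated.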
6 October, 2014 at 7:18 am
Zach
That doesn’t make any sense. A would think C and D also thought there were 2 BEPs. A knows C and D can see B and D, or B and C, respectively.
The key is that A would expect B, C, and D to kill themselves on day 3. When they don’t, A would know that B, C, and D all thought there were 3 BEPs, and thus that A was a BEP.
16 November, 2014 at 8:09 pm
Bernardo Galvão-Sousa
I just came upon this post and I already knew this logic puzzle.
But I have another question to add, especially with your formulation.
Since the islanders only ever see blue or brown eyes, if they all assumed that there are no other eye colors, what happens after the foreigner’s comment?
20 November, 2014 at 12:44 pm
john riley
Suppose we have n robots of which 0<p<n are blue. Suppose further that their sole raison d'etre is to use logic to discover their colour and stop. They reason as follows:
I assume I am not blue.
1. If I see 0 blue I stop.
1. If I see 1 blue I stay. 2. If I see 1 blue I stop.
1. If I see 2 blue I stay. 2. If I see 2 blue I stay. 3. If I see 2 blue I stop.
……………………………………………………………………………………………
1. If I see p-1 blue I stay. 2. ……………………….. p. If I see p-1 blue I stop.
Thus all the blue robots stop after p steps. If the remaining n-p robots are the same colour they stop immediately.
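The robot rule can be run directly as a small simulation (an illustrative sketch of my own; the function name and colour labels are invented):

```python
def simulate_robots(n, p):
    """Simulate n robots, 0 < p < n of them blue, under the rule
    's. If I see s-1 blue I stop.'

    The blue robots are indices 0..p-1. Returns (step, robots stopping).
    The remaining robots, being all one colour, then stop immediately,
    as argued above.
    """
    colours = ['blue'] * p + ['other'] * (n - p)
    step = 0
    while True:
        step += 1
        # a robot stops at this step iff it sees exactly step-1 blues
        stopping = [i for i in range(n)
                    if sum(colours[j] == 'blue'
                           for j in range(n) if j != i) == step - 1]
        if stopping:
            return step, stopping
```

For example, simulate_robots(10, 3) stops the three blue robots at step 3.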
If we assume, on the other hand, that the islanders are sufficiently human not to be forced by their natures to apply remorseless logic at all times to their own destruction, there is no reason why they should ever give any thought to it. In fact they clearly never have; otherwise their little life would have lasted 100 days, give or take.
The visitor is a blue-eyed red herring.
24 February, 2015 at 1:01 am
Rosani
Okay, seems like the foreigner dude saw only one blue-eyed person, and that person, knowing that there are 99 people with blue eyes (adding the fact that he doesn’t want to die alone), will then point (or tell them that they are blue-eyed), which means all blue-eyed people will discover the colour of their eyes… and that way brown-eyed people can discover the colour of their eyes and kill themselves…
16 April, 2015 at 2:50 pm
Jafar Safavi
The 2nd argument is wrong! As I see it, the foreigner makes a public speech and does not tell the number of blue-eyed people. The only way a blue-eyed person could know his eye color is to know that someone else sees 100 blues while he sees 99, which shows that he is blue. So if the foreigner does not mention numbers, nothing changes.
16 April, 2015 at 2:54 pm
Jafar Safavi
And by the way, if there were only one blue, then the foreigner would have caused the blue-eyed person to commit suicide. But this does not work when there are 2 blue-eyed people, because then everybody, including the blue-eyed, already knows that there are other blue-eyed people there.
14 July, 2015 at 1:10 pm
Ken Laws
I have to disagree with the consensus that has been reached in this forum. (And, alas, with the Wikipedia entry on Common Knowledge, and what I perceive to be the consensus among logicians, and even with Peter Winkler’s wonderfully succinct Dot-Town Suicides explanation at https://math.dartmouth.edu/~pw/solutions.pdf. I’m going way out on a limb here.) If I am right, the 100 blues on the island would all commit suicide after the 3rd day rather than the 100th day, under certain reasonably motivated assumptions.
I recently took up the Green-Eyed Dragons problem, after a lapse of many years from my first exposure. That led me back to xkcd’s equivalent Blue Eyes puzzle, and then to the version presented here. There are currently 1361 posts in the Blue Eyes discussion (http://forums.xkcd.com/viewtopic.php?f=3&t=3), which I have not yet studied. This forum seems more approachable.
John Graeme was very close to the truth in his March 2013 arguments (though his conclusion that no blues would die if there were four or more on the island is a matter of culture and policy rather than mathematical necessity). Lenoxus and leopold were trying to win him to their own line of reasoning, and unfortunately succeeded. John folded too easily. Now I would like to take up his cause. The arguments below are my own, not derived specifically from anyone else’s.
I’m going to avoid any proof by induction. It’s not that I disagree with the approach, or that I object to basing such induction on counterfactuals. It’s just that I haven’t seen a valid proof of the inductive necessity for this problem. Let there be K blues on the island. (And let’s ignore the brown-eyed natives. They are irrelevant.) It’s true that induction works for K=2 and 3 given K=1. It also works for K=5 and above, starting from K=4. However, K=4 is a special case that must be considered individually.
I believe there was general agreement among John Graeme, Lenoxus, leopold, and others that with K=4, each blue (or non-blue, for that matter) can see at least three blues, and knows each of those three can see at least two blues and would reason that each of those two could see at least one blue.
Let me belabor that point. From our omniscient perspective, let us arbitrarily label one of the blues A. By arbitrarily, I mean that anything we prove using this label applies (“by symmetry”) equally to any other blue. We know that A can see (or knows of) three other blues as specific individuals with blue eyes. Let’s postulate that A thinks of them as B, C, and D, again in arbitrary order. (It would not work to let A label B and then have B label C, as we couldn’t all agree on who was C and whether C might be identical to A.)
With this labeling, we can agree that A (who does not know if he is a blue) can see three blues: B, C, and D. He knows that B can see at least two blues: C and D. And he knows that even if B imagined himself — B — to be a non-blue, B would know that C could see at least one other blue (D) and D could see at least one other blue (C). A knows that C and D would reason similarly about B, so B, C, and D must each know that there is at least one blue on the island. A also knows there is at least one blue (since he can see three of them), so all four know it. By symmetry, each blue knows there is at least one blue on the island.
Now for the crux of the argument: Is this mutual knowledge also common knowledge? As leopold wrote of the K=4 case on 3/3/13, “everyone knows that everyone knows there is at least 1 blue. But [he claims] none of the blues knows that everyone knows that everyone knows that everyone knows there is at least 1 blue, since none of them can be sure that any of the others can be sure that any of the others can be sure that the other blue(s) can see any others.”
I don’t agree, or even follow where that comes from. I make two assertions. First, I believe the existence of at least one blue is common knowledge. More generally, if there are K blues on the island, every blue sees K-1, knows that every blue sees at least K-2, and knows (as common knowledge) that every blue can see at least K-3 blues. (None of them knows the exact value of K, of course, or whether they themselves are blues, but they know this reasoning is true for whatever value of K is true.)
The argument I have presented above is one that any competent logician could construct. If even one of the blues could be inattentive or misinformed or slow witted, we could not know that they actually know what they “should” know. But that is not the case here. According to the Feb. 15 amendment to the puzzle, “any conclusion that can [be] logically deduced from the information and observations available to an islander, will automatically be known to that islander.” (This doesn’t state that the deductions are instantaneous, but it was intended. Otherwise these logicians could be as slow-witted as rock trolls.) Given their logical skill, every islander would understand and accept that the knowledge of at least K-3 blues is common knowledge.
(My omniscient K-based notation can be avoided, and possibly the logic chains are even simpler. Suppose a blue sees 99 blues. [We don’t care what non-blues see or know, but we do care what blues may or may not know.] He knows that if he is a blue, every blue sees 99; if he is a non-blue, every blue sees 98. K (the number of blues on the island) is at least 99, but some blues may only know that it is at least 98. If an arbitrary blue sees N blues, he knows that each blue sees at least N-1 and thus each knows K is at least N-1. No blue could see fewer than N=K-1 blues, so all know that all see at least K-2 blues and thus each knows the true K to be at least K-2. I believe this to be common knowledge, since every inhabitant can prove it and knows that every other can also prove it. I will use these formulas later, for the case of K=3. In that case each blue sees two blues and knows each can see at least one. Each also knows that K is at least 1, and also knows that all blues know this.)
I haven’t presented a formal proof that these lower limits are common knowledge, as far as I know. (I’m a computer scientist, not a mathematician.) But imagine yourself to be one of the stipulated 100 blues. You see 99 others; you know each of them sees at least 98; and you know that all know none could possibly see fewer than 97. You know that each of the others could — and therefore does — make the same inference. You can’t imagine anyone seeing at least 98 blues and thinking that there might be fewer. Yet there are those on this forum who have argued that you and the others must believe there could be one of you who could believe there is one of you who could believe … that there is no blue on the island. This is absurd. (And if Mathematics says it must be true, Mathematics is an ass.) It is something logic students have come to believe because it makes their elegant proof by induction work, even though it is inherently unbelievable.
But suppose I am wrong, and someone can prove me so. Then I offer my second assertion, that it doesn’t matter. We can agree that every blue knows that every blue can see at least K-3 blues (as at least mutual knowledge). I claim that is sufficient, whether or not it is common knowledge.
As a warm-up, consider the case of K=1. A can see no blue. A realizes that he is the blue mentioned by the foreigner and kills himself after the first day.
If K=2, A can see one blue: B. A reasons that if he himself is not a blue, B must see no blue. In which case [A reasons], B will realize that he himself is the blue mentioned by the foreigner. If A’s assumption is true, B will kill himself after the first day. When he does not, A realizes that he himself is a blue and kills himself after the second day. By symmetry, B also kills himself after that second day. [I raise some questions about this proof, in the “options” section below, but later set them aside.]
If K=3, A can see two blues: B and C. A reasons that if he himself is not a blue, B must see only one blue: C. In that case [A knows], B would reason that if he is not himself a blue, C sees no blue. In which case [A reasons that B reasons], C will realize that he himself is the blue mentioned by the foreigner. If the assumptions are true, C will kill himself after the first day. When he does not [A knows], B will realize that he himself is a blue and will kill himself after the second day. And when that doesn’t happen, A realizes that he himself is a blue and kills himself after the third day. By symmetry, all of the other blues kill themselves after that third day.
Now consider the case of K=4. A can see three blues: B, C, and D. He knows that each of those can see at least two blues, and all know that every blue can see at least one other. (This is the “K-3 minimum,” although no blue knows the value of K — or needs to.) A reasons that if he himself is not a blue, B must see only two blues: C and D. In that case [A knows], B would reason that if he is not himself a blue, C sees only one blue: D. In which case [A reasons that B reasons], C might then reason that if he is not himself a blue, D sees no blues. But that is not possible because C knows (whether common knowledge or not, as A knows that B knows that C knows) that D can see at least one blue. So [A knows B realizes], C must know that C and D are both blue. (And, by symmetry, D would also know this.) If the assumptions are true, both C and D will kill themselves after the first day. When they do not [A knows], B will realize that he is a blue and will kill himself after the second day. And when that doesn’t happen, A realizes that he is a blue and kills himself after the third day. By symmetry, all of the other blues do the same on that third day. [Third day, not fourth!]
This is the same kind of reasoning as for K=1, 2, and 3, except that knowledge of the K-3 minimum is stronger than the common-knowledge minimum provided by the blue-eyed foreigner. If each blue can see at least one other blue, there must be at least two blues. This cuts short the reasoning chain by one step and therefore all four blues die after day three rather than day four. (This is true for K greater than 4 as well: all of the blues die on the third day, regardless of their number. The reasoning chain only goes three deep before the minimum of K-3 blues cuts it off.)
Notice that there was no inductive hand-waving in this proof; no “and so forth” saying that case K=4 works just like (or depends upon) cases K=2 and 3. I am, however, leaving it to the reader to prove — by induction or by parameterizing on K and/or N — that K=5 and above follow the same pattern as K=4.
I will give you the K=5 case: A can see four blues: B, C, D, and E. He knows that each of those can see at least three blues, and all know that every blue can see at least two others. A reasons that if he himself is not a blue, B must see only three blues: C, D, and E. In that case [A knows], B would reason that if he is not himself a blue, C sees only two blues: D and E. In which case [A reasons that B reasons], C might then reason that if he is not himself a blue, D sees only one blue: E. But that is not possible because C knows (whether common knowledge or not) that D can see at least two blues. So [A knows B realizes], C must know that C, D, and E are all blue. (And, by symmetry, D and E would also know this.) If the assumptions are true, C, D, and E will kill themselves after the first day. When they do not [A knows], B will realize that he is a blue and will kill himself after the second day. And when that doesn’t happen, A realizes that he is a blue and kills himself after the third day. By symmetry, all of the other blues do the same on that third day. [Third day, not fifth!]
A problem with these solutions should now be obvious. Notice that the foreigner’s pronouncement wasn’t necessary to create the reasoning chain for K=4 and above (Actually, for K=3 and above, as I will show here.) Populations of fewer than three blues are stable in the absence of the foreigner’s comment, because the K-3 limit is less than zero and so conveys no extra information. For K=4 and above, the inference chain and “countdown” are self-triggering because all blues know that there are at least K-3 blues (even though they don’t know K). For K=3, that rule says each blue knows that each blue can see at least zero blues, which is ambiguous. (There might be one blue or none.) I invoke my previous proof that any islander who sees two blues [as here] knows there are at least two blues, all knowing (in common, I believe) that each sees at least one blue and that K is at least 1. Hence the K=3 case is also self-triggering.
Self-triggering is a problem for two reasons: because it makes the populations unstable so that we can’t know when “Day 1” occurred, and because the reasoners themselves can’t be sure that all reasoners are using the same “Day 1.”
Most of these puzzles include a statement about the island populations having been as they are since time immemorial. (Green-eyed dragons “have been living in blissful ignorance throughout the ages and haven’t seen a human for many centuries.” xkcd’s blue-eyed islanders have existed (or will exist) for “endless years,” presumably endless in the past.) That doesn’t mean the native population never changes, but we are meant to believe that the countdown chain has never previously been triggered (which could end that population forever, and would certainly mess with our thought experiment). Such an endless past isn’t stipulated in our particular version of the puzzle, though.
In the absence of the stipulation, it is possible that all of the blues in excess of two arrived in the recent past (or joined the community via initiation ceremony). On that day, or as soon as each was known to have made eye contact with the others, the countdown chain would be triggered. That would become “Day 1” and the blues would all commit suicide on the third day. It would not matter how many there were (if at least three) or whether a foreigner made an ill-advised remark at any time within those three days. From the foreigner’s perspective, the deaths would be entirely unpredictable — and definitely not his fault.
This is technically problematic even for the puzzles that do include a “time immemorial” stipulation, unless it extends infinitely far into the past. If there was a day of creation for the blue population, however far back, it would have triggered the countdown immediately. No more than two blues could be present on the oracular day, unless newly arrived. Thus such puzzles contain a logical contradiction if the number of blues is any value greater than two.
Let’s ignore that quibble, though, and postulate that no such line of thought is to be pursued. (This has to do with the culture of puzzle construction and intent-cognizant solution, which sometimes overrides mathematical rigor.) We then see the foreigner arrive at the island in its stated condition, with the countdown logic chain applicable — as is obvious to all perfect logicians — but not yet triggered. What does that imply for K=3 and above?
I see four possibilities (or options, if we puzzle solvers are to choose one), depending on the islanders’ policy for initiating this countdown chain. (Of course they could also follow a meta-policy, such as a rule that switches between these policies.)
First, self-protection. The islanders knew the foreigner could be a threat, so they may have agreed — explicitly or implicitly — that no statement about eye color by the foreigner could be considered reliable. Then the countdown chain is not triggered even for K=1 or 2. Whatever has been keeping it from triggering for K=3 and above continues to do so. (Perhaps this is also by mutual agreement.) The blues have already passed up an infinite number of opportunities to trigger those inferences, so why not one more? No blues commit suicide on the following days. We however, as recreational puzzle solvers, may take issue with this option. “Nothing happens” is an interesting result, and was difficult to figure out, but it didn’t derive from the problem statement and isn’t the conventional pattern for puzzles with oracular pronouncements. It’s more of a “lateral thinking” solution, and seems to violate the spirit in which the puzzle was offered.
Second, no synchronization policy. Synchronization to a specific “Day 1” isn’t logically necessary as presented. We have accepted (or postulated) that this was true in the past, and it could be true even in the face of oracular pronouncements. If any blue can’t be certain that all blues are following the same countdown chain (with synchronized days), the chain of inferences breaks (even for K=2). Reasoners wouldn’t know for certain that each is going to treat the foreigner’s announcement as a synchronizing event, so they can’t make any additional inferences from successive days without suicides. Either they always decide such reasoning is futile (and no one dies), or an indeterminate number accept the foreigner’s trigger cue, with unpredictable results and a loss of insight into other reasoners’ knowledge and thought processes. This can’t be what the puzzle author intended.
Third, “lazy inference.” Each blue realizes immediately that the stranger has imparted no new information, so no other chains of inference are triggered. This was John Graeme’s choice. It would prevent triggering in the K=3 case, but so what? That one wasn’t given to us as a “given.” As I noted above, though, the answer that “nothing happens” didn’t derive from the problem statement and isn’t very satisfying. Also, see the argument below that the foreigner can trigger a synchronization event even if imparting no information (for a logician’s usual concept of information).
Fourth, “spreadsheet updating.” This seems the most satisfactory option. No one mentioned eye color in the past, so the countdown chain was never triggered. Today the foreigner mentioned eye color, so — even though no new information was given — all reasoners update their beliefs concerning eye color. In doing so they discover that they could have come to the same conclusion previously, based on their common knowledge, had they just all thought about it on the same day. This seemingly violates the stipulation that “any conclusion that can [be] logically deduced from the information and observations available to an islander, will automatically be known to that islander,” but we can argue that these conclusions couldn’t be deduced prior to anyone mentioning eye color. Synchronization, or agreement on a “Day 1,” is a necessary precursor for K=3 and above, and synchronization was lacking.
OK, let’s go with that one. It’s reasonable and it gives the same result as the K=1 and 2 cases as analyzed above (though by a different mechanism, so further proof by induction is suspect). Fans of the Green-Eyed Dragons puzzle (http://io9.com/can-you-solve-the-hardest-logic-puzzle-in-the-world-1642492269) can now answer the question posed there: “What information did the foreigner impart?” It was not knowledge that there is at least one GED, nor even the common knowledge of at least one GED. It was the knowledge that “Today is to be considered Day 1 by all reasoners.” This is what triggers the countdown and the mass exodus of the GEDs (after Day 3, not after Day 100).
So now we have our answer: the 100 blues on the island all commit suicide after three days. Each blue sees 99 blues, knows that each of those sees at least 98, and knows that all agree there must be at least 97. The foreigner provides a synchronization event (because we insist it must be so, for a good puzzle), and the countdown begins. Because the known minimum is 97, the reasoning chain is limited to depth three and [I assert] common knowledge is present or is not required. The non-blues on the island don’t know that they are not blues, so they are also participating in these counts and deductions. However, the deaths of the blues always occur one day before any non-blue would be forced to conclude that he is a blue. (Nothing in the puzzle statement can prove any other eye color, so no non-blue could ever learn his own color. And all are perfect logicians, so none could possibly be fooled into believing he is a blue, since such a proof is necessarily false.)
14 July, 2015 at 4:38 pm
Lenoxus
I’d like to go over this with you one post at a time, if you’re up for it. Rather than address your reasoning directly (that could get very complex very fast), I’ll propose various hypotheticals to deal with some of your conclusions. I’ll start by focusing on this:
Suppose you are an islander, and therefore don’t know your own eye color. You and six others spontaneously come into existence at the same time, so we can consider there to be a “day zero” before any visitor arrives to say anything about eye color. We’ll simplify things by assuming that you only have to kill yourself if you determine your eyes are blue; brown-eyed people are permanently safe.
You see that three of the other islanders have blue eyes, and the other three have brown.
What will happen if your eyes are in fact brown? What will happen if your eyes are in fact blue?
17 July, 2015 at 3:52 pm
Ken Laws
And thus I was enlightened.
Hi, Lenoxus! Thanks for taking me seriously (if only to show me the way).
Good test case. I’ve worked it out as rigorously as I can, and found that my previous reasoning had flaws. I had reasoned only about what blues must know, and hadn’t considered it important to reason about what non-blues in the same circumstances know. No islander can know which is his own situation, so both lines of thought are important.
Had I been a mathematician using symbolic methods, most likely I would have seen that correct reasoning by a blue necessarily includes the non-blue line of reasoning. I thought I had covered that, but it was one level too deep for my intuitive approach. One more instance of a proof that was suspect because it didn’t use LaTeX. (I was good at LaTeX, 25 years ago…)
Anyway, here’s my full analysis for anyone who wants to slog through it.
What all can see and know immediately:
If my eyes are brown (though I don’t know it): K=3, though none of us knows it. I can see three blues. Each blue sees two blues and knows that every blue sees at least one blue. All blues know (as common knowledge, since it’s tautological) that every blue can see at least no blues.
My “non-omniscient N” reasoning only states what a blue can see and know. If I assume [wrongly] that I am a blue, K=4 and each blue (including myself) sees three blues and knows that every blue sees at least two blues, and knows (as common knowledge?) that every blue can see at least one blue. But if I assume I am not a blue, that reasoning doesn’t apply. I don’t know which is the true case, so I can only assert that every blue can see at least no blues.
I’d love to rescue my “proof,” so I’ll try my “non-omniscient N” reasoning for an arbitrary blue “B” that I can see. B sees two blues and knows that K is at least 2. B also knows that C and D can each see at least one blue, and that they know K is at least 1. C and D would be reasoning similarly, so all know that K is at least 2 and that each other blue knows K is at least 1. But that is not common knowledge! B can’t know if he is a blue, so he can’t know if C and D each see only one blue and thus know only that all blues see at least none.
Oops, I got that wrong in my original post. My argument required that each blue know K is at least three (or that he himself is a blue, or that each other blue sees at least two blues), which is not possible without an omniscient viewpoint. My assertion that the K=3 case is self-triggering was wrong. Each blue does know that K is at least 1, but can’t prove that each other blue knows K is at least 1. What I didn’t allow for is that it matters what brown-eyed reasoners think in these situations, since any given reasoner can’t know which case is correct.
My K=3 argument has now fallen apart, but I will soldier on to see what happens for the K=4 case (and to correct my previous post, lest it confuse anyone else).
If my eyes are blue (though I don’t know it): K=4, though none of us knows it. Each blue (including myself) sees three blues and knows that every blue sees at least two blues, and knows (as common knowledge?) that every blue can see at least one blue. Each blue knows that there are at least three blues on the island, and that every blue knows there is at least one.
[Note the question mark above. My assertion, in my original post, was that each reasoner can prove this K-3 minimum in the general case, and each knows the others must do so, so it becomes common knowledge. That result would remain true even though “I” am not an arbitrarily chosen “A” as in the proof. But is the result just mutual knowledge? And does that matter?]
I [as an islander] don’t know which case is true. I do know that at least the above K=3 minimums hold: each blue (including myself, if blue) sees at least two blues and knows that every blue sees at least one blue. All blues know (of course) that every blue can see at least no blues. I also know that the higher K=4 minimums above will apply if it is ever proven that the K=4 case is true, or apply hypothetically if I were to assume that my eyes are blue.
What I can infer from events:
If I were to assume that I am a blue, arriving at a contradiction would not establish my eye color. In the following I therefore always assume that I am not a blue, and likewise for the hypothetical reasoning of blues that I can see. (Given my previous gaffe, though, I’m a bit unsure that avoiding all “I am blue” assumptions is fully rigorous. Use of symbolic logic to cover all cases would be preferable.)
If my eyes are brown (though I don’t know it): This is the K=3 case, and I’ve already caved on this one. Nothing happens without a trusted oracular pronouncement.
B sees only two blues: C and D. B reasons that if he is not himself a blue, C sees only one blue: D. In which case [B reasons], C would reason that if he is not himself a blue, D sees no blues. B knows that can’t be true of D, because he knows D can see at least one blue; however, B doesn’t know if C knows that D can see at least one blue. But if a [trusted] foreigner says there is at least one blue, C knows that D would learn his own eye color and commit suicide after this first day. When that doesn’t happen, C would know his assumption is false and commit suicide after the second day — as would D, by symmetry. When they do not, B realizes that he is a blue and kills himself after the third day; C and D do the same (again, by symmetry).
Meanwhile, I would be following similar reasoning: If I assume [correctly] that I am not a blue, B must see only two blues: C and D. B would reason that if he is not himself a blue, C sees only one blue: D. In which case [B would reason], C might then reason that if he is not himself a blue, D sees no blues. But if a [trusted] foreigner says there is at least one blue, C knows that D would learn his own eye color and commit suicide after this first day. When that doesn’t happen [B reasons], C would know his assumption is false and commit suicide after the second day — as would D, by symmetry. When they do not, B would realize that he is a blue and kill himself after the third day; C and D would do the same (again, by symmetry). This in fact happens, and I am not forced to renounce my assumption. My eye color remains unknown.
If my eyes are blue (though I don’t know it): This is the K=4 case. The usual inductive proof holds, unless there is a source of common knowledge (or deep-enough knowledge) other than a trusted oracle. The result can also be shown non-inductively. In detail:
Assuming [falsely] that I am not a blue, B must see only two blues: C and D. B would reason that if he is not himself a blue, C sees only one blue: D. In which case [B would reason], C might then reason that if he is not himself a blue, D sees no blues. If all know there is at least one blue — or if C necessarily knows that D knows it — C knows that D would learn his own eye color and commit suicide after this first day. When that doesn’t happen, C would know his assumption is false and commit suicide after the second day — as would D, by symmetry. When they do not, B would realize that he is a blue and kill himself after the third day; C and D would do the same (again, by symmetry). When they do not, I realize that I am a blue and kill myself after the fourth day; all other blues do likewise.
If an oracular common knowledge was required, there is nothing special about the case K=4.
Is it certain that C knows that D knows there is at least one blue, even in the absence of an oracle? [That would make this case self-triggering, and cut short the reasoning depth.] C is one of the three blues that I can see, chosen arbitrarily and given that name by me. I know that C can see at least two blues (B and D), and that C would know that D can see at least one blue (B). B, however, cannot be certain of this and is in fact hypothesizing that he is not a blue, that C can see only one blue, and that D can see no blues.
Let me dispense with a couple of misleading paths: D can actually see three blues, in this K=4 case, and so can any blue. That is not sufficient to tell anyone their own eye color, which can only be done through an asymmetric “countdown chain” of refuted assumptions. Suppose that chain were triggered by C knowing that D knows there is at least one blue, without reference to B’s assumption. Then C sees two blues (B and D), and knows D can see at least B. Likewise D sees B and C, and knows C can see at least B. Neither can prove his own eye color, and neither commits suicide. Other blues can’t draw any inferences from that, so nothing happens.
Is it sufficient that B [assuming himself non-blue] knows that C [assuming himself non-blue] knows that D knows there is at least one blue? Only if I am also assuming myself non-blue, since otherwise D could believe that I am the one. No, the chain has to go all the way down, to four levels.
My original claim was that if there are K blues on the island, every blue sees K-1, knows that every blue sees at least K-2, and knows each of those blues can see at least K-3 blues. Each can prove this to be true, and knows that each other blue can prove it. I asserted that this abstract formula is common knowledge, even though K is not known to any of the blues.
Alas (for my argument), that doesn’t create a useful lower limit. To do so, each blue would have to know K, or that he is a blue. The formula states what must be true (and knowable from an omniscient viewpoint), but not what an islander could know to be true of other islanders’ knowledge.
In our current case, K=4 and so each blue sees three, knows that each can see at least two, and knows that each of those can see at least one. This is true in reality, but not true (or not applicable) under my necessary test assumption that I am not a blue (and thus K=3 and there might be a blue who sees no other blue). In that case I know that B knows that C knows there is at least one blue, but I can’t know that D knows it in the absence of an oracular pronouncement.
Summary:
I could keep pushing this around, trying to find some error in my current thinking that would re-establish my previous argument, but I don’t believe I would succeed. The case of 100 blues still doesn’t feel right to me, but that’s the whole point of such puzzles.
To summarize, I recant my previous heresy and accept the usual proof by induction. There is nothing special or self-triggering about the K=3 or K=4 cases. A trusted oracle (or foreigner) is required, and will trigger a mass suicide after 100 days. There is nothing unstable about the island culture in the absence of an oracle, and my discussion of various policies and options is irrelevant and hereby withdrawn. Mathematics is not an ass, and I apologize to her. I leave the field with head held reasonably high, though still uncertain of my own eye color.
Somewhere in Hell there is an island with 100 blue-eyed logicians…
19 July, 2015 at 4:33 am
Lenoxus
I’m gratified to read this. You clearly put a lot of thought into the problem both before and after my comment. Your last sentence made me chuckle. Cheers!
4 October, 2015 at 9:07 am
TET1968
I offer a permutation of the puzzle just for amusement value: “One of the islanders is blind. We do not know his/her eye color. This person knows there are 1000 total islanders.” How does this influence events?
31 December, 2015 at 4:29 pm
Anonymous
There is a brilliant and hysterically funny dramatization of this puzzle here:
http://slatestarcodex.com/2015/10/15/it-was-you-who-made-my-blue-eyes-blue/
31 December, 2015 at 9:42 pm
Peter Donis
@terrytao: “once the foreigner speaks in full view of all the islanders, it becomes common knowledge that there is at least one blue-eyed islander”
Wasn’t this already common knowledge? As long as there are 3 or more blue-eyed islanders, each islander can see at least two other people with blue eyes, and therefore each islander knows that each islander can see at least one other person with blue eyes. It seems like there is an implicit assumption that every islander knows that every islander knows what every islander can see, so every islander should know that every islander knows that there is at least one blue-eyed islander. And this should iterate indefinitely; every islander knows that every islander knows that every islander knows that there is at least one blue-eyed islander, etc. (In other words, the number of islanders that every islander knows has blue eyes does not continue to decrease as we add levels of “every islander knows”; it “bottoms out” at N – 2, where N is the number of blue-eyed islanders.)
In other words, shouldn’t the general rule be that, if there are N blue-eyed islanders, every islander knows (and knows that every islander knows, etc., indefinitely) that there are at least N – 2 blue-eyed islanders (i.e., this is common knowledge)? Thus, the foreigner can only convey new information, and therefore create new common knowledge, by observing that there are at least N – 1 blue-eyed islanders. (So in the case under discussion, he would have to observe that there are at least 99 blue-eyed islanders.)
1 January, 2016 at 5:15 pm
Peter Donis
“Wasn’t this already common knowledge?”
Never mind, I see what I was missing. Start with the case of 3 blue-eyed islanders. Call them A, B, and C. A can see that B and C have blue eyes, so A knows that B and C can each see at least one person with blue eyes. But A does *not* know, for example, that B knows that C can see at least one person with blue eyes–for if A, hypothetically, does not have blue eyes, then B can see only one person, C, with blue eyes, and in that hypothetical case, B would not know that C can see someone with blue eyes, since B does not know his own eye color.
So in the case N = 3, we see that the common knowledge does not iterate indefinitely; it fails at two levels of indirection (A does not know that B knows that C knows…). For the case N = 4, it would fail at 3 levels of indirection (A does not know that B knows that C knows that D knows…); and for N blue-eyed islanders, the common knowledge fails at N – 1 levels of indirection–i.e., common knowledge of at least one blue-eyed islander is only present up to N – 2 levels of indirection.
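This corrected count can be checked mechanically with a small possible-worlds (Kripke) model. The sketch below is illustrative (function names are my own, not from the thread): a world assigns each islander a color, islander i cannot distinguish worlds that differ only in i’s own color, and we iterate the “everyone knows” operator on the statement “there is at least one blue-eyed islander” until it fails at the actual world. For N blue-eyed islanders it holds for exactly N − 1 nested applications, matching the count above (N − 2 levels of indirection beyond the first “knows”).

```python
from itertools import product

# 1 = blue, 0 = not blue; a "world" assigns a color to each islander.
def worlds(n):
    return [tuple(w) for w in product((0, 1), repeat=n)]

def accessible(w, i):
    """Worlds islander i cannot distinguish from w: everyone else's
    color is seen directly, but i's own color could be either value."""
    return [w[:i] + (c,) + w[i + 1:] for c in (0, 1)]

def everyone_knows(phi, n):
    """The set of worlds at which every islander knows phi
    (phi is given as the set of worlds where it holds)."""
    return {w for w in worlds(n)
            if all(set(accessible(w, i)) <= phi for i in range(n))}

def levels_of_everyone_knows(n_blue, n_total):
    """Largest k such that ("everyone knows")^k applied to
    "at least one blue" holds at the actual world."""
    actual = tuple([1] * n_blue + [0] * (n_total - n_blue))
    phi = {w for w in worlds(n_total) if sum(w) >= 1}
    k = 0
    while True:
        phi = everyone_knows(phi, n_total)
        if actual not in phi:
            return k
        k += 1
```

Each application of `everyone_knows` raises the guaranteed minimum number of blue-eyed islanders by one, which is exactly why the nesting bottoms out after N − 1 steps rather than iterating indefinitely.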
2 January, 2016 at 3:29 am
mike pazda
Little does A know what little B knows. If A knew what little B knew then A would know a little.
2 January, 2016 at 12:33 am
a
Communication complexity: Professor Tao, do you work on this as well?
11 April, 2016 at 1:15 pm
Dave
It is obvious that the second argument is flawed. If we follow this argument we are led to conclude that the natives will consider in their “logical” argument that there exists the possibility that only one native has blue eyes. However, all of them know that this possibility does not exist.
In more precise words, the recurrent argument does not apply to a set of n individuals because the analysis for the set of n-1 individuals cannot be isolated from the knowledge they have about the additional individual.
4 June, 2016 at 5:02 pm
It ought to be common knowledge that Donald Trump is not fit for the presidency of the United States of America | What's new
[…] example of the distinction comes from the blue-eyed islander puzzle, discussed previously here, here and here on the blog. (By the way, I would ask that any commentary about that puzzle be directed […]
28 June, 2016 at 7:17 am
Theory of Mind, Common Knowledge and Aumann’s Theorem | Mathematics in everyday life
[…] (also mentioned in the lecture above), consider the Muddy Children puzzle, or the more dramatic Blue Eyed Islander Puzzle. The second link leads to Terry Tao’s blog where he writes in more depth about the […]
9 July, 2016 at 5:16 am
xfyuanche
I learned the blue-eyed islanders puzzle from Tao’s recent article about Trump. I think this puzzle is very interesting and leads us to a technical and slightly philosophical question: should the islanders, and indeed we ourselves, use mutual knowledge or common knowledge (disregarding mutual knowledge meanwhile) in everyday reasoning? If you analyze the puzzle on the ground of mutual knowledge, then you find solution 1: the islanders will not commit suicide. In contrast, if you analyze the puzzle based on common knowledge, then you get solution 2: the islanders will all commit suicide. The fundamental reason for the different results is simple: the foreigner brings extra common-knowledge information, but no new mutual-knowledge information, to the island.
Let me give a clear description of the two different derivations and the corresponding different results.
Case 1: The islanders are all common-knowledge thinkers, which means that they always use common knowledge (disregarding mutual knowledge meanwhile) as their starting point of logical deduction. After the foreigner leaves, if you are one of the 100 blue-eyed persons, what will you think? You will assume you are not blue-eyed and that there are only the 99 blue-eyed persons you have seen, and then ask what happens in that case. What happens? Each of the 99 blue-eyed persons will assume he is not blue-eyed and that there are only 98 blue-eyed persons. And so forth, until they consider what happens when there is only 1 blue-eyed person. (Note: the repeated change-of-place thought comes from the reliance on common knowledge.) OK, let’s do the judgement. Because of the blue-eye information provided by the foreigner, if there is only 1 blue-eyed person, that person can rapidly realize this fact and commits suicide on the first day. If they find that no one commits suicide on the first day, that triggers the two-blue-eye assumption and the two persons must commit suicide on the second day. And so forth, until all the blue-eyed persons commit suicide.
Case 2: The islanders are all mutual-knowledge thinkers, which means that they always use mutual knowledge as their starting point of logical deduction. Since each blue-eyed person sees 99 blue-eyed persons, the only reasonable assumption each of them will make is that there are 100 or 99 blue-eyed persons on the island. They will never ask what happens when there is only 1, 2, or 3 blue-eyed persons on the island, because such assumptions are obviously contradicted by their known facts (after all, they have seen 99 blue-eyed persons, so n >> 2). If an islander still assumes only a few blue-eyed persons and uses the reasoning chain of Case 1, an obvious paradox arises. (We all know that reasoning from an incorrect premise usually leads to ridiculous results, like building a delicate castle on loose sand.) The islander starts to use the logic of Case 1 to judge whether or not n=1, whether or not n=2, whether or not n=3 (it must go step by step rigorously; any questionable step invalidates the next step). But meanwhile, inside himself, he clearly knows that n cannot be 1, cannot be 2, cannot be 3. (As we said above, they all know n >> 2.) How ridiculous! The islanders will refuse this logic, and they will say: why must we depend on logic to tell us whether or not n is 2, when we already know that n cannot be 2?
In conclusion, the fate of the islanders depends on whether they are mutual-knowledge thinkers or common-knowledge thinkers.
9 July, 2016 at 8:18 am
xfyuanche
My additional remarks. The property of the above reasoning based on common knowledge (disregarding mutual knowledge meanwhile) is that how others think is far more important than the obvious facts, or say mutual knowledge. In contrast, the property of the reasoning based on mutual knowledge is that the obvious facts are more important than how others think: I don’t need to judge whether or not n=2 according to how the others think, because n obviously cannot be 2!
20 July, 2016 at 9:27 am
Franky_GTH
I cannot make sense of the mutual-knowledge logic. To me it sounds like an attempt to oversimplify the case.
9 July, 2016 at 10:43 am
Jeffrey Helkenberg
I think it is obvious that there are zero suicides provided the islanders are devout and do not speak among themselves regarding eye color. I think we can borrow from Schrodinger to understand why this is so.
Each tribal member is in a superposition of all possible eye color states as regards measurements made on his or her own system/person. Meanwhile, individuals are able to measure/observe other systems, but (legally) cannot perturb those systems by adding information as with, for example, “Hey girl, you have blue eyes. And now you must die.” Telling a person about their eye color would be considered a form of einselection, or environmentally induced superselection, resulting in self-knowledge and the requisite suicide.
A stranger who is in possession of an awareness of his own state is obviously not in a superposition; this person enters the village and claims that, like many of the islanders, he *too* has blue eyes. This visitor has added no information to the islanders’ self-knowledge sufficient to collapse the eye color superposition, as the statement is merely a way of reinforcing what everyone already can *legally* observe. The highly logical islanders would see this faux pas as a *test of faith* in the sense that it invites conversation among tribe members regarding eye color; if these conversations do not happen and no measurement devices exist that would allow an observation of one’s own eye color, there would be no *logical* way to certify personal ownership of *any* eye color and hence no logical/religious requirement to commit suicide.
Any given islander obviously knows the total population of the island and also knows the eye colors of anyone they can observe. They know that there are 1000 total islanders. They know they can observe 900 brown eyed people and 99 blue eyed people, or 899 brown eyed people and 100 blue eyed people; this is as close as they can *possibly* get to an accurate census of eye color without knowing their own eye color and hence existing in an a priori “mode of violation” as respects the articles of their religious code. The available census information is *not* sufficient to collapse an individual superposition of possible eye colors without injecting additional information.
I would argue that in the absence of a measurement device (mirror), the most *illogical* action would be suicide, as it would be based on a bias, prejudice or a “best guess.” I would state that there is no “purely logical” method by which any islander can arrive at a knowledge of their own eye color, and thus no suicides should occur.
9 July, 2016 at 12:11 pm
Jeffrey Helkenberg
I see the error in my response…..
There are two sets of information, external and internal. The external information can be thought of as a list of all islanders and the associated eye colors (of course, this is a private list, not shared, and is 999 rows deep for each member, as it does not include their eye color). The internal information is simply a list of all possible eye colors that a person may possess, all of which are equally probable as far as the self-referencing islander is concerned. [This will become problematic from a religious point of view, as I explain below.]
When the visitor announces the presence of a blue-eyed person this triggers an awareness that corresponds with the creation of a counter. Any given brown-eyed person knows there are *at least* 100 or *at most* 101 blue-eyed people, as the observer has no self-knowledge. The blue-eyed people know they can see 99 blue-eyed people and also that they are unaware of their own eye color, so that there are at least 99 or at most 100 blue-eyed people.
Blue-eyed people know in advance that it will take at least 99 days before a decision can be made, as they can see that there are 99 people with blue eyes, and the only way to discern their own eye color is if other blue-eyed people also count 99 (rather than 98) sets of blue eyes. Brown-eyed people know in advance that it will take at least 100 days before a decision can be made based on the same logic. At the expiration of 99 days, on the 100th day at noon, all 100 blue-eyed islanders commit suicide, as each of them becomes aware they are the 100th member of that set. Since they kill themselves, the other 900 now know they *do not* have blue eyes.
I assume that the religion is tolerant of islanders in the event they become aware of what eye color they do not possess? Is self-flagellation the appropriate response in that case?
28 July, 2016 at 2:26 pm
Anonymous
I’m baffled by the number of people who, over the years, concluded that the collective suicide answer is the right solution to the puzzle… when the whole argument rests on a wrong assumption.
The proof given supposes that the (n-1) individuals act as if there were only (n-1) people, i.e. the (n-1) people observed by the logician cannot observe the logician himself.
The real situations are as depicted below.
If n=1 then:
that person sees that either n=0 or n=1
that person is told that n>0
Then they acquire the information n>0 and they conclude: then Z
But this is wrong, it could be Z, indeed, but it could also be A or B or C or …
The only cases where people can deduce the color of their own eyes are the cases where they can legitimately think that another person WILL ACT as if the case n=1 were possible.
When n>3 : they all know that everybody knows that the case n=1 is impossible, and so that nobody will act as if it were the case.
“But a dude with a Nobel prize said so”
No he didn’t. Go read the article he got the Nobel for, and you’ll see that it requires individuals to share their “posteriors” (the information they deduced) and revise them, then share again until they agree. In the blue islanders puzzle, the individuals do not share their posteriors.
Indeed, instead of saying “I know that n is equal to X”, by not committing suicide they say “I know that n is not equal to X”, which contains less information than the former.
The conclusion of that paper is that we cannot agree to disagree, not that we cannot agree to agree about what we do not know.
29 July, 2016 at 1:39 am
Anonymous
I add that the collective suicide is only possible if, by convention in the tribe, 0 divided by 0 equals 1. Go read the conditional probability formula and write down your chains of hypotheses, and you’ll see that you divided by 0 to find the collective suicide answer.
Don’t even try to use the argument of “it’s a limit because the recursion is infinite”. It’s bullshit and you know it because n is an integer.
23 August, 2016 at 8:26 am
ViaLunaris
I’ll admit I wouldn’t know how to parse the answer in terms of conditional probability, but I’ll run through the induction proof as best I can – and in the interest of foresight, if you still believe it’s incorrect, I’ll leave a series of questions to figure out what you believe the answer to be, and then rely on proof by contradiction instead.
Using induction to prove the statement: “after the foreigner speaks, if there are exactly X blue-eyed islanders, they commit suicide on day X”
Base case – if there is 1 blue-eyed islander, he will commit suicide on day 1 after the foreigner speaks (and thereby makes him aware)
Inductive step – assume that the statement holds true for a specific N: “after the foreigner speaks, if there are exactly N blue-eyed islanders, they commit suicide on day N”, and now prove for N+1
So in the case of N+1, I will choose the perspective of a random islander from this group, who then observes N blue-eyed islanders. Since the statement holds true for the specific N, he expects them to commit suicide on day N. When this does not happen, the logical conclusion is that there are not exactly N blue-eyed islanders, and seeing no other candidates, he will conclude that he is also blue-eyed and will commit suicide the next day, along with the rest (by symmetry: no other blue-eyed islander should have a different thought process)
Thus if it is true that “after the foreigner speaks, if there are exactly N blue-eyed islanders, they commit suicide on day N”, it is true for (N+1) – combined with the base case, I have proven it for all natural numbers.
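The induction above can also be checked by direct simulation. Here is a minimal sketch (function name and structure are my own, not from the thread): track the common lower bound on the number of blue-eyed islanders that each silent day establishes, and let an islander deduce their own color as soon as that bound exceeds the number of blue eyes they can see.

```python
def days_until_suicide(n_blue, n_total):
    """Countdown after the foreigner's announcement.

    After the announcement it is common knowledge that n >= 1, where n
    is the true number of blue-eyed islanders.  Each day that passes
    with no suicides raises that common lower bound by one.  An
    islander who sees fewer blue-eyed people than the bound requires
    must be blue-eyed, and commits suicide that day.
    """
    assert n_blue >= 1  # the announcement presupposes a blue-eyed islander
    colors = [1] * n_blue + [0] * (n_total - n_blue)
    total = sum(colors)
    lower_bound = 1
    day = 0
    while True:
        day += 1
        # Islander i sees every pair of eyes except their own.
        deduced = [i for i in range(n_total)
                   if total - colors[i] < lower_bound]
        if deduced:
            return day, deduced  # exactly the blue-eyed islanders
        lower_bound += 1  # no suicides today, so n >= day + 1
```

Running this for any mix of populations reproduces the inductive statement: with exactly X blue-eyed islanders, they (and only they) deduce their color on day X, regardless of how many brown-eyed islanders are present.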
————————————————————————————-
Now maybe that still didn’t convince you, so I’d like to ask a couple of questions then about your conclusions (when n > 3, nobody will act after the statement)
Do you agree that the presence of any non-blue-eyed islander does not affect the conclusion? (if there are 2 blue-eyed islanders, they will commit suicide on day 2 regardless of 0-1,000 brown-eyed, and if there are 4 blue-eyed islanders, they will not commit suicide on day 4 regardless of the 0-1,000 brown-eyed)
By your statement about n>3, I’m going to assume that you believe that with n=3 they collectively commit suicide on day 3, but nothing happens at n=4. (if you believe that with n=3 still nothing happens, but n=2 means suicide, just subtract 1 from everything I’m about to say)
Let’s simplify for this problem by establishing a clear day 1 with a statement as well.
On day 1, you and 3 blue-eyed people spontaneously pop onto the island together, along with a booming voice that says “amongst you, there is at least one pair of blue eyes”.
You observe that after 3 days pass, they have not yet committed suicide. Do you know what your eye colour is?
If you say you don’t, I’ll tell you that your eyes were actually brown, and somehow your previous conclusion about collective suicide on day 3 was wrong, because there were 3 blue-eyed islanders and they did not commit suicide on day 3.
If you say you do, then I’ll tell you that your eyes are blue and that the statement about no collective suicide on day 4 is wrong, because you have figured out your eye colour and will commit suicide along with the rest on day 4 (by symmetry). [[side note: if you say that you know your eye colour and it isn’t blue, then you’ve contradicted case n=3 yourself]]
This can be done for any “breakpoint” where you believe that they will suicide at X but not X+1 – I’m willing to elaborate once I see how you’ve thought this part through.