However, if said strategy’s implementation requires actual infinities (e.g., each prisoner having an infinite memory), then that is why you find it intuitively objectionable.

It is useful here to think of computer algorithms and not just math. While mathematical arguments have no problem supposing an infinite number of actors, the next question is whether each actor can have infinite memory.

In mathematics, infinity can be thought of as a property of a *set*. It can also be thought of as the limit of an infinite sequence of operations on sets, i.e. a statement that holds simultaneously for each member of that sequence.

This is useful because it ties constructions we observe in the real world to patterns that approximate and converge to the limit of this infinite sequence. And then the question is how the computational complexity grows.

So in your example here, each FINITE set of prisoners can’t coordinate a strategy. So there is no “approaching a limit” – the thing only starts working with an infinite set of prisoners, each of whom has infinite memory, etc. And that is why your intuition’s alarm bells go off 🙂

But it is even more than that. Your construction requires each prisoner to *use the axiom of choice in order to take an action based on the NAME of the chosen member*, which is used to demonstrate the existence of a sequence of *actions* that satisfies a certain property. However, when the axiom of choice is used normally, it is not used to actually NAME the chosen element, but merely to work with it like a black box. By NAME, I mean an id that distinguishes it from all other elements, and lets you pick it out and examine its properties THAT ARE DIFFERENT from all the other elements in that set.

In other words, sure, you can assume that the chosen “representative” sequence has the same property as any other in the equivalence class — namely that it differs from them in all but finitely many terms. BUT the part where you “cheat” is having the prisoner “find out” more than that about the representative sequence, in particular its initial values up to an arbitrary depth.
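To make the strategy's shape concrete, here is a minimal sketch of my own, not the original construction: I restrict to eventually-constant 0/1 sequences, represented as a `(prefix, tail_value)` pair. Two such sequences differ in only finitely many places exactly when they share the same `tail_value`, so in this toy model the “choice function” can actually be written down (pick the constant sequence), whereas with genuine AC it exists but cannot be exhibited.

```python
# Toy model (my assumption, not the thread's setup): sequences are
# eventually constant, encoded as (prefix, tail_value).

def value_at(seq, i):
    prefix, tail = seq
    return prefix[i] if i < len(prefix) else tail

def representative(seq):
    # Explicit "choice function" for this restricted model:
    # the representative of a class is the constant sequence.
    _, tail = seq
    return ([], tail)

def prisoner_guess(seq, i):
    # Prisoner i sees every position except i; the tail alone already
    # determines the equivalence class, so he can compute the
    # representative and announce its value at his own position.
    rep = representative(seq)
    return value_at(rep, i)

actual = ([1, 0, 1, 1], 0)  # differs from the representative in 3 places
wrong = [i for i in range(10)
         if prisoner_guess(actual, i) != value_at(actual, i)]
# wrong positions all lie inside the finite prefix: [0, 2, 3]
```

Note where the “cheat” the comment describes shows up: `prisoner_guess` reads off `value_at(rep, i)`, i.e. a concrete initial value of the representative, which is exactly the information that a black-box use of AC would not give you.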

Let’s say the prison guard must pick a sequence from a single equivalence class. Then the prison guard still has infinitely many choices, and knowing the tails doesn’t tell you anything about the heads. Therefore, intuition still says that each prisoner has only a 50% chance of guessing his own hat color. However, the now obvious strategy of just agreeing on one representative, which no longer requires AC, still works with the same result.
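A quick simulation of this variation, under my own assumptions about how to truncate it to finitely many prisoners: the single equivalence class is modeled as all sequences that agree with an agreed representative beyond some finite prefix, and the guard may randomize only that prefix. No choice function is needed, yet only finitely many prisoners can fail.

```python
import random

N = 1000                  # finite truncation of the infinite prisoner line
representative = [0] * N  # agreed on in advance; no axiom of choice needed

# The guard picks from the single class: he may alter only a finite prefix.
prefix_len = random.randint(0, 20)
actual = ([random.randint(0, 1) for _ in range(prefix_len)]
          + representative[prefix_len:])

# Each prisoner i simply announces representative[i].  Individually each
# guess "feels" like a 50% coin flip, yet in total at most prefix_len
# prisoners can be wrong.
wrong = sum(1 for i in range(N) if representative[i] != actual[i])
assert wrong <= prefix_len
```

The bound `wrong <= prefix_len` is the finite-failure guarantee; how the 50% “felt” individual chance coexists with it is exactly the paradox discussed next.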

Some might object that the prisoners are given too easy a problem in my variation for the outcome to be surprising, but in my opinion the paradox in both cases is that we turn a 50% “felt” individual chance for infinitely many prisoners into a finite failure guarantee. (Note that in my variation we still have intuitive independence of the individual chances: the probability that two given prisoners survive is intuitively 25%.)

In both cases the 50% figure is not rigorous, and I think this supports previously stated opinions that the problem is not with AC but with our intuition about chances, which cannot be made rigorous in this problem. In particular, I think that the strategies in both cases require “aligning” the choices of the prisoners in a way that prevents measurability. In the original problem, this aligning happens dependent on the actual pick of the sequence (and leads to non-measurability of the survival events); in my variation it is built into the problem (and leads to problems even defining the prison guard’s probability measure).
