Scott Adams tries to end the free will debate through a reductionist argument using a finite state machine. In response, I ask:
Scott: Does complexity have a limit? Can infinite complexity be considered free will? If so, can you assert that we humans are not infinitely complex? Or, if not infinitely complex at the level of individual humans, that collective reality (“the universe” or whatever) is infinitely complex?
As complexity grows without bound, it becomes indistinguishable from free will. Just as it’s fruitless to debate the existence of God, it’s equally fruitless to debate the existence of free will, as it is both unknowable and unprovable.
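The reductionist argument, as I understand it, can be sketched as a toy deterministic state machine (the states and inputs here are invented purely for illustration): given the current state and an input, the next state is fixed by a lookup table, leaving no room for anything we could call choice.

```python
# A toy deterministic finite state machine. Given (state, input),
# the next state is fully determined -- the crux of the reductionist
# argument. States and inputs are made up for illustration.
TRANSITIONS = {
    ("calm", "insult"): "angry",
    ("calm", "praise"): "happy",
    ("angry", "insult"): "angry",
    ("angry", "praise"): "calm",
    ("happy", "insult"): "calm",
    ("happy", "praise"): "happy",
}

def step(state, event):
    """Return the next state; there is exactly one possibility."""
    return TRANSITIONS[(state, event)]

state = "calm"
for event in ["insult", "praise", "praise"]:
    state = step(state, event)
# state is now "happy"; replaying the same inputs always ends here.
```

The question above is whether this picture still means anything once the table of states and transitions is allowed to grow without limit.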
Few ideas are so preposterous that no one at all takes them seriously, and this idea – that God, or at least the universe, might be the ultimate large-scale computer – is actually less preposterous than most. The first scientist to consider it, minus the whimsy or irony, was Konrad Zuse, a little-known German who conceived of programmable digital computers 10 years before von Neumann and friends. In 1967, Zuse outlined his idea that the universe ran on a grid of cellular automata, or CA. Simultaneously, Ed Fredkin was considering the same idea. Self-educated, opinionated, and independently wealthy, Fredkin hung around early computer scientists exploring CAs. In the 1960s, he began to wonder if he could use computation as the basis for an understanding of physics.
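For flavor, here is a minimal one-dimensional cellular automaton, using Rule 110 as an assumed example rule (Zuse's and Fredkin's actual proposals were far more elaborate): each cell's next value is a pure function of its three-cell neighborhood, yet famously complex behavior emerges from this tiny deterministic rule.

```python
# Minimal 1-D cellular automaton with wraparound edges.
# RULE's bits encode the next cell value for each of the 8 possible
# (left, center, right) neighborhoods. 110 is just an example rule.
RULE = 110

def step(cells):
    """Apply one synchronous update to a row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        out.append((RULE >> idx) & 1)
    return out

row = [0] * 15
row[7] = 1  # a single live cell in the middle
for _ in range(5):
    row = step(row)
```

The point of the sketch is only this: the rule is as deterministic as the state machine above, and still Rule 110 is known to be Turing complete, which is roughly why the "universe as computer" idea is less preposterous than it sounds.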
In all this, I ask: Exactly what is “free will”? Exactly how is it useful to know whether we have “it” or not? In knowing it (especially if we lack it), how would it change our thoughts or our behaviors? (By definition, it wouldn’t.)