I’ve often argued for the existence of free will, and I will probably give those arguments in a future post. But I’m aware of the arguments against free will, at least as presented by Sam Harris in his book Free Will. I flip-flop between the two positions, but generally favour Sam’s arguments.
However, I appear to have dreamed up a potentially novel and simple logical argument against free will existing in any information-processing system in the universe, which obviously includes human beings. I’ve not encountered this argument anywhere else. I’m struggling to express the idea rigorously, but there is the germ of an idea here. I’ll do my best.
The argument I’m about to explain is additional to Sam Harris’s ideas about free will: that every volitional action or choice comes from somewhere outside our conscious awareness (our unconscious?); and that free will relies upon the unspoken fantasy that the subjective entity exercising it is something outside the brain’s processing systems, and can therefore swoop in and select one of several options presented by that processing.
This latter point, the swooping-in scenario, is also given the lie by the fact that no person seems able to exercise free will even when their entire attention is bent upon it and it’s the only thing they’re trying to do!
Take meditation, for example, when one tries to focus only on the breath. I’m sure even the most experienced Buddhist monk eventually gets lost in thought, and a lay meditator will typically lose his or her way within seconds.
Sam also argued that there is not even a satisfactory definition of free will as something other than determinism or randomness. But I’m not sure about the truth or value of this claim, because fundamentals, like probability, distance, subjectivity and love, by definition cannot be further atomised. Subjectivity may therefore be one such fundamental.
So, to my idea. Let’s accept that free will would presuppose an entity, the ‘chooser’, which can objectively view the thoughts presented by a ‘presenter’ (the brain, obviously), as if from above, and decide which thought to think.
This model looks a little like a CPU executing a fetch-execute cycle, but let’s go with it.
(Firstly, this disproves free will from the get-go: unless we’re smuggling in something supernatural, such as Idealism, the chooser cannot be anything except another part of the brain, so the choosing is all just more brain processing. There is no ‘chooser’, therefore no free will.)
The chooser is the brain, or mind, so it has a state. That state has to contain the candidate thoughts and the process of viewing them. It must contain the viewing, weighing and choosing process because that activity is itself a thought. So we must draw a bubble around the chooser plus the thoughts.
But the chooser’s state must also contain the contents of this bubble, because those contents are themselves a thought. This means we must draw an infinite series of nested bubbles: the chooser’s state must always contain itself plus the thoughts, and the chooser plus the thoughts is always itself a thought.
So the problem is that for the chooser to objectively view itself and the candidate thoughts requires an infinite recursion, which is impossible for any finite system.
In short, the chooser can never objectively consider the thoughts available, because this action is itself a thought. It’s the problem of a set containing itself. It seems to me that this is a real physical limitation, not just a mathematical curiosity.
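Purely as an illustration, the nested-bubble regress can be sketched as a toy program (my own sketch, not part of the argument itself; the name `chooser_state` and the example thoughts are invented for the purpose). A function that tries to build a complete snapshot of the chooser’s state must include the act of taking the snapshot, so it never terminates:

```python
import sys

def chooser_state(thoughts):
    """A 'complete' chooser state: the candidate thoughts, plus the act of
    viewing them -- which is itself a thought, so it must also be included,
    giving bubble after nested bubble without end."""
    viewing = ("viewing", chooser_state(thoughts))  # the bubble around the bubble
    return (thoughts, viewing)

sys.setrecursionlimit(100)  # keep the inevitable failure quick
try:
    chooser_state(["tea", "coffee"])
except RecursionError:
    print("no finite state can contain itself plus the thoughts")
```

The recursion never bottoms out, which is just the nested-bubbles point in computational dress: a finite state cannot contain a complete description of itself plus its contents.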
There is no privileged place where the chooser (that supposedly exercises the free will) can stand. Its activity is just more thought. It can’t examine itself, and that is what meditation reveals when you attempt to observe the observer: you simply can’t find it.
To ram home the point: no intelligent entity can examine its own complete mind state, because the examining would then be within the mind state, and the mind state that contains the mind state would itself be within the mind state, and so forth.
If this reasoning is sound (it may not be!) then it denies free will for all information processing systems of whatever ilk, including humans, aliens, and AIs anywhere in the universe.
I know this is a bit hand-wavy, but someone please point out what I’ve missed.
I also like a related, and probably stronger, argument from Sam, which can be summarised as: you can’t think a thought before you think it.
What Sam means is that the chooser, if it is choosing what thoughts to think, must be presented with the possible thoughts to choose from. But they just appear from the unconscious (without having been chosen by the chooser), and this viewing of them is the same thing as thinking them. Thus, before the chooser has had the opportunity to choose, it has already thought the thoughts. So it was forced to think them before it could decide to think them. And where is the free will in that?
I’m not sure I’ve explained this terribly well, and might just return soon to redraft and tighten this up!