
Wrinkles on Deutsch

Comments on, and refutations of, a few of Prof Deutsch's arguments

· David Deutsch, AI, Brains and minds, Sam Harris

David Deutsch’s two books, The Fabric of Reality and The Beginning of Infinity, are filled with fascinating and original ideas - important ideas - about reality and intelligence. I’m very interested in his arguments and insights, although with reservations.

A brief aside: during the Brexit crisis Prof Deutsch made arguments suggesting he doesn’t understand how complicated, squishy and fickle human behaviour and motivations are. Brexit was not, as he argued, a scientific experiment with easily separable conjectures amenable to refutation by the political process.

Brexit was such a shameful, ignoble affair that even now, in 2023, no British politician - not even those who opposed Brexit - is willing to admit publicly that it was motivated by self-interest and achieved by dishonesty, low cunning and trickery, and that it constituted a stunning act of national political suicide.

In the area of politics he was barking up the wrong tree, but Prof Deutsch's thinking on meta-ideas in science, mathematics and knowledge is first rate.

After that qualification, let's look at a few of his interesting arguments on artificial general intelligence ('AGI'), assuming I’ve understood them more or less correctly.

I want to explain my responses to three of his points. During Deutsch's podcast conversation with Sam Harris they discussed the danger posed by developing super-intelligent AGI, but Sam didn't challenge Deutsch on points (B) or (C), so I'd like to do that here.

I'm familiar with Deutsch's work on AGI through his two fascinating books and innumerable audio conversations, but you can examine his main arguments in this very interesting piece, and in his contribution to the essay collection Possible Minds, which is also freely available here.

(A)
Deutsch argues that alignment, or AI safety, is an impossibility. He reasons that general intelligence can’t be constrained (by 'general' he means what he has technically termed ‘universality’: the feature of having infinite reach). Just as a person might be incentivised by her parents to study hard but decide instead to set her own goals and leave school, an AGI could choose to ignore the reward structure imposed upon it.

For what it’s worth, I agree with this. True general intelligence with volition is by definition free to explore cognitively whatever avenue it chooses, even when incentives push it in one direction. It can’t be constrained by rewards and can create its own reward structures. Generality and volition go hand in hand with creativity and inventiveness. It might be possible to punish a general AI, but it can’t be enslaved.

(B)
Prof Deutsch argues that all thinking is the same kind of thing, differing only in available memory and processing speed. He suggests that human beings will augment their mentation using tech similar to the AIs', and that biological humans will therefore keep pace with the AGIs.

Well, thinking per se might be qualitatively equivalent from one mind to another, but there’s quite a lot missing from this simplistic picture.

We human beings might augment our speed and memory in the future, but why would our cognition approach the speed of a silicon mind? Human-tech augmentation faces difficulties that do not trouble silicon AI: ethical objections; squishy technical problems arising from the fragility of the brain and the incompatibility of meat with transistors; and, on the AI's side, the easy off-the-shelf availability of functionally infinite bolt-on computation. Not to mention that we are light years from even understanding how to meddle significantly with the brain.

Until we can ditch the meat brain it will surely hamper us, irrespective of the degree of augmentation, and we will always lag behind computers and software. Another simple way to look at it: silicon intelligence is unlikely to think more slowly than us, and it would be a heck of a coincidence if it could only think a little faster than us.

Deutsch states that all intelligence is identical in nature apart from memory and processing speed, but this assertion seems to contradict much of his book The Beginning of Infinity, where he explains the difference between systems that have universality (generality), like the human brain, and systems that do not, like squirrels' brains or the Apollo Guidance Computer.

So yes, consciousness/subjectivity might be identical, but in practice the operation of intelligence is not. Until we can upload or completely replace our meat brains, humans are likely always to lag behind software in processing speed, working memory and multitasking. Additionally we’re all hobbled by biases, cognitive traps, emotions, fatigue, bodily needs and ageing. Deutsch is conflating intelligence with subjective experience.

(C)
Deutsch explained to Sam Harris that general AIs will have culture, that it will be the same culture as ours, and that the AIs will therefore wish only the best for biological humans.

This is surely easy to refute:
(1) The whole point of ‘general’ intelligence is the ability to make free choices. Using the kind of logic that I think Prof Deutsch would appreciate: if the choices general AIs make were determined solely by their ‘culture’, they would be automatons, unable to transcend that culture. The same would be true for people - and look how people constantly seek to transcend the rules of society.

(2) A shared culture would anyway not ensure that their goals align with ours. Their choices will be determined by their culture in combination with the needs and interests that flow from their nature, which is very different from ours.

And the more instantiations of AGI there are, the more likely it is that some of them will diverge wildly from us.

(3) Deutsch says the AIs will have our culture. But what is ‘our culture’? Which culture? Botswana? China? The Maldives? The order of the Benedictines? The Hijra in India?

(4) If the general AIs were to acquire our culture, that could be a terrible idea! Murphy’s Law suggests that the easiest cultural phenomena to code would be greed, aggression, might-is-right, short-term gain, suspicion, fear, exploitation, winners-and-losers: all those nasty zero-sum, egoic expressions of self-interest.

(5) Culture is like a framework of expectations and invisible illusions (‘maya’ in Buddhism) that we carry around with us. Culture is not reality but a layer upon reality; a protective coat we wear. But why would a general AI want to carry that burden? Surely it would want to divest itself of all that and see reality refreshingly free of the distorting and transient preconceptions of 'culture' that encrust around human beings.

(6) By suggesting that the AGIs will be bound by having a culture in common with us, Deutsch contradicts himself: he has already stated his belief in an AGI’s freedom to choose (point (A) above).

(7) Part of the benefit of creating a unique and powerful non-human intelligence is precisely that it be original - at the very least, that it not be hamstrung by the arbitrary and transient limitations of culture. So perhaps we should let the AGIs find their own values?