On Lex Fridman's podcast, Andrej Karpathy explains his desire to bring super-intelligent AGI into existence before any of the existential dangers are realised, assuming that's possible. For him the AGI project rests on faith: that intelligence can solve the most intractable problems we face as a species.
I wonder whether ceding all control to an AGI is a sensible or responsible action.
A friend of mine observed that the climate crisis and other problems might be solved by narrow AI; we might not need general AI at all.
Looking to the development of such potentially powerful technology seems to me like a relinquishing of responsibility. If the AGI, singular or plural, cannot reverse the climate crisis, where then will we turn? There might be no solutions possible within a reasonable timescale, and meanwhile we might have deployed a paperclip maximiser.
Now, it might be that the kinds of very difficult problems an AGI could solve are impossible with 'mere' narrow AI. These problems are so hard precisely because they are broad, and will require inventiveness and thinking outside the box.
These difficult problems will require multi-disciplinary work and fresh ideas. They may also require the advancement, or even the creation, of whole new fields of knowledge. They may require the kind of intuition that comes from being a physical being moving through space.
Unlike a future AGI, present-day humans have limited working memory, incomplete knowledge and low processing speed; they tire, and they are distracted by other commitments.
Of course, it's possible that we might succeed with a partnership approach: the apes bringing the ideas and intuition, the AI crunching the data. But an AGI that had self-improved exponentially would presumably have access to practically limitless ideas and inventiveness of its own.
We certainly should consider whether these problems can be solved by narrow AI. Perhaps we 'just' need to start spending 5-10% of GDP on technological solutions for greenhouse gas drawdown and mitigation. Plenty of ideas and experiments already exist.
Doesn't betting everything on AGI seem rather infantilising in the long run? It amounts to passing the buck to a parent figure perceived as wise and skilled.
This is one hell of a gamble since, in the terms used by David Krakauer of the Santa Fe Institute, AGI would not be a 'complementary artefact' but a highly debilitating 'competitive artefact': it would make us less competent, as SatNav does. In the case of a super-intelligent AGI, we may well be infinitely less competent.
And how can we know that an AGI could act effectively, or fast enough, to avert the sixth mass extinction?
Even putting aside whether we will fail at the Alignment Problem, we will be forever enfeebled, at least until we can somehow merge with AGI. Instead of stretching ourselves to address problems, we will tend simply to delegate everything to the AGI.
This is very important:
Solving these difficult problems is not only a matter of competence. The human race has abundant competence and resources, yet we cannot address climate change, inequality, disease and so forth, because we cannot agree on how to deploy that competence. Our incentives are inconsistent: we pull in different directions, and we prioritise individual self-interest in the present over the wellbeing of the planet in the future.
So, for thinkers like Karpathy, who see AGI as the lifesaver worth risking everything for, here is the greatest unstated fallacy:
That the true benefit of an AGI over merely human agents is that it will somehow override human self-interest and deploy the competence (its own or ours) wisely.
So belief in AGI requires that it be monolithic, wise, super-competent and able to overcome the self-interest of 8-10 billion human beings in 200+ armed nation states.
And that probably means using force, or the threat of force.
Unless I'm missing something, a super-intelligent general AI can only address our existential problems if all nations cede control to it.
Isn't it obvious that this is either true, or at least may be true? Why doesn't this point feature more often in the arguments?
I might be missing something, and there are doubtless other arguments to be made. But this certainly ought to be discussed.
Throwing all our hope upon AGI is tantamount to bringing into existence a world dictator that may or may not be willing to fix the climate crisis. And history doesn't reveal many dictatorial rulers keen to relinquish their post after achieving their narrow policy goals...
Personally, I'm fascinated by the notion of AGI, but simply building AGI isn't going to be a magic bullet.