
Could Recursion Give Us AGI?

AI · Brains and minds · Lex Fridman

Like the mythical ouroboros, perhaps AI will need to eat its own tail.

I was listening to Sam Altman of OpenAI on Lex Fridman’s podcast, telling Lex that GPT-4 isn’t an AGI yet, and that they will need new ideas to create an AGI.

Hmm, well that ‘new idea’ seems obvious, at least to me. I’ll explain it, though I can’t believe there aren’t already multi-million-dollar teams working on it.

Incidentally, I prefer the term ‘General AI’ to AGI, because AGI seems to carry unwarranted connotations of volition: that an AGI is an entity with wants and agency. However, it seems obvious that generality, in the sense of David Deutsch’s ‘universality’, will arise before an AI acquires volition. So henceforth I will write of AI, General AI, and Volitional AI, in that order of sophistication.

Whatever the exact details, it seems obvious that for a Large Language Model (‘LLM’) to become a Volitional AI it will need a recursive loop that takes the output of the LLM, combines it somehow with fresh data from the external world, and feeds the result back in as the LLM’s next input:

output + sense data -> input
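To make the shape of that loop concrete, here is a minimal Python sketch. It is only an illustration under my own assumptions: `call_llm`, `read_sense_data` and `combine` are invented stubs standing in for a real model call, real-world input, and whatever ‘skilful combination’ would actually be required.

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an API request)."""
    return f"model output for: {prompt}"

def read_sense_data() -> str:
    """Stand-in for fresh data arriving from the external world."""
    return "a new observation"

def combine(output: str, sense_data: str) -> str:
    """Stand-in for however the two get skilfully combined."""
    return f"{output}\nNew information: {sense_data}"

prompt = "initial description of the situation"
for _ in range(3):                        # a real agent would loop indefinitely
    output = call_llm(prompt)
    sense_data = read_sense_data()
    prompt = combine(output, sense_data)  # output + sense data -> input
```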

But this isn't exactly rocket science. It is, crudely, what all ‘thinking’ or ‘reasoning’ is.

Consider, for example, a typical model for intelligence evaluation in law enforcement. You'll always find a cycle of this form:

(1) What is the present situation?

(2) Consider the threats

(3) Consider policies and law

(4) Consider the available resources

(5) Make a decision and act

(6) Having changed the situation, return to (1)… (a rough code sketch of this cycle follows below)
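Purely to make the cyclic structure explicit, here is a loose Python sketch of that cycle. The step functions are my own invented placeholders, not any agency's actual doctrine; the point is only that step (6) feeds back into step (1).

```python
# Invented placeholder steps, just to show the cyclic structure.
def assess_situation() -> str: return "current picture"           # (1)
def consider_threats(situation: str) -> str: return "threats"     # (2)
def consider_policy(situation: str) -> str: return "constraints"  # (3)
def consider_resources() -> str: return "people, time, kit"       # (4)
def decide_and_act(threats: str, policy: str, resources: str) -> None:
    pass                                                           # (5) act, changing the world

for _ in range(3):                              # in practice the cycle never terminates
    situation = assess_situation()              # (1)
    threats = consider_threats(situation)       # (2)
    policy = consider_policy(situation)         # (3)
    resources = consider_resources()            # (4)
    decide_and_act(threats, policy, resources)  # (5)
    # (6) the action has changed the situation, so loop back to (1)
```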

Advanced driving, for example, involves running a continuously evolving plan, modified by moment-to-moment external input.

All living beings, including pseudo-life inside software models, take in sense data and constantly adapt their internal state, even if this merely means a protozoan attempting to swim down a nutrient gradient.

Existing through time in a constantly changing environment means maintaining and adapting one’s state, taking action (or not), then rinsing and repeating.

The output of a future General AI, certainly one with apparent volition, would surely consist of information about, or actions to be taken in, the world.

But LLMs at present seem to be linear: they receive information in the form of a string of text, run it through the network, and produce an output.

Unlike most, or all, intelligent life, LLMs don't seem to maintain a conserved but continuously adapting internal state, beyond the weights and biases of their neural network.

But why not combine the output skilfully with new data from the environment, then feed it back into the input of the model?
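To show what a conserved but continuously adapting state might look like inside that feedback loop, here is another hedged sketch. The `AgentState` structure and the stubs are assumptions of mine, not a description of how any existing system works; the point is that the state persists between model calls instead of being discarded.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    """A conserved internal state, adapted on every cycle (entirely hypothetical)."""
    memory: list[str] = field(default_factory=list)
    goals: list[str] = field(default_factory=lambda: ["stay useful"])

def call_llm(prompt: str) -> str:
    return "proposed belief update or action"  # stand-in for a real model call

def sense_world() -> str:
    return "fresh observation"                 # stand-in for real sense data

state = AgentState()
for _ in range(3):                             # a real agent would never stop
    observation = sense_world()
    prompt = (f"goals: {state.goals}; "
              f"recent memory: {state.memory[-5:]}; "
              f"now: {observation}")
    output = call_llm(prompt)
    state.memory.append(output)  # the state persists and adapts between calls,
                                 # rather than being thrown away each time
```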

I also find it interesting that this seems nicely consistent with the suggestion, made by some thinkers, that an entity with general intelligence must of necessity be physically embodied in the world.

Like the above-mentioned protozoan.

output + sense data -> input

Or something very similar.