Going Forward in AGI...

I have been thinking about Artificial General Intelligence (AGI) for a few years now, though I have been preoccupied with profound questions since long before I obtained my PhD in astrophysics in 2002.

More specifically, I’m interested in the Alignment Problem (or ‘Safety Problem’). My mathematician friend Juliette is an excellent foil for all my naive thoughts, but I’ve been feeling for a while that I need to capture them somewhere. Perhaps I can conjure up the occasional gem, and I’d also like to contribute what I can to cultivating wider public awareness of this incipient technology. The developing field of AI will change our lives in utterly unknowable ways once the transition from narrow to general intelligence is achieved.

After my PhD I spent a couple of years in a climatology post-doc, but after failing to publish I was forced to find a job in the ‘real world’. So I’ve spent the best part of the last two decades languishing in a cubicle job. I want to redeem that time by developing my own thinking about important AGI issues, so it’s likely the blog’s focus will shift from…well, whatever it is I’ve been writing about…to AGI alignment.

At the same time I’ll be working slowly through the material in the AI Alignment Curriculum curated by Richard Ngo, so I may report on that, or convey insights I’ve gained along the way.

Thank you for your patience!