Geoffrey Irving departs AISI, plans Bay Area nonprofit focused on AI alignment
The former DeepMind researcher will remain an advisor at AISI and says he is recruiting senior researchers as he prepares a new nonprofit alignment org.
By Staff
Why it matters
A senior AI safety leader spinning out from a national institute to start a nonprofit signals growing demand for independent alignment research while keeping a bridge into government policy.

Geoffrey Irving (@geoffreyirving) said he will leave the UK AI Security Institute (AISI) and move back to the Bay Area to start a new nonprofit alignment research organization, announcing the transition in a thread on X. "I will be starting a new nonprofit alignment research org (more to come)," Irving wrote, adding that he will continue at AISI in an advisory role.
Irving joined AISI as a Research Director after leaving DeepMind, accepting the role in December 2023 and starting in April 2024. In a longer set of reflections on his time at AISI, he calls the institute his favorite job to date and credits the team with sharpening government and public conversations on AI risks.
Irving reiterated the thesis that drew him to the UK institute: safety progress is slow, coordination is under-resourced, and governments are well placed to fill that coordination gap. He originally laid out that reasoning in a 2024 AISI post, "Why I joined AISI", and he echoed it in today's thread, emphasizing AISI's ability to turn research on catastrophic or large-scale societal risks into policy.
https://x.com/geoffreyirving/status/2055241785564176552
As he prepares the new nonprofit, Irving said AISI remains a highly leveraged place for senior researchers, citing access to top government decision makers and a deep bench of talent. He invited prospective Research Director-level candidates to reach out, saying he is eager to make the pitch to interested researchers as he transitions to his next chapter.