Daniel Kokotajlo (researcher)
Daniel Kokotajlo is an artificial intelligence (AI) researcher. He was a researcher in the governance division of OpenAI from 2022 to 2024,[1] and currently leads the AI Futures Project.[2]
Biography
Kokotajlo was a philosophy PhD candidate at the University of North Carolina at Chapel Hill, where he received the 2019–2020 E. Maynard Adams Fellowship for the Public Humanities.[3] In 2022, he became a researcher in the governance division of OpenAI.[1]
Kokotajlo was one of the organizers of a group of OpenAI employees who claimed that the company had a secretive and reckless culture and was taking grave risks in its rush to achieve artificial general intelligence (AGI).[4][5] When he resigned in 2024, he refused to sign OpenAI's non-disparagement clause, a decision that could have cost him approximately $2 million in equity.[1] As of May 2024, Kokotajlo confirmed that he had retained his vested equity.[6][7] In June 2024, he and other former OpenAI employees signed a letter arguing that leading frontier AI companies have strong financial incentives to avoid oversight, and calling for a "right to warn" about AI risks without fear of reprisal and with protections for anonymity.[8]
In 2021, Kokotajlo wrote a blog post titled "What 2026 Looks Like". In 2025, Kevin Roose commented that "A number of his predictions proved prescient."[9][2]
He cofounded and leads the AI Futures Project, a nonprofit based in Berkeley, California, that researches the future impact of artificial intelligence.
In April 2025, the organization released AI 2027, a detailed forecast scenario predicting rapid progress in the automation of coding and AI research, followed by AGI. The scenario imagined fully autonomous AI agents becoming better than humans at "everything" around the end of 2027, and explored the consequences for the economy, domestic politics, and international relations.[2] In November 2025, Kokotajlo said that his median estimate for the arrival of AGI had shifted to the 2030s.[10][11]
References
- ^ a b c Pillay, Tharin (September 5, 2024). "TIME100 AI 2024: Daniel Kokotajlo". TIME.
- ^ a b c Roose, Kevin (April 3, 2025). "This A.I. Forecast Predicts Storms Ahead". The New York Times. ISSN 0362-4331. Retrieved May 21, 2025.
- ^ "2019–2020 E. Maynard Adams Fellows for the Public Humanities". University of North Carolina at Chapel Hill.
- ^ "OpenAI Insiders Warn of a 'Reckless' Race for Dominance". The New York Times. June 4, 2024. Archived from the original on June 5, 2024. Retrieved April 19, 2025.
- ^ Goldman, Sharon. "OpenAI's AGI safety team has been gutted, says ex-researcher". Fortune.
- ^ Piper, Kelsey (May 22, 2024). "Leaked OpenAI documents reveal aggressive tactics toward former employees". Vox. Archived from the original on June 1, 2024. Retrieved May 6, 2025.
- ^ "Will Daniel Kokotajlo get back the equity he gave up through not signing an NDA?". Manifold. Archived from the original on June 15, 2024. Retrieved May 6, 2025.
- ^ "A Right to Warn about Advanced Artificial Intelligence". righttowarn.ai. Archived from the original on April 30, 2025. Retrieved May 6, 2025.
- ^ Kokotajlo, Daniel (August 6, 2021). "What 2026 Looks Like". AI Alignment Forum. Archived from the original on September 10, 2025. Retrieved October 8, 2025.
- ^ Down, Aisha (January 6, 2026). "Leading AI expert delays timeline for its possible destruction of humanity". The Guardian. ISSN 0261-3077. Retrieved January 6, 2026.
- ^ Kokotajlo, Daniel (December 31, 2025). "AI Futures Model: Dec 2025 Update". blog.ai-futures.org. Retrieved January 3, 2026.
External links
- AI 2027 – a forecast scenario led by Kokotajlo and published in April 2025
- 2027 Intelligence Explosion (YouTube) – a podcast interview with Daniel Kokotajlo and Scott Alexander, hosted by Dwarkesh Patel