Navigating Transformative AI
Though advances in AI could benefit people enormously, we think they also pose serious risks from misuse, accidents, loss of control, and other problems.
440+ grants made
About the Fund
Program Leads
- Claire Zabel, Managing Director, Short Timelines Special Projects
- Luke Muehlhauser, Managing Director, AI Governance & Policy
- Peter Favaloro, Program Director, Technical AI Safety
- Eli Rose, Senior Program Officer, Global Catastrophic Risks Capacity Building
In recent years, we’ve seen rapid progress in artificial intelligence. There’s a strong possibility that AI systems will soon outperform humans in nearly all cognitive domains.
We think AI could be the most important technological development in human history. If handled well, it could accelerate scientific discovery, improve health outcomes, and create unprecedented prosperity. If handled poorly, it could lead to catastrophic consequences: many experts think that risks from AI-related misuse, loss of control, or drastic societal change could endanger human civilization.
To reduce the risk of global catastrophe and help society prepare for major advances in AI, we support:
- Technical AI safety research aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned
- AI governance and policy work to develop frameworks for safe, secure, and responsibly managed AI development
- Capacity building to grow and strengthen the field of researchers and practitioners working on these challenges
- New projects that we expect to be particularly impactful if timelines to transformative AI are short
Funding Opportunities
Research & Updates
- Our AI safety strategy has evolved significantly since 2015, moving from early field-building to a comprehensive three-pillar approach: improving visibility into AI capabilities through evaluations and forecasting, developing technical and policy safeguards against catastrophic risks, and building the talent pipeline and institutional capacity the field urgently needs.
- Many experts predict that leading AI systems will become smarter than humans within the next decade, yet efforts to mitigate the risks remain profoundly underfunded.
- This piece compiles some of the most notable writings on AI by current and former staff over the last decade. They are not intended to represent our “institutional view” on AI. No such view exists: our program staff, as well as our grantees, often disagree with each other.