AI Program Update: Navigating Transformative AI
Editor’s note: This article was published under our former name, Open Philanthropy.
Today we’re announcing that our Potential Risks from Advanced Artificial Intelligence program is now called Navigating Transformative AI.
We’re making this change to better reflect the full scope of our AI program and to address some common misconceptions about our work. While the vast majority of work we fund is still aimed at catastrophic risk mitigation, the new name better captures the full breadth of what we aim to support: work that helps humanity successfully navigate the transition to transformative AI.
Why we’re making this change
Because Open Philanthropy’s AI grantmaking is mainly focused on reducing the risks of the technology, some have mistakenly understood us to be broadly opposed to AI development or technological progress generally. We want to correct that misconception.
As explained in two recent posts, we are strongly pro-technology and believe that advances in frontier science and technology have historically been key drivers of massive improvements in human well-being. As a result, we are major funders of scientific research and “Abundance” policies, and Open Philanthropy as an organization strives to support work on both progress and safety.
We think AI has the potential to be the most important technological development in human history. If handled well, it could generate enormous benefits: accelerating scientific discovery, improving health outcomes, and creating unprecedented prosperity. But if handled poorly, it could lead to unprecedented catastrophe: many experts think the risks from AI-related misuse, accidents, loss of control, or drastic societal change could endanger human civilization.
Today, we continue to believe that most of the highest-impact philanthropic work for navigating the AI transition successfully is aimed at mitigating the worst-case AI catastrophes, because (a) catastrophic AI risk mitigation is a public good that market incentives neglect, (b) competitive dynamics create local incentives to underinvest in AI safety, and (c) governments and civil society have been slow to respond to rapid AI progress.
However, we also think that some of the most promising philanthropic opportunities in AI may focus on realizing high-upside possibilities from transformative AI, such as those articulated here. (This work could be framed as “mitigating the risk of humanity missing the highest-upside outcomes,” but we think a clearer program name better serves the audiences for our work.) This is a nascent area of research where we’re still developing our thinking; more work is needed. More broadly, we want to remain open-minded about how we can help others as much as possible in the context of the AI transition, and “Navigating Transformative AI” captures that better than the previous name. (Additional, less central reasons for the name change include (a) following the principle of “saying what you’re for, not what you’re against,” and (b) preferring a more succinct name.)
What stays the same
Our core priorities remain unchanged: we are not exiting any grantmaking areas and we are not changing how we prioritize and evaluate grantmaking opportunities. If anything, we anticipate wanting to broaden the scope of our AI grantmaking over time.
In the meantime, we’ll continue to support:
- Technical AI safety research aimed at making advanced AI systems more trustworthy, robust, controllable, and aligned.
- AI governance and policy work to develop frameworks for safe, secure, and responsibly managed AI development.
- Capacity building to grow and strengthen the field of researchers and practitioners working on these challenges.
We will also maintain the same high bar for grantmaking and the same focus on supporting work that could meaningfully improve humanity’s trajectory through this pivotal transition. Our existing RFPs remain open and unchanged.
Looking ahead
The name Navigating Transformative AI reflects both the magnitude of the challenges ahead and our commitment to helping humanity chart a successful course, however we can best do that. We believe the coming years and decades will be among the most consequential in human history. Through this work, we aim to reduce the risk of disaster and help society navigate this transition wisely.
Our program page has more information about our work and grantmaking priorities in this area.