Potential Risks from Advanced Artificial Intelligence
Advances in AI could lead to extremely positive developments, but could also pose severe risks from intentional misuse or catastrophic accidents.
About the Fund
The following Open Philanthropy staff oversee the Potential Risks from Advanced Artificial Intelligence program.
- Luke Muehlhauser, Managing Director, AI Governance & Policy
- Peter Favaloro, Program Director, Technical AI Safety
- Eli Rose, Senior Program Officer, Global Catastrophic Risks Capacity Building
We aim to support research and strategic work that could reduce risks and improve preparedness.
In recent years, we’ve seen rapid progress in artificial intelligence. And within a decade or two, we think it’s plausible that AI systems will arrive that can outperform humans in nearly all intellectual domains (as do many of the world’s foremost AI experts).
These systems could have enormous benefits, from accelerating scientific progress to vastly increasing global GDP. However, they could also pose severe risks from misuse, accidents, or drastic societal change — with potentially catastrophic effects. We’re interested in supporting technical, strategic, and policy work that could reduce the risk of accidents or help society prepare for major advances in AI. We’re also interested in supporting work that increases the number of people working in these areas or helps those who already do to achieve their goals.
Funding opportunities and requests for proposals
RFP on technical AI safety: Supports work across 21 research areas to help make AI systems more trustworthy, rule-following, and aligned, even as they become more capable. We’re open to proposals for grants of many sizes and purposes, from rapid funding for API credits to seed funding for new research organizations.
The deadline has passed for expression of interest (EOI) submissions to this RFP, but the submission form will stay open until July 15 to accommodate applicants who received a “revise and resubmit” response to their EOIs. During this three-month grace period, we will also accept late submissions from other applicants, but will have a high bar for considering them.
RFP on AI governance: Supports work in six areas: technical AI governance, policy development, frontier company policy, international AI governance, law, and strategic analysis and threat modeling. We’re evaluating expressions of interest on a rolling basis.
Capacity building: We have several open RFPs aimed at increasing the number of people working on risks from advanced AI and helping those who already do to achieve their goals:
Research & Updates
See All Research & Updates in this area
- It seems to me that AI and machine learning research is currently on a very short list of the most dynamic, unpredictable, and potentially world-changing areas of science...
- In the wake of surprisingly rapid progress in large language models (LLMs) like GPT-4, some experts have predicted that AI systems will be able to outperform...