Open Philanthropy is now Coefficient Giving. You can read more about this decision and our growing work with multiple philanthropic partners on our new website here.
About Coefficient Giving
Coefficient Giving is a philanthropic funder and advisor. Since 2014, we’ve directed over $4 billion in grants as part of our mission to help others as much as we can with the resources available to us. We work with a range of donors who share our commitment to cost-effective, high-impact giving. Our current funds include Science and Global Health R&D, Navigating Transformative Artificial Intelligence, Abundance & Growth, Farm Animal Welfare, Biosecurity & Pandemic Preparedness, and more. In 2024, we recommended $650 million to high-impact causes.
About the AI Governance and Policy team
The AI Governance and Policy (AIGP) team funds work aimed at reducing catastrophic risks from advanced AI, and is housed under our broader work on navigating transformative AI, the largest fund at Coefficient Giving. The Navigating Transformative AI fund aims to distribute hundreds of millions of dollars in grants annually over the coming years across a wide range of priorities, some of which you can read about here.
This role falls under a new workstream within AIGP, focused on supporting Safety Framework Development and Implementation (SFDI): work that encourages, improves, and provides services for safety and governance work at frontier AI companies. This includes:
- Supporting the ecosystem of organizations working with AI companies on safety and governance (e.g. METR, Apollo Research).
- Funding the development and maintenance of shared infrastructure and other public goods (e.g. Inspect, PySyft).
- Understanding the goals and constraints of the safety and governance teams at frontier AI companies.
- Identifying and communicating key safety priorities for AI companies.
- Preparing for high-stakes events and opportunities as capabilities improve.
This workstream is a top strategic priority for AIGP. It spans several high-priority goals, influences many other aspects of our work, and offers promising opportunities for impact even under short AI timelines. This area has historically been under-resourced, and we see this role as a way to substantially increase the impact and pace of our AI safety work, influencing our internal strategy and the field more broadly.
About the role
This role will move between high-level strategy development and execution of top priorities. Success will require significant entrepreneurialism, social judgment, and pragmatism. Rather than evaluating inbound proposals, you should expect to spend the majority of your time actively scoping new projects and organizations and headhunting the right people to run them, while also building relationships with grantees to understand their work and evolving needs.
Depending on candidates’ skills and experience, this role could focus more on strategy development or on project and organization creation. We expect strong candidates to be good at both and exceptional at at least one.
For this role, we’re hiring at either the Program Officer (PO) or Senior Program Officer (SPO) level. POs typically bring established expertise in AI safety and governance (at least 3 years of relevant experience), and they excel in the traits below. They would own significant components of the SFDI workstream, have significant autonomy over what to prioritize and how to make it happen, and take the lead on specific high-priority projects.
SPOs are typically recognized thought leaders in AI safety or governance work, and generally bring at least five years of highly relevant experience. They would lead the entire SFDI workstream or own a substantial portion of it, and represent Coefficient Giving’s SFDI work externally to senior stakeholders and others we engage with. Both roles will have the opportunity to substantially influence AIGP’s overall strategy, and may involve managing other team members, depending on fit.
The role will be shaped to the hire’s strengths. There is a single application for both levels; we’ll let you know which role(s) we’re considering you for when we invite you to interview.
Who might be a good fit
You might be a good fit for this work if you have:
- Deep understanding of AI company safety and governance work
- You have well-thought-out views on the safety and governance work happening at frontier AI companies, and ideally have expertise in a subset of this work.
- You understand how frontier AI safety frameworks and related governance processes work in practice. You’re also familiar with the broader ecosystem of safety and governance work at AI companies, including evaluations, audits, security practices, and the external organizations providing these services.
- Familiarity with tech company dynamics
- You know how to navigate complex organizational environments, and understand (or can quickly learn) the constraints, incentives, and decision-making processes at frontier AI companies.
- Credibility and relationship-building skills
- You can credibly engage with senior stakeholders at frontier AI companies, founders of external organizations, and key researchers. You can hold your own in fast-paced, complex conversations, quickly build high-trust relationships, and excel at productive disagreement.
- This credibility may come from existing relationships, relevant experience, or outstanding strategic thinking.
- Technical fluency
- You’re at home in technical conversations with AI safety researchers. You can critically evaluate different proposals, understand the current state of different research areas, and form your own views on what’s technically feasible and impactful.
- Good judgment
- You can understand, evaluate, and synthesize competing perspectives to form all-things-considered views. You prioritize effectively between different opportunities, considering tractability and real-world constraints — not just importance.
- You have good instincts about:
- When to move fast and when to slow down for due diligence.
- When to obsess over one project vs. spread your bets over several.
- High ownership and entrepreneurialism
- You excel at owning ambitious, poorly scoped projects and know how to make things happen — you find the right people, convince them to start new work, and support them to succeed.
- You care intrinsically about getting the work done, not about doing it yourself. You’re excited to sprint on impactful, challenging projects, but equally happy to delegate or collaborate when appropriate. Colleagues trust both your ability to execute and your judgment about when to bring others in.
- Clear communication and reasoning transparency
- You avoid buzzwords and abstractions, and give concise, clear arguments with transparent reasoning (you’ll need to produce internal strategy documents and grant writeups).
We also expect all staff to model our operating values of ownership, openness, calibration, and inclusiveness.
While not required for this role, the following would strengthen an application:
- Direct experience working on AI safety or governance at an AI company or an AI safety institute (AISI), especially in close proximity to its frontier safety framework.
- Deep technical expertise in AI development, evaluation, and safety, beyond the “technical fluency” described above, e.g. a Ph.D. in a relevant field or several years of ML engineering experience.
Note that we strongly prefer our hire to be based in the San Francisco Bay Area. We may consider genuinely exceptional candidates based elsewhere who are willing to travel frequently (i.e., monthly) to the Bay Area. We are happy to support candidates with the costs of relocation, and to consider sponsoring U.S. work authorization if needed. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
The ideal candidate for this position will possess many of the skills and experiences described above. However, there is no such thing as a “perfect” candidate. If you are on the fence about applying because you are unsure whether you are qualified, we would strongly encourage you to apply!
Application process
We’re considering applicants on a rolling basis, but we prioritize applications received by Monday, December 15th at 8 a.m. Pacific Time. We may stop accepting new applications without notice, so please apply as soon as you can.
The initial application includes short questions on our application form. Please note that we consider each of these answers closely when evaluating applicants. Shortlisted candidates will be invited to a written work test, which will then be followed by initial interviews, final interviews, and reference checks.
We expect to make an offer by late January. If you need to hear back from us sooner (e.g. if you’re part of another hiring process with similar timelines), please let us know.
Role details & benefits
- Compensation:
- The starting compensation for a Senior Program Officer is $229,389.84 – $286,737.30, of which 15% is paid as an unconditional 401k grant, up to $23,000.
- The starting compensation for a Program Officer is $204,059.77 – $255,074.71, of which 15% is paid as an unconditional 401k grant, up to $23,000.
- These compensation figures assume a location in the San Francisco Bay Area; there would be geographic adjustments downwards for candidates based in other locations.
- Ranges within each role reflect differences in technical background. Candidates with strong technical backgrounds in AI research can expect to be paid at the top of these ranges, though we make decisions on a case-by-case basis.
- For exceptional candidates, compensation could be materially higher than the values listed above. If you’re interested in the role but are concerned about compensation, we encourage you to apply anyway and discuss this with our recruiting team.
- Benefits: Our benefits package includes:
- Excellent health insurance (we cover 100% of premiums within the U.S. for you and any eligible dependents) and an employer-funded Health Reimbursement Arrangement for certain other personal health expenses.
- Dental, vision, and life insurance for you and your family.
- A recommended four weeks of PTO per year, in addition to national holidays.
- Four months of fully paid family leave.
- A generous and flexible expense policy — we encourage staff to expense the ergonomic equipment, software, and other services that they need to stay healthy and productive.
- A continual learning policy that encourages staff to spend time on professional development with related expenses covered.
- Support for remote work — we’ll cover a remote workspace outside your home if you need one, or connect you with a Coefficient coworking hub in your city. We currently have offices in San Francisco and Washington D.C., and multiple staff working from several other cities in the U.S. and elsewhere.
- We can’t always provide every benefit we offer U.S. staff to international hires, but we’re working on it (and will usually provide cash equivalents of any benefits we can’t offer in your country).
- Time zones and location:
- This is a full-time, permanent position with flexible work hours and location. We strongly prefer hires to be based in the San Francisco Bay Area, and will support candidates with the costs of relocation to the Bay. However, we will consider exceptional candidates based elsewhere who are willing to travel to the Bay Area regularly (i.e. monthly).
- We are happy to consider sponsoring U.S. work authorization. However, we don’t control who is and isn’t eligible for a visa and can’t guarantee visa approval.
- Start date: We would ideally like a candidate to begin as soon as possible after receiving an offer, but we are willing to wait if the strongest candidates can only start later.
We aim to employ people with many different experiences, perspectives, and backgrounds who share our passion for accomplishing as much good as we can. We are committed to creating an environment where all employees have the opportunity to succeed, and we do not discriminate based on race, religion, color, national origin, gender, sexual orientation, or any other legally protected status.
If you need assistance or an accommodation due to a disability, or have any other questions about applying, please contact jobs@openphilanthropy.org (this address remains active as we transition to our new email alias).
U.S.-based Program staff are typically employed by Coefficient Giving LLC, which is not a 501(c)(3) tax-exempt organization. As such, this role is unlikely to be eligible for public service loan forgiveness programs.
We may use AI to assist in the initial screening of applications, but every application is carefully reviewed by a human before any decisions are made.
You can opt out of AI being used on your application by emailing jobs@openphilanthropy.org to let a member of the team know. Opting out will not impact your application.