Cost-Effectiveness
Measuring cost-effectiveness helps us compare very different opportunities and direct funding to where it can do the most good.
Our goal is to help others as much as we can with the resources available to us. To achieve this, we focus on maximizing expected value. That often means we’re willing to fund work with a high risk of failure if the potential impact is large enough.
To illustrate what we mean by expected value, consider two hypothetical opportunities: one grant that is certain to prevent 500 deaths, and another with a 10% chance of preventing 5,000 deaths. We would see these opportunities as equally promising.
Both have an expected value of preventing 500 deaths, even though the second carries a far higher risk. In practice, real decisions are rarely this simple — we can’t calculate expected value precisely, and many of the opportunities we consider lack the kind of data needed for confident forecasting. That uncertainty doesn’t always mean the true value is lower than our estimate. But because we suspect our models and intuitions are more often optimistic than pessimistic, and because failed high-risk bets can have real costs, that uncertainty can be a practical reason to favor lower-risk opportunities, even when the expected value of a riskier option looks high in theory.
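For readers who want the arithmetic spelled out, here is a minimal sketch of that calculation, using the hypothetical numbers from the example above (the function name is ours, chosen for illustration):

```python
# Expected value = probability of an outcome multiplied by its value.
def expected_deaths_prevented(p_success: float, deaths_prevented: float) -> float:
    """Expected number of deaths a grant prevents."""
    return p_success * deaths_prevented

safe_bet = expected_deaths_prevented(1.0, 500)      # certain to prevent 500 deaths
risky_bet = expected_deaths_prevented(0.10, 5_000)  # 10% chance of preventing 5,000

print(safe_bet, risky_bet)  # 500.0 500.0: equal expected value, very different risk
```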
Our focus on expected value means we don’t have an inherent preference for low-risk or high-risk philanthropy. We fund work across the risk spectrum — from organizations with proven track records to individual researchers working on speculative ideas that could address some of humanity’s biggest risks. We generally expect our grants to have a range of outcomes: some will have no impact, many will perform about as well as we expected, and a few successes will account for a disproportionate share of the good we do.
We call this approach “hits-based giving.” Several factors make us think it can outperform more risk-averse strategies.
History demonstrates its effectiveness.
We’ve read and commissioned many philanthropic case studies to identify patterns behind highly impactful giving. Through our research, we found that philanthropic “hits” — cases where relatively modest funding catalyzed dramatic change — often came from unconventional bets.
This pattern appears across decades and causes:
A portfolio containing just one of these successes, plus many failures, would still likely have a high overall impact per dollar.
It plays to philanthropists’ comparative advantage.
Philanthropists have less funding than governments or for-profit investors, but face fewer constraints. While governments need to justify decisions to voters and businesses must satisfy shareholders, philanthropic funding can support unproven ideas, back work that may take decades to show results, and pursue ideas that wouldn’t appeal to a wide audience. This freedom to take risks represents one of philanthropy’s key strategic advantages.
There’s a useful analogy to venture capital.
Many forms of for-profit investing are “hits businesses” where most value comes from a few enormous successes. Philanthropy resembles for-profit investing in key ways — both involve freely allocating limited funds across projects with uncertain outcomes. Because philanthropists can pursue long-term goals without needing financial returns, hits-based approaches arguably suit philanthropy even better.
Higher-risk opportunities are structurally neglected.
If an idea is backed by strong evidence, expert consensus, and obvious appeal, it’s probably already well-funded. The most promising opportunities are often neglected because they seem unconventional or uncertain. Being willing to fund “weird” ideas, or untested ones that attract skepticism, is part of how we add value.
It’s reasonable to ask: if you’re ready to make recommendations that aren’t grounded in evidence, expert consensus, or conventional wisdom, is there any principled way to distinguish between good and bad giving?
We can’t say with certainty how best to find “hits,” which by their nature are rare and hard to predict. But we can outline some principles we use to try to increase the odds.
We assess importance, neglectedness, and tractability. This is the same framework that guides our cause selection, and each factor bears on how much good a grant can do. Does the opportunity address an enormously important problem? Does it target an approach neglected by other funders? Does it have a clear, tractable path to improving lives?
We don’t require conventional validation, but we do demand a clear strategy and goals. Many activities we fund — like policy advocacy or scientific research — are inherently hard to test in predictively valid ways. Requiring strong evidence would rule out most novel work. However, we try to only fund grantees with clear goals and legible outputs. If we aren’t sure what “success” would look like, we don’t fund the project.
We look for individuals and teams with strong track records. One of our most consistent findings is that past performance predicts future results. Even if a grantee’s specific approach is unproven, we look for people with excellent records of execution or insight.
We aim for a deep understanding of the issues we fund. Our funds are led by trusted, experienced specialists, supported by thoughtful generalists who learn quickly and invest significant time developing context under their managers’ guidance. Together, they build high-trust relationships with the best-informed people in their fields and try to deeply understand the funding landscape. This helps us avoid joining overcrowded areas and spot good ideas before they’re widely recognized as good, and thus achieve “hits.”
We don’t require consensus. When decisions require compromise between many perspectives, they tend to be defensible and reasonable — but they are less likely to have extraordinarily high impact. Consensus tends to filter out distinctive, unusual ideas; if every startup were launched by a committee of seasoned professionals, we’d have many solid businesses, but few breakthroughs.
Formally, grant recommendations require approval from senior leadership. In practice, we often defer to the staff who know a cause best; strategy, priorities, and grants for a cause are largely shaped by the people closest to the work, whose judgment we’ve learned to trust.
Building that trust means asking hard questions, expecting clear reasoning, and working through key disagreements. Over time, we aim to increase autonomy: experienced staff can approve some discretionary grants on short timelines when the upside is high. (We took this grant from proposal to approval in three days.)
We consider the best and worst plausible cases. Nearly all ambitious grants involve some chance of a negative outcome. A poorly executed advocacy campaign might set back a policy goal, while a successful one could reshape an entire field.
Ideally, we’d assign probabilities to every possible outcome and focus on overall expected value. In practice, we approximate this process, often by considering how much impact a project would have if it fully achieved its long-term goals (best case) and how much damage it could do if badly misguided (worst case).
We use these scenarios to gauge both risk and importance — aiming for projects where even partial success could be highly valuable, and where the worst case is acceptable.
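A rough sketch of how such a scenario-based approximation can work, with invented scenarios, probabilities, and values rather than figures from any real grant:

```python
# A toy version of scenario-weighted expected value: weight a handful of
# plausible outcomes by rough probabilities instead of modeling every case.
# All scenarios and numbers below are invented for illustration.
scenarios = [
    # (label, probability, value if this outcome occurs, arbitrary units)
    ("best case: long-term goals fully achieved", 0.05, 1000),
    ("partial success",                           0.25,  200),
    ("no impact",                                 0.65,    0),
    ("worst case: sets the field back",           0.05, -300),
]

expected_value = sum(p * v for _, p, v in scenarios)
worst = min(v for _, _, v in scenarios)

print(f"expected value: {expected_value}")  # 85.0 in these made-up units
print(f"worst plausible case: {worst}")     # is this downside acceptable?
```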
We act decisively but not overconfidently. Hits-based giving rewards boldness, but when you move fast without seeking consensus, it’s easy to put too much stock in your own opinions. To stay calibrated, grant investigators communicate their uncertainties, forecast the odds of success, and track those forecasts over time. That feedback helps us steadily improve our judgment without slowing us down.
High uncertainty can make it hard to reach a decision. We deal with that using our bar for impact — the minimum expected return that justifies a grant — which gives us a clear signal for when a grant is worth making.
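In spirit, the bar turns an open-ended estimate into a yes/no decision. A hypothetical sketch of that rule (the threshold and grant figures below are invented for illustration, not our actual numbers):

```python
# The "bar" as a decision rule: fund when estimated expected value per
# dollar clears a minimum threshold. All numbers here are hypothetical.
BAR_PER_DOLLAR = 2.0  # illustrative: required units of good per dollar

def clears_bar(expected_value: float, cost: float) -> bool:
    """Return True if a grant's expected value per dollar meets the bar."""
    return expected_value / cost >= BAR_PER_DOLLAR

# Even a risky grant can clear the bar on expected value alone:
print(clears_bar(expected_value=5_000_000, cost=2_000_000))  # True  (2.5 per dollar)
print(clears_bar(expected_value=1_500_000, cost=1_000_000))  # False (1.5 per dollar)
```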
In our experience, hits-based giving requires both selecting the right opportunities and helping our most promising grantees succeed. This can mean making multi-year grants so organizations can hire and scale with confidence, connecting grantees with professional support like communications specialists, and sometimes encouraging grantees to be more ambitious by offering more funding than they requested.
Below are a few examples of grants that illustrate what we mean by non-consensus bets — cases where we funded work that looked risky or unorthodox at the time, but where the potential upside justified the gamble. These projects attracted limited early support from other funders, yet went on to achieve outsized impact.
Of course, a hits-based approach also means accepting that some bets won’t pay off. Many of our grants haven’t yet delivered the progress we hoped for, including some we expected to be transformative. Our funding for gene-drive technologies to combat malaria has advanced the science, but hasn’t yet saved lives. After COVID-19, our biosecurity team supported policy advocacy for stronger pandemic preparedness; despite early momentum, most proposed funding was never authorized. And in science grantmaking, many projects complete their research without materially advancing the field, while others stall or are never completed. We view these as part of the cost of pursuing large-upside opportunities.
This page draws on material from an article published in 2016.
Philanthropists tend to choose causes based on personal experience, geographic proximity, or emotional appeal. This approach is understandable — personal connection is what motivates many people to give in the first place. But some causes offer far greater opportunities for impact than others: the same donation might save ten lives in one place but a thousand in another.
When choosing which causes to support, we face difficult moral and empirical questions without clear “right” answers. To address this uncertainty, we divide resources across several broad approaches to doing good, each grounded in a different worldview.