Other benefits
Some grants have more indirect impacts on health or income. For example, efforts to increase housing supply or accelerate scientific progress can yield large downstream benefits.
When we fund such work, we estimate how the resulting changes — more housing, faster innovation — translate into health or income gains, so we can compare them with more direct interventions.
Our “bar” for grantmaking
We calculate the social return on investment (SROI) for each grant by dividing its philanthropic value in $CG by its cost in USD. Our current minimum “bar” for SROI is around 2,100x — meaning that for every dollar we spend, we aim to generate at least $CG 2,100 in value. Roughly speaking, that’s equivalent to giving someone at least a year of healthy life for every $50 we spend.
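In code, the arithmetic behind the bar is just division. Here's a minimal Python sketch with illustrative numbers; the roughly $CG 105,000 value of a year of healthy life is implied by the 2,100x bar and the $50 figure above, not stated directly:

```python
# Minimal sketch of the SROI arithmetic described above.
# All figures are illustrative; "$CG" is our unit of philanthropic value.

SROI_BAR = 2_100  # minimum $CG of value generated per USD spent

def sroi(value_cg: float, cost_usd: float) -> float:
    """Social return on investment: philanthropic value ($CG) per dollar."""
    return value_cg / cost_usd

# A hypothetical grant creating $CG 1.05 million of value for $500:
print(sroi(1_050_000, 500))              # 2100.0, exactly at the bar
print(sroi(1_050_000, 500) >= SROI_BAR)  # True: fundable

# The healthy-life equivalence: at a 2,100x bar, $50 must buy about
# $CG 105,000 of value, i.e. roughly one year of healthy life.
print(50 * SROI_BAR)  # 105000
```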
We usually won’t fund an opportunity unless the expected impact meets or exceeds that bar. That could mean a near-certain 3,000x opportunity, or a 10% chance of a 30,000x return. The latter is an example of our “hits-based giving” approach, where a few major successes can provide a large share of our total impact.
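Both examples clear the bar equally well because both are worth about 3,000x in expectation. A sketch of that expected-value comparison, modeling each opportunity as a list of (probability, return) pairs:

```python
# Expected SROI: sum over outcomes of (probability x return-if-realized).
def expected_sroi(outcomes):
    return sum(p * r for p, r in outcomes)

near_certain = [(1.0, 3_000)]              # a near-sure 3,000x bet
hits_based   = [(0.1, 30_000), (0.9, 0)]   # a long shot with a big payoff

print(expected_sroi(near_certain))  # 3000.0
print(expected_sroi(hits_based))    # 3000.0, equivalent in expectation
```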
We periodically revisit our bar. It may rise as we find additional highly cost-effective opportunities, or fall if we have more funding to allocate. That’s because we prioritize funding the strongest opportunities first. New opportunities “crowd out” opportunities just above the old bar, while new funding lets us support opportunities just below it. (These “below the bar” opportunities would still be much more cost-effective than most we’ve evaluated — they might provide a year of healthy life for $55 or $60 instead of $50.)
Using a single bar helps us better advise donors on how to distribute money across our funds: if one area is struggling to find above-the-bar grants, while another has too many strong opportunities to fund them all, we can shift resources until the expected “last dollar” impact is roughly equal across causes. Once we’re at a point where we can’t increase our expected impact by moving a dollar between causes, we’ve optimized our “portfolio” — every dollar is doing as much good, in expectation, as it can. Because the world changes and our estimates are uncertain, we treat optimization as an ongoing process, rather than an endpoint.
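Here's a toy sketch of that equalization logic. It assumes each cause's last-dollar return declines smoothly as the cause absorbs funding; the return curves and cause names are invented for illustration:

```python
# Greedy allocation: each chunk of funding goes to whichever cause
# currently offers the highest marginal return, so marginal returns
# end up roughly equal across causes (the "optimized portfolio").

def marginal_return(cause: str, allocated: float) -> float:
    # Hypothetical declining-returns curves, purely illustrative.
    base = {"global health": 9_000, "biosecurity": 6_000}
    return base[cause] / (1 + allocated / 1_000_000)

def allocate(total: float, causes, step: float = 100_000):
    allocated = {c: 0.0 for c in causes}
    spent = 0.0
    while spent < total:
        best = max(causes, key=lambda c: marginal_return(c, allocated[c]))
        allocated[best] += step
        spent += step
    return allocated

alloc = allocate(5_000_000, ["global health", "biosecurity"])
print(alloc)
print({c: round(marginal_return(c, a)) for c, a in alloc.items()})
# Marginal ("last dollar") returns converge as dollars flow to the
# strongest remaining opportunities first.
```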
How this works in practice
For very small or hard-to-model grants, we may skip a full cost-effectiveness analysis and focus on other factors instead. But whenever possible, we test each opportunity against our bar. Here’s a simplified example of how that works:
- We learn about a team developing a cheap, portable scanner to detect lead in paint.
- To make a sizable impact, the scanner must:
  - Have its accuracy validated by regulatory bodies.
  - Be used by national regulators to detect lead in countries where lead paint represents a serious health risk.
  - Enable regulators to reduce lead content in the paint market.
- We estimate the probability of each step under several scenarios — including a “best guess” — and use the estimates to calculate a range of plausible outcomes. By multiplying our best-guess estimates, we find that we’d expect the project to reduce the total global lead burden by 0.16% (the arithmetic is sketched after this list).
- That implies an expected value of roughly $CG 2.3 billion, or about 23,000 years of healthy life saved, for a cost of $350,000. The SROI would be around 6,500x, well above our bar.
- While this estimate is uncertain, it’s strong enough for us to make the grant.
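Here's that back-of-the-envelope calculation as straight arithmetic. The per-step probabilities are hypothetical placeholders, since the example above gives only the final 0.16% figure, and the roughly $CG 100,000 per healthy year is implied by the $CG 2.3 billion / 23,000-years pairing:

```python
# The lead-scanner BOTEC from the example above, as straight arithmetic.
# Step probabilities are made-up placeholders chosen to illustrate the
# multiply-through structure; only the final figures appear in the text.

p_validated   = 0.8   # accuracy validated by regulators (assumed)
p_adopted     = 0.5   # used by national regulators (assumed)
p_reduction   = 0.4   # regulators actually reduce lead in paint (assumed)
burden_if_all = 0.01  # share of global lead burden averted if every step succeeds (assumed)

expected_burden_reduction = p_validated * p_adopted * p_reduction * burden_if_all
print(f"{expected_burden_reduction:.2%}")  # 0.16%, matching the best guess

# Figures stated in the example:
value_cg = 2.3e9              # expected philanthropic value, $CG
cost_usd = 350_000            # grant cost
cg_per_healthy_year = 100_000 # implied by "$CG 2.3B ~ 23,000 healthy years"

print(f"SROI: {value_cg / cost_usd:,.0f}x")                      # ~6,571x
print(f"Healthy years: {value_cg / cg_per_healthy_year:,.0f}")   # 23,000
```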
For more details on this analysis, see this section of a blog post where we shared sample cost-effectiveness estimates (also known as “back-of-the-envelope calculations”, or BOTECs). In another post, we go into more detail on our process for creating BOTECs.
Limitations and uncertainties
Even with these frameworks, it can be very difficult to compare grant opportunities. Our approach has several limitations:
- Incomplete data: We often have to rely on studies with small sample sizes, or on uncertain assumptions about how problems will evolve over time. This makes for imprecise estimates of a problem’s current or future importance, especially in neglected areas with little existing data.
- Blind spots: Even with good data, it’s easy to miss important harms or benefits, or to overlook factors that seem obvious in hindsight. These blind spots reflect both gaps in the available evidence and the limits of our own perspective.
- Philosophical challenges: Measuring our impact involves economic and philosophical questions that have no single “correct” answer, such as how to weigh future benefits against present-day ones.
- Risk of overreliance on quantification: Models can overvalue what’s easily measurable and undervalue harder-to-quantify benefits like basic research or policy change. We try to address this by weighing qualitative factors — such as grantee leadership or advice from experts in the field — alongside numbers. Our quantitative frameworks are only tools to aid in decision-making, not the full basis for making any particular grant.
Despite these challenges, we believe that setting clear goals and defining acceptable tradeoffs forces us to confront these questions head-on and makes our reasoning clearer and more consistent. It helps us learn from our results and continuously improve our ability to direct resources to where they can have the most impact.