In this post, “we” refers to Good Ventures and Open Philanthropy, who work as partners.
This post compares our progress with the goals we set forth a year ago, and lays out our plans for the coming year.
In brief:
- We recommended over $200 million worth of grants in 2020. The bulk of this came from recommendations to support GiveWell’s top charities and from our major current focus areas: potential risks from advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research, and effective altruism. [More]
- We completed and published a number of reports on the likelihood of transformative AI being developed within the next couple of decades and other topics relevant to our future funding priorities. We are now working on both publishing additional reports in this area and updating our internal views on certain key values that inform our “near-termist” giving. [More]
- We’re interested in determining how quickly we should increase our giving. To help answer this question, we developed a model to optimize our spending levels across time within “near-termist” causes, which we hope to share this year. [More]
- We have also begun the process of investigating potential new areas for giving. This year, we hope to launch searches for program officers in multiple new focus areas. [More]
Progress in 2020
Last year’s post laid out plans for 2020. This section quotes from that post to allow comparisons between our plans and our progress.
Continued grantmaking
Last year, we wrote:
We expect to continue grantmaking in potential risks of advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research and effective altruism, as well as recommending support of GiveWell’s top charities. We expect that the total across these areas will be over $200 million.
We recommended over $200 million across these areas and in support of GiveWell’s top charities. Some highlights:
- In potential risks from advanced AI, we continued our support for the Center for Human-Compatible Artificial Intelligence and the Center for Security and Emerging Technology and recommended a number of grants to support research on adversarial robustness as a means to improve AI safety. We also welcomed the third class of Open Philanthropy AI Fellows.
- In biosecurity and pandemic preparedness, major grants included the Nuclear Threat Initiative, the Bipartisan Commission on Biodefense, Gryphon Scientific to fill critical gaps in biosafety, and the Johns Hopkins Center for Health Security.
- In criminal justice reform, major grants included Dignity and Power Now and Fair and Just Prosecution.
- In farm animal welfare, major grants included The Humane League and Compassion in World Farming.
- In scientific research, major grants and investments included Icosavax, VasoRX, the Telethon Kids Institute to develop a Strep A vaccine, and the University of Glasgow to support research on malaria prevention.
- In effective altruism, major grants included ongoing support for 80,000 Hours, the Global Priorities Institute, and the Centre for Effective Altruism.
- We also made a number of grants in response to the COVID-19 pandemic, including support for the Nuclear Threat Initiative, Stanford University to test an antiviral drug candidate against COVID-19 and other viruses, 1Day Sooner, and the Center for Global Development.
We also wrote:
By default, we plan to continue with our relatively low level of focus and resource deployment in other areas (e.g., macroeconomic stabilization policy).
Our giving in causes beyond those listed above remained at comparatively low levels. Grants in these areas included the Center for Global Development (Immigration Policy), Employ America (Macroeconomic Stabilization Policy), Mercy Corps (Immigration Policy), the International Refugee Assistance Project (Immigration Policy), and YIMBY Law (Land Use Reform).
Worldview investigations
Last year, we wrote:
This work has been significantly more challenging than expected (and we expected it to be challenging). Most of the key questions we’ve chosen to investigate and write about are wide-ranging questions that draw on a number of different fields, while not matching the focus and methodology of any one field. They therefore require the relevant Research Analyst to try to get up to speed on multiple substantial literatures, while realizing they will never be truly expert in any of them; to spend a lot of time getting feedback from experts in relevant domains; and to make constant difficult judgment calls about which sub-questions to investigate thoroughly vs. relatively superficially. These basic dynamics are visible in our report on moral patienthood, the closest thing we have to a completed, public worldview investigation writeup.
We initially started investigating a number of questions relevant to potential risks from advanced AI, but as we revised our expectations for how long each investigation might take, we came to focus the team exclusively on the question of whether there’s a strong case for reasonable likelihood of transformative AI being developed within the next couple of decades.
We now have three in-process writeups covering different aspects of this topic; all are in relatively late stages and could be finished (though still not necessarily public-ready) within months. We have made relatively modest progress on being able to scale the team and process; our assignments are better-scoped than they were a year ago, and we’ve added one new hire (Tom Davidson) focused on this work, but we still consider this a very hard area to hire for.
We have now completed a number of reports on the likelihood of transformative AI being developed within the next couple of decades, three of which are publicly available: Joseph Carlsmith’s report estimating how much computational power it takes to match the human brain, Ajeya Cotra’s draft report on AI timelines, and Tom Davidson’s report forecasting the development of artificial general intelligence using a “semi-informative priors” framework. We also published David Roodman’s report modeling historic and future economic growth, which may help inform future funding priorities.
Our worldview investigations team is now working on:
- More thoroughly assessing and writing up what sorts of risks transformative AI might pose and what that means for today’s priorities.
- Updating our internal views of certain key values, such as the estimated economic value of a disability-adjusted life year (DALY) and the possible spillover benefits from economic growth, that inform what we have thus far referred to as our “near-termist” cause prioritization work.
Other cause prioritization work
Last year, we wrote:
We now have a team working on investigating our odds of finding significant amounts of giving opportunities in the “near-termist” bucket that are stronger than GiveWell’s top charities, which in turn will help determine what new causes we want to enter and what our annual rate of giving should be on the “near-termist” side. By this time next year, we hope to have a working model (though subject to heavy revision) of how much we intend to give each year in this category to GiveWell’s top charities and other “near-termist” causes.
Peter Favaloro and Saarthak Gupta developed a Monte Carlo model to optimize our spending levels across time within “near-termist” causes, which informed our decision to allocate $100 million to GiveWell top charities. The model estimates what level of spending (vs. saving) can do the most good, based on estimates of parameters including: how quickly opportunities get worse as we spend more money within a given year, how fast philanthropic opportunities decline over time as the world improves, how fast other donors ramp up their spending, and our expected asset returns. We also incorporated GiveWell’s view that they expect to find more cost-effective opportunities in the coming years and accordingly would prefer funding to gradually grow (at least in the near term) rather than experience a rapid shift to a faster, but roughly constant, pace of funding. We’re still working to write up the model and hope to share the full details this year.
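To give a concrete sense of how a model like this works, here is a minimal, illustrative sketch in Python. Everything in it (the restriction to a constant spending rate, the functional forms, and the parameter values) is a simplifying assumption for exposition, and it omits some inputs named above, such as other donors’ spending; it is not the structure of the actual model, whose full details we have yet to publish.

```python
"""A toy Monte Carlo spend-vs-save model (illustrative only).

The functional forms and parameter values below are assumptions chosen
for simplicity; they are not those of the actual model.
"""
import numpy as np

rng = np.random.default_rng(seed=0)

YEARS = 50              # planning horizon
N_SIMS = 500            # Monte Carlo draws of asset-return paths
ETA = 0.5               # within-year diminishing returns: good ~ spend**(1 - ETA)
DECAY = 0.02            # assumed annual decline in opportunity quality
MEAN_RET, SD_RET = 0.05, 0.15  # assumed real asset returns (mean, std dev)


def expected_good(spend_rate: float) -> float:
    """Average total 'good' from spending a constant fraction of assets each year."""
    total = 0.0
    for _ in range(N_SIMS):
        assets, good = 1.0, 0.0
        returns = rng.normal(MEAN_RET, SD_RET, size=YEARS)
        for t in range(YEARS):
            spend = spend_rate * assets
            # Concave within-year returns, with opportunities worsening over time.
            good += spend ** (1 - ETA) * (1 - DECAY) ** t
            assets = (assets - spend) * (1 + returns[t])
        total += good
    return total / N_SIMS


# Grid-search the constant spending rate that does the most expected good.
rates = np.linspace(0.01, 0.30, num=30)
best_rate = max(rates, key=expected_good)
print(f"Best constant spending rate under these assumptions: {best_rate:.0%}")
```

The sketch only illustrates the basic tradeoff at the heart of the question: spending now, when opportunities are better, versus letting assets compound and spending more later. The actual model optimizes over time-varying spending paths rather than a single constant rate.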
As part of our work seeking to identify causes that can consistently produce giving opportunities that are more cost-effective than GiveWell’s top charities, we have also begun what we expect to be a multi-year project of investigating a number of potential high-impact “near-termist” causes. Topics thus far have included South Asian air quality, global health research and development funding, and improving education in low-income countries. This work aims to help us identify additional focus areas into which we can expand our giving in the coming years. Our goal for this year is to launch multiple searches for program officers in new causes identified by this research.
As we contemplate adding new areas for giving, we have also been working on replacing the term “near-termist” with a name that we think more accurately describes work in this category, which tends to be defined by aiming for verifiable impact within our lifetimes, being amenable to learning and changing course as we go, and relying more on highly evidence-based approaches (not just on a particular view of population ethics or timelines per se). We expect to begin incorporating the new name into internal and external materials this year.
Hiring and other capacity building
Last year, we wrote:
Hiring and other capacity building will not be a major focus for the coming year, though we will open searches for new roles as needed.
A number of new hires joined our team since our last hiring update: Asya Bergal, Lisa Briones, Saarthak Gupta, Paige Henchen, Molly Kovite, Adam Mohsen-Breen, Emily Oehlsen, Otis Reid, and Eli Rose. Our new colleagues contribute to the operations, program area, and research teams.
We will continue to post open positions on our Working at Open Phil page and encourage interested individuals to check it out.
Impact evaluation
Last year, we wrote:
Over the coming year, we hope to get to the point where our process is robust enough that we’re comfortable starting to hire further people for the Impact Evaluation team (this means we would have a job description ready, not necessarily that we would have made hires yet).
Due to a combination of personnel changes and a shift in priorities, we have temporarily deprioritized our impact evaluation work in favor of the search for new cause areas discussed above. We plan to return to this work in the future, though we don’t expect to make significant progress in 2021.
Outreach to external donors
Last year, we wrote:
Outreach to external donors will remain a relatively low priority for the organization as a whole, though it may be a higher priority for particular staff.
As discussed previously, we work significantly with other donors interested in particularly mature focus areas where our Program Officers see promising giving opportunities that outstrip their budgets (especially criminal justice reform and farm animal welfare), and we do advise major donors who reach out to us seeking to generally maximize the impact of their giving.
In the immediate future, however, proactive outreach to external donors will remain a relatively low priority for the organization as a whole. Longer term, we still aim to eventually work with many donors in order to maximize our impact.
Plans for 2021
Our major goals for 2021 are:
- Continued grantmaking. We expect to continue grantmaking in potential risks from advanced AI, biosecurity and pandemic preparedness, criminal justice reform, farm animal welfare, scientific research, and effective altruism, as well as recommending support for GiveWell’s top charities. We expect that the total across these areas will be over $200 million. By default, we plan to continue with our relatively low level of grantmaking in other areas (e.g., macroeconomic stabilization policy).
- Worldview investigations. We expect to publish at least two more reports in 2021 on transformative AI (timelines and risks). We’re also working to update our internal views of certain key values that inform our “near-termist” cause prioritization work.
- Other cause prioritization work. By this time next year, we hope to have shared our model for optimizing our spending levels across time within “near-termist” causes and to have launched multiple searches for program officers in new areas.