Our Progress in 2023 and Plans for 2024

Like many organizations, Open Philanthropy has had multiple founding moments. Depending on how you count, we will be either seven, ten, or thirteen years old this year. Regardless of when you start the clock, it’s possible that we’ve changed more in the last two years than over our full prior history. We’ve more than doubled the size of our team (to ~110), nearly doubled our annual giving (to >$750M), and added five new program areas.

As our track record and volume of giving have grown, we are seeing more of our impact in the world. Across our focus areas, our funding played a (sometimes modest) role in some of 2023’s most important developments:

  • We were among the supporters of the clinical trials that led to the World Health Organization (WHO) officially recommending the R21 malaria vaccine. This is the second malaria vaccine recommended by WHO, which expects it to enable “sufficient vaccine supply to benefit all children living in areas where malaria is a public health risk.” Although the late-stage clinical trial funding was Open Philanthropy’s first involvement with R21 research, that isn’t the case for our new global health R&D program officer, Katharine Collins, who invented R21 as a grad student.
  • Our early commitment to AI safety has contributed to increased awareness of the associated risks and to early steps to reduce them. The Center for AI Safety, one of our AI grantees, made headlines across the globe with its statement calling for AI extinction risk to be a “global priority alongside other societal-scale risks,” signed by many of the world’s leading AI researchers and experts. Other grantees contributed to many of the year’s other big AI policy events, including the UK’s AI Safety Summit, the US executive order on AI, and the first International Dialogue on AI Safety, which brought together scientists from the US and China to lay the foundations for future cooperation on AI risk (à la the Pugwash Conferences in support of nuclear disarmament).
  • The US Supreme Court upheld California’s Proposition 12, the nation’s strongest farm animal welfare law. We were major supporters of the original initiative and helped fund its successful legal defense.
  • Our grantees in the YIMBY (“yes in my backyard”) movement — which works to increase the supply of housing in order to lower prices and rents — helped drive major middle housing reforms in Washington state and California’s legislation streamlining the production of affordable and mixed-income housing. We’ve been the largest national funder of the YIMBY movement since 2015.

We’ve also encountered some notable challenges over the last couple of years. Our available assets fell by half and then recovered half their losses. The FTX Future Fund, a large funder in several of our focus areas, including pandemic prevention and AI risks, collapsed suddenly and left a sizable funding gap in those areas. And Holden Karnofsky — my friend, co-founder, and our former CEO — stepped down to work full-time on AI safety.

Throughout these changes, we’ve remained devoted to our mission of helping others as much as we can with the resources available to us. But it’s a good time to step back and reflect.

The rest of this post covers:

  • Brief updates on grantmaking from each of our 12 programs. [More]
  • Our leadership changes over the past year. [More]
  • Our chaotic macro environment over the last couple of years. [More]
  • How that led us to revise our priorities, and specifically to expand our work to reduce global catastrophic risks. [More]
  • Other lessons we learned over the past year. [More]
  • Our plans for the rest of 2024. [More]

Because it feels like we have more to share this year, this post is longer and more detailed than my posts from the past two years. I’m curious to hear what you think of it — if you have feedback, you can find me on Twitter/X at @albrgr or email us at [email protected].

Program updates

The diverse areas we work in may not appear to have a unifying theme, but we chose them all through the same process. In order to achieve our mission, we aim to maximize the cost-effectiveness of our giving. And to do that, we seek out causes that are some combination of:

  • Important — They have a substantial impact across a large number of individuals.
  • Neglected — They get little attention from others, especially other philanthropists, relative to their scope.
  • Tractable — They offer clear opportunities for us to support progress.

Sometimes, we are one funder among many in an area that is well-understood and well-funded, but big enough that we still find many strong opportunities. Other times, we enter an area that receives virtually no funding, which can be a sign of great promise but also great uncertainty. We’re open to both types of grantmaking and currently pursue a mix of hits-based and evidence-based opportunities.

This approach guided the hundreds of grants we made last year across our 12 active focus areas, three of which we added last year.

Here are some updates on grantmaking from our more established programs:

Global Health and Development: Our largest grants in this program went toward charities recommended by GiveWell, targeting issues like deworming, malaria, and vitamin A supplementation (those last two grants total $129 million, and are the largest in Open Phil’s history). We also funded anti-poverty research in Ethiopia.

Scientific Research: Most of our science funding this year went toward health-related research, from testing vaccines to studying parasites. The projects we funded typically (though not always) targeted health issues that are especially prevalent in the developing world, since that kind of work is highly impactful and chronically underfunded.

Farm Animal Welfare: We supported projects focused on improving conditions for chickens and fish, and on building advocacy groups around the world. This was also a busy year for grants supporting new welfare-improving technology in areas like fish slaughter and reducing the need for chick culling.

Land Use Reform: Our grantmaking this year was largely focused on organizations in the YIMBY movement, which work to increase housing supply and have begun to win major victories across the US.

Global Aid Policy: This program aims to fund effective strategies for increasing aid levels and boosting the impact of current aid spending. Last year, the team built out work in DC to support evidence-informed aid and expanded its grantmaking in Japan and Korea, partly by funding and attending parliamentary delegations to educate policymakers about impactful health programs. It also scoped and initiated grantmaking in new areas, including support for work in Scandinavia, as well as work to grow support for multilateral global health organizations in emerging donor countries.

Effective Altruism (Global Health and Wellbeing): This program has supported a number of groups that raise leveraged funding for effective charities (like GiveWell’s recommendations), and organizations like Charity Entrepreneurship, which helps people start charities focused on neglected issues in global health and animal welfare.

Potential Risks from Advanced AI: We continued to fund an ecosystem of technical infrastructure, institutions, and research projects working to address the technical and governance challenges posed by rapidly improving AI capabilities. To better understand the onslaught of new AI models and other tools released in the last year, we issued a call for two kinds of proposals — one for new research on benchmarking large language model agents, and the other for studying and forecasting the impacts of large language model systems. (Both are still open to new applications!)

Biosecurity and Pandemic Preparedness: We made grants across many different institutions pursuing research on potentially catastrophic biosecurity risks — including universities, think tanks, and the World Health Organization. We also launched a request for information on the potential use of far-UVC light to reduce pathogen transmission.

Global Catastrophic Risks Capacity Building: The team led our work to source applications from grantees affected by the FTX Future Fund’s collapse, which led us to strong opportunities across our GCR portfolio. Their other work covered fundraising, education, career coaching, media production, career transitions, research mentorship, and university groups around the world. We have a number of opportunities for funding in this area — if you’ve thought about working in this space, consider applying!


Beyond our existing focus areas, Open Philanthropy continues to actively seek out new areas where an additional dollar can go the furthest. We launched these focus areas in 2023:

Global Health R&D: Diseases that primarily affect the world’s poorest people tend to get much less research and development funding than they should, given their considerable burden. This program seeks to fund R&D for drugs, diagnostics, and other products to reduce that burden — as well as efforts to make those products more accessible. Early grants in this program have supported (among other projects) development of monoclonal antibodies as malaria interventions, clinical trials for tuberculosis treatment, and market shaping for sickle cell diagnostics and treatment. Relative to our original Scientific Research team, this team tends to be focused on later-stage development, though the teams work together closely and report to the same person.

Global Public Health Policy: Non-communicable diseases account for a large and growing share of the world’s health burden. Public health policy can alleviate this burden by addressing environmental and behavioral risk factors. This program seeks to help governments implement more effective public health policy, and is initially focused on expanding its work in four areas where we’ve already made grants: South Asian air quality, lead exposure, alcohol policy, and pesticide suicide prevention.

Innovation Policy: Economic growth and scientific innovation have lifted billions of people out of poverty and improved health outcomes around the world. This program aims to accelerate growth and innovation through a number of different strategies — without unduly increasing risks from emerging technology. Early grants in the program have supported high-skilled immigration, replication studies, and research on science and innovation.


Finally, I’ll highlight a set of grants we didn’t make — because we funded other people to make them instead.

Last year, with co-funding from Lucinda Southworth, we completed a $150 million Regranting Challenge, supporting exceptional grantmakers outside of Open Philanthropy to tackle projects and ideas beyond our program areas. We can’t — and shouldn’t need to — launch a new program ourselves for every problem worth working on, and I’m thrilled that the Regranting Challenge helped us find strong opportunities to address issues like malnutrition, education, and climate change. I’m particularly excited about a major project one of the recipients funded: a large clinical trial of the tuberculosis vaccine MTBVAC, a leading vaccine candidate for adults and adolescents.

Recent leadership changes

Open Philanthropy began life as GiveWell Labs, an initiative within GiveWell that was designed to think more expansively about finding outstanding giving opportunities. I was one of the co-founders of GiveWell Labs, along with Holden Karnofsky, who became the Executive Director of Open Philanthropy when we spun off as an independent organization in 2017.

I led our initial work picking causes, especially in US policy, and then oversaw our work on Global Health and Wellbeing, focusing on areas like animal welfare, scientific research, and land use policy. In 2021, Holden asked me to join him as co-CEO, and last year, when he decided to focus on AI risk full-time, the board appointed me as sole CEO.

As I stepped into my new job in July 2023, I promoted Emily Oehlsen to Managing Director and she took over my previous role overseeing Global Health and Wellbeing. It has been amazing to watch Emily grow as a leader over her time at Open Philanthropy, and I am excited to see what she does with her new responsibilities.

As part of a broader wave of hiring, we brought on several people to join the leadership team. Eric Parrie joined us as Managing Director of Operations and Jasmine Dhaliwal became our new Chief of Staff. Howie Lempel, one of our earliest employees, returned to Open Philanthropy after 7 years away, and is now serving as Senior Advisor and Interim Managing Director for our Global Catastrophic Risks work.

As CEO, I work more closely with Cari Tuna and Dustin Moskovitz, our primary funders and founding board members, than I had in the past. Dustin and especially Cari were very involved at the founding of Open Philanthropy — our grant approval process in the very early days was an email to Cari. But their level of day-to-day involvement has ebbed and flowed over time. Cari, in particular, has recently had more appetite to engage, which I’m excited about because I find her to be a wise and thoughtful board president and a compelling champion for Open Philanthropy and our work. Dustin has also been thinking more about philanthropy and moral uncertainty recently, as reflected in this essay he posted last month.

It’s worth noting that their higher level of engagement means that some decisions that would have been made autonomously by our staff in the recent past (but not in the early days of the organization) will now reflect input from Cari and Dustin. Fundamentally, it has always been the case that Open Philanthropy recommends grants; we’re not a foundation and do not ultimately control the distribution of Cari and Dustin’s personal resources, though of course they are deeply attentive to our advice and we all expect that to continue to be the case. All things considered, I think Cari and Dustin have both managed to be involved while also offering an appropriate — and very welcome — level of deference to staff, and I expect that to continue.

A rollercoaster couple of years

The macro environment in which we operate has been particularly volatile over the last couple of years, marked by a decline in our available assets, the collapse of the FTX Future Fund, and a surge of interest in AI.

Over the course of 2022, our available assets fell by roughly half, though they have since recovered about half of the total losses (i.e., they are now down about 25% from the end of 2021). Much — but by no means all — of this volatility was driven by changes in the price of Meta stock (which Dustin was heavily exposed to because he co-founded the company). Dustin has continued to gradually diversify out of Meta over this period, so he and we are less (but still somewhat) exposed to those specific price swings going forward.

The change in available assets, along with other factors, led us to raise the cost-effectiveness bar for our grants to Global Health and Wellbeing by roughly a factor of two. That means that for every dollar we spend, we now aim to create as much value as giving $2,000 to someone earning $50,000/year (the anchor for our logarithmic utility function). That roughly equates to giving someone an extra year of healthy life for every ~$50 we spend.
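
To make that arithmetic concrete, here is a rough sketch — a simplification of our actual framework — that values a transfer of size $d$ to someone with annual income $y$ purely by the resulting change in log income:

$$
\Delta u = \ln(y + d) - \ln(y) = \ln\!\left(1 + \frac{d}{y}\right)
$$

Under that assumption, the anchor transfer is worth $\ln(52{,}000/50{,}000) \approx 0.039$ log-units, so the bar asks each dollar we spend to generate at least roughly that much value.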

In late 2022, the cryptocurrency exchange FTX also rapidly collapsed. Before its collapse, the founder and CEO of FTX, Sam Bankman-Fried, had quickly become a major funder of work on global catastrophic risks via his Future Fund, supporting some of the same organizations we had been funding.

To be clear, Open Philanthropy never received any donations from FTX, the Future Fund, or any of the individuals who worked there. And shortly before the collapse, Holden wrote a post expressing concern with the reckless, maximizing strand of effective altruist (EA) thinking that Bankman-Fried has since come to exemplify. From that post:

“I think it’s a bad idea to embrace the core ideas of EA without limits or reservations; we as EAs need to constantly inject pluralism and moderation. That’s a deep challenge for a community to have – a constant current that we need to swim against.”

When the fraud at FTX was exposed, we felt angry and shocked. So many people and organizations were victimized by the company’s callous risk-taking and criminal behavior. Among the many who suffered due to Bankman-Fried’s fraud were grantees of the Future Fund, who had made decisions based on funding promises that never materialized or were clawed back. We stepped in to help some of those organizations. But the sudden evaporation of such a large funder also influenced the math behind our own planning.

Once the Future Fund was no longer there to fund projects to reduce global catastrophic risks, stronger marginal funding opportunities became available to us. Early in 2023, we lifted a pause on new funding commitments to our Global Catastrophic Risks portfolio, and since then we’ve been ramping up our work in those areas. In 2023, this expansion was more about staffing than funding: our grantmaking to confront catastrophic risks hasn’t increased much yet, but we launched a big hiring round to increase our grantmaking capacity for future years.

Finally, over the last two years, generative AI models like ChatGPT have captured public attention and risen to remarkable prominence in policy debates. While we were surprised by the degree of public interest, we weren’t caught off guard by the underlying developments: since 2015, we’ve supported a new generation of organizations, researchers, and policy experts to address the potential risks associated with AI. As a result, many of our grantees have been working on this issue for years, and they were well-prepared to play important roles in the policy debate about AI as it came to the fore over the last year.

Without the efforts we’ve made to develop the field of AI risk, I think that fewer people with AI experience would have been positioned to help, and policymakers would have been slower to act. I’m glad that we were paying attention to this early on, when it was almost entirely neglected by other grantmakers. AI now seems more clearly poised to have a vast societal impact over the next few decades, and our early start has put us in a strong position to provide further support going forward.

But the sudden uptick in policymaker and public discussion of potential existential risks from AI understandably led to media curiosity (and skepticism) about our influence. Some people suggested that we had an undue influence over such an important debate.

We think it’s good that people are asking hard questions about the AI landscape and the incentives faced by different participants in the policy discussion, including us. We’d also like to see a broader range of organizations and funders getting involved in this area, and we are actively working to help more funders engage. In the meantime, we are supporting a diverse range of viewpoints: while we are focused on addressing global catastrophic risks, our grantees (and our staff) disagree profoundly amongst themselves about the likelihood of such risks, the forms they could take, and the best ways to address them.[1]

Revising our priorities

The big changes buffeting Open Philanthropy from different directions led us to revisit our funding priorities. At the end of 2022, we started an internal process to make a recommendation to Cari and Dustin about how they should allocate funding across the two main areas where we work: Global Health and Wellbeing (GHW) and Global Catastrophic Risks (GCR).

The process concluded that on balance, the factors above strengthen the case for allocating marginal funding to mitigate global catastrophic risks if we can find sufficiently cost-effective opportunities:

  • The decline of our available assets should disproportionately affect funding for GHW relative to GCR because we think that opportunities in our GHW portfolio vary less in terms of expected cost-effectiveness. That is, we think GHW opportunities are more closely clustered around the “bar” we use to define which grants meet our standards for cost-effectiveness. Meanwhile, the cost-effectiveness of the GCR portfolio is more spread out — some opportunities are very far above the bar, and some (which we don’t fund) are very far below. Accordingly, if the bar moves, it has a much bigger impact on GHW funding: a slight fall in the bar allows many more GHW opportunities to meet our standard, while a slight rise pushes many of them below it.
    • We previously explained this mechanism here, comparing GiveWell recommendations to other GHW opportunities that were less clustered around our bar.
  • The FTX Future Fund was primarily focused on mitigating GCRs. Prior to the Fund’s collapse, we were (very wrongly, in hindsight) predicting they would eventually grant tens of billions of dollars in those areas. That means their collapse reduced our estimate of the expected future funding available for GCR philanthropy by more than a factor of two. And that should affect marginal cost-effectiveness: the less funding in an area, the more you should expect to accomplish with each marginal dollar (assuming that better opportunities get funded first). In the areas where we don’t have clear data, we tend to think about returns to grantmaking as logarithmic by default, which means that a 1% reduction in available funding should make marginal opportunities ~1% more cost-effective. Accordingly, a >2x drop in expected spending for a field makes us expect the marginal cost-effectiveness to increase by >2x (see the short derivation after this list).
  • The increased salience of AI is a more complicated consideration. It’s useful to review our three traditional criteria for cause selection: importance, neglectedness, and tractability.
    • With the huge surge in interest, the potentially catastrophic risks from advanced AI have become a common topic of conversation in mainstream news. That makes these risks less neglected in terms of attention — but we still see little other philanthropic funding devoted to addressing them. That makes us as eager as ever to be involved.
    • On tractability, one need only look at the raft of legislation, high-level international meetings, and associated new AI Safety Institutes (US, UK, Japan) to see the sea change. More generally, the range of what is considered possible — the Overton window — has significantly widened.
    • When it comes to expected importance, some of my colleagues already assumed a high probability of breakthroughs like we’ve seen over the past couple of years, so they’ve been less surprised. But for me personally, the continued rapid advances have led me to expect more transformative outcomes from AI, and accordingly increased my assessment of the importance of avoiding bad outcomes.
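
As a quick sketch of the logarithmic-returns logic above: if we assume the total impact of cumulative funding $F$ in a field is $V(F) = k \ln F$ for some constant $k$, then the cost-effectiveness of the marginal dollar is

$$
V'(F) = \frac{k}{F},
$$

which scales inversely with available funding. A 1% drop in $F$ raises marginal cost-effectiveness by roughly 1%, and a more-than-2x drop in expected funding raises it by more than 2x.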

While these factors generally make us think it would be more impactful to allocate more of our resources to reducing global catastrophic risks, we also want to avoid the rashness and profligacy that characterized some of the “FTX era” of GCR funding. Accordingly, after the FTX collapse, we raised our cost-effectiveness bar for GCR spending.[2]

As a result of our internal process, we decided to keep that new higher bar, while also aiming to roughly double our GCR spending over the next few years — if we can find sufficiently cost-effective opportunities. (We don’t want to just increase the budget if it means accepting a lower level of cost-effectiveness, at least right now.) We believe that may be possible because of the rapid growth in the GCR opportunity set: we estimate that due to the growth in the community of people working to reduce global catastrophic risks, the set of fundable opportunities above our bar has been increasing by around 50% per year over the past few years (though preliminary evidence suggests that may be slowing more recently). If that growth slows dramatically or stops, we would eventually face a choice between lowering our bar on the GCR side or re-allocating funding to another portfolio area, and we have not yet made a decision about which we would choose.

In our GHW portfolio, we decided — and announced last year — that we would scale back our donations to GiveWell’s recommendations to $100M/year, the level they were at in 2020. At the same time, we are planning to maintain or increase the amount we give through our internal GHW programs in areas like global health R&D and animal welfare. The contrast between these two decisions arises from a couple of factors, as we described previously:

  • Since 2019, we’ve been working to identify causes that “[combine] multiple sources of leverage for philanthropic impact (e.g., advocacy, scientific research, helping the global poor) to get more humanitarian impact per dollar (for instance via advocacy around scientific research funding or policies, or scientific research around global health interventions, or policy around global health and development).” This has led to many new program areas, which have significantly increased our ability to find opportunities with a higher expected ROI (though also a higher risk of failure).
  • GiveWell has had remarkable success in attracting other large funders and expanding their influence in the philanthropic world. This has allowed them to fund many strong opportunities and pushed their marginal opportunity further out along the declining marginal returns curve.

On net, these changes leave our overall current spending plans across GHW and GCR within the range we’ve communicated publicly over the years, but after a lot of volatility. Much of our planning is focused on trying to make sure we keep our options open over the near term, and we expect to revisit these allocation decisions before the end of 2026.

Some lessons learned

Here are a few concrete areas where we’ve learned lessons over the past year.

Moving faster on lead exposure

When we launched our Global Public Health Policy (GPHP) program last year, we named lead exposure as one of four focus areas for the program, and we’re now planning to do significantly more work to combat lead exposure in low- and middle-income countries (LMICs) in the coming years. Looking back, I think we should have done more sooner.

Lead exposure is an enormous problem: an estimated one in two children living in LMICs has a blood lead level exceeding 5 μg/dL (the point at which the World Health Organization recommends taking action to reduce exposure).

We’d thought of lead exposure as a potential cause area since 2019 (when GiveWell made its first grant in the area), and we commissioned a report on the topic from Rethink Priorities that was published in 2021. But it took another two and a half years before we launched GPHP and decided to work on lead exposure in earnest. In the interim, we made only three lead-related grants.

We missed several chances to move faster:

  • When the Rethink Priorities report initially came out, GiveWell was actively making grants in this space; we reasoned that they “had it covered” and decided not to get into it ourselves for the time being.
  • When James Snowden, who had led GiveWell’s work on lead exposure, joined Open Philanthropy in July 2022, we agreed that James would transition his lead grantmaking portfolio from GiveWell to Open Philanthropy. Over the next nine months, our Global Health and Wellbeing Cause Prioritization team conducted its own shallow investigation of lead exposure in LMICs, deeper investigations into its cognitive and health impacts, and in-depth comparisons with other global public health policy issues, along with analysis of how we should staff this work. That work continued to validate the case for focusing on lead reduction.
  • In March 2023, we were on the cusp of launching a search for a program officer to lead a portfolio covering lead exposure and public health regulations, but we decided to hold off until we had concluded the allocation process described above, given uncertainty about where it would leave our budgets. We finally decided to make this part of our GPHP program and gave it a dedicated budget line six months later.

In isolation, all of these decisions seemed reasonable. However, they added up to a painfully long delay between seeing the opportunity and taking action. In each case, we took a risk-averse approach, choosing to wait until we’d gathered more information. That’s an understandable mistake, but we think it’s a mistake nonetheless.

In retrospect, especially at the later stages, we should have acted faster, prioritizing execution on the most cost-effective opportunities in front of us even if that meant risking some retrenchment later. There’s a general lesson for us here about not letting the perfect be the enemy of the good, and about taking more 80/20 steps to fund the most promising opportunities while we continue our investigations as needed. Especially as we get bigger and develop more processes and layers of management, it’s too easy for us to fall prey to mistakes of omission like this. We need to be careful not to let our own need for confidence prevent us from moving quickly and taking smart risks where warranted.

Wytham Abbey

Another place where I have changed my mind over time is the grant we gave for the purchase of Wytham Abbey, an event space in Oxford.

We initially agreed to help fund that purchase as part of our effort to support the growth of the community working to reduce global catastrophic risks (GCRs). The original idea presented to us was that the space could serve as a hub for workshops, retreats, and conferences, to cut down on the financial and logistical costs of hosting large events at private facilities. This was pitched to us at a time when FTX was making huge commitments to the GCR community, which made resources appear more abundant and lowered our own bar. Since its purchase, the space has gotten meaningful use for community events and gatherings. But with the collapse of FTX, our bar for this kind of work rose, and a grant like the original one would no longer clear it.

Because this was a large asset, we agreed with Effective Ventures ahead of time that we would ask them to sell the Abbey if the event space, all things considered, turned out not to be sufficiently cost-effective. We recently made that request; funds from the sale will be used for EV’s operating costs and other valuable projects they run.

While this grant retroactively came in below our new bar, I don’t think that alone is a big problem. If none of your grants look less attractive after expected funding drops by half, you weren’t spending aggressively enough before.

But I still think I personally made a mistake in not objecting to this grant back when the initial decision was made and I was co-CEO. My assessment then was that this wasn’t a major risk to Open Philanthropy institutionally, so it wasn’t my place to try to stop it. I missed how something that could be parodied as an “effective altruist castle” would become a symbol of EA hypocrisy and self-servingness, causing reputational harm to many people and organizations who had nothing to do with the decision or the building.

This is a tough balance to strike because I think it’s easy for organizations to be paralyzed by concerns over reputational risk, rendering them unable to make nearly any decisions. And I think a core part of our hits-based giving philosophy is being able to make major bets that can fail outright, even in embarrassing ways. I want to maintain that openness to risk when the upside justifies it. But this example has made me want to raise our bar for things that could end up looking profligate or irresponsible to the detriment of broader communities we’re associated with.

Hiring more aggressively on the GCR side

We have a careful and intense hiring process that has served us well in many respects. In retrospect, though, I believe that we’ve hired too slowly for our work on the Global Catastrophic Risks (GCR) side of the organization. As a result:

  • Our grantmaking across the GCR portfolio increased more than 7x between 2019 and 2022, but the number of GCR program staff only grew 2x over the same period. As a result, we weren’t staffed to take advantage of all of the opportunities that have emerged as the salience of potential large-scale risks from AI has increased over the last couple of years.
  • Prior to January, our four most senior program leaders on the GCR side had all been at Open Philanthropy for a long time (three of them for at least eight years, one for five). That kind of tenure is a remarkable testament to the early recruiting Holden did, but also suggests to me that we generally haven’t been as strong as we could have been at creating opportunities for others to take on leadership roles, especially as our GCR work has evolved and grown.
  • My own under-investment here meant I almost missed out on hiring Howie, who’s now managing all of our GCR work. It was Luke, one of our lead GCR grantmakers, who alerted me last year to the fact that Howie was thinking about his next career steps and flagged that he could be a strong candidate for leading the GCR team. Given how helpful Howie has already been, missing his potential availability would have been a huge oversight. I don’t want to miss similarly promising candidates even if we don’t have the perfect role posted.

An obvious and important challenge to acknowledge here is that because global catastrophic risks have historically been so neglected by other funders, there isn’t as large a professional community to draw from as in some other areas where we work.

Nonetheless, I think we can and should be doing more to hire more aggressively at all levels than we have been in the past. In terms of absolute staff numbers on the GCR side, we’ve already been investing more in hiring over the past year, and I think we’ll make more strong hires this year, hopefully “catching up” to our opportunity set from a basic capacity perspective. But I think the default path may be for those hires to skew too far towards relatively junior staff, so I’m planning to work with GCR leadership and our recruiting team to make sure that we’re both investing in more senior hiring and developing our existing staff for more senior responsibilities.

Looking forward to the rest of 2024

We have some big plans for 2024.

On the Global Health and Wellbeing side:

  • Our GHW Cause Prioritization team is working through a shortlist of causes for deeper engagement, aiming to select one new program to add to our portfolio. One option is to keep a larger budget line for outstanding opportunities outside our defined focus areas, which is a strategy we think served Open Philanthropy well in our early days. Another is to do more funding around accelerating economic growth in low- and middle-income countries: we’ve considered this space compelling for a long time, but have historically struggled to find many concrete opportunities that seemed promising.
  • We’re planning to run the first iteration of what may be an annual top-up process for our existing programs to bid for additional funding on top of their core budget based on their marginal opportunities for impact. We’re not sure exactly what this will look like, but it should serve as another mechanism to try to equalize the marginal returns across our different focus areas (which is necessary for maximizing impact at the portfolio level; see the sketch after this list).
  • We’re also aiming to experiment with collaborating with other funders by creating a multi-donor fund in an area that we think is particularly ripe for it. We’ll have more news to share on that later this year.
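
For readers who want the logic behind “equalizing marginal returns,” here is a brief sketch of the standard budget-allocation condition (assuming diminishing returns and that every program receives some funding): if program $i$ produces impact $V_i(x_i)$ from funding $x_i$, then maximizing $\sum_i V_i(x_i)$ subject to $\sum_i x_i = B$ requires

$$
V_i'(x_i^\ast) = \lambda \quad \text{for all } i,
$$

i.e., the last dollar must do equally much good in every area; otherwise, moving a dollar from a lower-return program to a higher-return one would increase total impact.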

On the Global Catastrophic Risks side:

  • We’re continuing to hire more. We’re very close to the end of the joint round we announced in the fall and are expecting to make between ten and fifteen hires across a number of roles. I expect there will also be more hiring later this year, hopefully including at more senior levels.
  • Howie is still getting up to speed right now. But later this year, in collaboration with the rest of GCR leadership, he expects to develop a strategy to guide the GCR team’s growth and grantmaking over the next couple of years as they try to substantially increase our giving without lowering our bar.

A couple of additional priority areas I foresee for myself:

  • I’ll be focused on selecting and onboarding a new leader for our communications team and working with them to develop and execute a revised strategy for presenting Open Philanthropy to the public. We want to make it easier to understand our goals and motivations and to draw more attention to some of the still-neglected causes where we focus.
  • I expect to spend some time thinking about how we should relate to the effective altruism (EA) community. While we are the largest funder of organizations in that space, many of our programs have little or no connection to EA. I’d like to see if there are ways for us to continue to capture the huge upside some of our EA funding has enabled while having a little more independence from a community and brand that we can’t — and don’t want to — control.
  • When we spun out our criminal justice reform program as an independent organization in late 2021, we wrote: “we see this as a valuable experiment for Open Philanthropy. In the long run, we could imagine that the optimal structure for us is to focus on cause selection and incubation of new programs, regularly spinning out mature programs in order to give them more autonomy as we focus on our core competency of cause selection and resource allocation.” I want to revisit that optimal structure question, and learn from other funders that have also experimented in this vein.

Finally, across the organization, we’re hiring. If you want to join our team, check out the open positions on our careers page. (At the time of publishing, we’re looking for a people operations leader and a finance operations coordinator.) If you don’t see something you want to apply for, you can fill out our general application, and we’ll reach out if we post a position we think might be a good fit. On that note, we’re always looking for referrals; if you refer someone and we hire them, we’ll pay you $5,000.

Appendix: Publications and Media

Open Philanthropy staff shared and discussed their work across a variety of outlets this year. Here are some highlights:

Global Health and Wellbeing

Global Catastrophic Risks

Open Philanthropy was also profiled as an organization by Inside Philanthropy, in a piece focused on our biosecurity and farm animal welfare programs.

For more updates, see the “In the News” section of our website.

Footnotes
  1. Some examples of diverse views among our grantees:

    • A recent Politico piece discussed a paper from researchers who found that AI language models could help people with little training develop new pathogens, and a paper from RAND which reported that current language models didn’t increase the risk of a biological attack. The two papers look at somewhat different conditions, but it’s fair to say they point in opposite directions regarding AI’s impact on biorisk. And Open Philanthropy funded the groups behind both papers.
    • We ran a competition to find essays that would influence our views on AI risk, and awarded a prize to the Forecasting Research Institute for work that posed an especially strong challenge to our thinking.
    • Paul Scharre of the Center for a New American Security, whose work we’ve funded, has written about his opposition to the Biden administration’s export controls on semiconductors — controls supported by multiple researchers at the Center for Security and Emerging Technology (CSET), another grantee.
    • CSET’s Andrew Lohn has argued that the growth of AI labs’ compute spending is slowing down and may soon flatten out. In contrast, researchers at Epoch (another grantee) have argued that compute spending will continue to steeply increase, which leads them to very different expectations around how the most powerful AI systems’ performance will increase in the near future.

  2. Note that this post refers to “longtermist” funding. We changed the “Longtermism” portfolio name to “Global Catastrophic Risks” last year. The new name better reflects our view that AI risk and biorisk aren’t only “long-term” issues; we think that both could threaten the lives of many people in the near future.

