Throughout the post, “we” refers to GiveWell and Good Ventures, who work as partners on the Open Philanthropy Project.
This post gives an overall update on progress and plans for the Open Philanthropy Project. Our last update was about six months ago, and the primary goals it laid out were six-month goals.
Summary:
- U.S. policy (previous update): we have prioritized hiring and are ahead of the goal we set. We made a full-time hire for criminal justice reform and, more recently, made another hire (pending finalization of a work visa) for a program officer role focused on the treatment of animals in industrial agriculture. Going forward, our priorities will be (1) working with new hires; (2) looking for giving opportunities in immigration policy and land use reform; and (3) either hiring or grantmaking in macroeconomic policy. More
- Global catastrophic risks (previous update): progress has been in line with our goals in some ways and not in others (details below). When the opportunity arose to help support the Future of Life Institute’s work on mitigating potential risks from advanced artificial intelligence, we decided to prioritize it. We ended up spending a large amount of time investigating this opportunity and recommending a grant of ~$1.2 million. Since then, we have raised the priority of this cause. Our top priorities going forward will be further investigating the cause of potential risks from advanced artificial intelligence, considering a full-time hire for it, and continuing our search for a full-time biosecurity hire. We hope to resolve both within the next six months. More
- Scientific research (previous update): we did not set a six-month goal, but we are very unlikely to accomplish the goals we set for the year. Our priority is building scientific advisory capacity, via our ongoing search and our current part-time arrangement with Lily Kim (who is now working 20 hours per week, up from 5 previously). More
- Public content: we are hoping to launch a separate website for the Open Philanthropy Project by the end of the calendar year. We still have many public writeups in progress, though we have completed writeups for the highest-priority causes. More
The overall theme is that we are putting most of our effort into capacity building (recruiting, trial hires, onboarding new hires). This is in contrast to six months ago, when most of our effort went into selecting focus areas. Six months from now, we hope to be putting most of our effort into recommending grants and putting out public content. (Specifically, we hope that our efforts within the “U.S. policy” and “global catastrophic risks” categories will fit this description. We expect it to take longer to choose focus areas within scientific research.)
U.S. policy
Our previous update stated:
- Our new goal is to be in the late stages of making at least one “big bet” – a major grant ($5+ million) or full-time hire – in the next six months. We think there is a moderate likelihood that we will hit this goal; if we do not, we will narrow our focus to a smaller number of causes in order to raise our odds.
- Our highest priority is to make a full-time hire on criminal justice reform, factory farming (pending a last bit of cause investigation, focused on the prospects for research on meat alternatives), or macroeconomic policy. Our second-highest priority is to further explore immigration policy and land use reform, with an eye to either finding more giving opportunities (hopefully including at least one major one) or to developing a full-time job description. A more extensive summary of our priorities is available as a Google sheet.
Since then:
- We hired Chloe Cockburn to lead our work on criminal justice reform. She started in late August.
- Lewis Bollard accepted our offer (pending our ability to obtain a work visa) to lead our work on the treatment of animals in industrial agriculture. He expects to start in October.
- We are not sure whether it is necessary to make a full-time hire for macroeconomic policy; we currently have a consultant advising us who could potentially play that role.
- We did some research on meat alternatives (writeup forthcoming) and determined that the prospects for near-term meat alternatives are not strong enough to justify de-prioritizing a full-time hire focused on other aspects of the treatment of animals in industrial agriculture.
- We have also put time into a number of grants within other causes, particularly macroeconomic policy, immigration policy, and land use reform. Most are not yet public.
- We conducted investigations into a number of other causes, most of which are not yet public.
- David Roodman completed his analysis of the likely impact of raising alcohol taxes. Overall, we have not identified anything that we wish to prioritize comparably to the causes listed in previous bullet points.
Over the next six months, our top priority will be working with Chloe and Lewis to get in sync about goals and plans for those fields. Over time, we hope that the new program officers will lead this work with less involvement from us, but we believe it is important to invest early on in understanding each other’s thinking.
We expect that Alexander Berger, who leads our work on U.S. policy, will also have substantial time to pursue giving opportunities in causes that we aren’t currently expecting to make full-time hires for – particularly immigration policy and land use reform.
We expect to continue working on macroeconomic stabilization policy in some way, but we aren’t yet sure whether we will make a full-time hire for this cause. If we do not, it will be one of Alexander’s top priorities along with the two causes in the previous paragraph.
As a much lower priority, we will continue to conduct investigations on other causes.
We have updated our spreadsheet summary of our priority causes. We also provide a version that highlights the cells that have changed since our last public spreadsheet. In brief, we expect to focus primarily on immigration policy, land use reform, and macroeconomic stabilization policy in addition to the two causes (criminal justice reform and factory farming) where we will be working with full-time hires. We may investigate a couple of particular giving opportunities in foreign aid policy and soil lead reduction; we are unlikely to work on other causes in the next six months beyond grant maintenance and general investigation.
Global catastrophic risks
Our previous update stated:
- Our new goal is to be in the late stages of making at least one “big bet” – a major grant ($5+ million) or full-time hire – in the next six months. We think there is a moderate likelihood that we will hit this goal; if we do not, we will narrow our focus to a smaller number of causes in order to raise our odds.
- Our highest priority is to make a full-time hire for working on biosecurity. As a second priority, we are spending significant time on various aspects of geoengineering, geomagnetic storms, potential risks from advanced artificial intelligence, and some issues that cut across different global catastrophic risks. A more extensive summary of our priorities and reasoning is available as a Google sheet.
Progress has been in line with our goals in some ways and not in others.
The main update has been regarding the cause of potential risks from advanced artificial intelligence. At the time of our last update, we hadn’t determined how to prioritize this cause, and it’s worth reviewing the basic progression of our thinking on the matter:
- Since the beginning of our work on global catastrophic risks, we believed that this topic was worth looking into due to the high potential stakes and our impression that it was getting little attention from philanthropists. We were already broadly familiar with the arguments that this issue is important, and we initially focused on trying to determine why these arguments hadn’t seemed to get much engagement from mainstream computer scientists.
- However, we paused our investigations (other than keeping up on major new materials and some of the critical response to them) when we learned about the Future of Life Institute’s January conference specifically on this topic, which Howie Lempel and Jacob Steinhardt attended as our representatives.
- Our last update took place shortly following the conference. At that point, we had become convinced that this cause was highly important and worthy of investment. (This despite the fact that we remained uncertain about the details of some key people’s views – more here.) Our remaining concern was crowdedness: we wrote, “It remains unclear to us how to think about [the cause’s] ‘crowdedness’ [in light of Elon Musk’s $10 million gift], and we plan to coordinate closely with the Future of Life Institute to follow what gets funded and what gaps remain.”
- Since then, we became convinced that there was a strong case for providing more funding to the Future of Life Institute’s research grant program, as discussed in our writeup on this program.
- We decided to prioritize investigating the possibility of helping to support the Future of Life Institute research grants program. We ended up spending a large amount of time investigating this opportunity. It was difficult to tell in advance what the size of our support (if any) would be; we ended up recommending a grant of ~$1.2 million. In some sense, this grant was a “big bet” given that we saw it as a major opportunity and invested significant time in it, although the grant size we ended up with was below the working definition of a “big bet” above.
- Regardless of whether this grant counts toward the goal above, we see our progress as suboptimal in hindsight. We feel we could have invested less time in investigating this grant (while still ultimately recommending it) and instead made more progress on our continuing search for a full-time biosecurity hire.
- We now have the impression that the field of research on potential risks of advanced artificial intelligence is changing rapidly – in particular, the amount of interest and the number of projects are growing – and if we do prioritize this area it may call for a full-time specialist. (This is a change from our previous position, where we saw the space as quite thin and unlikely to be a fit for a full-time position.) Accordingly, we have begun working with a contractor (who could become a Program Officer in the future) to do a more in-depth investigation of the different possible activities in this space.
Other progress in this category:
- We published a writeup on risks from atomically precise manufacturing, and have a writeup in progress on environmental catastrophic risks from synthetic biology. We do not plan to prioritize work on either in the near future.
- David Roodman completed his report on geomagnetic storm risk. We may investigate funding research on electrical grid robustness at some point.
Over the next six months, our top priorities will be our search for a full-time biosecurity hire and the above-mentioned work on investigating the field around potential risks from advanced artificial intelligence. We have updated our spreadsheet summary of our priority causes. We also provide a version that highlights the cells that have changed since our last public spreadsheet.
Scientific research
Since our last update:
- We have been working with Lily Kim and Melanie Smith on neglected goal investigations, and hope to publish our first writeup (on animal product alternatives) in the next few months. A major challenge of work in this category is that we haven’t identified an effective way to do “shallow” investigations; investigating any given scientific field takes a large amount of work from both generalist staff and scientific advisors.
- Lily was previously working for us ~5 hours per week, and is now working ~20 hours per week.
- We have created a posting for a full-time scientific advisor and put significant effort into the search, but have not made an offer yet.
- We have lowered the priority of the other work in this category. Unlike with the above two categories, we have not yet set focus areas for scientific research, and we believe that doing so will require a large amount of investigation – preferably with high involvement from scientific advisors. We see our top priority as building scientific advisory capacity, and don’t expect to make substantial headway on setting focus areas until we have much more of it.
- We have started an exploration of social sciences research. We are doing quick skims of the literature on questions that seem to have high potential social value, in order to identify potential important gaps in the literature. We might address such gaps by directly funding research or by working on systemic issues that affect many literatures. This work is quite preliminary, and so far we have been focused on developing a process for surveying the literature on a given question.
Public content
We are hoping to launch a website for the Open Philanthropy Project by the end of the year. We believe that the new website will make it much easier to understand our work and current priorities. Creating it has been a significant amount of work, and it remains difficult to forecast exactly when the website will be ready to launch.
We previously wrote:
We have recently been prioritizing investigation over public writeups, and our public content is running well behind our private investigations. We are experimenting with different processes for writing up completed investigations – in particular, trying to assign more of the work to more junior staff. If we could do this, it would make a major difference to our capacity, since senior staff already have a substantial challenge keeping up with all of our priority causes. By the end of 2015, we hope that our public content will be no further behind our private investigations than it is at the moment.
We have made some progress on this front, particularly on high-priority causes. We have published new writeups on land use reform, potential risks from advanced artificial intelligence, health care policy, and potential risks from atomically precise manufacturing as well as an updated and more in-depth writeup on nuclear security; the first three are particularly high-priority causes. We also have several other writeups in progress, including a more in-depth writeup on biosecurity.
We’re still not where we want to be on public content: we have many writeups still in progress, and our content is not well-organized (largely because it is on the GiveWell website rather than a separate Open Philanthropy Project website). By the end of the year, we hope that the situation will be much better due to launching our new website and publishing most of our still-pending writeups, but we expect that we will still have significant progress to make at that time.