We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. University of Illinois — AI Alignment Research
     Award Date: 03/2023 | Amount: $80,000 | Focus Area: Potential Risks from Advanced AI
  2. Compassion in World Farming USA — Corporate Pledges (2023)
     Award Date: 03/2023 | Amount: $550,000 | Focus Area: Farm Animal Welfare
  3. Eleanor Crook Foundation — Global Malnutrition Regranting
     Award Date: 03/2023 | Amount: $17,000,000 | Focus Area: Global Health & Development
  4. Successif — Career Advising
     Award Date: 02/2023 | Amount: $778,570 | Focus Area: Effective Altruism
  5. Legal Priorities Project — Litigation Research
     Award Date: 02/2023 | Amount: $34,000 | Focus Area: Potential Risks from Advanced AI
  6. Giving What We Can — General Support (2023)
     Award Date: 02/2023 | Amount: $2,361,905 | Focus Area: Effective Altruism
  7. Probably Good — Career Advice (2023)
     Award Date: 02/2023 | Amount: $455,800 | Focus Area: Effective Altruism
  8. Longview Philanthropy — Far-UVC Event (2023)
     Award Date: 02/2023 | Amount: $165,000 | Focus Area: Biosecurity and Pandemic Preparedness
  9. Brian Christian — Psychology Research
     Award Date: 02/2023 | Amount: $37,903 | Focus Area: Potential Risks from Advanced AI
  10. Cornell University — AI Safety Research
      Award Date: 02/2023 | Amount: $342,645 | Focus Area: Potential Risks from Advanced AI