We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Center for a New American Security — Work on AI Governance
     Award Date: 07/2022
     Amount: $5,149,398
     Focus Area: Potential Risks from Advanced AI
  2. CMU — Research on Adversarial Examples
     Award Date: 07/2022
     Amount: $343,235
     Focus Area: Potential Risks from Advanced AI
  3. Biosecure — Biological Weapons Convention Verification Workshop
     Award Date: 07/2022
     Amount: $155,714
     Focus Area: Biosecurity and Pandemic Preparedness
  4. Stanford University — AI Alignment Research (Barrett and Viteri)
     Award Date: 07/2022
     Amount: $153,820
     Focus Area: Potential Risks from Advanced AI
  5. Non-trivial Pursuits — General Support
     Award Date: 07/2022
     Amount: $1,005,000
     Focus Area: Effective Altruism
  6. Berkeley Existential Risk Initiative — Language Model Alignment Research
     Award Date: 06/2022
     Amount: $40,000
     Focus Area: Potential Risks from Advanced AI
  7. Sinergia Animal — Farm Animal Welfare in Southeast Asia and Latin America
     Award Date: 06/2022
     Amount: $2,376,103
     Focus Area: Farm Animal Welfare
  8. Convergent Research — Sequencing Roadmap
     Award Date: 06/2022
     Amount: $250,000
     Focus Area: Biosecurity and Pandemic Preparedness
  9. Columbia University — Far-UVC Sterilization Research
     Award Date: 06/2022
     Amount: $3,200,000
     Focus Area: Biosecurity and Pandemic Preparedness
  10. Flanders Institute of Biotechnology — Syphilis Vaccine Research
      Award Date: 06/2022
      Amount: $731,828
      Focus Area: Scientific Research