We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. EleutherAI — Interpretability Research
     Award Date: 11/2023
     Amount: $2,642,273
     Focus Area: Potential Risks from Advanced AI
  2. Fiocruz — Ethical Protections for Potential Tuberculosis Vaccine Trial
     Award Date: 11/2023
     Amount: $165,770
     Focus Area: Scientific Research
  3. Stanford University — Meat Purchasing Behavioral Studies
     Award Date: 11/2023
     Amount: $200,000
     Focus Area: Farm Animal Welfare
  4. Clinton Health Access Initiative — Tuberculosis Drug Price Reduction
     Award Date: 11/2023
     Amount: $600,000
     Focus Area: Global Health R&D
  5. AI Safety Support — MATS Program (November 2023)
     Award Date: 11/2023
     Amount: $732,631
     Focus Area: Global Catastrophic Risks
  6. Berkeley Existential Risk Initiative — MATS (November 2023)
     Award Date: 11/2023
     Amount: $2,641,368
     Focus Area: Global Catastrophic Risks
  7. Rutgers University — Ethical Protections for Potential Tuberculosis Vaccine Trial
     Award Date: 11/2023
     Amount: $150,416
     Focus Area: Scientific Research
  8. Center for Global Development — General Support (2023)
     Award Date: 11/2023
     Amount: $1,000,000
     Focus Area: Global Aid Policy
  9. London Initiative for Safe AI (LISA) — General Support
     Award Date: 11/2023
     Amount: $237,000
     Focus Area: Potential Risks from Advanced AI
  10. CalHDF — General Support (2023)
      Award Date: 11/2023
      Amount: $440,000
      Focus Area: Land Use Reform